Computer screens displaying CT-scan images of a brain.

The AI Act is changing the landscape of artificial intelligence – are you aware of what this means?

The AI Act covers anyone who develops AI solutions, uses them in their business or puts AI systems on the Swedish or European market. In this article, researcher and legal expert Susanne Stenberg explains everything you need to know about the AI Regulation.

An algorithm that suggests new movies based on your streaming history, or a language model that drafts emails, seems fairly harmless; if it gets something wrong, little is lost. But when AI reviews cancer patients' X-rays or decides on loan applications, it suddenly becomes critical that the models are safe and reliable.

The need for reliability is why the EU has passed the AI Act: a new set of rules that will be introduced in stages between February 2025 and summer 2027.

What is the purpose of the AI Act, and what will it mean in practice?

– The new regulation is intended to protect people's health, safety, and fundamental rights. It is structured so that the rules fit into a product safety approach. In this way, the AI Regulation is similar to consumer protection legislation or requirements for a minimum level of quality. In practice, it is a form of market regulation that treats AI systems as products, explains Susanne Stenberg, senior researcher and legal expert specialising in digital systems and AI at RISE.

CE marking will be introduced for AI systems

You are subject to the AI Act if you run or work in a company or organisation that develops, imports, distributes or deploys AI systems in its business, products or services. Certain applications are classified as high-risk and are therefore subject to specific safety, transparency and documentation requirements. One example is healthcare, where AI can affect diagnostics and patient health.


– Product categories such as machinery, radio equipment and toys are already covered by EU product safety legislation. Here, the CE marking is used as a quality stamp. With the AI Regulation, the CE marking will also cover certain AI systems, explains Susanne Stenberg.


Ban on 'unacceptable' AI systems

Since February 2025, certain AI applications have been banned because they are considered to pose an unacceptable risk. These include social scoring, which means evaluating or classifying individuals or groups based on their social behaviour or personal characteristics, as well as certain types of biometric surveillance. For example, running a red light should not affect someone's chances of getting a job. The purpose of these bans is to prevent mass surveillance and scoring systems, such as those used in China, from being introduced in the EU.

How will compliance with the AI Act be ensured among Swedish companies and organisations?

The regulation states that market surveillance authorities shall carry out supervision, and it is up to each Member State to decide which authority will be responsible. A Swedish report from autumn 2025 proposes that this responsibility be divided among several authorities. In the case of AI in machinery, for example, the proposal is that supervisory responsibility should remain with the authority that already supervises the machinery itself, the Swedish Work Environment Authority. Susanne Stenberg continues:

– The proposal is that the Swedish Post and Telecom Authority (PTS) will have primary responsibility for the AI Regulation.

– If you work with chemicals, you will already be familiar with the Swedish Chemicals Agency. However, many industries affected by the AI Regulation have not previously had this type of relationship with a supervisory authority, and for them the rules on market surveillance may feel unfamiliar. Support is available, including from RISE, to help companies fulfil their regulatory obligations.

Business opportunities linked to the AI Regulation

Susanne Stenberg believes that companies which can demonstrate with confidence that their products feature robust and reliable AI systems will enjoy a significant competitive advantage in the future.

– I interpret the EU regulation as meaning that the market is not yet mature. The introduction of these rules indicates that we are moving towards the technology being used more widely, and there is much to suggest that, as a society, we have a great deal to gain from using various AI technologies in the right context. At the same time, we do not want products on the market that cannot be trusted. If you cannot explain why customers can trust your product, you do not have a place on the market, says Susanne Stenberg.

What is the most important thing to consider when trying to do the right thing?

– Start with something that does not risk harming anyone. Learn how the technology works, and explore ways to use AI that do not negatively impact people's health, safety or fundamental rights.

The AI Regulation also benefits small and medium-sized enterprises, as it requires each Member State to have an AI regulatory sandbox, where suppliers and contractors who want to use AI can obtain legal guidance in a test environment. Susanne Stenberg also highlights the importance of larger companies using their influence to coordinate efforts and share how they translate the rules into technical requirements.

– Having been involved in developing technical standards, I have realised that interdisciplinary expertise is required, and that the results must be made available to everyone, she says, continuing:

– It is in the interests of larger companies for smaller companies to deliver products that comply with regulations. That is why I believe everyone benefits from collaboration: together, we can develop the best practices for AI systems. This enables us to work together to build trust in AI and ensure the market is full of safe products.


What support can I get from RISE?

RISE works in several key areas of AI to promote innovation, drive transformation, and contribute to sustainable social development. Its work covers both practical applications and strategic issues that will pave the way for future AI. This includes developing ethical guidelines, policies and standards for AI. 

Susanne's 4 tips for getting to grips with the AI Act

  1. Take a look at the software you use today. Is it classified as an AI system under the AI Act? Regardless of whether you refer to a digital solution as AI or not, it may still fall under the EU's definition of an AI system.
  2. The next step is to investigate whether the AI system needs to be CE marked. If you already have a CE marking process in place, this will give you a head start, as your organisation will already understand the process.
  3. Be bold and use the regulation as a guide. The AI Regulation covers human oversight, robustness and accuracy. Use these as a starting point and consider how you can implement AI with reliability at its core.
  4. Join a research project! This will give you the opportunity to learn from experts and share experiences with other companies and organisations. 

Contact person

Susanne Stenberg

Senior Researcher / Legal Expert

+46 73 398 73 41
