AI Act

New EU regulation on the use of AI in Europe

The EU’s forthcoming regulation on products and services that utilise artificial intelligence is detailed and far-reaching. Experts at RISE have reviewed the technical and legal aspects of the proposal and foresee new burdens for those whose AI systems are suddenly classified as high-risk.

“The new regulation, called the AI Act, is the world’s first specific regulation pertaining to AI and may be passed in the spring of 2023 under Sweden’s EU presidency,” says Susanne Stenberg, a researcher at RISE and a legal expert in new technologies. “It would then begin to be applied two years later.”

So, why is this needed? Firstly, the EU wants to demonstrate what “Good AI” should be: AI guided by Western societal values and fundamental rights. Then there is the realisation that AI systems inherently pose risks and can cause damage, and therefore need to be regulated.

“The regulation is under development and more compromise proposals may be included,” explains Stenberg. “Therefore, once it comes into force, the rules may not be exactly as proposed by the Commission, and we must be prepared for that.”

Applicable in all EU countries

Once the regulation is approved, no further national legislation will be required since the new rules are applicable in all EU countries. AI systems introduced after the proposal becomes law are covered, as are existing systems that are updated or substantially modified.

What is an AI system? Many things, according to the regulation wording, which lists technologies such as machine learning, logic- and knowledge-based systems, and statistical methods.

“They have defined AI systems very broadly,” says Håkan Burden, a researcher in AI and IT systems at RISE.

An AI system receives input and then does some sort of analysis in relation to a goal set by a human. The system then produces output, which can affect its surroundings.

“One example is a control system in a mining machine. Sensor data shows that there is an obstacle to the right. That’s not good: a human-set goal is to avoid collisions, so the output of the system tells the wheel axle to turn left.”
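The input–analysis–output loop described above can be sketched in a few lines. This is a minimal illustration, not a real control system; the names (`SensorReading`, `plan_steering`) and the decision rules are assumptions invented for this example.

```python
from dataclasses import dataclass


@dataclass
class SensorReading:
    """Input to the AI system: what the sensors currently report."""
    obstacle_left: bool
    obstacle_right: bool


def plan_steering(reading: SensorReading) -> str:
    """Analysis toward a human-set goal (avoid collision).

    The return value is the system's output, which affects its
    surroundings -- here, a command to the wheel axle.
    """
    if reading.obstacle_right and not reading.obstacle_left:
        return "turn_left"
    if reading.obstacle_left and not reading.obstacle_right:
        return "turn_right"
    if reading.obstacle_left and reading.obstacle_right:
        return "stop"
    return "continue"


# The scenario from the article: obstacle to the right -> turn left.
print(plan_steering(SensorReading(obstacle_left=False, obstacle_right=True)))
```

Even a rule-based decision procedure like this can fall under the regulation’s broad wording, since it takes input, analyses it against a human-set goal, and produces output that affects the physical world.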

Another example is a system that ranks preschool applicants according to their journey to the preschool and whether they have a sibling already enrolled there. This is a support system for carrying out municipal functions. An algorithm uses the position of the school and the distance from the applicant’s home address, according to the population register, to obtain a value, which is then used to create a queue according to established terms and conditions.

“It’s a very simple algorithm, but because the right to attend school is a fundamental right, it could be classified as a high-risk system,” says Burden.
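To make concrete how simple such a high-risk system can be, here is a sketch of the kind of ranking described above. The field names and the exact scoring rule (sibling priority first, then shortest registered journey) are assumptions for illustration, not the actual municipal algorithm.

```python
from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    distance_km: float  # home address (per population register) to preschool
    has_sibling: bool   # sibling already enrolled at the preschool


def queue_order(applicants: list[Applicant]) -> list[str]:
    """Create the queue: sibling priority first, then shortest journey.

    Sorting on (not has_sibling, distance_km) puts applicants with an
    enrolled sibling ahead, breaking ties by distance.
    """
    ranked = sorted(applicants, key=lambda a: (not a.has_sibling, a.distance_km))
    return [a.name for a in ranked]


print(queue_order([
    Applicant("A", 2.5, False),
    Applicant("B", 4.0, True),
    Applicant("C", 1.0, False),
]))  # ['B', 'C', 'A']
```

A one-line sort is all the “AI” there is, yet because it allocates access to education, the regulation’s risk categorisation looks at the consequences of the output rather than the sophistication of the algorithm.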


Different risk categories

The proposal for a new regulation assigns AI systems to different risk categories:

  • Unacceptable risk – technologies that threaten people’s safety, livelihoods and rights are prohibited. Specifically mentioned are systems that states can use for social scoring and toys that, via voice assistance, encourage dangerous behaviour.
  • Limited risk – minimal transparency requirements for systems such as chatbots in order to make people aware that they are interacting with an algorithm.
  • High risk – this covers a range of activities* as well as some physical products. The EU Commission also proposes being able to add areas when needs arise.

CE marking of AI systems

Under the proposal, high-risk AI systems must meet specific oversight and registration requirements before they can be used within the EU. These include technical documentation requirements, the implementation of a risk management system, and data management and quality requirements. When an AI system provider asserts that it has satisfied the requirements of the regulation, the system can be CE marked. To deter CE marking on incorrect grounds, fines of up to 6 percent of global turnover are proposed.

Third parties in the AI value chain – producers of software or pre-trained models and data, network service providers, and so on – are called upon in the regulation text to cooperate “as appropriate” to facilitate certification of AI systems.

“Reaching consensus on ‘as appropriate’ will not be a trivial matter,” says Stenberg.

Functioning processes must be in place

Stenberg and her colleague Håkan Burden recognise that different operators have different needs, while authorities and product developers have a common interest in getting functioning processes for documentation and certification in place.

“Consider a municipal administration where you’re going to procure a software system for water, roads, electricity – these are high-risk systems. And when a purchasing department must set requirements for what is needed, how do you obtain information that the data used to train the system is relevant, representative, flawless and complete?”

“If the AI Act is passed in 2023 and goes into effect two years later, you need to think about this yesterday.”

* Examples of operations where AI systems are considered to be high risk:

  • Critical infrastructure (e.g. transport)
  • Education (e.g. test scoring)
  • Employment (e.g. software for sorting CVs)
  • Private and public services (e.g. credit assessment)
  • Law enforcement (e.g. assessment of evidence reliability)
  • Border control (e.g. checking authenticity of travel documents)

Contact person

Susanne Stenberg

Senior Researcher/Legal Expert

+46 73 398 73 41


Contact person

Håkan Burden

Senior Researcher

+46 73 052 02 29
