
The how and why are important for AI system certification

Products and services enhanced with artificial intelligence will be more stringently regulated. As a consequence of the EU’s forthcoming legislation, describing how and why the algorithms achieve a certain result will be a prerequisite for getting AI systems certified by independent operators such as RISE.

The new EU AI Act is expected to be passed next year. It will impose far-reaching requirements on manufacturers and operators to document the inner workings of their AI technology. This applies to so-called high-risk systems*, a category that often includes seemingly uncontroversial algorithms.

Is the data on which the system was trained of sufficiently high quality? In EU terms: relevant, representative, accurate, and complete. Are systems in place that ensure quality at all stages of the development of the AI system? Has the strategy employed to comply with regulations been documented? How have the design and quality of the system been verified? What standards are relevant to data management?
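By way of illustration (this sketch comes from neither the EU text nor RISE), parts of these checks can be automated. The following Python sketch assumes a pandas DataFrame with a `label` column; the function name and threshold are hypothetical, and relevance is left to domain review:

```python
import pandas as pd

def check_training_data(df: pd.DataFrame, label_col: str = "label",
                        min_class_share: float = 0.05) -> dict:
    """Illustrative checks loosely mapped to the EU criteria:
    completeness and representativeness are checked directly;
    duplicate rows serve as a rough proxy for accuracy problems."""
    report = {}
    # Completeness: count missing values across the whole table.
    report["missing_values"] = int(df.isna().sum().sum())
    # Representativeness: flag classes below a minimum share.
    shares = df[label_col].value_counts(normalize=True)
    report["underrepresented"] = shares[shares < min_class_share].index.tolist()
    # Accuracy proxy: exact duplicate rows often indicate collection errors.
    report["duplicate_rows"] = int(df.duplicated().sum())
    return report

# Toy example: one missing value, one duplicated row.
df = pd.DataFrame({"feature": [1.0, 2.0, 2.0, None],
                   "label": ["car", "pedestrian", "pedestrian", "car"]})
print(check_training_data(df))
```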

For an AI system to be CE-marked and released on the market, third-party certification is required. Fredrik Warg, a researcher in transportation safety, says RISE is highly accustomed to analysing similar processes in the automotive industry:

“What is specific to AI, and machine learning in particular, is that data is very important. Specifying your training and test data is very challenging.”

Garbage in, garbage out

Flawed training data results in a flawed AI system, according to the well-known principle: garbage in, garbage out. Markus Borg, Research Fellow at RISE, says that, in practical terms, the data processing chain for a safety system, for example, can require a lot of manual input:

“Depending on which sensors are used, a varying degree of hands-on work is needed. The data must be annotated in some way – ‘that’s a pedestrian, that’s a tree’ and so on. Work that today is still largely carried out manually.”
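As a hedged illustration of what that manual work produces, a single annotated camera frame might look like the record below. The structure and field names are hypothetical, loosely in the spirit of common object-detection formats such as COCO, not a specific standard:

```python
# Hypothetical annotation record for one camera frame.
annotation = {
    "image": "frame_000123.png",
    "annotator": "labeler-17",  # the manual, human part of the chain
    "objects": [
        {"label": "pedestrian", "bbox": [412, 180, 58, 140]},  # x, y, width, height in pixels
        {"label": "tree", "bbox": [90, 40, 120, 310]},
    ],
}
print(f"{len(annotation['objects'])} objects annotated in {annotation['image']}")
```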

Then why is certification important?

“We can, of course, choose not to use AI,” says Warg. “For example, we can choose not to develop autonomy in vehicles any further and keep relying on manual driving. But the industry doesn’t want this; there’s demand for autonomy and AI solutions, such as driver assistance systems and self-driving passenger cars. Machine learning is the only technical approach we know of at present that can offer this.

“Why is certification important? Well, otherwise you would have companies cutting a lot of corners in order to be first on the market. That would be unsafe for us in society.”

He compares it with the development of the ISO 26262 functional safety standard, which covers software and electronics in road vehicles and was driven forward by automotive manufacturers. Similarly, a basic standard is needed for AI systems, which are frequently also integrated into machines and thus subject to the new EU Machinery Directive.


Tools for explaining decisions

Markus Borg, who researches software development and machine learning, says that the field of Explainable AI offers tools to explain why an AI system makes a certain decision. This is particularly important for high-risk systems, where safety, fairness, and reliability are prioritised. The field comprises an array of techniques and methods for understanding an AI model and the characteristics of the training data used.
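One common technique in this family is permutation importance: shuffle one input feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn on synthetic data purely as an illustration; it is one method among many and not the specific tooling referred to here:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# A small model trained on synthetic data, standing in for a real system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the drop in score;
# a large drop means the decision depends heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature {i}: mean score drop {drop:.3f}")
```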

“What will be the benchmark for ‘good enough’ in the certification of AI applications?” asks Borg. “That’s the big question.

“With such a huge focus on data, there is a different kind of vulnerability across the chain. You need to be able to reproduce what has happened. To perform root cause analyses on things that have gone wrong, you need to be able to explain design decisions and show why you chose a specific model architecture. There are numerous minor aspects to keep track of.”
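As a minimal sketch of the bookkeeping Borg describes (the helper and field names are hypothetical, not an established standard), a training run can be tied to an exact dataset and code version so it can later be reproduced and audited:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def training_run_record(data_path: str, config: dict, git_commit: str) -> dict:
    """Capture the minimum needed to reproduce a run: exact data,
    exact configuration, exact code version."""
    data_hash = hashlib.sha256(Path(data_path).read_bytes()).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_sha256": data_hash,   # pins the run to one dataset version
        "config": config,           # architecture choice, hyperparameters
        "git_commit": git_commit,   # pins the run to one code version
    }

# Toy dataset so the sketch runs end to end.
Path("train.csv").write_text("feature,label\n1.0,car\n")
record = training_run_record("train.csv",
                             config={"model": "resnet18", "lr": 1e-3},
                             git_commit="abc1234")
print(json.dumps(record, indent=2))
```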

Both researchers emphasise that having full control of the toolchain will be crucial for successful certification once the AI Act is in force.

“The work process, evidence, documentation of the entire journey,” says Warg.

“We will keep on trying to help customers and partners design good AI systems and produce new standards. RISE will be involved in multiple areas.”

* Examples of operations where AI systems are considered high-risk: critical infrastructure, education, employment, private and public services (e.g. credit assessment), law enforcement (e.g. assessment of evidence), border control (e.g. inspection of travel documents).
