The company Gimic develops AI-based systems for automated quality control in the manufacturing industry and is the first to undergo RISE’s new risk assessment service for evaluating systems in relation to the EU AI Act. "We received a clear report on where we stand today and what steps we need to take going forward," says Anders Cederlund, Project Manager at Gimic.
The new risk assessment service has been developed within the framework of CitCom.ai TEF, one of three Testing and Experimentation Facilities (TEFs) in which Sweden participates. The goal of the service is to support decision-making and strategic planning for AI-based products and services, focusing on analysis and compliance with the AI Act and other AI-related regulations.
"Our ambition is to contribute to a fully automated and quality-assured industry. I am convinced that AI regulations will become mandatory in the near future. Companies developing AI solutions will need to demonstrate compliance with directives and standards. By engaging early, we are better prepared and strengthen the trust of our customers,” says Henrik Arvsell, CEO of Gimic.
Gimic develops AI-based systems for automated quality control in industrial production. Instead of an operator manually inspecting, for example, a gear to identify defects, cameras and AI models are used.
“We see our technology as the final puzzle piece in fully automated factories. Today, almost the entire production flow is automated, but the final inspection is often still manual. With our solution, the process can become fully automated—from pallet to pallet—while also becoming more traceable,” says Henrik Arvsell.
The work began with the company filling out a form with technical and functional descriptions of the system. They then participated in a workshop together with CitCom partner RISE to clarify details and areas of application. After the workshop, RISE produced a report containing an analysis of the risk classification and recommendations for the next step.
"As a company, it is crucial to understand which risk category your AI product belongs to under the AI Act in order to know which legal requirements apply, what responsibilities you have and how to navigate the regulatory framework. The same AI model can end up in different risk categories depending on its area of use, and the data quality requirements vary accordingly," says Kateryna Mishchenko, Senior Researcher at RISE.
The assessment showed that the company’s AI model currently falls within a low-risk level. However, it also became clear that the risk level could increase if the system were delivered to clients in more sensitive sectors—such as critical infrastructure—or if it were used as part of a safety system.
“As a small organization, it was valuable for us to get structure and guidance. It would have taken much longer to interpret all the legal and technical requirements on our own,” says Anders Cederlund.
“The important thing is that we now have a map to navigate by. We can already demonstrate to customers and partners that we work systematically with risk management, and we know what will be required if we enter more regulated markets,” continues Cederlund.
With its technology, Gimic envisions a future where quality control is not only fully automated but also more reliable and traceable, leading to safer and more efficient industrial processes. Ensuring compliance with AI regulations at an early stage is crucial.
CitCom.ai TEF offers testing and experimentation facilities where AI startups and SMEs can test and experiment with AI models and/or robots for smart and sustainable cities and communities. The TEF consists of 32 partners across 11 member states: some are research and technology organisations (RTOs), some are cities, some are universities and some are companies.
CitCom.ai TEF offers AI startups and SMEs subsidised access to virtual and physical test environments.
In a TEF (Testing and Experimentation Facility) you can test your AI models and robots for legal compliance (the AI Act, the Data Act, the Machinery Regulation and more). TEFs specialise in performing tests of AI and robotics solutions, and they are open to all European technology providers and the public sector.
The other three TEFs are: