David Eklund
Senior Researcher
Robots coupled with AI constitute complex and adaptable systems that hold great potential for society and industry. But how do we ensure that these systems are safe, secure, and trustworthy? In this blog post we take a closer look at these questions.
This year at the Center for Cybersecurity at RISE we have carried out the project CyRA - Cyber risks of AI systems. The project surveys the state of the art in AI-related security threats and conducts case studies in industrial AI applications. One of the areas we have looked into is AI models for robotic control.
Reinforcement learning (RL) is an AI technique that can improve robotic control by enabling systems to learn complex behaviors through interaction with their environment, rather than through explicit programming. This adaptability makes RL useful for tasks in logistics, manufacturing, healthcare, and disaster response, and applications may even extend to social and household robots. However, as machines become more autonomous, it is important to ensure that they are safe and secure and that they comply with laws and regulations.
At RISE we conduct research in robotic control and reinforcement learning, and we develop tools and methods to validate that trained AI models behave as expected. We also emphasize the importance of sound cybersecurity practices in robotics.
Traditional control systems rely on transparent models and rules, which makes them attractive from a safety standpoint but limits their flexibility in dynamic environments. RL changes this by allowing robots to learn policies that map observations to actions, often represented as neural networks. These policies enable robots to adapt to uncertainty and optimize their performance. Yet this flexibility comes at a cost: AI controllers can behave unpredictably, which raises serious safety concerns and may lead to damaged equipment, violated operational constraints, or even harm to humans.
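To make the idea of a learned policy concrete, here is a minimal sketch in PyTorch of the kind of function RL training produces: a small neural network mapping sensor observations to actuator commands. The observation and action dimensions are hypothetical, and this is an illustration of the concept rather than a production controller.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Maps a robot's observation vector to a continuous action vector."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64),
            nn.Tanh(),
            nn.Linear(64, 64),
            nn.Tanh(),
            nn.Linear(64, act_dim),
            nn.Tanh(),  # bounds actions to [-1, 1]; scaled to actuator limits downstream
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# A policy for a hypothetical robot with 12 sensor readings and 4 actuators.
policy = Policy(obs_dim=12, act_dim=4)
action = policy(torch.randn(12))
```

RL training adjusts the weights of such a network to maximize a reward signal, which is precisely what makes the resulting behavior hard to inspect by reading the model, unlike a hand-written control rule.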
Modern robots are rarely isolated; rather, they communicate with networks, cloud services, and other devices. This connectivity introduces vulnerabilities that attackers can exploit. Remote hacking is one of the most severe threats: poor network security may allow malicious actors to take control of a robot, access its sensors, or install persistent malware. Another threat is sensor spoofing, in which an attacker tricks the robot into misinterpreting its surroundings by feeding it false data. Denial-of-service attacks, which can cut off communication, are yet another threat. All of this emphasizes the need to follow established cybersecurity practices when developing robotic systems.
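As one simple illustration of a mitigation against sensor spoofing, a controller can sanity-check incoming readings before acting on them. The sketch below is a hypothetical example with made-up thresholds: it rejects a reading that disagrees too much with a model-based prediction or implies a physically impossible rate of change.

```python
def plausible(reported: float, predicted: float, prev: float,
              max_rate: float, dt: float, tol: float) -> bool:
    """Flag sensor readings that disagree strongly with a model-based
    prediction or imply a physically impossible rate of change."""
    if abs(reported - predicted) > tol:        # disagrees with the model prediction
        return False
    if abs(reported - prev) > max_rate * dt:   # exceeds the sensor's physical limits
        return False
    return True

# Example: a distance sensor suddenly reports a large jump between time steps.
print(plausible(reported=9.0, predicted=2.1, prev=2.0,
                max_rate=1.5, dt=0.05, tol=0.5))  # False: likely spoofed or faulty
```

Checks like this do not replace network security, but they limit how much damage a single falsified data stream can cause.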
Cybersecurity alone is not enough. We must also ensure that RL-trained controllers behave safely and reliably under a wide variety of operating conditions. Simulation and testing are good steps toward this end. However, AI models such as neural networks can display adverse behavior in scenarios that are very hard to find manually or with random tests. At RISE we conduct research on counterexample-guided optimization, a powerful technique for finding problematic scenarios in which the robot controller does not behave as expected. In addition, formal verification can provide mathematical proof of correctness, although it should be said that this is a computationally demanding process.
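To give a flavor of the idea behind counterexample-guided search, here is a minimal sketch. The toy braking controller and the off-the-shelf scipy optimizer are illustrative assumptions, not our actual tooling: scenario parameters are treated as optimization variables, and the optimizer minimizes a safety margin; a negative optimum is a concrete failing scenario.

```python
import numpy as np
from scipy.optimize import differential_evolution

def min_clearance(params: np.ndarray) -> float:
    """Simulate a toy braking controller and return the minimum distance
    to an obstacle over the run (negative means a collision)."""
    v0, d0 = params                                  # initial speed (m/s), distance (m)
    v, d, dt = v0, d0, 0.05
    min_d = d
    for _ in range(400):
        brake = min(5.0, 0.5 * v * v / max(d, 0.1))  # naive deceleration law, capped
        v = max(0.0, v - brake * dt)
        d -= v * dt
        min_d = min(min_d, d)
    return min_d

# Treat the initial conditions as optimization variables and search for the
# scenario that minimizes clearance; a negative optimum is a counterexample.
result = differential_evolution(min_clearance, bounds=[(0.0, 15.0), (1.0, 30.0)])
print("worst-case scenario:", result.x, "minimum clearance:", result.fun)
```

In this toy example the optimizer quickly discovers that high initial speeds close to the obstacle exceed the capped braking force and lead to collisions. In practice the simulation is a full robot model and the scenario space far larger, but the principle is the same: let optimization hunt for the failures that random testing misses.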
Safety and security are both technical and regulatory challenges. Under the EU Artificial Intelligence Act, many applications of robotic control systems incorporating AI are classified as high-risk AI systems. This classification imposes strict requirements on, for example, risk management, technical documentation, robustness, accuracy, cybersecurity, and human oversight.
Reinforcement learning enables adaptability in robotic systems, but it also amplifies the challenges of safety, security, and regulatory compliance. Addressing these issues requires a holistic approach that combines rigorous testing, optimization, formal methods, and real-time monitoring with robust cybersecurity practices and adherence to emerging legislation. As robots become more autonomous, ensuring their reliability is essential for the safe operation of the next generation of intelligent machines.