To increase the level of autonomy, future vehicles will depend on functions based on deep machine learning (DML), whose correct behavior cannot be guaranteed by traditional methods. SMILE II focuses on methods that allow DML-based functions to be included in the decision-making of autonomous vehicles.
The project aims to increase knowledge of methods for the verification and validation of deep learning-based systems in safety-critical applications. The following research questions are studied within the project:
How can model performance be monitored when models are pre-trained with different data sets?
How can new data be used to update models while maintaining model performance and security?
Critical parts of the functional safety standard ISO 26262 are not defined for autonomous systems, and its process demands and recommendations are not applicable to the specification, design, and testing of machine learning-based systems.
The project studies the possibility of designing a safety cage that monitors the input signals to a DML-based model used in a vehicle to recognize its surroundings.
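The idea of a safety cage can be illustrated with a minimal sketch: a monitor that learns the range of inputs seen during training and only lets the DML model act on inputs that look familiar, falling back to a safe behavior otherwise. This is an assumption-laden stand-in (a simple per-feature z-score novelty check with a hypothetical `SafetyCage` interface), not the project's actual method.

```python
# Illustrative "safety cage" input monitor (hypothetical interface, not the
# SMILE II implementation). It flags inputs that fall outside the range
# observed in the training data, so a DML-based perception model is only
# trusted on inputs resembling what it was trained on.
import statistics


class SafetyCage:
    """Per-feature novelty detector: rejects inputs whose z-score
    exceeds a threshold for any feature."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold
        self.means = []
        self.stds = []

    def fit(self, training_inputs):
        """Estimate per-feature mean and standard deviation from training data."""
        columns = list(zip(*training_inputs))
        self.means = [statistics.fmean(c) for c in columns]
        # Guard against zero spread so division below is always defined.
        self.stds = [statistics.pstdev(c) or 1.0 for c in columns]

    def accepts(self, x):
        """True if every feature lies within `threshold` std devs of its mean."""
        return all(
            abs(v - m) / s <= self.threshold
            for v, m, s in zip(x, self.means, self.stds)
        )


# Toy in-distribution training data: two features clustered near (0, 10).
train = [(0.1, 10.2), (-0.2, 9.8), (0.0, 10.0), (0.3, 10.1)]
cage = SafetyCage(threshold=3.0)
cage.fit(train)

print(cage.accepts((0.05, 10.0)))  # familiar input -> pass to the DML model
print(cage.accepts((50.0, 10.0)))  # out-of-distribution -> safe fallback
```

In a vehicle, a rejected input would trigger a predefined safe behavior (e.g. handing control to a simpler fallback system) rather than trusting the model's output.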
Autonomous vehicles require a perception system that can interpret the vehicle's surroundings. To perceive objects in a dynamic environment such as traffic, such systems need to incorporate machine learning, which can learn from historical data. The project develops technologies that make it possible to trust that the perception system perceives the surroundings correctly.
Artificial intelligence, Mobility, Sensation and perception, Automated vehicles, Transport systems
2017-10-01 – 2019-09-30
9 455 000
10. Reduced inequalities
11. Sustainable cities and communities