
The AI agenda — interview Virginia Dignum

"It is important that the systems are developed and used in an ethical and lawful manner, and that their results are beneficial for people and the environment."




Photo: Mattias Pettersson

Virginia Dignum, Umeå University

Representative Workgroup Public

Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, and associated with TU Delft in the Netherlands. She is the director of WASP-HS, the Wallenberg Program on Humanities and Society for AI, Autonomous Systems and Software. She is a Fellow of the European Artificial Intelligence Association (EURAI), a member of the European Commission High-Level Expert Group on Artificial Intelligence, of the working group on Responsible AI of the Global Partnership on AI (GPAI), of the World Economic Forum's Global Artificial Intelligence Council, of the Executive Committee of the IEEE Initiative on Ethically Aligned Design, and a founding member of ALLAI-NL, the Dutch AI Alliance. Her book "Responsible Artificial Intelligence: Developing and Using AI in a Responsible Way" was published by Springer Nature in 2019.

She studied at the University of Lisbon and the Free University of Amsterdam and received a PhD in Artificial Intelligence from Utrecht University in 2004. In 2006 she was awarded the prestigious Veni grant by the NWO (Dutch Organization for Scientific Research).

"Reliable AI in the service of man" – what does that mean?

My view is that we need to be able to trust the AI systems that we use, or that are used by others to make decisions about us. This means that the system itself (the software and the hardware) must be reliable, and that the organisations that develop and deploy the systems take responsibility for them and are accountable in case things go wrong. But it also means that the systems are developed and used in an ethical and lawful manner, and that their results are beneficial for people and the environment.

In what way can we ensure that AI systems are shaped without conflict with human values and ethical principles?

It is difficult, if not impossible, to ensure that AI systems are never in conflict with ethical principles and human values. We, people, don't always agree on how to interpret human values, and depending on the context, we give different priorities to different principles and values. What is important is that the principles and values underlying an AI system are transparent, and have been designed in a participatory and open way, so that they take human variety into account. For example, we can define 'fairness' either as equal access to resources or as equal access to opportunities. Depending on the context, the actual result can be very different. It is thus important to have transparency about which approach is being used by an AI system and why that choice was made.

How do you think Sweden differs from other countries in how we work within the framework of AI?

I don't think that there is a very large difference, at least not with other European countries. The Swedish approach is very inclusive, involving a large number of groups and perspectives in the work on the AI agenda. That reflects the social-democratic principles of Sweden. This is a good approach.

What do you consider to be particularly important in the work with Sweden's AI agenda?

I think that the social-democratic roots of Sweden are well reflected in the agenda. I would like to see that our work is able to bring the principles of democratic welfare that are so strong in Sweden, into the future in a way that AI is human-centred and beneficial.


"AI will make our lives easier"
The Akavia union member magazine, Akavia Aspekt, has interviewed Virginia in a long feature article about AI and working life. The article (in Swedish) is on pages 10–13.

More about Virginia