Self-driving car. Photo: Adobe Stock

AI and values

02 October 2023, 10:37

How do we ensure that powerful AI is properly aligned with human values? Social and ethical artificial intelligence is an important issue in AI. RISE works on questions of values and AI in various research constellations.

A huge challenge with AI is the moral aspect. Who is the moral expert that AI should learn from? From what data should AI derive its view of values, and who should decide that? Are there neutral values? What happens if an AI system optimises for something we don't want? And who is actually responsible for decisions, the human or the machine? There are many open questions at the intersection of morality, ethics, system design and AI.

What RISE does

"RISE has several different group constellations working on AI and credibility, ethics, morals and laws," says Rami Mochaourab, research leader in AI & industrial systems. "Among other things, we are investigating the values enshrined in the GDPR that are important in the development of AI systems. It is about identifying how often AI systems make mistakes, how precise data-driven forecasting is ensured without compromising human integrity, and how decisions from AI systems can be best assessed, presented and explained," says Rami.

Human values vs machine values

People hold different values, and these affect the decisions we make; that's why we live different lives. When a person makes a decision, it is based on what is morally important to that individual. A person can have high morals as well as a lack of them.

"Values can be described as inherent beliefs that inspire one's decisions, behaviour and actions."

So if we have different values, what are the 'correct' values? A starting point for accepted human values is law. Machine values differ from human values in that they are expressed mathematically and statistically. A major challenge when implementing values in systems and technology is therefore translating human values, grounded in law, into technical values.
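As one illustration of what translating law into technical values can mean in practice, here is a minimal Python sketch, loosely inspired by GDPR Article 22 (the right not to be subject to a decision based solely on automated processing). The function, threshold and scenario are invented for the example; the article does not describe any specific RISE implementation.

```python
# A minimal sketch of turning a legal value into a technical constraint,
# loosely inspired by GDPR Art. 22 (protection against decisions based
# solely on automated processing). All names and numbers are invented.

def decide_loan(model_score: float, human_review_requested: bool) -> str:
    APPROVAL_THRESHOLD = 0.8  # invented business threshold, not a legal figure
    if human_review_requested:
        # The legal value becomes a hard constraint on the system's behaviour.
        return "escalate to human reviewer"
    return "approve" if model_score >= APPROVAL_THRESHOLD else "deny"

print(decide_loan(0.91, human_review_requested=False))  # -> approve
print(decide_loan(0.91, human_review_requested=True))   # -> escalate to human reviewer
```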

The dilemma of embedding morality

AI learns from the data it is fed, and it is instructed through programming to perform operations. Advanced AI systems that make important decisions must either be programmed with moral theories or be allowed to test and learn morality themselves. What premises should we give the AI for decision-making, and who decides this? This is important to discuss!

Values can become security problems

Decisions made by AI systems can affect our lives, from medical diagnoses and court sentencing to bank lending, parental allowances and personalised marketing. In situations where the stakes are high, it is particularly important that the AI makes reasonable and credible decisions.

"For example, when diagnosing diseases with the help of AI systems, there is a difficult balance between integrity, precision, and the quality of the prognosis. The more integrity, the more uncertain the AI-based diagnosis, and uncertain AI decisions require more robust explainability. What reduces the precision of AI decisions is the deliberately generated uncertainty required in the forecast, to protect the privacy of the individual in the data set," Rami continues.

How a simple rule can become complicated

Unlike with humans, we cannot easily negotiate values with machines, so decisions that are simple for us can become complicated for AI. For example, how should a self-driving car make ethical decisions? A lot of power means a lot of responsibility. The more data, the more information the AI has to base its decisions on. But what should be valued most? Should the car be allowed to weigh people by health status, age or economic situation? How can it make an unbiased decision and minimise loss?
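To see why "minimise loss" is not a neutral rule, consider a deliberately naive sketch: as soon as the rule is written as code, someone must choose explicit weights, and those weights are precisely the value judgements the questions above ask about. Everything in the example is invented for illustration.

```python
# A deliberately naive sketch: any "minimise loss" rule forces the
# programmer to pick explicit weights, and those weights ARE the value
# judgements being debated. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    injuries: int      # expected number of people injured
    severity: float    # 0.0 (minor) .. 1.0 (fatal)

def harm(outcome: Outcome, severity_weight: float = 10.0) -> float:
    # The choice of severity_weight is a moral decision, not a technical one.
    return outcome.injuries * (1.0 + severity_weight * outcome.severity)

options = [
    Outcome("swerve left", injuries=2, severity=0.3),
    Outcome("brake only", injuries=1, severity=0.9),
]
best = min(options, key=harm)
print("Chosen:", best.description)  # the choice flips if severity_weight changes
```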

One tool is the notion of "boundary-marking concepts", which can be used when social, ethical and political values are debated. It is described as an instrument for marking the accepted limit of a technology in relation to moral acceptability, i.e. what the technology is allowed to do and where negotiation is no longer open.

Consensus and continuous discussion

The development of the AI Act, a new European law that divides AI into different risk categories, also contributes to this consensus. The idea is to regulate how AI may be used from a risk and ethics perspective. When developing this type of law, it is important to involve different kinds of expertise: lawyers, social scientists, moral philosophers, developers and others. RISE contributes its expertise here too, for example in policy labs.
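For orientation, the AI Act's risk-based structure is often summarised as four tiers, roughly as in the sketch below. The tier names follow the Act's structure, but the one-line summaries are simplified paraphrases for illustration, not legal text.

```python
from enum import Enum

class AIActRiskTier(Enum):
    # Simplified paraphrases of the EU AI Act's risk tiers; not legal text.
    UNACCEPTABLE = "prohibited practices (e.g. social scoring by public authorities)"
    HIGH = "strict obligations: risk management, data governance, human oversight"
    LIMITED = "transparency obligations (e.g. disclosing that a chatbot is AI)"
    MINIMAL = "no additional obligations under the Act"

for tier in AIActRiskTier:
    print(f"{tier.name:12s} -> {tier.value}")
```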

Consensus and collaboration are so important for long-term, sustainable and equitable AI development across all areas of society that there is also a dedicated network, the AI agenda for Sweden, coordinated by the Centre for Applied AI at RISE.

Contact us at RISE to find out more!

Sverker Janson
