
Trustworthy AI: The art of getting users to dare to rely on technology

AI and machine learning can simplify and improve our everyday lives – in our free time, at work, and in global societal development. But how do we dare trust the machines?

As with human relationships, trust and reliability often take a long time to build – but can be destroyed in seconds.

– "Imagine driving a car down a road at 70 mph. Suddenly the vehicle brakes, for no apparent reason. You have no idea what happened. After experiencing such inexplicable behaviour, it would be easy to distrust the technology," says Cristofer Englund, AI specialist and head of the Humanised Autonomy department at RISE.

Being able to explain the basis for decisions is an important part of creating basic trust. This has always been the case, in all relationships – whether it is a manager who has decided that your department is to be closed down, or a partner who unexpectedly announces that they are going away for the weekend. The question arises: Why?

Uncertainty and ambiguity damage trust; transparency strengthens it.

– "So if the car tells you that it is braking due to icy patches on the road, you are more likely to trust the technology and dare to let the car take over more of the driving."

This is what 'trustworthy AI' is all about. Researchers have realised that trust in the technology is often as important as the technology itself, if it is to gain a foothold and widespread use.


Trust can be damaged in several ways

There's a lot that can damage trust. The media regularly report on AI acting with bias: recruitment processes that favoured men over women (Amazon), advertising tools that reinforce gender stereotypes (Facebook), algorithms that systematically underestimated the healthcare needs of Black patients (U.S.), and facial recognition that misidentifies Asian and African-American faces between 10 and 100 times more often than white faces.

– "One of the challenges of the technology is to be able to show what the algorithm sees – that is, the premises of its assessments. Today, when I use HR technology to recruit job candidates, or when a bank grants or rejects loan applications, I only see what the AI has come up with, not how it arrived at its decision. It's reasonable for users to know what choices have been made along the way.

But then there is another difficulty that affects confidence in AI: if we want an explanation of the process, what form should that clarification take?"
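The gap described here – seeing only the outcome, never the reasoning – can be illustrated with a toy loan model. Everything below (the features, weights, and threshold) is invented for illustration; real credit models are far more complex. The point is that a decision system can return each feature's contribution alongside the verdict, instead of just "approved" or "rejected":

```python
# Toy illustration (invented weights and features): a linear credit score
# that reports each feature's signed contribution to the decision,
# rather than returning only the final verdict.

WEIGHTS = {"income_k": 0.04, "debt_ratio": -2.5, "years_employed": 0.15}
BIAS = -1.0
THRESHOLD = 0.0

def score_with_explanation(applicant: dict) -> tuple[bool, dict]:
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, why = score_with_explanation(
    {"income_k": 45, "debt_ratio": 0.3, "years_employed": 4}
)
# The caller can now show *why*: for example, that a high debt_ratio
# pulled the score down while income pushed it up.
```

For a linear model, contributions are simply weight times value; for modern non-linear models, producing an equivalent breakdown is an active research field (often called explainable AI), which is exactly the difficulty the interview goes on to discuss.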

Communication is important for trust

– "Communication is an important part of trust. But it comes with challenges. For example, explaining a legal decision to a lawyer is one thing; explaining the motives for that decision to a layman is quite another. The communication has to be adapted to the user – not just to their level of knowledge, but also to the recipient's varying degree of interest and other personal preferences. In a self-driving car, communication might be through sounds of different intensity, vibrations in the seat, visual markers, and so on.

And just to add even more complexity: the same person may want to communicate with the technology in different ways in different situations. On a particular day, or in a specific situation, the user has time, curiosity and patience; on another day or occasion their tolerance level is non-existent. Only if the technology manages to adapt to the user can the relationship be really good."
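The adaptive behaviour described above can be sketched as a simple policy: given how urgent the message is and how receptive the driver currently is, pick a communication channel. All names and thresholds here are invented for illustration – a real system would estimate receptiveness from context and learn the mapping rather than hard-code it:

```python
# Hypothetical sketch: choose how to deliver an explanation based on the
# urgency of the message and how receptive the driver currently is.
# All channel names and thresholds are invented for illustration.

def choose_modality(urgency: float, receptiveness: float) -> list[str]:
    """Both inputs are in [0, 1]. Returns the channels to use."""
    if urgency > 0.8:
        # Safety-critical: use every channel, regardless of preference.
        return ["loud_audio", "seat_vibration"]
    if receptiveness > 0.5:
        # The user has time and patience for a fuller explanation.
        return ["spoken_explanation"]
    # Low tolerance right now: keep it minimal and non-intrusive.
    return ["dashboard_icon"]
```

The same braking event could thus be announced with a full spoken explanation on a relaxed drive, and with nothing but a dashboard icon when the driver has signalled they want to be left alone.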

AI that knows its own limitations

One principle that researchers talk about when it comes to trustworthy AI is the 'operational design domain' (ODD). The technology is trained to recognise its own limitations, and to be able to honestly say: 'this is a new situation for me'.

– "If, for example, the system learns to drive the car on a summer road with nice weather, then you also want it to recognise a situation that’s different. Like if it starts to rain. The user should then be made aware that the system is not prepared for that situation, instead of it continuing as if nothing has changed."
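The driving example above can be sketched as an explicit domain check. The domain description here is invented and drastically simplified – a real ODD specification covers road type, traffic, geography and much more – but it shows the core idea: before each step, the system verifies that current conditions fall inside what it was trained for, and hands back control when they do not:

```python
# Sketch of an operational-design-domain (ODD) check. The domain
# description is invented and simplified; real ODD specifications
# cover far more conditions.

TRAINED_DOMAIN = {
    "weather": {"clear", "overcast"},   # never trained on rain or snow
    "max_speed_kmh": 110,
    "lighting": {"daylight"},
}

def within_odd(conditions: dict) -> bool:
    """True only if every observed condition lies inside the trained domain."""
    return (
        conditions["weather"] in TRAINED_DOMAIN["weather"]
        and conditions["speed_kmh"] <= TRAINED_DOMAIN["max_speed_kmh"]
        and conditions["lighting"] in TRAINED_DOMAIN["lighting"]
    )

def drive_step(conditions: dict) -> str:
    # Instead of carrying on as if nothing has changed, the system
    # announces that it is outside its domain and asks for a takeover.
    if not within_odd(conditions):
        return "alert: unfamiliar situation, please take over"
    return "autonomous driving continues"
```

When rain appears in the `weather` observation, `drive_step` stops returning the normal driving message and instead raises the takeover alert – which is precisely the honest "this is new to me" behaviour the interview calls for.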