
Strategic approach to transparency can increase trust in AI services

“Transparency is seen as a key to increasing trust in AI services. But it's not about turning the shower of information up to full blast. Instead, think strategically about purpose and context,” says Jacob Dexe, researcher at RISE. He adds that most users don't need to understand exactly how an AI system works.

The benefits of streamlining the Swedish authorities' processes with AI are valued at around SEK 140 billion per year. The greatest potential is in administration – financial management, HR, legal, etc. – and in planning and diagnosis in healthcare.

But so far, Swedes seem hesitant about direct AI decision-making. In the report Svenska folket och AI (The Swedish People and AI), only between 15 and 20 percent say they are comfortable with algorithms making decisions that affect things such as insurance premiums, parental benefits, pensions, loan approvals and so on.

Could more transparency be the key to increasing acceptance of this kind of data-driven decision-making?

“Transparency does not solve the issue on its own,” says Jacob Dexe, whose research has, among other things, examined what the insurance industry sees as the value of transparency and how authorities procure so-called decision-making robots.

“You don't get any trust by sharing an entire database. You need to explain how the information is created. Package, formulate, give transparency a direction to achieve a purpose.”

Set the right expectations

In practice, it comes down to knowing who your customers are, what they value and what they see in the product, and then setting the right expectations.

"There are specialized services – such as fitness apps, where a narrow customer segment has completely different expectations than with other apps – that work in that particular context. But maybe I wouldn't think it's so great if my heart rate data were to be disseminated in a different context, outside of my fitness app,” he says.

“If users have a perception of what the system is doing that does not match what is actually happening, they will be disappointed.”

Several professional groups can contribute to explaining context and technology, Jacob Dexe points out. Besides engineers and lawyers, there are likely others with more client-facing experience of the questions and expectations a user may have.


More advanced systems are becoming harder to explain

Jacob Dexe works in the RISE unit for societal transformation. The hope is to help organizations create more responsible transparency and explainability in AI services.

As AI systems become more advanced, it's also becoming harder to explain exactly what's going on under the hood. Machine learning using neural networks is sometimes likened to a black box where the process itself is hidden. One consolation is that in many contexts it is enough for a user to know that something is happening and what it can entail.

“We have learned contexts for how we explain things,” says Jacob Dexe, citing the way insurance companies explain the pricing of a home insurance policy as an example.

“You specify your age, where you live, the insured value and so on. Because you've asked a question about insurance, you understand that the process involves some kind of risk calculation based on these variables. A detailed explanation is then not needed to provide sufficient explanatory value.”
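As a purely hypothetical sketch – none of the variables, weights or rates below come from the article or any real insurer's pricing model – the kind of risk calculation a policyholder intuitively expects might look something like this:

```python
# Hypothetical illustration only: variables and weights are invented for this
# sketch and do not reflect any real insurer's pricing model.

def home_insurance_premium(age: int, area_risk: float, insured_value: float) -> float:
    """Toy risk-based premium: a base rate adjusted by the policyholder's age,
    a risk factor for where they live, and the value insured."""
    base_rate = 0.002                      # assumed premium rate per SEK of insured value
    age_factor = 1.2 if age < 25 else 1.0  # assumed surcharge for younger policyholders
    return insured_value * base_rate * age_factor * area_risk


if __name__ == "__main__":
    # Example: a 30-year-old in an average-risk area insuring SEK 1,000,000
    print(f"Annual premium: SEK {home_insurance_premium(30, 1.0, 1_000_000):,.0f}")
```

The point is not the formula itself but that the user, having asked an insurance question, already expects some calculation of this kind to take place.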

Regain trust

Jacob Dexe says transparency is a good tool for regaining trust in the event of negative AI decisions.

“If you get a positive decision from an AI, it probably matches what you expected. If the decision does not match what you expected, transparency about how the negative decision was reached can still lead to acceptance, and you can return to the same level of trust. However, we do not see transparency increasing trust in a positive decision to the same extent.”