Kristina Knaving
Focus Area Leader, The Connected Individual - Senior Researcher and Interaction Designer
In 2035, I sit at home in Göteborg looking for travel ideas with an AI tool built on internationally developed foundation models, delivered by European and Swedish providers. The AI works for me and protects my data. At work, similar systems handle routines and simulations. We work six-hour days, and I volunteer weekly at an elderly care home.
I was invited to reflect on Sweden’s path toward inclusive AI—a future where technology strengthens human judgment and reflects our shared values. In this piece, I imagine a 2035 shaped by diverse, transparent, and culturally aligned AI systems developed through Swedish and European leadership. I outline how education, governance, and robust infrastructure can turn that vision into reality, ensuring AI becomes a trustworthy tool that empowers people and supports a more equitable society.
There's a reason future framing matters beyond abstract planning exercises. When we imagine our future selves, we build empathy with the person we will be. This empathy makes us better decision-makers today. Instead of optimising for immediate convenience, we consider what our future selves will actually need and whether they will thank us for the choices we're making now. This helps us make decisions about our health, about sustainability, and about future technologies.
Right now, we're making choices that will shape what AI looks like in 2035. Will it be a handful of global systems we depend on entirely, with values and biases we cannot influence? Or will we have built something that reflects Swedish and European values - systems we can understand, modify, and trust?
I have argued consistently that we need many different AI models because diversity equals resilience. AI is not objective - it carries the values, biases, and knowledge gaps of its training data and developers. But just as biodiversity makes ecosystems more robust, having multiple AI systems from different countries and cultures makes our technological ecosystem more resilient.
By 2035, inclusive AI means Swedish researchers and organisations contribute to this diversity rather than simply consuming technology developed elsewhere. It means we've invested in foundation models adapted to Swedish languages and culture, trained on data that includes our perspectives and priorities. Many of these models are open source - not because openness is inherently good, but because transparency allows verification and democratic input into how systems are aligned, and because openness lets all countries and people benefit from them.
We cannot make AI entirely safe, just as we never made cars entirely safe. But we didn't give up on cars - we built robust systems around them. Car manufacturers aim to make vehicles as safe as possible, but we also test cars, require driving licenses, design roads for safety, mandate insurance, establish traffic rules, and create emergency response systems. The technology is just one part; societal robustness comes from the infrastructure we build around it.
The same principle applies to AI. In my vision of 2035, inclusive AI means we've invested not just in safer technology, but in education and lifelong learning so people can adapt as AI changes work and society. The people whose data is used to train AI are compensated. We have strengthened safety nets for those whose jobs are transformed or eliminated. We test AI systems to ensure they don't discriminate. We've built robust cyber and information security infrastructure to handle deepfakes and synthetic content. We've created verification systems and media literacy programs that help people navigate an information landscape increasingly filled with AI-generated material.
This is unglamorous work - harder to fund than exciting technical breakthroughs - but absolutely essential. It includes rigorous testing of AI systems throughout their lifecycle, from development through deployment and ongoing use. Just as we don't expect Volvo and Toyota to solve all problems connected to car use, we cannot expect technology companies to solve all problems created by AI. We as a society must figure out what we want to demand, and those demands must be specific and rooted in how technologies actually work.
Getting to 2035 requires taking practical steps today. Most Swedes are already using generative AI - often as "shadow tech" that employers and institutions don't officially acknowledge. Rather than pretending this isn't happening, we need to bring these users into informed practice.
AI literacy in 2035 means people have learned to work with these systems critically - not as truth-telling oracles, but as tools with specific strengths and significant limitations.
The most dangerous AI applications are those where systems make decisions about people's lives without meaningful human oversight. By 2035, inclusive AI means we've established clear boundaries: AI can support decisions in healthcare, education, and public services, but the final decisions - and accountability - remain with humans who understand context, can exercise judgment, and can be held responsible.
Research on AI must include the humanities and social sciences. We need to understand how AI systems affect society, information ecosystems, and democratic processes, and how to enact meaningful human oversight.
The future I'm describing won't arrive on its own; we have to work towards it. It requires sustained investment in AI research, balanced across technical development and societal impact studies. It needs cross-disciplinary governance willing to challenge uncritical technology adoption and to ask difficult questions about what we actually want AI to do.
Most importantly, it requires recognising that we're making choices right now - about what systems we use, what dependencies we accept, what values we encode, and what infrastructure we build around the technology. Our future selves in 2035 will live with these choices. Let's make them deliberately, with eyes open to both possibilities and risks, building toward an AI landscape we can actually trust and shape.