Johan Linåker
Researcher
By 2035, AI will form the backbone of — and be fully integrated across — public digital services and physical infrastructure. Open Source AI will be critical for nation states, industries, and citizens to ensure transparency, control, and sovereignty over models and data inputs, and to avoid ending up under the dominance of any third-party actor.
In April this year, I contributed to the State of AI by RISE newsletter, and this piece builds on the thoughts I shared then. Each issue features a researcher’s Futureframing — a brief reflection designed to spark fresh perspectives on AI. My contribution focused on Open Source AI and why it is essential for driving innovation and competition, as well as meeting the increasing demand for transparency and control in how AI technologies are developed and deployed.
AI and the development of AI systems have the potential to disrupt and transform society at large, both in Sweden and globally. However, much of this potential risks being lost because the technology is mainly being developed and locked in by dominant players outside Swedish and European borders. This limits access to and use of the technology, and by extension transparency, control, and downstream innovation.
Open Source AI provides a potential solution by enabling the Swedish government and industry to take control, develop skills and capabilities, and collaborate on key AI technologies — nationally and across borders — by being able to freely study, use, modify, and share implementations, much like Open Source Software and Open Data.
Open, collaborative development — including the collective sharing of data, software, and knowledge — has the potential to accelerate training and development of AI systems, increase transparency into how they were trained and built, reduce dependencies on specific technology providers, promote interoperability, and democratise access both in Sweden and in less-resourced governments and organisations.
At the same time, openness comes with several risks, including unethical or illegal use cases, particularly in the context of information and cybersecurity, propaganda, and disinformation. This prompts the need for guardrails governing how Open Source AI systems and their underpinning models are released and under what conditions. Any development and governance of Open Source AI must therefore balance the complexity of model generation — benefiting from the collective intelligence and force multiplier of a community — while minimising the potential harmful impacts that openness may entail.
The coming decade will be decisive for whether Sweden and Europe build the skills, infrastructure, and culture needed to shape AI on their own terms. If this opportunity is missed, there is a substantial risk of deepening dependence on foreign AI platforms and infrastructures, resulting in loss of control over critical systems and reduced industrial competitiveness. Investing in Open Source AI is therefore not only a technical or economic decision, but a strategic one tied to digital sovereignty and long-term societal resilience.
At RISE, work is underway to understand how Open Source AI can be engineered through open, collaborative development in a global and decentralised setting, building on lessons from Open Source Software and Open Data. This includes studying how communities organise, how models and datasets are shared, and what governance arrangements support responsible openness at scale.
The aim is to enable companies and public-sector organisations in Sweden and beyond to develop the skills, culture, and infrastructure needed to build and co-develop Open Source AI systems. Concretely, this means supporting an open innovation mindset in AI, advancing and disseminating practical know-how, and helping organisations reduce dependency on single technology providers while improving efficiency in their own development and production processes.
In a recent study, we explore how Open Source AI can be collaboratively developed across data collection and curation, software development, and model design, training, and evaluation. Specifically, we examine 14 different open Large Language Models (LLMs) across diverse geographical contexts, including ones developed by companies, public research institutes, and grassroots organisations.
We find that collaboration in open LLM projects extends far beyond the models themselves, encompassing datasets, benchmarks, open source frameworks, leaderboards, knowledge-sharing and discussion forums, and compute partnerships. We also observe that open LLM developers have a variety of social, economic, and technological motivations — from democratising AI access and promoting open science to building regional ecosystems and expanding language representation. Furthermore, the sampled open LLM projects exhibit five distinct organisational models, ranging from single-company projects to non-profit-sponsored grassroots initiatives, which differ in how they centralise control and engage their communities across the development lifecycle.
Based on the study, we provide a number of recommendations for policy, research, and practice on how investments, policy interventions, and initiatives can support the adoption and skills development needed for Sweden and Europe to regain control over their digital future.
The time to act is now.