The AI year 2025 is in full swing. It’s clear that the pace isn’t slowing down. Momentum is increasing: new models, concepts, and technologies are emerging almost daily. For those unfamiliar with the field, it can feel overwhelming to keep up, and even those who are well-versed may struggle to maintain an overview of the shifting AI landscape.
Below, you’ll find a brief overview of three current trends that are setting the direction for AI development. They don’t tell the whole story — but by understanding them, you’ll have a strong starting point for navigating what’s happening right now. Let’s dive in!
What happens when AI doesn’t just give answers — but also shows how it thinks? Following in the wake of DeepSeek R1, we’ve seen a wave of increasingly capable models — even very small ones — built on the principle of generating explicit reasoning to solve complex tasks.
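Dedicated reasoning models are trained to produce this kind of reasoning natively, but the core idea can be illustrated at the prompt level. The sketch below is a minimal example, assuming the `openai` Python client and a placeholder model name: the model is asked to write out its intermediate steps before committing to an answer, so the user can inspect how the conclusion was reached.

```python
# Minimal sketch: eliciting explicit reasoning before a final answer.
# Assumes the `openai` Python client (v1+); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Solve the problem below. First write your step-by-step reasoning "
    "under the heading 'Reasoning:', then give the result under 'Answer:'.\n\n"
    "Problem: A train travels 150 km in 1 h 15 min. What is its average speed?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[{"role": "user", "content": PROMPT}],
)

text = response.choices[0].message.content
reasoning, _, answer = text.partition("Answer:")
print("Model's reasoning:\n", reasoning.strip())
print("Final answer:\n", answer.strip())
```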
At the same time, OpenAI recently integrated image generation directly into GPT-4o, creating a truly multimodal model. This opens the door for systems that can understand and act on text, images, and sound combined — a necessary foundation for next-generation AI. It’s a shift that not only transforms the user experience but also challenges our views on understanding and trust in human-AI interaction.
A new AI role is emerging — shifting from passive assistant to active actor.
We’re now seeing a new generation of AI agents, with Manus as a notable example, where language models are combined with tools and memory functions to independently carry out long-term, multi-step tasks.
These tasks might include planning a trip, troubleshooting code, or coordinating purchases. In this way, AI steps into the role of a collaborator — not just a passive assistant or isolated problem-solver. The opportunities are huge, but they also raise important questions about roles, responsibilities, and how humans and machines interact.
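To make the pattern concrete, here is a minimal, hypothetical sketch of such an agent loop: a language model (stubbed out below as `call_llm`) repeatedly picks a tool, each result is appended to a running memory, and the loop stops when the model declares the task finished. The tools and names are invented for illustration and are not taken from Manus or any particular framework.

```python
# Minimal agent-loop sketch: LLM + tools + memory, iterating until done.
# `call_llm` is a stand-in for a real model call; the tools are toy functions.

def search_web(query: str) -> str:
    return f"(pretend search results for '{query}')"

def book_ticket(destination: str) -> str:
    return f"(pretend booking confirmation for {destination})"

TOOLS = {"search_web": search_web, "book_ticket": book_ticket}

def call_llm(memory: list[str]) -> dict:
    """Stand-in for a real LLM call that returns the next action to take."""
    if not any("search results" in m for m in memory):
        return {"tool": "search_web", "args": {"query": "trains to Stockholm"}}
    if not any("booking confirmation" in m for m in memory):
        return {"tool": "book_ticket", "args": {"destination": "Stockholm"}}
    return {"tool": "finish", "args": {"summary": "Trip planned and ticket booked."}}

def run_agent(task: str, max_steps: int = 10) -> str:
    memory = [f"Task: {task}"]
    for _ in range(max_steps):
        action = call_llm(memory)                          # model decides the next step
        if action["tool"] == "finish":
            return action["args"]["summary"]
        result = TOOLS[action["tool"]](**action["args"])   # execute the chosen tool
        memory.append(f"{action['tool']} -> {result}")     # remember the outcome
    return "Stopped: step limit reached."

print(run_agent("Plan a trip to Stockholm"))
```

Real agent frameworks add planning, error handling, and guardrails on top of this, but the basic structure (decide, act, remember, repeat) is the same.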
Connecting AI to its surrounding environment is the next logical step — but what does that actually mean in practice?
The next phase in AI development involves giving models contextual understanding. The Model Context Protocol (MCP), introduced by Anthropic and since adopted by OpenAI among others, aims to standardize how AI models access relevant context from their computing and sensor environments.
This is a crucial building block for AI systems that are not only smart on their own, but also understand and influence the world around them. It paves the way for systems that are both more situationally aware and more action-oriented.
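As a rough illustration of what that standardization means for developers, the sketch below uses the FastMCP helper from the official MCP Python SDK (the `mcp` package) to expose a single tool that any MCP-capable client or model can discover and call; the server name, tool, and sensor reading are invented for the example.

```python
# Sketch of an MCP server exposing one tool, assuming the FastMCP helper
# from the official MCP Python SDK. The sensor reading is a made-up stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("building-sensors")  # illustrative server name

@mcp.tool()
def read_temperature(room: str) -> float:
    """Return the current temperature (°C) for a given room."""
    # Stubbed value; a real server would query the actual sensor system.
    return 21.5

if __name__ == "__main__":
    # Runs over stdio so an MCP-capable client can connect and discover
    # the tool through the standardized protocol.
    mcp.run()
```

Because the protocol, rather than each individual integration, defines how tools and resources are described, the same server can provide context to different models and applications without custom glue code.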
We are seeing a steady stream of increasingly capable models, their combination into ever more powerful agents, and the gradual integration of those agents into the physical world. What happens next? Stay tuned!
In our new newsletter, State of AI by RISE, we share insights on cutting-edge research, emerging trends, and future outlooks in artificial intelligence. Get a clear overview of the latest developments at RISE and discover what our researchers envision for the future of AI.