Apostolos Pyrgelis
Researcher
This Futureframing is written from a forward-looking perspective, set in 2035. I was invited by RISE to contribute to this issue of the State of AI newsletter, sharing my reflections on how AI may reshape cybersecurity over the coming decade, exploring both emerging threats and new forms of resilience.
It is 2035. I am standing inside the Security Operations Center of a major hospital, discussing a cyberattack that nearly brought critical services to a halt. My host, the hospital’s Chief Information Security Officer, hands me an augmented reality visor. Instantly, a holographic dashboard materializes in front of us — a living graph of the hospital’s digital ecosystem. Medical IoT devices, cloud workloads, identity systems, and third-party vendors appear as interconnected nodes. Red paths illuminate the attacker’s trajectory.
The breach began in the supply chain. A compromised firmware update from a trusted vendor introduced a dormant backdoor. Weeks later, a self-evolving malware strain activated, spreading laterally across medical IoT devices. The malware adapted in real time, learning the network’s defensive patterns and modifying its behavior to avoid detection. Critical imaging systems and patient monitoring services were disrupted.
But the recovery was just as advanced. She shows me how a swarm of autonomous defensive agents was deployed within minutes. These agents collaboratively isolated infected segments, reverse-engineered the malware’s polymorphic routines, patched vulnerable devices, and restored services. Every action was logged immutably. Every decision was explainable. Human analysts supervised the process, approving high-impact containment measures. The crisis lasted hours, not weeks.
The scenario is fictitious — but the technologies described are already emerging today.
Artificial intelligence now sits at the center of cybersecurity — for attackers and defenders alike.
On the offensive side, AI has industrialized cybercrime. Generative systems craft hyper-personalized social engineering campaigns targeting high-value individuals, while AI-assisted reconnaissance maps vendor ecosystems to identify weak links in hardware and firmware supply chains. Autonomous malware adapts dynamically to evade detection, and offensive large language models automatically discover zero-day vulnerabilities, generate working exploits, and prioritize targets based on susceptibility and impact. Adversarial techniques manipulate defensive models, poisoning their training data or degrading detection accuracy over time. AI-driven botnets coordinate globally distributed, self-propagating attacks with minimal human oversight. The scale, speed, and adaptability of attacks have increased by orders of magnitude.
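To make the evasion idea concrete, here is a minimal sketch, assuming a toy linear detector with hand-picked weights and illustrative feature values (nothing here reflects any real product or dataset). It shows how a gradient-sign perturbation can push a flagged sample below a detection threshold:

```python
# A minimal sketch: a toy linear "malware detector" with hand-picked
# weights (all values are illustrative assumptions, not real telemetry).
import numpy as np

w = np.array([1.5, -0.7, 2.1, 0.9])   # detector weights
b = -1.0                               # detector bias

def detect(x):
    """Score in (0, 1); above 0.5 means 'malicious'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([1.2, 0.3, 1.0, 0.8])     # a sample the detector flags
print(f"original score: {detect(x):.3f}")   # ~0.968, flagged

# Fast-gradient-sign evasion: for this linear model, the gradient of the
# score with respect to x is proportional to w, so stepping each feature
# against sign(w) is the most score-reducing move per unit of perturbation.
eps = 0.8                               # attacker's per-feature budget
x_adv = x - eps * np.sign(w)
print(f"evasive score:  {detect(x_adv):.3f}")  # ~0.321, slips past the threshold
```

The same principle, scaled up to deep models and real telemetry, is what makes adversarial robustness a first-order requirement for defensive AI.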
Defenders, however, have not stood still. Generative models construct dynamic honeypots that evolve in response to attacker behavior, extracting intelligence on tactics and tooling. AI-powered red teams — internal security teams that simulate real-world attacks to strengthen defenses — continuously stress-test enterprise environments. Multimodal detection systems fuse telemetry across networks, endpoints, cloud workloads, identity layers, and hardware signals to identify low-and-slow attacks. Autonomous response agents triage alerts, contain intrusions, and remediate compromised assets while operating within carefully bounded authority frameworks. They produce explainable summaries for auditors and regulators. Privacy-preserving learning techniques enable organizations to share threat intelligence without exposing sensitive data, creating globally coordinated defense fabrics.
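One of these building blocks already exists in simple form. As a minimal sketch, assume a hypothetical consortium whose members each hold a single private yes/no observation: "have we seen this indicator of compromise?" Randomized response, a basic local-differential-privacy mechanism, lets the group estimate how widespread the indicator is without exposing any member's individual answer:

```python
# Minimal sketch of privacy-preserving threat-intelligence sharing via
# randomized response (consortium size and hit rate are hypothetical).
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    """Report the truth with probability p = e^eps / (1 + e^eps), else flip."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return truth if random.random() < p else not truth

def estimate_prevalence(reports: list[bool], epsilon: float) -> float:
    """Debias the noisy reports to estimate the true fraction of sightings."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(42)
n_orgs = 10_000                                                   # illustrative
true_sightings = [random.random() < 0.12 for _ in range(n_orgs)]  # 12% hit rate
reports = [randomized_response(t, epsilon=1.0) for t in true_sightings]

print(f"true prevalence:      {sum(true_sightings) / n_orgs:.3f}")
print(f"estimated prevalence: {estimate_prevalence(reports, epsilon=1.0):.3f}")
```

Production systems use richer techniques (federated learning, secure aggregation), but the trade-off is the same: each participant's contribution stays deniable while the aggregate remains useful.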
The battlefield has become algorithmic.
Despite automation, humans remain central. What has changed is not human relevance — but human leverage.
Attackers still rely on creativity, strategic intent, and contextual judgment. AI may discover new vulnerabilities, but humans decide which campaigns align with political, financial, or ideological goals.
Defenders, meanwhile, no longer drown in alerts. AI copilots handle correlation, prioritization, and first-response containment. Human analysts focus on architectural resilience, strategic risk modeling, and policy decisions. They evaluate trade-offs between security, privacy, and operational continuity. The arms race increasingly revolves around access to high-quality data, powerful computing infrastructure, and the ability to govern, adapt, and secure AI systems effectively.
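What "bounded authority" might look like in practice is worth spelling out. Below is a minimal sketch, with entirely hypothetical action names and risk tiers: the agent auto-executes only pre-approved low-impact steps, queues high-impact ones for a human analyst, rejects everything else by default, and logs every decision for audit.

```python
# Minimal sketch of a bounded-authority response agent
# (action names and risk tiers are hypothetical).
from dataclasses import dataclass, field

LOW_IMPACT = {"quarantine_file", "block_ip", "reset_session"}
HIGH_IMPACT = {"isolate_subnet", "disable_account", "shut_down_service"}

@dataclass
class ResponseAgent:
    audit_log: list = field(default_factory=list)
    approval_queue: list = field(default_factory=list)

    def propose(self, action: str, target: str, rationale: str) -> None:
        entry = {"action": action, "target": target, "rationale": rationale}
        if action in LOW_IMPACT:
            entry["status"] = "auto_executed"        # within delegated authority
        elif action in HIGH_IMPACT:
            entry["status"] = "pending_human_approval"
            self.approval_queue.append(entry)        # analyst must sign off
        else:
            entry["status"] = "rejected_out_of_scope"  # default-deny
        self.audit_log.append(entry)                 # every decision is logged

agent = ResponseAgent()
agent.propose("block_ip", "203.0.113.7", "C2 beaconing detected")
agent.propose("isolate_subnet", "imaging-vlan", "lateral movement suspected")
for entry in agent.audit_log:
    print(entry)
```

The default-deny branch is the important design choice: the agent's authority is an explicit allowlist, not an open-ended mandate.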
In this future, AI systems themselves become prime targets. If defensive models can be manipulated, poisoned, or reverse-engineered, they transform from shields into liabilities. The compromise of an AI security orchestration platform could cascade across entire ecosystems.
Deploying AI in cybersecurity therefore requires preserving core properties such as reliability under adversarial conditions, robustness against data poisoning and evasion, transparency and explainability, trust in model supply chains, and accountability and auditability. This demands advancing research in adversarial machine learning, formal verification of AI models, secure data governance, and AI systems that are safe and ethical by design, aligned with governance efforts such as the NIST AI Risk Management Framework and regulatory initiatives like the European Union AI Act.
This is exactly what RISE will contribute to in the coming years through its participation, together with Linköping, Lund, and Örebro universities, in the Multidisciplinary Research Center on Resilience and Security for Trustworthy AI Systems (RESIST), funded by the Swedish Foundation for Strategic Research.
If we fail to secure AI, we will amplify systemic risk at machine speed. If we succeed, we may finally shift cybersecurity from reactive firefighting to adaptive resilience.