Iraklis Symeonidis
Senior researcher
AI makes systems smarter, faster, and more automated. At the same time, it opens new attack surfaces that traditional IT defenses do not address. It demands resilience by design, lifecycle security, and collaboration across sectors. The core message is simple: secure AI as a target, and defend against AI as a tool of offense.
AI systems are data driven, adaptive, and often non-transparent. They fail and leak in ways that traditional software does not, owing to opaque decision logic, an enlarged attack surface, privacy leakage, and emergent vulnerabilities. Threat modeling must account for data origin, model behavior, and exposure through outputs, recognizing that AI threats are twofold: threats to AI, which target the models and algorithms themselves, and threats from AI, which turn AI against infrastructure and data. Security controls must cover the entire AI lifecycle, from data collection and labeling to deployment and monitoring.
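To make lifecycle coverage concrete, here is a minimal sketch of how stages and controls can be tracked as data and audited for gaps. The stage names, the example controls, and the uncovered_stages helper are illustrative assumptions for this article, not a normative checklist.

```python
# Illustrative only: a minimal map of AI lifecycle stages to example
# security controls. Names are assumptions for discussion purposes.
LIFECYCLE_CONTROLS = {
    "data collection": ["provenance tracking", "source allow-listing"],
    "labeling":        ["annotator access control", "label audits"],
    "training":        ["poisoning checks", "reproducible pipelines"],
    "evaluation":      ["robustness tests", "bias and leakage tests"],
    "deployment":      ["output restrictions", "query rate limits"],
    "monitoring":      ["drift detection", "anomaly alerting"],
}

def uncovered_stages(implemented: dict[str, list[str]]) -> list[str]:
    """Return lifecycle stages with no implemented control."""
    return [stage for stage in LIFECYCLE_CONTROLS
            if not implemented.get(stage)]

if __name__ == "__main__":
    # Example: an organization that only secures training and deployment.
    done = {"training": ["reproducible pipelines"],
            "deployment": ["query rate limits"]}
    print("Gaps:", uncovered_stages(done))
```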
Adversaries target AI models directly. Common vectors include adversarial inputs that force wrong decisions, data poisoning that embeds backdoors during training, model inversion and membership inference that reconstruct or reveal sensitive data, and leakage through parameters or confidence scores. Supply chain risks also matter. Insecure data pipelines, imported weights, or third-party preprocessing can introduce hidden behavior. Mitigations combine adversarial training, input validation, strong data governance, output restrictions, privacy mechanisms, and access control.
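As a hedged illustration of the first vector, the sketch below crafts an FGSM-style adversarial input against a toy logistic-regression classifier. The weights, input, and perturbation budget are invented for the example; real attacks target deployed models with far more parameters, but the mechanism is the same: step the input against the sign of the gradient until the decision flips.

```python
import numpy as np

# Toy logistic-regression "model". Weights, bias, and input are
# hypothetical values chosen so the example is easy to follow.
w = np.array([1.5, -2.0, 0.5])   # hypothetical model weights
b = 0.1                          # hypothetical bias

def predict_proba(x: np.ndarray) -> float:
    """P(class 1 | x) for the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.2, -0.4, 0.3])   # a benign input, classified as class 1
p_clean = predict_proba(x)

# The gradient of the class-1 logit w.r.t. the input is exactly `w`,
# so the FGSM perturbation steps against its sign.
epsilon = 0.6                     # attacker's perturbation budget
x_adv = x - epsilon * np.sign(w)  # push the score toward class 0
p_adv = predict_proba(x_adv)

print(f"clean:       p={p_clean:.3f} -> class {int(p_clean > 0.5)}")
print(f"adversarial: p={p_adv:.3f} -> class {int(p_adv > 0.5)}")
```

Running it shows the clean input classified as class 1 with p ≈ 0.79 and the perturbed input flipped to class 0 with p ≈ 0.26, which is why input validation and adversarial training appear among the mitigations above.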
Attackers use AI to automate, scale, and personalize offense. Language models generate convincing phishing at volume. Deepfakes enable impersonation and fraud across voice, image, and video. Adaptive malware mutates to evade signature-based tools. AI accelerates reconnaissance, credential stuffing, and vulnerability discovery. In strategic contexts, autonomous systems and large-scale surveillance introduce ethical and regulatory risks when oversight is weak.
Security cannot be an afterthought. Start with resilience in design. Document the model with transparent model cards and risk registers. Instrument data ingestion and training runs. Test robustness against evasion and poisoning before release. Limit output exposure and log queries in production. Monitor drift, bias, and leakage. Treat updates as controlled changes with rollback and attestations. Integrate security reviews into engineering practices such as MLOps.
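One concrete instance of "monitor drift" is sketched below: a Population Stability Index (PSI) check comparing live traffic against a training-time reference for a single feature. The synthetic data, bin count, and alert threshold are assumptions; the 0.2 threshold is a common rule of thumb, not a standard.

```python
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time reference sample and live traffic."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    live_frac = np.histogram(live, edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)       # avoid log(0)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)            # training distribution
live = rng.normal(0.4, 1.2, 1_000)                 # shifted production traffic

score = psi(reference, live)
print(f"PSI={score:.3f}", "-> investigate drift" if score > 0.2 else "-> stable")
```

A production system would track many features and model outputs, and route alerts into the same incident workflow as other controlled changes.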
Use the NIST AI RMF to structure governance across its four functions: Govern, Map, Measure, and Manage. Apply ISO/IEC 23894 to address AI-specific risks alongside ISO/IEC 27001. Use frameworks such as STRIDE, the OWASP Machine Learning Security Top 10, and MITRE ATLAS to classify and test AI attack paths. In regulated domains, build structured assurance cases that connect evidence to security claims.
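To show how these frameworks can work together, here is a hedged sketch of a machine-readable risk register that cross-references each AI threat against a STRIDE category, an OWASP ML Top 10 item, and a MITRE ATLAS tactic. The field names and the specific mappings are illustrative choices for this sketch, not authoritative classifications.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    threat: str          # what can go wrong
    stride: str          # STRIDE category
    owasp_ml: str        # OWASP Machine Learning Security Top 10 item
    atlas_tactic: str    # MITRE ATLAS tactic (descriptive name)
    evidence: str        # test or document backing the assessment
    mitigation: str      # planned or implemented control

# Two example entries; the mappings are illustrative, not normative.
REGISTER = [
    AIRiskEntry(
        threat="Training-data poisoning via third-party pipeline",
        stride="Tampering",
        owasp_ml="Data Poisoning Attack",
        atlas_tactic="Resource Development / Poisoning",
        evidence="poisoning robustness test report",
        mitigation="data provenance checks, holdout canary set",
    ),
    AIRiskEntry(
        threat="Membership inference through confidence scores",
        stride="Information Disclosure",
        owasp_ml="Membership Inference Attack",
        atlas_tactic="Exfiltration",
        evidence="shadow-model inference test",
        mitigation="round or threshold output confidences",
    ),
]

for entry in REGISTER:
    print(f"{entry.stride:22} | {entry.threat} -> {entry.mitigation}")
```

Structured entries like these are also the raw material for the assurance cases mentioned above: each claim points at evidence, and each threat points at a mitigation.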
Turn principles into practice with concrete actions.
Securing AI is a multi-actor effort. Nation states, cybercriminals, insiders, and unintentional triggers shape the landscape. Align technical safeguards with legal and ethical standards. Coordinate with national agencies and competence centers. Share signals quickly across sectors.
Our survey covers 212 sources across academia and grey literature. Publications surged in 2024, and the growth continues into 2025. Themes include adversarial attacks, generative AI, governance, and critical infrastructure. Technical mechanisms dominate, but attention to governance is rising. The evidence supports lifecycle security and resilience by design.
RISE provides a neutral platform where companies, authorities, and researchers can meet, test solutions, and build shared standards. In the Cyber Range, organizations can rehearse real attacks on their own systems without damage. Our research blends technical controls with human factors, training, and routines. This work is part of the CitCom.ai Testing and Experimentation Facility (TEF), co-funded by the European Union, and the STRIDE project (Secure and Resilient Infrastructure for Digital Enterprises), funded by Vinnova and carried out within the Center for Cybersecurity at RISE.
Curious how AI security applies to your systems? Contact RISE to explore insights, challenges, and opportunities across your AI lifecycle.