AI is reshaping cybersecurity and the threat landscape
AI makes systems smarter, faster, and more automated. At the same time, it opens attack surfaces that traditional IT defenses do not address, demanding resilience by design, security across the lifecycle, and collaboration across sectors. The core message is simple: secure AI as a target, and defend against AI as a tool for offense.
Why AI needs a different security mindset
AI systems are data driven, adaptive, and often non-transparent. They fail and leak in ways that traditional software does not, owing to opaque decision logic, an expanded attack surface, privacy leakage, and emergent vulnerabilities. Threat modeling must account for data origin, model behavior, and exposure through outputs, recognizing that AI threats are twofold: threats to AI, which target the models and algorithms themselves, and threats from AI, where AI is used against infrastructure and data. Security controls must cover the entire AI lifecycle, from data collection and labeling to deployment and monitoring.
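To make the twofold view concrete, the minimal sketch below shows one way such a threat register could be recorded; the field names and example entries are illustrative assumptions, not drawn from the survey.

```python
from dataclasses import dataclass

@dataclass
class AIThreatEntry:
    """One row in a hypothetical AI threat register (illustrative fields only)."""
    asset: str            # e.g. "training data", "model weights", "prediction API"
    lifecycle_stage: str  # e.g. "collection", "training", "deployment", "monitoring"
    direction: str        # "to AI" (attacks on the model) or "from AI" (AI-enabled attacks)
    vector: str           # how the threat is realised
    mitigation: str       # planned control

register = [
    AIThreatEntry("training data", "collection", "to AI",
                  "data poisoning via third-party labels", "provenance checks, outlier screening"),
    AIThreatEntry("prediction API", "deployment", "to AI",
                  "model inversion through repeated queries", "rate limiting, output restriction"),
    AIThreatEntry("employee inboxes", "operations", "from AI",
                  "LLM-generated spear phishing", "awareness training, mail filtering"),
]

for entry in register:
    print(f"[{entry.direction}] {entry.asset} ({entry.lifecycle_stage}): "
          f"{entry.vector} -> {entry.mitigation}")
```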
Threats to AI
Adversaries target AI models directly. Common vectors include adversarial inputs that force wrong decisions, data poisoning that embeds backdoors during training, model inversion and membership inference that reconstruct or reveal sensitive data, and leakage through parameters or confidence scores. Supply chain risks also matter. Insecure data pipelines, imported weights, or third-party preprocessing can introduce hidden behavior. Mitigations combine adversarial training, input validation, strong data governance, output restrictions, privacy mechanisms, and access control.
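As an illustration of the adversarial-input vector, the sketch below applies a fast-gradient-sign-style perturbation to a toy logistic regression model. The weights, input, and step size are synthetic assumptions; real attacks target far larger models, but the mechanism is the same: a small, targeted change to the input shifts the prediction.

```python
import numpy as np

# Toy logistic regression "model": p(y=1 | x) = sigmoid(w.x + b).
# Weights, bias, and input are illustrative assumptions, not a real deployed model.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A benign input the model scores as class 1.
x = 0.5 * w / np.linalg.norm(w)
y_true = 1.0

# For cross-entropy loss, the gradient with respect to the input is (p - y) * w.
p = predict(x)
grad_x = (p - y_true) * w

# FGSM-style perturbation: a small step in the direction that increases the loss.
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print(f"score on benign input:    {predict(x):.3f}")
print(f"score on perturbed input: {predict(x_adv):.3f}")
# Each feature changes by at most epsilon, yet the score typically drops across
# the decision boundary, which is the essence of an adversarial input.
```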
Threats from AI
Attackers use AI to automate, scale, and personalize offense. Language models generate convincing phishing at volume. Deepfakes enable impersonation and fraud across voice, image, and video. Adaptive malware mutates to evade signature-based tools. AI accelerates reconnaissance, credential stuffing, and vulnerability discovery. In strategic contexts, autonomous systems and large-scale surveillance introduce ethical and regulatory risks when oversight is weak.
Lifecycle security by design
Security cannot be an afterthought. Start with resilience in the design phase. Document the model with transparent model cards and risk registers. Instrument data ingestion and training runs. Test robustness against evasion and poisoning before release. Limit output exposure and log queries in production. Monitor drift, bias, and leakage. Treat updates as controlled changes with rollback and attestations. Integrate security reviews into existing delivery pipelines such as MLOps.
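As one concrete example of the monitoring step, the sketch below computes a population stability index (PSI) between a training-time feature distribution and a production one. The data is synthetic and the 0.2 threshold is a common rule of thumb, not a prescribed value.

```python
import numpy as np

def population_stability_index(reference, production, bins=10):
    """PSI for one feature: how far the production distribution has drifted from training."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    # Keep production values inside the reference range so every sample lands in a bin.
    production = np.clip(production, edges[0], edges[-1])
    ref_prop = np.clip(np.histogram(reference, bins=edges)[0] / len(reference), 1e-6, None)
    prod_prop = np.clip(np.histogram(production, bins=edges)[0] / len(production), 1e-6, None)
    return float(np.sum((prod_prop - ref_prop) * np.log(prod_prop / ref_prop)))

rng = np.random.default_rng(42)
feature_at_training = rng.normal(loc=0.0, scale=1.0, size=5_000)    # synthetic reference sample
feature_in_production = rng.normal(loc=0.6, scale=1.2, size=5_000)  # synthetic, drifted sample

psi = population_stability_index(feature_at_training, feature_in_production)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # common rule-of-thumb threshold for significant drift
    print("Significant drift: flag for review and controlled retraining with rollback.")
```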
Frameworks and taxonomies that help
Use the NIST AI RMF to structure governance across its four functions: Govern, Map, Measure, and Manage. Apply ISO/IEC 23894 to address AI-specific risks alongside ISO/IEC 27001. Use STRIDE, the OWASP Machine Learning Security Top 10, and MITRE ATLAS to classify and test AI attack paths. In regulated domains, build structured assurance cases that connect evidence to security claims.
Practical steps
Turn principles into practice with concrete actions.
Threat model AI early. Map assets, data sources, dependencies, and exposure. Use STRIDE and MITRE ATLAS for AI-specific attacks.
Secure data flows end to end. Validate, filter, and track provenance before training. Control third-party datasets and weights, for example with hash pinning as sketched after this list.
Test robustness systematically. Run adversarial tests for evasion and poisoning. Monitor leakage indicators and restrict outputs when needed.
Document and explain. Maintain model cards, error analysis, and post-deployment explainability. Align documentation with governance needs.
Instrument operations. Log queries, detect abnormal usage, and rate limit access. Monitor drift and trigger retraining safely.
Assure and attest. Build assurance cases and use runtime attestations for models and dependencies. Integrate security into CI/CD.
Train for reality. Use cyber range exercises and red teaming to prepare teams and shorten response times.
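For the data-flow and attestation steps above, even simple hash pinning is a useful start. The sketch below, with hypothetical file paths and a hypothetical manifest, verifies third-party datasets and imported weights against recorded SHA-256 digests before they enter a training or deployment pipeline.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest: artifact path -> SHA-256 digest recorded when the artifact was approved.
APPROVED_ARTIFACTS = {
    "data/third_party_train.csv": "9f2c1d...replace-with-recorded-digest",
    "models/imported_weights.bin": "4ab7e0...replace-with-recorded-digest",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large weight files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest: dict[str, str]) -> bool:
    """Return True only if every artifact exists and matches its approved digest."""
    ok = True
    for rel_path, expected in manifest.items():
        path = Path(rel_path)
        if not path.exists():
            print(f"MISSING  {rel_path}")
            ok = False
        elif sha256_of(path) != expected:
            print(f"TAMPERED {rel_path}")
            ok = False
        else:
            print(f"OK       {rel_path}")
    return ok

if __name__ == "__main__":
    if not verify_artifacts(APPROVED_ARTIFACTS):
        raise SystemExit("Provenance check failed: do not train or deploy with these artifacts.")
```

The same pattern extends naturally to runtime attestation: record digests at approval time, re-verify them in CI/CD and at load time, and block the pipeline on any mismatch.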
Stakeholders and governance
Securing AI is a multi-actor effort. Nation states, cybercriminals, insiders, and unintentional triggers shape the landscape. Align technical safeguards with legal and ethical standards. Coordinate with national agencies and competence centers. Share signals quickly across sectors.
What the literature says
Our survey covers 212 sources across academia and grey literature. Publications surged in 2024 and continue in 2025. Themes include adversarial attacks, generative AI, governance, and critical infrastructure. Technical mechanisms dominate, but governance attention is rising. The evidence supports lifecycle security and resilience by design.
How RISE can help
RISE provides a neutral platform where companies, authorities, and researchers can meet, test solutions, and build shared standards. In the Cyber Range, organizations can rehearse real attacks on their own systems without damage. Our research blends technical controls with human factors, training, and routines. This work is part of the CitCom.ai Testing and Experimentation Facility (TEF), co-funded by the European Union, and the STRIDE project (Secure and Resilient Infrastructure for Digital Enterprises), funded by Vinnova, carried out within the Center for Cybersecurity at RISE.
Call to action
Curious how AI security applies to your systems? Contact RISE to explore insights, challenges, and opportunities across your AI lifecycle.