Emerging dual-use of AI in cybersecurity

AI is transforming cybersecurity by improving real-time threat detection and enabling faster, more efficient responses. AI models can analyse large datasets and data streams, identify anomalies, and predict attacks, making them a critical tool for security operations.
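
As a minimal illustration of this kind of anomaly detection, the Python sketch below trains an unsupervised model on simulated network-flow features and flags outliers. The feature set and values are hypothetical, and scikit-learn's IsolationForest stands in for whatever model a production security platform would actually use.

```python
# Minimal sketch: unsupervised anomaly detection on simulated network-flow data.
# The features (bytes transferred, duration, destination-port entropy) are
# illustrative assumptions, not a real telemetry schema.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# "Normal" traffic clustered around typical values
normal_flows = rng.normal(loc=[5_000, 2.0, 1.5],
                          scale=[1_500, 0.5, 0.3],
                          size=(1_000, 3))

# A few exfiltration-like flows: very large transfers, long durations, odd ports
suspicious_flows = np.array([
    [250_000, 40.0, 4.8],
    [180_000, 35.0, 5.1],
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# predict() returns 1 for inliers and -1 for outliers
labels = detector.predict(np.vstack([normal_flows[:5], suspicious_flows]))
print(labels)  # the last two flows should be flagged with -1
```

In practice such a detector would be retrained continuously and combined with rule-based detection, since an unsupervised model on its own still produces false positives that analysts have to triage.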

However, cybercriminals are also exploiting AI to improve their attacks. Techniques such as AI-driven phishing, deepfakes, and automated vulnerability scanning are making cyberattacks more convincing and scalable. Two developments raise particular concern: the use of generative AI (GenAI) to produce malware with minimal input, and adversarial machine learning, in which attackers manipulate data to deceive AI models – such as those used for cyber defence – causing them to overlook threats.
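
To make the adversarial-machine-learning risk concrete, the sketch below shows an evasion-style attack on a toy detector: a linear classifier trained on two invented features (suspicious API calls and packed-section ratio) is fooled by shifting a malicious sample along the model's own weight vector until it crosses the decision boundary. Real attacks target far more complex models, but the principle of targeted input manipulation that exploits knowledge of the model is the same.

```python
# Minimal sketch of an evasion attack against a toy "malware detector".
# The features, data, and linear model are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [number of suspicious API calls, packed-section ratio]
benign = rng.normal(loc=[2.0, 0.1], scale=[1.0, 0.05], size=(200, 2))
malicious = rng.normal(loc=[8.0, 0.8], scale=[1.0, 0.05], size=(200, 2))

X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

detector = LogisticRegression(max_iter=1000).fit(X, y)

sample = np.array([[8.0, 0.8]])        # clearly malicious
print(detector.predict(sample))        # [1] -> detected

# Evasion: step the sample against the detector's weight vector until the
# prediction flips, while changing the underlying features only gradually.
direction = detector.coef_[0] / np.linalg.norm(detector.coef_[0])
adversarial = sample.copy()
while detector.predict(adversarial)[0] == 1:
    adversarial -= 0.1 * direction

print(adversarial)                     # the perturbed feature values
print(detector.predict(adversarial))   # [0] -> now misclassified as benign
```

Defences such as adversarial training, input validation, and monitoring of the data that feeds defensive models are part of the adversarial resilience referred to under Research below.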

Impact

Education

  • AI tools offer the potential to secure online learning environments through anomaly detection and behavioural analytics. However, the misuse of AI can undermine educational integrity through automated cheating, deepfakes, or phishing. Securing AI in education requires balancing innovation with ethical and regulatory safeguards.

Research

  • Research institutions benefit from AI for advanced threat modelling, data protection, and network monitoring. However, open access policies and collaborative research environments also increase exposure to AI-driven threats. Robust model governance and adversarial resilience are key to maintaining research integrity.

Operations

  • As institutions digitalise, integrating AI into cybersecurity infrastructure enhances real-time response and reduces reliance on manual oversight. Yet AI systems themselves are becoming high-value targets. Institutions must build secure AI pipelines and invest in threat-informed AI deployment strategies to ensure long-term operational security.
More info about Cybersecurity?
Visit surf.nl