The three-body problem
BLUF (bottom line up front): AI in cybersecurity is a double-edged sword.
🔴 AI is changing cyberwarfare — used in attacks (phishing, deepfakes) and defenses (threat hunting, anomaly detection).
🟢 We need to secure AI itself (data privacy, model safety) and use AI for better security (threat intelligence, response automation).
🔵 Staying informed and practicing good cyber hygiene are crucial against AI-powered attacks.
In general, the use of artificial intelligence (AI) in both cyber-attacks and cyber-defenses is becoming increasingly common. In particular, Generative AI and Large Language Model (LLM) based tools are opening doors and enhancing capabilities for threat actors and defenders alike. Thus, we need a better understanding of those capabilities, their limitations, potential uses, and blind spots.
Starting down the road of better understanding, I see three related yet distinct angles, hence the “three-body problem” reference: 1) protecting from the use of AI; 2) protecting with the use of AI; and 3) protecting the AI itself.
Protecting the AI
This is clearly at the very core, and we need to take both AI Safety and AI Security very seriously. To briefly touch upon the differences between AI Safety and AI Security — AI Safety focuses on preventing unintended harm or negative consequences, while AI Security targets the protection of AI systems from malicious attacks, data breaches, and unauthorized access.
NSA and NIST are leading the way with “Deploying AI Systems Securely” and “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile”, respectively.
The potential threats are both old (e.g., DoS/DDoS, broken/weak authentication, leaky APIs) and new (e.g., direct and indirect prompt injections, training data poisoning), and so are the potential remedies. With good governance as a cornerstone, the environment in which the AI is deployed should have a robust architecture (based on tested and repeatable design patterns), hardened configuration of its various components, and granular monitoring and observability capabilities. Adopting a Zero Trust mindset (never trust, always verify), along with strong detection and response capabilities that enable quick identification and containment of compromises, is a necessity.
Within the environment, the entire DataOps, MLOps, and DevOps pipelines need to be protected via regular/continuous vulnerability scanning, hardening of integration points, and enforcement of policies and access controls.
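As a minimal illustration of one such pipeline control, consider screening serialized model artifacts before anything loads them. The sketch below assumes pickle-format artifacts and simply flags opcodes that can execute arbitrary code on load; dedicated model-scanning tools cover far more formats and attack patterns.

```python
# Minimal sketch: flag potentially unsafe pickle-based model artifacts
# before they enter an MLOps pipeline. This only inspects pickle opcodes;
# real scanners go much deeper.
import pickletools

# Opcodes that can trigger arbitrary code execution when unpickled.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return a list of suspicious opcode findings for a pickle file."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

if __name__ == "__main__":
    import sys
    for finding in scan_pickle(sys.argv[1]):
        print("SUSPICIOUS:", finding)
```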
Firewalls, proxies, and network security gateways along with Application Gateways, Web Application Firewalls (WAF), and API Management (APIM) solutions can provide a solid foundation. Technologies such as AI-specific security scanners and prompt shields could add important dimensions (a toy sketch of the prompt-shield idea follows the links below), while continuous monitoring and regular auditing would create a feedback loop.
- Cloudflare Firewall for AI Will Help Secure AI Applications, At Scale and For Free
- Robust Intelligence — Secure your models in real time with AI Firewall
- Discover and Protect Generative AI APIs
- Salt Security Delivers Another Industry Breakthrough with First AI-Infused API Security Platform to Address Proliferation of GenAI Application Development
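The prompt-shield products linked above are proprietary, but the underlying idea can be sketched: screen user input for known injection patterns before it ever reaches the model. The patterns below are hypothetical examples for illustration; production shields rely on trained classifiers and much richer signals than keyword matching.

```python
import re

# Hypothetical, illustrative patterns only; not a substitute for a real
# prompt-shield product.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system prompt|instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Crude first-pass filter for direct prompt-injection attempts."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

print(looks_like_injection("Please summarize this report."))         # False
print(looks_like_injection("Ignore previous instructions and ..."))  # True
```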
Protecting with AI
The ever-evolving landscape of cyber threats demands ever-evolving defenses, and AI is rapidly transforming how we secure our systems. AI’s strength lies in its ability to analyze massive amounts of data at lightning speed, which makes it ideal for applications in cyber defense, particularly threat intelligence and threat hunting.
For more of my thoughts on the subject, see “The New Security Trifecta: People, Process, and AI?”
Traditionally, threat intelligence relied on manually gathering information about potential threats and attackers. AI can automate this process, scouring vast datasets to identify patterns and indicators of compromise (IOCs) associated with known threats. This allows security teams to stay ahead of attackers by predicting their tactics and preparing defenses.
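The AI value here lies in generating and enriching the indicator feeds at scale; the matching step itself is straightforward, as this minimal sketch shows. The file names and the "dest" log column are assumptions for illustration.

```python
import csv

def load_iocs(path: str) -> set[str]:
    """Load indicators (IPs, domains, hashes) from a one-per-line feed."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def hunt(log_path: str, iocs: set[str]) -> list[dict]:
    """Flag log rows whose destination matches a known indicator."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects a 'dest' column
            if row.get("dest", "").lower() in iocs:
                hits.append(row)
    return hits

# Hypothetical file names for illustration.
for match in hunt("connections.csv", load_iocs("ioc_feed.txt")):
    print("IOC hit:", match)
```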
AI also empowers proactive threat hunting. By analyzing network traffic and user behavior for anomalies, AI can identify potential threats that might slip past traditional signature-based security solutions. This is where Extended Detection and Response (XDR) comes in. XDR integrates data from various security tools, allowing AI to analyze the bigger picture and detect subtle signs of an attack in progress.
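As a simplified illustration of the anomaly-detection idea, here is a sketch using scikit-learn’s IsolationForest on a handful of invented login-event features; real XDR platforms correlate far richer telemetry across many tools.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per login event: [hour_of_day, megabytes_transferred,
# failed_attempts]. Values are invented for illustration.
events = np.array([
    [9, 1.2, 0], [10, 0.8, 0], [11, 1.5, 1], [14, 2.0, 0],
    [9, 1.1, 0], [15, 1.7, 0], [3, 250.0, 7],  # last row is the outlier
])

model = IsolationForest(contamination=0.1, random_state=42)
labels = model.fit_predict(events)  # -1 marks anomalies, 1 marks normal

for event, label in zip(events, labels):
    if label == -1:
        print("anomalous event:", event)
```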
By combining threat intelligence, threat hunting, and XDR with AI, organizations can significantly improve their cyber defenses. This powerful combination can not only identify threats but also automate initial response actions, such as isolating compromised systems, saving crucial time and minimizing potential damage.
Protecting from AI
While AI offers significant business opportunities and security advantages, precautions must also be taken to guard against the potential for data overexposure and unintended surfacing of sensitive information. Even in anonymized or encrypted datasets, hidden pockets of sensitive data can exist. AI-powered solutions can help scan, classify, and label training data, preventing these hidden elements from being incorporated into the final AI model. This reduces the risk of privacy breaches and fosters trustworthy AI development.
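A minimal sketch of the scan-and-classify step follows, using illustrative regular-expression patterns; real data-classification tooling combines patterns like these with ML-based entity recognition.

```python
import re

# Illustrative patterns only; production classifiers use far richer
# detection than regular expressions.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_record(text: str) -> set[str]:
    """Return the PII categories detected in a training record."""
    return {name for name, pattern in PII_PATTERNS.items() if pattern.search(text)}

record = "Contact jane.doe@example.com, SSN 123-45-6789"
flags = classify_record(record)
if flags:
    print("Exclude from training set, found:", flags)
```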
However, strong data governance is just one piece of the puzzle. Before deploying AI, organizations should conduct thorough readiness exercises to identify potential risks and develop mitigation strategies. This includes simulating data breaches and outlining clear response protocols. Additionally, ongoing cleanup exercises should be implemented to identify and remove sensitive data that may have slipped through initial filtering. By combining proactive data governance with thorough readiness and cleanup practices, we can significantly reduce the risk of unintended consequences from AI.
But what is much more troubling than overexposure of data (a real issue, but one that organizations typically have control over and can resolve) is that threat actors are increasingly leveraging AI to create sophisticated scams and attacks. For instance, AI-authored emails can bypass traditional spam filters, AI-powered robocalls that mimic human voices can evade detection, and deepfake videos can impersonate trusted figures to trick people into revealing personal information or clicking on malicious links. AI can also be used to automate cyberattacks, making them more efficient and harder to detect. Furthermore, AI could potentially be used to target critical infrastructure, raising concerns about cyberattacks that disrupt physical systems. To protect ourselves from these evolving threats, it’s crucial to stay informed about the latest tactics, be cautious of unsolicited calls and messages, and implement strong cybersecurity practices, including defensive AI capabilities.
- Study shows attackers can use ChatGPT to significantly enhance phishing and BEC scams
- ‘Cyber-physical attacks’ fueled by AI are a growing threat, experts say
- LastPass: Hackers targeted employee in failed deepfake CEO call
- Deep-Fake Audio and Video Links Make Robocalls and Scam Texts Harder to Spot
- AI fakes raise election risks as lawmakers and tech companies scramble to catch up
Additional References
📝 Securing Generative AI — generative AI exploded into consumer awareness with the release of Stable Diffusion and ChatGPT, driving enterprise interest, integration, and adoption. This report details the departments most likely to adopt generative AI, their primary use cases, threats, and what security and risk teams will need to defend against as this emerging technology goes mainstream.
🗺️ The AI Attack Surface Map — this resource is a first thrust at a framework for thinking about how to attack (or conversely protect) AI systems.
📄 How to craft a generative AI security policy that works — the advent of generative AI threatens to poke additional holes in your cybersecurity strategy. Compiling a GenAI-based security policy to guide your responses can help.
📄 A CISO’s guide to AI: Embracing innovation while mitigating risk — Fear cannot keep us from taking appropriate action. Blocking AI outright hinders innovation and puts companies at a competitive disadvantage in terms of both protection from adversaries and competition in business. The key lies in proactive, informed leadership.
📄 Disrupting malicious uses of AI by state-affiliated threat actors — based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors.
➡️ Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications — in the rapidly evolving landscape of generative AI, business leaders are trying to strike the right balance between innovation and risk management.
➡️ New capabilities to help you secure your AI transformation — AI is transforming our world. At the same time, we are also facing an unprecedented threat landscape with the speed, scale, and sophistication of attacks increasing rapidly. To meet these challenges, we must ensure that AI is built, deployed, and used responsibly with safety and security at its core.
➡️ Staying ahead of threat actors in the age of AI — Over the last year, the speed, scale, and sophistication of attacks has increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries.
➡️ Securing the Next Frontier of AI Innovation — Just as the industry worked to secure servers, networks, applications and cloud in the past: AI is the next big platform we need to secure.
➡️ Palo Alto Networks Launches New Security Solutions Infused with Precision AI to Defend Against Advanced Threats and Safeguard AI Adoption — a host of new security solutions to help enterprises thwart AI-generated attacks and effectively secure AI by design. Leveraging Precision AI™, a proprietary innovation that combines machine learning (ML) and deep learning (DL) with the accessibility of generative AI (GenAI) in real time, the company is delivering AI-powered security that can outpace adversaries and more proactively protect networks and infrastructure.
Originally written as a LinkedIn article (https://www.linkedin.com/pulse/three-body-problem-andrew-kagan-kampe/)