Why AI Demands a New Security Playbook

Rupesh Chokshi

Written by

Rupesh Chokshi

March 21, 2025

Rupesh Chokshi

Written by

Rupesh Chokshi

Rupesh Chokshi is Senior Vice President and General Manager of Akamai's Application Security Portfolio.

Artificial intelligence has become so ingrained in our daily lives that we hardly notice it anymore — until something goes wrong.
Artificial intelligence has become so ingrained in our daily lives that we hardly notice it anymore — until something goes wrong.

Artificial intelligence (AI) has become so ingrained in our daily lives that we hardly notice it anymore — until something goes wrong.

Imagine you’ve just returned home after a busy workday. Your personal digital assistant has already started playing your favorite relaxing music. It has also adjusted the room temperature to make you more comfortable. You notice that the milk in your refrigerator is almost gone, but your personal digital assistant knows how many cups you go through in a week, so it’s placed a grocery order that will be delivered this evening. It even reminds you that your wedding anniversary is next week and has made a dinner reservation through Resy for your favorite Italian restaurant.

This is not science fiction; this scenario is easily achievable today. And it would all feel so effortless that you wouldn’t give it a second thought — until the day the assistant makes a mistake. Maybe it shares your schedule with a stranger or reveals data that should remain private. Its extensive knowledge of you presents a critical dilemma: How do you safeguard all that information? 

Viewed through an enterprise lens, the dilemma becomes even more critical.

AI hijacking is on the rise 

Now, imagine you’ve learned that the AI chatbot on your company’s website is leaking customer data in real time. You and your team scramble to get all hands on deck for damage control, causing stress and disruption to your business. This is also not science fiction; threat actors are hijacking AI — or using AI tools to sidestep safeguards — with increasing frequency.

Concern among enterprise leaders is also increasing. A Gartner survey from November 2024 found that four of five senior risk and assurance executives and managers said AI-enhanced malicious attacks were their top concern. In second place? AI-assisted misinformation.

AI is revolutionizing business, but it is also reshaping security risks in ways corporate security officers and business leaders have never faced before. How prepared are you to defend against these new risks?

Existing safeguards won’t cut it

Traditional cybersecurity defenses are inadequate to meet the security challenges of an AI-driven landscape. They often favor static, rules-based detection, perimeter-focused access controls, and reactive responses to known threats. These methods are effective for deterministic applications, which respond predictably to the same inputs every time. 

In contrast, AI applications are nondeterministic, meaning that their responses can vary even when given the same input, making traditional security approaches less effective. 

Today’s AI-driven threats don’t play by the old rules. They adapt rapidly to bypass defenses, evade detection by imitating legitimate behavior, and have the ability to mount automated, large-scale attacks.

Traditional cyber defenses

AI-driven threats

Rule-based detection

Static signatures, predefined rules

Adapts rapidly

Learns and evolves to bypass static defenses

Perimeter-focused

Firewalls, identity and access management, strict access controls

Imitates legitimate users

Evades detection by blending in with normal activity

Reactive response

Detects and mitigates known threats

Automates attacks at scale 

Enables large-scale, automated cyberattacks

 

Traditional cyber defenses vs. AI-driven threats

3 key AI attack strategies

So, what exactly are we up against? Although AI attack strategies are constantly evolving, there are three broad categories of threats that are seen most commonly in the wild: prompt injection attacks, data poisoning, and sensitive data leaks.

Prompt injection attacks

In this scenario, the attacker manipulates an AI’s model by injecting malicious inputs into prompts, altering the model’s intended behavior. Picture the archetypal devil sitting on AI’s shoulder encouraging it to do things it isn’t supposed to do — like generate misinformation or reveal sensitive data.

In one publicized example of an AI jailbreak, a security researcher used successive back-and-forth prompts to sidestep a chatbot’s training, enabling access to forbidden information — instructions for making a Molotov cocktail.

Warning signs of prompt injection attacks include unexpected traffic spikes, attempts to bypass rules, or an increase in erroneous or unusual responses. In addition to exposing restricted information, a prompt injection attack can lead to compliance violations and reputational damage.

Data poisoning

This strategy involves an attacker who manipulates or corrupts AI training data to bias outcomes or introduce security vulnerabilities. Data poisoning is focused on undermining the reliability of AI-driven decisions. It’s like feeding a chess grand master bad strategies so they start making unsuccessful moves.

Warning signs of data poisoning include data and behavioral anomalies, increased error rates, and unexpected traffic spikes. Impacts from data poisoning range from delivering biased or inaccurate responses to introducing a backdoor to sidestep defenses. 

Sensitive data leaks

An AI model can inadvertently expose confidential business or customer information through its responses. For example, an AI assistant that was trained on corporate emails may suddenly repeat internal discussions to an attacker who is seeking to exploit this weakness.

Warning signs of sensitive data leaks include abnormal data exfiltration patterns, including theft of personally identifiable information (PII); unauthorized access attempts; and increased error rates. Leaked sensitive data presents significant regulatory compliance risks, with potential financial penalties and loss of customer trust.

New threats demand a new playbook

So, how can you embrace the possibilities of AI while protecting your data, your customers, and your business? The answer lies in taking a comprehensive approach that emphasizes the following best practices:

  • Discovery and validation. You don’t want to learn about a vulnerability after it’s been exploited by an attacker. Discover your entire AI inventory and keep it up to date. This includes gaining a clear view of your models and their dependencies before deployment.  

  • AI security posture management. Defend and enforce security with continuous model testing, data scanning, vulnerability scanning, and anomaly detection. Implement a practice of regular red-teaming to help identify potential vulnerabilities before an attacker does.

  • Runtime protection. Have solutions in place, such as AI-native firewalls and policy reinforcement, to ensure that AI models and applications are protected in your runtime environment. Validate outputs to spot and mitigate unusual behavior quickly.

Each of these practices builds on the others to create a cohesive, adaptive approach to securing AI and minimizing potential vulnerabilities. The OWASP Top 10 LLM Applications 2025 is a valuable resource for understanding potential vulnerabilities and mitigation strategies.

Don’t go it alone

The AI threat landscape is constantly evolving. To minimize risk, it pays to work with an AI security partner that has what it takes to keep pace with today’s threats. Akamai is that partner.

We have proven security expertise, including a deep understanding of AI threats, backed by extensive, real-world security intelligence. Our defense-in-depth approach offers a multilayered approach to security — from native-AI firewalls to bot protection and API security.

Our solutions comply with evolving AI security frameworks and regulations. They also integrate seamlessly with your existing environment and scale to meet your needs as AI adoption grows.

Learn more

AI is the future of business, and attackers know it. The time to rethink your AI security playbook is now — and Akamai can help.

Heading to San Francisco in April 2025 for the RSA Conference? Visit us at booth N-6245 to learn how we can help secure your applications and networks with real-time intelligence and adaptive AI.



Rupesh Chokshi

Written by

Rupesh Chokshi

March 21, 2025

Rupesh Chokshi

Written by

Rupesh Chokshi

Rupesh Chokshi is Senior Vice President and General Manager of Akamai's Application Security Portfolio.