Large Loss of Money? Choose Your LLM Security Solution Wisely.

Written by

Maxim Zavodchik, Alex Marks-Bluth, and Neeraj Pradeep

November 27, 2024

Maxim Zavodchik is an experienced security research leader with a proven track record in establishing, growing, and defining strategic vision for Threat Research and Data Science teams in Web Application Security and API Protection. When he’s not protecting life online, you can find him being a super dad and/or watching Studio Ghibli movies.

Alex Marks-Bluth is a Security Researcher at Akamai, with deep experience in using data science to solve web security problems. He enjoys cricket and cooking, in between juggling work and family.

Neeraj Pradeep is a Senior Security Researcher at Akamai with extensive experience in cybersecurity and a curious mind driven by a passion for solving complex challenges. He enjoys long drives with his family.

 

LLMs are a complex new attack surface that most organizations don’t fully understand.

Editorial and additional commentary by Tricia Howard

As your organization’s CISO, your job is to foresee and mitigate emerging cybersecurity threats. This year’s shiny new (and potentially dangerous) objects are large language models (LLMs). The LLM acronym doesn’t only mean “large language model,” however — without a proper cybersecurity assessment that’s specific to your environment, LLM can also stand for large loss of money. These AI-driven systems promise efficiency gains and innovation, but beneath the surface of LLMs lurk cybersecurity risks that could spell financial disaster for your organization.

Your finances and reputation may be at risk

Your company has likely started using LLMs; you may be using them yourself. Other executive leaders may have put a focus on building your organization’s own proprietary LLM for your customers and/or employees. Although the “first-day gains” may show efficiency and productivity improvements in the short term, LLMs need to be considered with a long-term lens. Integrating LLMs into your business without a thorough cybersecurity evaluation can lead to massive financial losses — the kind of losses we typically associate with breaches.

In addition, ask yourself what the reputational fallout would be if LLMs generated harmful or offensive content that affected your customer trust and brand integrity. We wrote this blog post because you’re not the only one with questions about navigating choppy AI waters.

A complex new attack surface

LLMs are a complex new attack surface that most organizations don’t fully understand, and that is to be expected with any major technological transformation. We need to be thinking of our attack surface from the inside out. From prompt injections to data exfiltration, LLMs open up new vectors for sophisticated cybercriminals.

A security-focused audit of these new technologies is crucial. Performing due diligence is a good rule of thumb with any new technology, but it is particularly important with LLMs. Defenders have only scratched the surface on these techno-powerhouses, and attackers’ freedom from bureaucracy lends them an unfortunate leg up. Attackers don’t have shareholders to appease — if an attack fails, they just move on and try it a new way. You, however, need to protect your organization the first time.

What is the financial risk of implementing an LLM without a proper cybersecurity assessment?

We live in bits and bytes here in the cyber world, but dollar signs shouldn’t be ignored. A cyber framework is one tool among many — and, to do the job well, you need to use the correct tool.

Most cybersecurity evaluation frameworks aren’t built to rank financial losses from AI-related attacks. These technical frameworks are just that — frameworks. They’re not all-encompassing. Even if they could estimate the cost of a breach, they certainly wouldn’t show you how a seemingly minor vulnerability could escalate into a multimillion-dollar crisis. These risks will continue to increase as more LLMs become incorporated into our daily lives.

In what ways can attackers manipulate LLMs?

The time it takes for an attacker to make a massive impact is severely shortened with LLMs. Imagine an attacker manipulates your LLM into revealing proprietary data or launching harmful commands. In mere minutes, your organization could be facing financial and reputational damage on an unprecedented scale.

Sensitive data — customer information, intellectual property, or internal secrets — could leak through an LLM vulnerability. The financial cost of dealing with this is steep: regulatory fines, breach notifications, and costly remediation efforts. Consider that 60% of small businesses close within six months of a data breach. Think also about General Data Protection Regulation (GDPR) penalties, which can hit up to 4% of global revenue.

A malicious prompt could trigger unauthorized actions, leading to system shutdowns or worse — an attacker could gain access to critical systems. How long could your business survive being offline? According to Gartner, the average cost of IT downtime is US$5,600 per minute. That works out to roughly US$336,000 per hour, and more than US$8 million for a full day of downtime.

LLMs can be manipulated to generate fraudulent content or execute false transactions. In highly regulated industries, such as finance, a single fraudulent transaction could result in fines, lost contracts, and lawsuits. The total cost? Potentially in the millions.

Harm to your reputation is the most insidious threat of all when it comes to LLM security. A single misstep, such as an inappropriate output or a privacy violation, can cause irreparable reputational damage. Today’s market is more competitive than ever, and customer trust is fragile. Losing that trust could lead to permanent revenue loss as clients flee to your competitors.

Reputational damage is a financial time bomb

When an incident involving LLMs goes public, it’s not just about legal fines or regulatory action. It’s about losing customer confidence, even in the absence of a direct breach or technical failure.

Unlike traditional cyberthreats, the chat-based interface of LLMs itself poses a serious risk since the output — seemingly harmless human language — can become dangerously toxic. We are aware of how words can hurt people, but these words can hurt your technology, too.

A “softer” but more insidious and unpredictable threat

LLMs pose a threat that is far more insidious and unpredictable than the conventional cyber risks that your security teams are equipped to handle. In traditional threats, the attack surface is usually highly technical, defined by the target's technology or specific application flows.

However, with LLMs, the "protocol" is natural language, which vastly expands the attack surface and enables manipulation techniques that go beyond classic technological vulnerabilities and venture into human-like manipulations. Social engineering has been a favored method of attack on humans for years; now, those same concepts can be used against our technology.

In our research on prompt injection vulnerabilities across multiple data sets, a large number of cases led to reputational damage. These involved toxic, offensive, biased, or manipulated outputs, as well as unintended responses from the LLM that deviated from the LLM developer’s true intent. We have all seen the negative impact toxic behavior can have on human beings, and the closer our technology mirrors our brains, the more aware we have to be of these “softer” technological threats.

What can you do to protect against LLM threats?

LLMs are not just another tool in your tech stack — they are a wide new attack surface with unique risks. If you’re not carefully evaluating these systems for both security vulnerabilities and financial risk, you’re setting yourself up for a potential disaster. It’s essential to ensure that any LLM security solution you deploy incorporates four key aspects:

  1. Prompt security controls (prompt injection protection)
  2. Sandboxing and segmentation
  3. Behavioral analysis integration
  4. Rigorous audit trails

Prompt security controls (prompt injection protection)

Your LLM security solution should include comprehensive prompt content filtering on both input and output to detect and block attempts to extract proprietary data, along with defensive prompt engineering techniques that prevent LLM manipulation. This should include contextual prompt restrictions and validation checks to safeguard against harmful command execution.
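
As a rough illustration of input and output screening, consider the minimal Python sketch below. The injection patterns, sensitive markers, and the call_llm placeholder are assumptions for the example only; a real solution would rely on much richer, context-aware detection than simple pattern matching.

    import re

    # Hypothetical patterns; a production solution would use far richer,
    # context-aware detection (classifiers, semantic analysis), not simple regexes.
    INJECTION_PATTERNS = [
        re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
        re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
    ]

    # Hypothetical strings that should never appear in model output.
    SENSITIVE_MARKERS = ["INTERNAL_API_KEY", "CONFIDENTIAL"]

    def screen_input(prompt: str) -> str:
        """Block prompts that match known injection patterns before they reach the LLM."""
        for pattern in INJECTION_PATTERNS:
            if pattern.search(prompt):
                raise ValueError("Prompt blocked: possible injection attempt")
        return prompt

    def screen_output(response: str) -> str:
        """Withhold responses that contain sensitive markers."""
        for marker in SENSITIVE_MARKERS:
            if marker in response:
                return "[response withheld: sensitive content detected]"
        return response

    # Usage sketch (call_llm is a placeholder for whatever client you use):
    # reply = screen_output(call_llm(screen_input(user_prompt)))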

Sandboxing and segmentation

Your LLM security solution should ensure that LLM interactions are sandboxed and that the LLM environment is segmented from critical operational systems to prevent unauthorized command executions. Microsegmentation is a key part of any Zero Trust framework, and there’s no better time than now to start moving toward Zero Trust if you haven’t already. The ability to quickly identify where lateral movement is occurring — and stop it — can be the difference between an alert and an incident.
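
To make the segmentation idea concrete, here is a minimal, hypothetical sketch of a deny-by-default gate between the LLM segment and operational systems. The action names and the dispatch function are illustrative assumptions, not part of any specific product.

    # Hypothetical allowlist: only low-risk, read-only actions may be triggered
    # from the LLM segment; everything else is denied by default.
    ALLOWED_ACTIONS = {"search_docs", "get_order_status"}

    def dispatch_llm_action(action: str, arguments: dict) -> str:
        """Gate every LLM-initiated action at the boundary between segments."""
        if action not in ALLOWED_ACTIONS:
            # Deny by default: the LLM segment never reaches critical systems directly.
            return f"denied: '{action}' is not permitted from the LLM segment"
        # Forward only vetted actions to the appropriate internal service (placeholder).
        return f"executing {action} with {arguments}"

    print(dispatch_llm_action("shutdown_server", {}))                 # denied
    print(dispatch_llm_action("get_order_status", {"order_id": 42}))  # allowed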

Behavioral analysis integration

Your LLM security product should use behavioral analytics to establish a baseline of typical LLM interactions. It should flag and verify atypical behavior to identify potentially fraudulent or unauthorized activities.
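
One way to picture such a baseline is to track a single interaction feature, such as prompt length, and flag anything far outside the norm. The sketch below is an illustrative assumption; production analytics would combine many more signals, such as session patterns, request rates, and output characteristics.

    from statistics import mean, stdev

    class InteractionBaseline:
        """Learn a baseline for one interaction feature (prompt length) and flag
        values that deviate sharply from it."""

        def __init__(self, threshold: float = 3.0, min_history: int = 30):
            self.samples = []           # observed prompt lengths
            self.threshold = threshold  # standard deviations that count as atypical
            self.min_history = min_history

        def observe(self, prompt: str) -> bool:
            """Record one interaction; return True if it looks atypical."""
            length = len(prompt)
            atypical = False
            if len(self.samples) >= self.min_history:
                mu, sigma = mean(self.samples), stdev(self.samples)
                if sigma > 0 and abs(length - mu) / sigma > self.threshold:
                    atypical = True
            self.samples.append(length)
            return atypical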

To secure LLMs, you need to think of them as humans with ridiculously fast processing power. Lean on your social engineers and “human hackers” for strategies to protect these supercharged techno-humans.

Rigorous audit trails

Maintaining comprehensive audit logs of LLM interactions to track inputs, outputs, and transactions is critical to successful LLM security. This can also be valuable in postincident investigations to assess and address fraudulent actions quickly. As you well know: It’s not if, it’s when you’ll be attacked, so assuming breach is a strong feather in an LLM security solution’s cap.
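
A minimal version of such an audit trail is a structured, append-only log of every interaction. The record fields and file path below are hypothetical; in practice, these records would typically be shipped to your existing SIEM rather than a local file.

    import json
    import time
    import uuid

    def audit_log(user_id: str, prompt: str, response: str, path: str = "llm_audit.jsonl") -> str:
        """Append one structured record per LLM interaction for later investigation."""
        record = {
            "id": str(uuid.uuid4()),    # unique identifier for correlating events
            "timestamp": time.time(),   # when the interaction happened
            "user": user_id,            # who issued the prompt
            "prompt": prompt,           # full input sent to the model
            "response": response,       # full output returned by the model
        }
        with open(path, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(record) + "\n")
        return record["id"]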

What’s next?

LLMs are a learning opportunity for all of us, especially those of us who protect digital lives. Take this time to research and test, just as you’ve done with any other major technological advancement.

Akamai will continue to use our wide swath of visibility to discover, monitor, and report on threats for the safety of our customers, fellow employees, and the security community at large. To keep up with our current findings, you can follow us on social media or check out our security research page.


