
Exploring Artificial Intelligence: Is AI Overhyped?

Written by

Berk Veral

November 04, 2024

Berk Veral is the Senior Director of Product Marketing at Akamai.

There’s no question that the AI revolution has transformed cybersecurity, for better and for worse.

Perhaps no tech topic is more ubiquitous — or more hyped — than artificial intelligence (AI). When OpenAI brought AI to the masses with its release of ChatGPT in November 2022, the world was forever changed. AI startups began to attract massive funding, giant tech companies like Microsoft and Apple raced to implement AI in their own products, and public awareness of AI as a tangible tool soared.

Now, terms like generative AI, large language models, machine learning, and neural networks are appearing in virtually every commercial sphere, from professional services to consumer electronics. There’s even an AI-enabled toaster.

But what is AI exactly? And how do you know if it’s just being used as a buzzword to sell a product, or actually performing a higher level of intelligence? In this blog post, we’ll delve into these questions and more — probing AI’s limitations, sharing tips for identifying AI washing, and exploring how AI is transforming industries like cybersecurity.

Unpacking the AI hype: What can AI do?

Artificial intelligence is an area of computer science devoted to creating systems capable of performing tasks that are normally associated with humans. These systems can be built to carry out tasks like making decisions, solving complex problems, and thinking creatively. AI systems use sophisticated algorithms and data (lots of it) to achieve these feats. 

But this general definition just scratches the surface. To better understand what AI is and isn’t, let’s start by defining the two broad categories of AI: inference AI and generative AI.

Inference AI

Inference AI is a type of technology that focuses on inferring information from content like text, images, audio, and video.

For example, when presented with a photo of a cat, the technology can use context clues to correctly identify the photo’s subject. 

Although this type of AI can make inferences based on information, it cannot generate its own content.

Generative AI

Generative AI, or GenAI, is a technology that can generate new content — text, images, audio, and video — from instructions. 

For example, if presented with the instruction “draw a cat,” a GenAI model will generate an image of a cat.

You can think about inference AI and generative AI as two sides of the same coin, with the latter generally considered to be more sophisticated. GenAI development involves a combination of machine learning and neural network training.

Understanding the components: How does AI work?

A neural network is inspired by the way the brain works, but functions in a simpler, more practical way. It consists of many layers of artificial neurons, or nodes, that process information. Each neuron takes in numbers (input), performs a simple calculation, and then sends out the result (output).

The network learns by adjusting a set of numbers called “weights.” These weights determine how the network behaves. During supervised learning, the network is fed data and a desired output, which it uses to set the weights. Learning from mistakes through each iteration, the network adjusts its weights to improve its chances of reaching the desired output.
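The learning loop described above can be sketched in a few lines of code. In this illustrative toy (not a real framework), a single artificial neuron learns the logical AND function by nudging its weights a little after every mistake:

```python
# A single artificial neuron: weighted sum of inputs plus a bias.
def neuron(inputs, weights, bias):
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# Training data: inputs paired with the desired output (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1  # lr = learning rate

for epoch in range(200):
    for inputs, target in data:
        output = neuron(inputs, weights, bias)
        error = output - target  # how wrong the guess was
        # Adjust each weight slightly to reduce the error next time.
        for i, x in enumerate(inputs):
            weights[i] -= lr * error * x
        bias -= lr * error

print("AND(1,1) ≈", round(neuron([1, 1], weights, bias), 2))
print("AND(0,1) ≈", round(neuron([0, 1], weights, bias), 2))
```

After training, the neuron's output is high for (1, 1) and low for the other inputs — the "desired output" has shaped the weights, exactly as described above, just on a microscopic scale.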

Deep learning is a subset of machine learning that emerged with the arrival of more powerful graphics processing units (GPUs) and more sophisticated training algorithms. It is much more capable because of the addition of more layers of neurons. One important note: “Deep” refers to the number of layers a neural network has — it’s physically deep, not intellectually deep.

One popular example of a deep learning algorithm is a large language model (LLM), like the models that power ChatGPT, which is trained to understand and generate human language text. LLMs take a text sequence as input, and then output probabilities for the next word in the sequence — a functionality that helps chatbots deliver relevant responses to questions or comments.

LLMs are trained on massive volumes of text, usually gleaned from the web. As with neural networks, LLM weights are adjusted through self-supervised learning to boost the probability of a reasonable next word. The most sophisticated LLMs in current use have hundreds of billions of weights tuning the outputs.
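To make the next-word idea concrete, here is a vastly simplified stand-in: a bigram model that derives next-word probabilities from word counts in a tiny corpus. Real LLMs learn billions of weights rather than counting pairs, but the input and output have the same basic shape:

```python
from collections import Counter, defaultdict

# A tiny stand-in for the massive web-scale text LLMs train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    # Normalize counts into probabilities for the next word.
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
```

Given "the", the model assigns "cat" the highest probability because it followed "the" most often in the corpus — a crude echo of how an LLM's training data shapes its next-word predictions.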

Is it AI or AI washing?

Now that we have a basic understanding of deep learning and LLMs, we can consider whether a particular commercial product truly is AI-powered — or an example of AI washing. 

AI washing occurs when companies make misleading or exaggerated claims about the amount of AI used in their products in an attempt to increase profitability by capitalizing on AI hype.

To identify AI washing, consider the following questions when evaluating products:

  • Does this product require significant human involvement to generate an acceptable output? True AI is marked by a high degree of autonomy, delivering an acceptable result with minimal human input.

  • Is the company behind this product transparent about the types of data and the algorithms that it uses to power its AI? The black box approach, focusing solely on the inputs and outputs of a product rather than revealing its internal workings and processes, is often a red flag that the technology may not be all it’s cracked up to be.

Unfortunately, there are too many instances of companies using the term AI to promote a product or service that can’t pass the test above. That’s why it’s important to look beyond companies’ initial claims and probe a bit deeper to determine whether a product is truly AI-powered.

Biases and hallucinations

There are other important considerations when evaluating an AI solution. “Hallucinations” are a key concept to keep in mind. AI hallucinations are results that are false, misleading, or nonsensical. These incorrect results have a variety of causes, including training on insufficient or biased data, which can lead the model to make flawed assumptions.

It is important to remember that AI models are designed to predict outcomes based on the data they are trained on; ingesting incorrect data will simply produce incorrect predictions.

Understanding the AI hype cycle

One way to understand the fast-changing tides, false promises, and overhyped expectations that seem to come along with each new AI breakthrough is to look at a chart developed by Gartner.

In June, Gartner released its 2024 Hype Cycle for Artificial Intelligence, which tracks how emerging technology evolves, matures, and is adopted by the general public. 

In its first stage, a new technology moves through the innovation trigger, gaining traction and hype as it heads toward the peak of inflated expectations.

After reaching this peak, the technology plummets to the trough of disillusionment, where the hard work begins. No longer influenced by inflated expectations, people can begin to understand the technology’s true capabilities and devise practical applications for it, moving it further along the cycle to the plateau of productivity.

According to Gartner, AI moves through the hype cycle quickly, and generative AI has already reached the trough of disillusionment. But despite its melancholy name, this stage of the cycle presents an opportunity for genuine innovation — spurring people to better understand AI’s limitations and put its promise into action. One industry in which this is currently happening is cybersecurity.

AI in cybersecurity: The future of threat detection

There’s no question that the AI revolution has transformed cybersecurity, for better and for worse.

As AI models become more sophisticated, so do the tactics employed by cybercriminals. Threat actors are increasingly using generative AI to automate, enhance, and scale their attacks, resulting in threats that are harder to detect and mitigate.

However, the integration of automation in cybersecurity presents significant opportunities, especially in the realm of threat detection. Cybersecurity professionals can use AI to analyze large amounts of data much faster than they could themselves to identify patterns and detect anomalies in user behavior that could signal potential threats.
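As a toy illustration of this kind of anomaly detection, the sketch below flags user activity that deviates sharply from its baseline using a simple z-score check. The login counts are hypothetical, and production systems use far richer features and models:

```python
from statistics import mean, stdev

# Hypothetical daily login counts for one user (the baseline).
baseline = [21, 19, 22, 20, 18, 21, 20]
today = 60  # today's count, suspiciously high

mu, sigma = mean(baseline), stdev(baseline)
z_score = (today - mu) / sigma  # distance from normal, in std devs

if z_score > 3:  # more than 3 standard deviations above the baseline
    print(f"Anomaly: {today} logins (z = {z_score:.1f})")
```

The same pattern — learn what "normal" looks like, then flag large deviations — underlies far more sophisticated AI-driven behavioral analytics.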

AI can also be used to streamline incident response processes. By automatically carrying out predefined actions in response to detected threats — such as isolating affected systems, blocking malicious traffic, or initiating alerts — AI can respond to potential threats in real time.
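The predefined-actions idea can be sketched as a small playbook that maps each detected threat type to an automated response. The threat names and handlers here are hypothetical, for illustration only:

```python
def isolate_system(event):
    return f"isolated host {event['host']}"

def block_traffic(event):
    return f"blocked traffic from {event['source']}"

def send_alert(event):
    return f"alert raised for {event['type']}"

# Predefined playbook: detected threat type -> automated action.
playbook = {
    "malware_detected": isolate_system,
    "malicious_ip": block_traffic,
}

def respond(event):
    # Fall back to alerting when no specific action is defined.
    action = playbook.get(event["type"], send_alert)
    return action(event)

print(respond({"type": "malicious_ip", "source": "203.0.113.7"}))
```

Because the mapping is evaluated the instant a threat is detected, the response happens in real time, with humans reviewing afterward rather than gating every action.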

Keeping it real: No AI hype, just results

Akamai has been using deep learning AI to help power our solutions since 2015. 

Our AI tools perform a variety of critical tasks, from determining whether traffic is a bot or a human user to identifying API abuse and fraudulent websites. Akamai solutions have been taking advantage of machine learning for nearly 10 years, and today we continue our journey by adding the latest advancements in AI to our products when there is a clear benefit to our customers.

Our core principle regarding AI is to maintain transparency about how and why we employ this technology, ensuring we never overstate our capabilities or engage in AI washing.

At Akamai, we’re committed to employing AI judiciously and effectively, gradually expanding our capabilities as AI understanding continues to evolve. That’s how we keep AI real … without artificial hype.


