Can Edge Computing Exist Without the Edge? Part 2: Edge Computing

Written by Ari Weil

December 08, 2020

Ari Weil is the VP of Product Marketing at Akamai Technologies.

In part 1 of this series, I drew the architectural distinction between a centralized cloud platform and a distributed edge network. That distinction is the foundation for explaining the difference between cloud computing and edge computing. The two serve very different, complementary purposes. In my experience, however, business leaders, product owners, and application developers often mistake them for competing alternatives.

Most businesses are still trying to understand the use cases where edge computing makes sense for them. According to Gartner, the share of enterprises that have deployed edge use cases in production will grow from about 1% in 2018 to about 40% in 2024. Whether a given use case calls for edge computing or cloud computing will ultimately come down to economics. Some forms of compute don't make sense at the edge, just as some edge use cases have low-latency requirements that rule out centralizing compute in the cloud.

Edge computing is defined as "a part of a distributed computing topology in which information processing is located close to the edge -- where things and people produce or consume that information." It is where the physical and digital worlds interact. Edge computing translates those interactions into data, which can be used to make a decision, to look for patterns, or to be passed back to a storage or analytics application for further analysis.

Latency is one of the primary value drivers for edge computing. Data that should be managed in the cloud is longer lived and offers sustained value; if data is still valuable after being at rest for several hours or several months, then the heavy storage and compute infrastructure in the cloud is appropriate. Conversely, the noisy data that is generated through interactions at the edge requires real-time processing to provide value; if data is ephemeral and requires decisions and actions while it is in motion, then the lightweight, localized compute resources at the edge are best. Let's consider two common examples.

As broadcasters shift to digital, they must understand subscriber preferences to design programming and ensure flawless viewing experiences. The former is only really possible with data at rest. Data must be collected from a global base, then stored and processed to determine what content to create and how to produce it given the demographics of the subscriber base. Conversely, providing those subscribers with fast startup times and error-free viewing, in their local area, on the device they use, over their specific network connection, requires real-time monitoring, processing, and actions. Value in the former case is measured in months and years, whereas in the latter, value is created or lost in milliseconds.

Similarly, retailers must understand customer preferences for omnichannel shopping experiences across thousands of interactions to design storefronts and offers that convert customers and maximize their lifetime value. These decisions cannot be made from individual actions or in real time; they depend on data at rest, used to build shopper cohorts and personas around which merchandise can be presented and promoted to drive purchasing decisions. Conversely, assembling that information and adapting it -- based on whether the shopper is in a store or online, which device they are using, and how the e-commerce application is behaving in the moment -- requires real-time computing to maximize conversions.

Understanding when and where data is valuable is critical, given estimates that 50% of enterprise data will be generated outside of core data centers by 2023. Businesses must appreciate where the cloud and the edge each provide unique value, and then architect their infrastructure and applications appropriately to capture it. Data that maintains its value at rest will be prohibitively expensive to manage and secure at the edge, because doing so creates redundancies that the centralized cloud is designed to overcome. Conversely, hyperlocal real-time data sent from the edge back to the cloud will fail to capture value while running up costs, because the round trips add latency and errors that result in poor user experiences.

Said simply, if action needs to be taken based on data that is changing in real time, start with edge computing. If data can or must be aggregated, processed, and analyzed to be able to provide value, start with cloud computing.
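To make that rule of thumb concrete, here is a minimal TypeScript sketch of the placement decision. The Workload shape, the placeWorkload function, and the 60-second threshold are illustrative assumptions, not an Akamai API -- a real decision would also weigh data volume, egress costs, and regulatory constraints.

```typescript
// Hypothetical placement heuristic; names and thresholds are illustrative.

interface Workload {
  name: string;
  // How long the data stays valuable after it is produced, in seconds.
  dataLifetimeSeconds: number;
  // Does an action have to fire while the data is still in motion?
  requiresRealtimeAction: boolean;
}

type Placement = "edge" | "cloud";

function placeWorkload(w: Workload): Placement {
  // Ephemeral data that drives immediate action belongs at the edge;
  // long-lived data that needs aggregation and analysis belongs in the cloud.
  return w.requiresRealtimeAction && w.dataLifetimeSeconds < 60
    ? "edge"
    : "cloud";
}

// The two scenarios discussed above:
console.log(placeWorkload({
  name: "video-startup-tuning",
  dataLifetimeSeconds: 5,
  requiresRealtimeAction: true,
})); // -> "edge"

console.log(placeWorkload({
  name: "subscriber-preference-analytics",
  dataLifetimeSeconds: 60 * 60 * 24 * 90, // months
  requiresRealtimeAction: false,
})); // -> "cloud"
```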

We previously established that cloud providers cannot truly provide edge capabilities unless they deploy their services to the edge. We also discussed how Akamai has architected and built a distributed edge platform. Although Akamai pioneered the content delivery network (CDN) market, it was simply the first revenue-generating use case built on our distributed edge network. Since then, competitors building CDNs have pursued centralized models, just as centralized cloud platforms have added CDN capabilities to their platforms. The primary difference is realized as companies look to provide consistent scale and capabilities at low latency wherever they do business. Centralized architectures can only satisfy all of those requirements near where they've deployed their hardware.

CDN providers with centralized infrastructures cannot deliver globally on the promise of low-latency data processing that edge use cases demand. One CDN provider tried to suggest that the edge provides limited value (though they have since co-opted the term) using this analogy: the edge is like a convenience store, closer to a person's home but with a limited set of items for sale; if a person drives a few more miles to a supermarket, they can get all of their groceries in one trip. There are two problems with this analogy:

  • It attempts to address a latency challenge with capacity alone. Distance adds latency -- it is simply faster to travel between two points that are closer together -- so regardless of a given location's capacity, the fact that those locations exist in only 23 countries versus Akamai's 136 means that, in relative terms, at least 113 countries will suffer from increased latency. (See the latency sketch after this list.)

  • It assumes that every consumer only has access to a single convenience store, when in reality the edge is about having many nearby specialty stores that allow for the optimal shopping experience and very quick round trips, depending upon what you need. Akamai maintains over 4,100 locations to make available the right compute resources and data where our customers' customers need them.
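To see why capacity cannot substitute for proximity, consider the physics: light in optical fiber travels at roughly 200,000 km/s (about two-thirds its speed in a vacuum), so every kilometer of path adds unavoidable round-trip delay before routing, queuing, and processing are even counted. The sketch below makes the arithmetic explicit; the distances are illustrative assumptions, not measured network paths.

```typescript
// Back-of-the-envelope floor on round-trip propagation delay.
// Light in optical fiber covers roughly 200 km per millisecond (~2/3 c);
// real paths add routing hops, queuing, and processing on top of this.
const FIBER_KM_PER_MS = 200;

function minRttMs(oneWayKm: number): number {
  return (2 * oneWayKm) / FIBER_KM_PER_MS;
}

// Illustrative distances (assumptions, not measured paths):
console.log(minRttMs(50));   // nearby edge location: 0.5 ms
console.log(minRttMs(2000)); // in-region cloud data center: 20 ms
console.log(minRttMs(9000)); // cross-continent data center: 90 ms
```

No amount of capacity at the distant supermarket recovers those milliseconds; the only remedy is having a store nearby.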

Today, people and things, as well as systems of both, interact with one another, presenting new opportunities for edge computing solutions to provide value in layers of hardware, software, and code. IDC forecasts that enterprises will invest as much as $250.6 billion in edge computing by 2024. Capturing this value demands that the edge be well understood, that edge platforms provide both integrated services and integrations with other ecosystem providers, and that businesses appreciate when latency and digital transformation call for centralization versus distribution.

Today, Akamai is working across a broad spectrum of use cases, including many of those mentioned above, as we help to grow and develop edge computing use cases with our customers. Our platform team is working closely with hardware providers, as well as internet service providers (ISPs) and mobile network operators (MNOs), to help ensure that edge networking is ready to enable 5G, and our carrier team is complementing that with solutions designed to maintain mobile device security. Our product teams working with leading retailers are developing edge computing implementations to help shift microservices from the cloud to the edge. One early use case example is helping retailers reduce latency and friction for online shoppers by shifting code from the cloud to the edge. And our IoT Edge Cloud team continues to develop an integrated messaging platform to reduce latency, improve decisioning, and manage costs in the burgeoning connected device ecosystem.
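As a rough illustration of shifting a small piece of retail logic from the cloud to the edge, here is a minimal sketch of an edge request handler that serves a localized promotion without a round trip to the origin. The EdgeRequest shape, the geo property, and the banner rules are assumptions for illustration, not a specific vendor API.

```typescript
// Hypothetical edge handler; types and the geo lookup are illustrative.

interface EdgeRequest {
  url: string;
  // Assumed to be populated by the edge platform from the client's location.
  geo: { country: string };
}

interface EdgeResponse {
  status: number;
  body: string;
}

// Serve a localized promotion entirely at the edge, so the shopper's
// first paint does not wait on a round trip to the cloud origin.
function handlePromoRequest(req: EdgeRequest): EdgeResponse {
  const banners: Record<string, string> = {
    US: "Free 2-day shipping on orders over $50",
    GB: "Free next-day delivery on orders over £40",
  };
  const body = banners[req.geo.country] ?? "Free shipping on your first order";
  return { status: 200, body };
}

// Example invocation with a synthetic request.
console.log(handlePromoRequest({
  url: "https://shop.example.com/promo",
  geo: { country: "GB" },
}).body); // -> "Free next-day delivery on orders over £40"
```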

In the next two blog posts on this subject, we will dive into how economics helps dictate which data is best managed at the core, in the cloud, and at the edge, and then share a number of use cases that illustrate how the edge complements the cloud for latency, scalability, and security. 

Stay tuned!


