Akamai Blog | Zero Trust and the Fallacy of Secure Networks

Written by Robert Blumofe

May 03, 2022

Dr. Robert Blumofe is Executive Vice President and Chief Technology Officer at Akamai. As CTO, he guides Akamai’s technology strategy, works with Akamai’s largest customers, and convenes technology leaders within the company to catalyze innovation. Previously, he led Akamai’s Platform organization and Enterprise Division, where he was responsible for developing and operating the distributed system underlying all Akamai products and services, as well as creating products and services for major enterprises to secure and improve performance.

Talking about secure networks is like talking about safe pools. A pool is just a body of water, and if it has enough water to swim in, then it has more than enough water to drown in. A pool is inherently unsafe. We therefore take care in how we use a pool: We don't swim alone; we don't run around the pool; we don't dive in the shallow end; and we don't swim within 15 minutes of eating. (Is that 15-minute rule still a thing?) These pool-safety policies ensure that our use of the pool is as safe as possible, but they do not make the pool safe in and of itself.

This distinction applies to data networks. A data network simply moves data from place to place, and that's all, so in addition to all of its desired uses, it can be used to facilitate attacks. We therefore use additional systems to secure our use of networks: We identify endpoints; we encrypt data flows; we control which endpoints can communicate with one another; and we inspect the data flows to block malware. These network-security policies ensure that our use of the network is as secure as possible, but they do not make the network secure in and of itself.

This distinction is fundamental to end-to-end systems design. This design principle was elucidated in 1981 in a seminal article called “End-to-End Arguments in System Design” by J.H. Saltzer, D.P. Reed, and D.D. Clark of the MIT Laboratory for Computer Science. This 40-year-old paper is as relevant as ever and remains a must-read for anyone involved in distributed computer systems. The end-to-end argument guides the placement of functionality into layers and “provides a rationale for moving a function upward in a layered system, closer to the application that uses the function.” In the case of data networks, for functions such as message ordering, delivery guarantees, and pretty much everything that we would put in the category of network security, the end-to-end argument guides these functions out of the network itself and into the endpoints.
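The canonical illustration in the Saltzer, Reed, and Clark paper is careful file transfer: even if every network hop guarantees reliable delivery, only the endpoints can verify that the data that arrived is the data that was sent. A minimal sketch of that end-to-end check (the function names and use of SHA-256 are my own illustrative choices, not from the paper):

```python
import hashlib


def sha256(data: bytes) -> str:
    """End-to-end integrity digest, computed only at the endpoints."""
    return hashlib.sha256(data).hexdigest()


def send(data: bytes) -> tuple[bytes, str]:
    """Sender endpoint: transmit the data along with its digest."""
    return data, sha256(data)


def receive(data: bytes, expected_digest: str) -> bytes:
    """Receiver endpoint: verify integrity regardless of what the
    network in between claimed; recovery decisions live here, not
    in the network."""
    if sha256(data) != expected_digest:
        raise ValueError("end-to-end integrity check failed")
    return data
```

Even if every router on the path reported success, a payload corrupted in transit is caught at the receiving endpoint, which is precisely why the function belongs there and not in the network.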

In fact, the end-to-end argument was used to great success in the design of the internet. The Internet Protocol (IP), which defines the internet's network layer, provides only for the transmission of bounded-size packets between endpoints, with no ordering or delivery guarantees. Guaranteed and ordered delivery is provided at the next layer up, the transport layer, via the Transmission Control Protocol (TCP), with implementation at the endpoints. It's interesting to note that the original protocol specification published in 1974 by Vinton Cerf and Robert Kahn had TCP and IP combined as a single protocol. It wasn't until some years later that TCP and IP were defined as two separate protocols. This was a truly landmark decision, the importance of which to the future success of the internet cannot be overstated.
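To make the layering concrete, here is a toy sketch (my own simplification, not TCP itself) of how an endpoint can reconstruct ordered delivery on top of a network that delivers packets in any order, using nothing but sequence numbers and a reassembly buffer:

```python
class OrderedReceiver:
    """Toy endpoint-side reassembly, TCP-style: the network layer (IP)
    may deliver packets in any order; the endpoint buffers out-of-order
    arrivals and releases data strictly in sequence."""

    def __init__(self) -> None:
        self.next_seq = 0      # next sequence number the app expects
        self.buffer = {}       # seq -> payload, held until deliverable

    def on_packet(self, seq: int, payload: bytes) -> list[bytes]:
        """Accept one packet; return any payloads now deliverable in order."""
        self.buffer[seq] = payload
        delivered = []
        while self.next_seq in self.buffer:
            delivered.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return delivered
```

If packet 1 arrives before packet 0, the receiver simply holds it; the arrival of packet 0 then releases both in order. Nothing in the network had to change to make this work, which is the end-to-end argument in miniature.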

In addition to not providing ordering or delivery guarantees, IP does not provide encryption or anything else that we would think of as network security. I’ve heard it argued that this lack of security is a flaw in IP, but I reject this argument in the most vehement terms possible. For one thing, most security functions can be implemented only with the knowledge and help of the applications on the endpoints, so the end-to-end argument guides those functions out of the network and into the endpoints. In addition, security functions have been in, and remain in, a constant state of rapid change. Having these functions in the network would be a disaster, since the network itself would have to be constantly changing.

A huge factor in the success of the internet is its stability, which comes from the stark simplicity of IP. IP does only one thing — move bounded-size packets between endpoints — and it does that one thing very well. It does not provide ordering or delivery guarantees, and it most certainly does not provide security.

So, data networks (at least data networks that are based on IP) are inherently insecure, and network security should be thought of as systems that are implemented at the endpoints to facilitate the secure use of the network.

This principle is fundamental to Zero Trust. Zero Trust recognizes that there is no such thing as a secure network, and the secure use of a network is all about the endpoints. So, with a Zero Trust approach to network security, the goal cannot be thought of as trying to make the network secure, and it cannot be implemented with network layer technology, such as VPNs and firewalls. Rather, Zero Trust is implemented at the endpoints with technology that, among other things, strongly identifies the users and their devices and controls what other endpoints each user and device can access. 
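In that spirit, a Zero Trust access decision can be sketched as a per-request check at the endpoint, keyed on user and device identity rather than network location. This is a deliberately simplified illustration (the policy table and function names are hypothetical, not any particular product's API):

```python
# Hypothetical default-deny policy: each (user, device) pair is
# explicitly granted access to specific resources. There is no
# concept of a "trusted network" from which access is implied.
ALLOWED = {
    ("alice", "laptop-42"): {"payroll-app"},
    ("bob", "phone-7"): {"wiki"},
}


def authorize(user: str, device: str, resource: str) -> bool:
    """Allow only if this identified user on this identified device
    is explicitly granted this resource; anything else is denied."""
    return resource in ALLOWED.get((user, device), set())
```

Note what is absent: no source IP range, no VPN membership, no notion of being "inside" anything. The same request from the same user is evaluated the same way wherever it originates.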

In fact, some time ago I wrote a blog post arguing that the phrase "Zero Trust Network Access" is an oxymoron. This nomenclature is now widespread, so I don't expect to be able to change it, but it's still an oxymoron. For access control to be Zero Trust, it has to be from endpoint to endpoint, not endpoint to network. After all, pool safety can't end at getting into the pool safely. You've got to be able to get out of the pool safely, too.


