Announcing Flow-IPC, an Open-Source Project for Developers to Create Low-Latency Applications
Whether we're accelerating media delivery or empowering developers to build low-latency applications, the path of the bits through our distributed network involves the collaboration of many programs — and, therefore, the sharing of data from program to program. This is where inter-process communication (IPC) plays an ever-present role.
In a few words, IPC means separate programs sharing data structures — from file contents to configuration to algorithmic details — with one another. Such data transmission is not instant. At Akamai, every millisecond of latency added by IPC is scrutinized: It simply needs to be fast. At the same time, our software is complex. Truly minimizing latency usually means developing custom, specialized code — again and again, depending on the context. This is costly and wasteful; as a business, we need to do better. We developed Flow-IPC so that we no longer have to trade off between an elegant, reusable API and minimal-latency performance.
Today, we’re announcing the release of Flow-IPC. The project currently supports Linux, with macOS/ARM64 and Windows planned. Flow-IPC is available under the Apache 2.0 and MIT open-source licenses. We are very excited to give back to the open-source community!
Instant transmission, no matter the size
Developers can use Flow-IPC to transfer data — from small structures to entire images and videos — between programs virtually instantly. To be clear, this is still IPC: in-memory transfers of data among programs running on the same machine.
For example, to serve this page, a machine in a data center might first use a security-oriented application to negotiate the encryption details with your browser, then pass the network-connection descriptor and the associated security data to an entirely separate application running within the same machine. This request-processing application would then do the rest of the work of assembling the web page, which might in turn involve the aid of several more programs. As you zoom out, multiple layers of processing add time.
Flow-IPC supports the transmission of binary blobs, native I/O handles (descriptors), structured data expressed via Cap’n Proto schemas, and C++ STL-compliant containers and structures of arbitrary complexity. (Cap'n Proto is an open-source project, not affiliated with Akamai, whose use is subject to the license, as of this blog's publication date, found here.) We provide pain-free allocation directly in shared memory (SHM) using the commercial-grade jemalloc allocator for heavy-duty use of shared memory as a heap.
In all cases, you can expect end-to-end zero-copy performance when transmitting data from process to process. No matter how large the payload, transmission with Flow-IPC is nearly instantaneous.
Here’s an example of the performance gain to expect from Flow-IPC over a typical IPC implementation, for various sizes of data structure being transmitted. This comparison graph is from the example at our developer blog.
Typically, communicating a data structure — in this case, a simple representation of a file’s contents as cached in memory — involves the data being copied twice: by the sender application into an OS kernel buffer, and from that buffer into the receiving application. Flow-IPC, however, eliminates all such copying. The graph compares the two approaches.
The blue line, for a “classic” implementation without Flow-IPC, shows this clearly: The bigger the file, the longer it takes — up to almost a full second of added latency for the 1 GB file.
The orange line, for the same operation executed using Flow-IPC, shows a delay of less than 100 microseconds regardless of the file size. The curve is flat. This is an improvement of 8x for a 1 MB file, or 6,000x for a 1 GB file.
Who can use Flow-IPC?
Flow-IPC is a tool for C++ systems developers. Folks like us will appreciate how Flow-IPC simplifies and accelerates the IPC tasks we face repeatedly — especially in server application development. Any C++ developer who needs to transfer data from one process to another without adding latency, and who wants to avoid the notoriously touchy, hard-to-reuse techniques of managing shared memory, can benefit.
The GitHub-hosted project includes full documentation and an automated build/test pipeline. The main repository’s README will point you to all of these when you’re ready to jump in. Or you can start with the developer-oriented blog post with the example that generated the performance graph above.
Resources
- Developer blog post: An example of Flow-IPC in action, showing the transmission of a memory-cached file’s contents expressed in chunks, in a Cap’n Proto schema (the performance graph above is from this example)
- Flow-IPC project at GitHub: To install, read documentation, file feature/change requests, or contribute
- Flow-IPC Discussions board at GitHub: A great way to reach us — and the rest of the community