Optimize live and VOD transcoding with ASIC-based video processing units on Akamai Cloud. Accelerated Compute instances pair general-purpose vCPUs and RAM with dedicated NETINT Quadra T1U VPUs to deliver high stream density, low latency, and predictable cost per stream—without buying or maintaining on‑prem hardware.
Choose Accelerated Compute when:
- Your primary workload is video transcoding for live or VOD and you want the best cost per stream.
- You need high-density, low-latency H.264/HEVC/AV1 pipelines and want to keep CPUs available for other services.
- You are replacing or augmenting on‑prem encoders for burst capacity, events, or a rapid proof of concept.
Choose CPU plans when:
- Your workload is application logic, databases, APIs, or general compute without heavy media encoding.
Choose GPU plans when:
- You need CUDA/Tensor/RT cores for AI/ML, rendering, or workflows that benefit from NVENC/NVDEC but also rely on GPU compute.
VPU specifications (per NETINT Quadra T1U):
- Encode: AVC/H.264 (Baseline, Main, High, High 10), HEVC/H.265 (Main, Main 10), AV1 Main, JPEG
- Decode: AVC/H.264, HEVC/H.265, VP9 Profile 0/2, JPEG
- Throughput: 32× 1080p30, 8× 4Kp30, 2× 8Kp30
- Resolution: 32×32 up to 8192×5120
- Audio: MP3, AAC-LC, HE‑AAC
- Integration: FFmpeg SDKs and the libxcoder API
Available plan characteristics:
- vCPU: 8–96 cores
- Memory: 16–192 GB
- Storage: 200–300 GB SSD
- Outbound bandwidth: up to 16 Gbps
- Pricing: starts at US$280/mo (US$0.42/hr) for 1× VPU, 8 vCPUs, 16 GB RAM, 200 GB SSD
- Regions: Los Angeles (US), Miami (US), London 2 (GB), Frankfurt 2 (DE), Chennai (IN)
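Combining the entry plan's price with the rated per-VPU density from the spec table gives a quick back-of-envelope cost per stream. The numbers below come straight from the tables above; actual density varies with codec, preset, and ladder, so treat this as illustrative only:

```shell
# Back-of-envelope cost per stream for the 1x VPU entry plan, using the
# rated 32x 1080p30 density (real density depends on codec/preset/ladder).
awk 'BEGIN {
  monthly = 280.00; hourly = 0.42; streams = 32
  printf "~$%.2f per stream-month\n", monthly / streams   # ~$8.75
  printf "~$%.4f per stream-hour\n", hourly / streams
}'
```

Rerun the same arithmetic against your measured stream density (from a validation transcode) rather than the rated maximum when building a real budget.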
See current plan availability and pricing on the Akamai pricing page.
1) Create and secure your cloud account
- Create an account and log in to Cloud Manager.
- Set up SSH keys and, optionally, default Cloud Firewalls.
2) Deploy an Accelerated Compute instance
- In Cloud Manager, select Create → Linode.
- Choose a region near your encoders or audience.
- Choose a Linux distribution (Ubuntu 20.04/22.04/22.10 recommended).
- Select an "Accelerated" plan size based on your target stream density.
- Configure labels/tags and assign the instance to a placement group if needed.
- Networking:
  - Choose Public Internet, VPC, or VLAN.
  - Assign a Cloud Firewall.
- Optionally enable disk encryption.
- Click Create Linode.
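The Cloud Manager steps above can also be scripted with the Linode CLI. A minimal sketch, assuming the CLI is installed and authenticated; the plan type ID below is an illustrative placeholder, so check `linode-cli linodes types` and `linode-cli regions list` for the real Accelerated plan IDs and region slugs before running:

```shell
# Sketch: create an instance via the Linode CLI.
# The --type value is a placeholder for an Accelerated plan ID; verify the
# actual ID with `linode-cli linodes types`. Region slug and image are
# examples (Los Angeles, Ubuntu 22.04).
linode-cli linodes create \
  --label vpu-transcode-01 \
  --region us-lax \
  --type accelerated-plan-id-placeholder \
  --image linode/ubuntu22.04 \
  --root_pass "$ROOT_PASS" \
  --authorized_keys "$(cat ~/.ssh/id_ed25519.pub)"
```

Scripting creation this way makes it straightforward to add burst capacity for events and tear it down afterwards.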
3) Initialize the NETINT VPU with the Quadra SDK
- SSH to the instance and install prerequisites:
  - sudo apt-get install nasm yasm -y
- Download the latest Quadra SDK to /root/.
- Run the quick installer from the extracted SDK:
  - bash quadra_quick_installer.sh
  - Accept release packages (Y).
- From the menu, perform:
  - 1: Set up environment variables
  - 3: Install OS prerequisite libraries
  - 4: Install NVMe CLI (ignore errors on newer Ubuntu if noted)
  - 5: Install libxcoder
  - 7–15: Install your preferred FFmpeg build
- Enter 22 to exit.
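The prerequisite and installer steps above, condensed into a session sketch. The SDK archive and directory names are placeholders (download the actual release from NETINT), and the installer itself is menu-driven, so run it interactively:

```shell
# Prerequisites for building the NETINT FFmpeg integration.
sudo apt-get update
sudo apt-get install -y nasm yasm

# Unpack the Quadra SDK in /root (archive name is a placeholder for the
# release you downloaded from NETINT).
cd /root
tar -xzf Quadra_SDK_release.tar.gz
cd Quadra_SDK_release

# Interactive installer: accept release packages (Y), then choose
# 1 (env vars), 3 (OS libraries), 4 (NVMe CLI), 5 (libxcoder),
# one of 7-15 (FFmpeg build), and 22 to exit.
bash quadra_quick_installer.sh
```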
4) Validate FFmpeg + VPU
- List devices via libxcoder and run a short test transcode to H.264/HEVC/AV1.
- Tune per-title presets and ladder outputs as needed.
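A hedged validation sketch: `ni_rsrc_list` is the libxcoder device-listing tool and `h264_ni_quadra_enc` the NETINT H.264 encoder name as shipped in NETINT's FFmpeg builds, but both names should be verified against your installed SDK version (for example with `ffmpeg -encoders | grep ni_quadra`):

```shell
# List Quadra devices registered with libxcoder (tool name per the NETINT
# SDK; verify it exists in your installed build).
ni_rsrc_list

# Short test transcode to H.264 on the VPU. The encoder name comes from
# NETINT's FFmpeg integration; h265_ni_quadra_enc and av1_ni_quadra_enc
# are the corresponding HEVC/AV1 encoders.
ffmpeg -y -i input.mp4 -c:v h264_ni_quadra_enc -b:v 4M -c:a copy out_h264.mp4
```

If the transcode sustains real-time FPS at your target profile, repeat it with several concurrent jobs to measure actual stream density per instance.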
5) Harden and operationalize
- Apply OS updates, configure alerts/monitoring, and integrate logs with Object Storage.
- Optionally front ingest/egress with NodeBalancers and Akamai Adaptive Media Delivery.
For full instance creation details, see Create a Linode. For VPU setup, see Accelerated Linodes.
Streaming workloads continuously ingest media, transcode it into multiple renditions and codecs, package it to HLS/DASH, and deliver it globally under strict latency and reliability SLOs. At scale, cloud data ingestion combines:
- Distributed ingest endpoints close to sources to minimize first-mile latency.
- Hardware-accelerated transcode pipelines to maximize throughput per instance.
- Durable storage for mezzanine inputs and multi-bitrate outputs.
- CDN integration to offload the origin and minimize playback startup time and rebuffering.
- Private networking (VPC/VLAN) and firewalls to segment and secure traffic.
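The transcode-and-package stage of this pipeline can be sketched as a single FFmpeg invocation producing a small HLS ladder. Software `libx264` is used here so the sketch runs anywhere; on an Accelerated instance you would substitute the NETINT hardware encoder. Filenames and bitrates are illustrative:

```shell
# Three-rendition HLS ladder sketch (libx264 for portability; swap in the
# NETINT hardware encoder on a VPU instance). Splits the input video into
# 1080p/720p/480p renditions, pairs each with an AAC audio track, and
# writes a master playlist plus one variant playlist per rendition.
ffmpeg -y -i mezzanine.mp4 \
  -filter_complex "[0:v]split=3[v1][v2][v3];[v1]scale=-2:1080[v1o];[v2]scale=-2:720[v2o];[v3]scale=-2:480[v3o]" \
  -map "[v1o]" -c:v:0 libx264 -b:v:0 5M \
  -map "[v2o]" -c:v:1 libx264 -b:v:1 3M \
  -map "[v3o]" -c:v:2 libx264 -b:v:2 1M \
  -map 0:a -map 0:a -map 0:a -c:a aac -b:a 128k \
  -f hls -hls_time 4 -hls_playlist_type vod \
  -master_pl_name master.m3u8 \
  -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" \
  stream_%v.m3u8
```

The resulting `master.m3u8` and variant playlists can be pushed to Object Storage as the origin and fronted by the CDN for delivery.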
Technical fit
- Supported codecs and HDR, min/max resolution, per-device throughput.
- Stream density per instance; sustained FPS at target profiles.
- Packaging support for HLS/DASH and DRM integration path.
Reliability and performance
- Ingest-to-first-frame latency, rebuffer ratio, error rate.
- Failover RTO/RPO, multi-region options, maintenance windows.
Operations
- Time-to-deploy, automation via API/CLI/Terraform, observability.
- Security features: VPC/VLAN, firewalls, IAM, encryption.
Economics
- Cost per stream at the target ladder.
- Egress pricing and expected offload with CDN.
- Regional availability aligned to the audience.
KPIs to track
- FPS per VPU, concurrent streams per instance.
- Startup time (TTFF), average bitrate, rebuffer ratio.
- Origin egress/GB per viewer, cost/stream-hour.
- Encoder error rate and recovery time.
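Two of these KPIs are simple ratios worth pinning down: rebuffer ratio is stall time divided by total watch time, and origin egress per viewer is origin bytes served divided by concurrent audience. A sketch with hypothetical session numbers:

```shell
# KPI arithmetic with hypothetical numbers: 12 s of stalls over a 1-hour
# session, and 40 GB of origin egress across 100 viewers.
awk 'BEGIN {
  stall_s = 12; watch_s = 3600
  printf "rebuffer ratio: %.2f%%\n", 100 * stall_s / watch_s
  egress_gb = 40; viewers = 100
  printf "origin egress per viewer: %.2f GB\n", egress_gb / viewers
}'
```

Tracking these as ratios rather than raw totals keeps them comparable across events of different sizes.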
What to consider when comparing with Google Cloud, Azure, or AWS:
- Hardware strategy: Akamai offers cloud-based NETINT VPUs purpose-built for transcoding. This ASIC approach can yield higher stream density and lower cost per stream than general-purpose CPU/GPU pipelines.
- Egress economics: Low, predictable origin egress pricing helps control total delivery cost, especially when paired with CDN offload.
- Proximity and distribution: Akamai's globally distributed platform and media services are designed to reduce latency between ingest, processing, and delivery.
- Open tooling: An FFmpeg + libxcoder workflow stays portable, and automation is available via API, CLI, and Terraform.
Use these criteria and the KPI checklist above to calculate price/performance per stream and validate SLOs for your workloads on any provider.