Akamai lofts cloud services over Chicago, Washington DC, Paris

'Premium' instances, bigger buckets available too


Content-delivery-network-turned-cloud-player Akamai has flipped the switch on three bit barns in the US and France. The biz has also launched "premium" instances aimed at commercial workloads, and beefed up its object storage service.

Akamai is a relative newcomer to what folks would traditionally think of as public cloud computing. In early 2022, the CDN acquired tier-two cloud provider Linode in a $900 million deal, and it is now building out that platform in the hope of clocking up more cloudy workloads.

But while Akamai boasts more than 4,200 "locations," these are really just edge nodes – used to cache and distribute content, and not capable of the kind of compute necessary to support your average VM, storage bucket, or Kubernetes cluster.

With the addition of Paris, Washington DC, and Chicago to Akamai's "core" datacenter footprint, the network now offers compute services from 14 sites. What's more, it plans to extend service to Seattle and to Chennai in India later this quarter, and has an additional eight sites and 50 cloud nodes planned.

According to Akamai, these sites were chosen in large part for their proximity – and low-latency links – to neighboring datacenters where customers may already be running workloads. The Washington DC site, for instance, is within spitting distance of Northern Virginia, which accounts for more than a third of the United States' hyperscale datacenter capacity. It's a similar story for Paris, which has one of Europe's densest concentrations of datacenters.

Meanwhile, Akamai is pitching its Chicago location as a middle ground for customers looking to replicate data or distribute workloads across multiple sites or clouds.

Alongside the new sites, Akamai has also introduced "premium" instances. From what we can glean from the company's website, these are higher-cost virtual machines powered by AMD's third-generation Epyc Milan processors.

These instances can be configured with 2–64 cores, 4–512GB of memory, and up to 7.2TB of SSD storage. They also support up to 40Gbit/sec of inbound traffic, though outbound traffic is capped at 4–12Gbit/sec. Prices range from $43 to $5,530 per month.
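
For the curious, provisioning one of these boxes looks much like spinning up any other Linode plan through the Linode API v4. The sketch below is illustrative only: the premium plan ID, region slug, and image name are assumptions on our part rather than confirmed catalogue entries.

    # Minimal sketch: creating an instance via the Linode API v4 (POST /v4/linode/instances).
    # The plan type, region slug, and image below are assumed placeholders, not confirmed names.
    import os
    import requests

    API_URL = "https://api.linode.com/v4/linode/instances"
    TOKEN = os.environ["LINODE_TOKEN"]  # personal access token

    payload = {
        "type": "g7-premium-8",         # hypothetical "premium" plan ID
        "region": "us-ord",             # assumed slug for the new Chicago site
        "image": "linode/ubuntu22.04",
        "label": "premium-test",
        "root_pass": os.environ["ROOT_PASS"],
    }

    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["id"], resp.json()["status"])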

It's worth noting that we've seen this kind of "premium" branding used by competing clouds – like DigitalOcean, which offers AMD's Epyc Milan and Intel's Cascade Lake and Ice Lake processors as an upsell.

The instances are complemented by an upgraded object storage service, with support for up to a petabyte of storage and a billion objects per bucket.
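
Because the service is S3-compatible, existing tooling ought to work unchanged against the bigger buckets. Below is a minimal sketch using boto3; the endpoint URL, bucket name, and credential variables are placeholders we've assumed for illustration.

    # Minimal sketch: writing to Akamai/Linode's S3-compatible object storage with boto3.
    # Endpoint, bucket name, and credential env vars are assumed placeholders.
    import os
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://us-east-1.linodeobjects.com",  # assumed cluster endpoint
        aws_access_key_id=os.environ["LINODE_OBJ_KEY"],
        aws_secret_access_key=os.environ["LINODE_OBJ_SECRET"],
    )

    # Buckets now advertise room for up to a petabyte and a billion objects,
    # but the calls themselves are plain S3.
    s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello from Akamai cloud")
    print(s3.head_object(Bucket="example-bucket", Key="hello.txt")["ContentLength"])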

Over the past few years, many smaller cloud providers have taken steps to differentiate themselves. Akamai in particular sees an opportunity to tie cloud compute into its CDN platform, and is teasing a global load balancer – due later this year – that it claims will route traffic dynamically between its points of presence.

However, it's hardly the only cloud contender looking to stand out. Vultr, for instance, offers low-cost GPU instances and Nvidia Multi-Instance GPU support to split a single accelerator between multiple users, and recently announced a Cloud Alliance to give customers access to third-party services and integrations.

DigitalOcean is also chasing the AI/ML wave, announcing its acquisition of GPU cloud operator Paperspace. ®
