
AMD adds 4th-gen Epycs to AWS in HPC and normie workload flavors

Silicon joins Amazon's homegrown Gravitons, Intel's Sapphire Rapids Xeons


AMD's fourth-gen Epyc processors have arrived on Amazon Web Services, in your choice of general-purpose and high-performance computing (HPC) flavors.

Kicking things off with Amazon's Hpc7a instances, the VMs are said to be optimized for compute- and memory-bandwidth-constrained workloads, such as computational fluid dynamics and numerical weather prediction. Amazon claims its latest instances are as much as 2.5x faster than its older Hpc6a instances, which used AMD's 3rd-gen Epycs.

The VMs are available with your choice of 24, 48, 96, or 192 vCPUs. Normally the vCPU count listed would refer to threads, but in this case Amazon says simultaneous multithreading (SMT) has been disabled to maximize performance, so each vCPU maps to a physical core. For the top-end 192-vCPU configuration, that means Amazon is using a pair of 96-core CPUs, likely the Epyc 9654, which can boost to 3.7GHz.
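The vCPU arithmetic above can be sketched in a few lines. Note the two-socket, 96-core topology is the article's inference, not an AWS-published figure:

```python
def vcpus(sockets: int, cores_per_socket: int, smt_enabled: bool) -> int:
    """vCPUs a host exposes: hardware threads when SMT is on, physical cores when off."""
    threads_per_core = 2 if smt_enabled else 1
    return sockets * cores_per_socket * threads_per_core

# Hpc7a: SMT disabled, so 192 vCPUs implies two 96-core sockets
print(vcpus(sockets=2, cores_per_socket=96, smt_enabled=False))  # → 192
```

The same 192-vCPU figure can also come from one 96-core socket with SMT on, which is the distinction that matters when comparing Hpc7a against the M7a instances below.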

What we do know is that Amazon isn't using AMD's cache-stacked Genoa-X CPUs. Those chips were launched at the chipmaker's datacenter and AI event in June, and can be had with up to 1.1GB of L3 cache. Even without the ocean of extra L3, AMD's 4th-gen Epycs added a number of features likely to benefit HPC and AI workloads. These include support for AVX-512, Vector Neural Network Instructions (VNNI), and BFloat16, plus higher memory bandwidth courtesy of up to 12 channels of DDR5 per socket.
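On a Linux guest you can verify which of these instruction-set extensions the VM actually exposes by parsing the CPU flags. This is a sketch: the flag names (avx512f, avx512_vnni, avx512_bf16) are the standard /proc/cpuinfo spellings, and reading that file assumes a Linux guest:

```python
def missing_features(cpuinfo_flags: str,
                     wanted=("avx512f", "avx512_vnni", "avx512_bf16")):
    """Return the wanted CPU feature flags absent from a /proc/cpuinfo flags line."""
    present = set(cpuinfo_flags.split())
    return [f for f in wanted if f not in present]

# On a live instance you would feed in the real flags line, e.g.:
#   flags_line = next(l for l in open("/proc/cpuinfo") if l.startswith("flags"))
sample = "fpu sse2 avx2 avx512f avx512_vnni avx512_bf16"
print(missing_features(sample))  # → []
```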

Amazon's Hpc7a instances are available in four SKUs with between 24 and 192 vCPUs each

While you can tune the core count of these instances — something that will benefit those running workloads with per-core licensing — the rest of the specs remain the same. All four SKUs come equipped with 768GB of memory, 300Gbps of inter-node networking courtesy of Amazon's Elastic Fabric Adapter, and 25Gbps of external bandwidth.

This standardization is no accident. By keeping the memory configuration consistent, customers can tune for a specific ratio of memory or bandwidth per core. This is beneficial as HPC workloads are more often limited by memory bandwidth than core count. This isn't unique to AMD's instances, either; Amazon has done the same on its previously announced Graviton3E-based Hpc7g instances.
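With memory fixed at 768GB across the lineup, the memory-per-core ratio falls straight out of the vCPU count. A quick sketch using the four sizes from the article:

```python
SKU_VCPUS = [24, 48, 96, 192]  # Hpc7a sizes described above
MEMORY_GB = 768                # constant across all four SKUs

for cores in SKU_VCPUS:
    print(f"{cores:>3} vCPUs -> {MEMORY_GB / cores:g} GB per core")
```

Picking a smaller SKU effectively trades idle cores for more memory, and more memory bandwidth, per active core, which is exactly the knob bandwidth-bound HPC codes want.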

The high-speed internode network is also key, as Amazon expects customers to distribute their workloads across multiple instances, like nodes in a cluster, as opposed to running them in a single VM. As such, they support Amazon's Batch and ParallelCluster orchestration platforms as well as its FSx for Lustre storage service.

AWS M7a instances get a performance boost

For those who don't need the fastest clock speeds or memory bandwidth, Amazon also launched the M7a general-purpose instances it teased back in June.

Amazon claims the instances deliver up to 50 percent higher performance than its Epyc Milan-based M6a VMs and can be had with up to 192 vCPUs, each capable of 3.7GHz. Assuming Amazon is using SMT here (we've reached out for clarification), 192 vCPUs suggests a single 96-core Epyc 9654, which coincidentally tops out at 3.7GHz.

Amazon's M7a instances can be had with up to 192 vCPUs and 768GB of memory

The VMs are available in a dozen different configurations, ranging from a single vCPU with 1GB of memory and 12.5Gbps of network bandwidth to a massive 192-vCPU instance with 768GB of memory and 50Gbps of network throughput.

All of these instances are supported by Amazon's custom Nitro data processing units (DPUs), which offload functions such as networking, storage, and security from the host CPU.

But, if you're looking for something on the Intel side of things, we recently took a look at the company's M7i instances, which are based on a pair of 48-core Sapphire Rapids Xeons.

Amazon's M7a instances are generally available now in the cloud provider's Northern Virginia, Ohio, Oregon, and Ireland datacenters. ®

