UK govt finds £225M for Isambard-AI supercomputer powered by Nvidia

5,448 Grace Hopper superchips and 200 PFLOPS get you somewhere in the global public top ten

The UK government says it will cough up £225 million ($273 million) for a supercomputer capable of more than 200 petaFLOPS of double-precision performance, with the University of Bristol to house the machine and Nvidia providing the core computing components.

Detailed during the UK's AI Safety Summit on Wednesday, the machine – dubbed Isambard-AI – is expected to come online next year and may be able to help organizations studying everything from automated drug discovery and climate change to the application of neural networks in robotics, big data, and national security.

"Isambard-AI represents a huge leap forward for AI computational power in the UK," Simon McIntosh-Smith, director of the Isambard National Research facility, argued in a statement. "Today's Isambard-AI would rank in the top 10 fastest supercomputers in the world and, when in operation later in 2024, it will be one of the most powerful AI systems for open science anywhere."

The number-ten most-powerful publicly known supercomputer in the world right now is China's Tianhe-2A, which theoretically peaks at 100 petaFLOPS and benchmarks at 61 petaFLOPS. A peak 200-petaFLOPS machine at FP64 in the UK would rival America's Summit, which was number one until 2020 and today is fifth in the public rankings.

The United Kingdom's fastest machine on the current Top500 public list is ARCHER2, in 30th place, with a theoretical peak of 26 petaFLOPS and a benchmarked score of 20 petaFLOPS. On paper, then, Isambard-AI is roughly ten times faster.
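A quick back-of-the-envelope check of those figures in Python, using the numbers quoted above rather than anything from an official Top500 submission:

# Peak and benchmarked figures quoted above, in petaFLOPS
isambard_ai_peak = 200    # claimed FP64 theoretical peak for Isambard-AI
archer2_peak = 26         # ARCHER2 theoretical peak
archer2_benchmarked = 20  # ARCHER2 measured Linpack score

print(f"vs ARCHER2 peak:        {isambard_ai_peak / archer2_peak:.1f}x")         # ~7.7x
print(f"vs ARCHER2 benchmarked: {isambard_ai_peak / archer2_benchmarked:.1f}x")  # 10.0x

In other words, the roughly ten-times figure compares Isambard-AI's theoretical peak against ARCHER2's benchmarked score; peak-to-peak the gap is nearer eight times.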

According to the University of Bristol, Isambard-AI, first talked about in September, will employ 5,448 Nvidia GH200 Grace Hopper Superchips.

Announced in 2022, Nvidia's GH200 meshes [PDF] a 72-core Arm Grace CPU and a Hopper GPU over a 900GB/s NVLink-C2C interconnect. Each superchip comes equipped with up to 480GB of LPDDR5x memory and either 96GB or 144GB of high bandwidth memory, depending on the configuration.

The chips will be integrated into a liquid-cooled chassis developed by HPE Cray, networked using the manufacturer's custom Slingshot 11 interconnect, and supported by almost 25 petabytes of storage.

The full system will be housed in a self-cooled, self-contained datacenter alongside the previously announced Isambard-3 machine at the National Composites Centre (NCC), located in the Bristol and Bath Science Park. It will feature a heat-reuse system to warm neighbouring buildings.

Isambard-3, which is due to come online next northern spring, will offer early access to UK scientists during the first phase of the broader Isambard-AI project. That system comprises 384 Nvidia Grace CPU Superchips, each of which packs a pair of 72-core Arm-compatible CPUs and up to 960 gigabytes of LPDDR5x memory.

Among the first beneficiaries of Isambard-AI will be Britain's Frontier AI Task Force, which aims to mitigate the risks advanced AI applications pose to national security. The task force will also work with the AI Safety Institute to develop a research program to evaluate the safety of machine learning models.

Flexible precision an opportunity for both AI and HPC

While Isambard-AI will no doubt be capable at double-precision high-performance computing (HPC) applications, a major focus of the system is AI and other workloads that can take advantage of lower-precision floating-point calculations. Turn the precision slider down to FP8 and optimize for sparsity, and Nvidia says it expects researchers will be able to extract 21 or more exaFLOPS from the system. Presumably that's the theoretical peak.
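That figure roughly checks out against Nvidia's spec sheets. As a sanity check – assuming roughly 3,958 teraFLOPS of sparse FP8 throughput per Hopper GPU, Nvidia's commonly quoted H100 number rather than anything given in this announcement:

superchips = 5_448
fp8_sparse_tflops_per_gpu = 3_958  # ASSUMPTION: Nvidia's quoted sparse FP8 spec per Hopper GPU

total_exaflops = superchips * fp8_sparse_tflops_per_gpu / 1_000_000  # TFLOPS -> exaFLOPS
print(f"Theoretical sparse FP8 peak: {total_exaflops:.1f} exaFLOPS")  # ~21.6

That lines up with the "21 or more" claim.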

As the performance figures imply, lower-precision floating-point calculations trade accuracy for speed. FP8 and FP16 are widely employed for AI training and inferencing for this reason, but as our sibling site The Next Platform has previously pointed out, reduced precision also has applications in HPC.

Researchers at Riken have been exploring the use of 32-bit or even 16-bit mathematics in HPC on the Fugaku supercomputer for years now. Meanwhile, the European Centre for Medium-Range Weather Forecasts has already demonstrated the benefits of 32-bit precision in weather and climate modelling. Researchers at the University of Bristol have had similar success with their own atmospheric simulations, eking out a 3.6x speedup by dropping down to lower precision.
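For a flavour of the trade-off being described – a purely illustrative NumPy sketch, not the Bristol or ECMWF code – here is the same matrix multiplication run in FP64 and FP32:

import time
import numpy as np

# Illustrative only: one matrix multiplication at two precisions
rng = np.random.default_rng(0)
a64 = rng.standard_normal((4096, 4096))  # float64 by default
b64 = rng.standard_normal((4096, 4096))
a32, b32 = a64.astype(np.float32), b64.astype(np.float32)

t0 = time.perf_counter(); c64 = a64 @ b64; t_fp64 = time.perf_counter() - t0
t0 = time.perf_counter(); c32 = a32 @ b32; t_fp32 = time.perf_counter() - t0

rel_err = np.abs(c32.astype(np.float64) - c64).max() / np.abs(c64).max()
print(f"FP64: {t_fp64:.2f}s  FP32: {t_fp32:.2f}s  speedup: {t_fp64 / t_fp32:.1f}x")
print(f"Worst-case relative error at FP32: {rel_err:.1e}")

On a CPU the gain from halving precision is typically around two times; on accelerators with dedicated low-precision units the gap is far wider, which is what makes results like that 3.6x speedup – and the exaFLOPS claims above – possible.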

Because of this, Isambard-AI's support for floating-point precisions ranging from FP64 down to sparse FP8 should allow researchers to explore low- and mixed-precision workloads in both emerging AI and HPC arenas.

The Register expects to learn more about Isambard-AI and other upcoming systems when we attend the Supercomputing 2023 event in Denver, Colorado, later this month.

The British government also mentioned on Wednesday another forthcoming UK super called Dawn that we'll cover on Thursday. Nvidia described Isambard-AI as "Britain’s most powerful supercomputer" and the government said it will be the nation's "most advanced computer."

Suffice to say, the people behind Dawn say their computer will be the fastest. We guess we'll find out for sure when the machines are eventually powered on and benchmarked. ®
