Why Intel killed its Optane memory business

Effort to create a new tier of memory flopped as rivals offered faster and more open alternatives


Analysis Intel CEO Pat Gelsinger has confirmed that Intel will quit its Optane business, ending its attempt to create and promote a tier of memory that is slightly slower than DRAM but offers the virtues of persistence and high IOPS.

The news should not, however, come as a surprise. The division has been on life support for some time following Micron's 2018 decision to terminate its joint venture with Intel and sell the fab in which the 3D XPoint chips behind Optane drives and modules were made. While Intel has signaled it is open to using third-party foundries, without the means to make its own Optane silicon the writing was on the wall.

As our sister site Blocks and Files reported in May, the sale only came after Micron had saddled Intel with a glut of 3D XPoint memory modules – more than the chipmaker could sell. Estimates put Intel's inventories at roughly two years' worth of supply.

In its poor earnings report for Q2, Intel said quitting Optane will result in a $559 million inventory impairment. In other words, the company is giving up on the project and writing off the inventory as a loss.

The move also signals the end of Intel's SSD business. Two years ago Intel sold its NAND flash business and manufacturing plants to SK hynix to focus its efforts on Optane.

Announced in 2015, 3D XPoint memory arrived in the form of Intel's Optane SSDs two years later. Unlike rivals' NAND-based SSDs, however, Optane SSDs couldn't compete on capacity or raw throughput. Instead, the devices offered some of the strongest I/O performance on the market – a quality that made them particularly attractive in latency-sensitive applications where sheer IOPS mattered more than bandwidth. Intel claimed its PCIe 4.0-based P5800X SSDs could reach up to 1.6 million IOPS.

Intel also used 3D XPoint in its Optane persistent memory DIMMs, particularly around the launch of its second- and third-gen Xeon Scalable processors.

From a distance, Intel's Optane DIMMs looked no different than run-of-the-mill DDR4, apart from, perhaps, a heat spreader. On closer inspection, however, the DIMMs could be had in capacities far greater than is possible with DDR4 memory today. Capacities of 512GB per DIMM weren't uncommon.

The DIMMs slotted in alongside standard DDR4 and enabled a number of novel use cases, including a tiered memory architecture that was essentially transparent to the operating system software. When deployed in this fashion, the DDR memory was treated as a large level-4 cache, with the Optane memory behaving as system memory.

While offering nowhere near the performance of DRAM, the approach enabled the deployment of very large, memory-intensive workloads, like databases, at a fraction of the cost of an equivalent amount of DDR4, without requiring software customization. That was the idea, anyway.

Optane DIMMS could also be configured to behave as a high-performance storage device or a combination of storage and memory.
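On Linux, those three modes were typically provisioned with Intel's ipmctl tool and the ndctl namespace utility. A sketch of the configurations on a hypothetical PMem-equipped Xeon box follows – the commands are illustrative, goals only take effect after a reboot, and exact region names vary by platform:

```shell
# Memory Mode: all Optane PMem acts as volatile system memory,
# with the installed DRAM demoted to a transparent cache in front of it.
ipmctl create -goal MemoryMode=100

# App Direct: PMem exposed as persistent regions for storage-like use.
ipmctl create -goal PersistentMemoryType=AppDirect
# After rebooting, carve a namespace out of the region as a DAX-capable device:
ndctl create-namespace --mode=fsdax

# Mixed mode: half the capacity as volatile memory, half as persistent App Direct.
ipmctl create -goal MemoryMode=50 PersistentMemoryType=AppDirect
```

In Memory Mode no application changes are needed; App Direct namespaces are typically formatted with a DAX-aware filesystem so software can memory-map persistent data directly.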

What now?

While DDR5 promises to address some of the capacity challenges that Optane persistent memory solved, with DIMM capacities of 512GB planned, it's unlikely to be price-competitive.

DDR isn't getting cheaper – at least not quickly – but NAND flash prices are plummeting as supply outpaces demand. All the while, SSDs are getting faster in a hurry.

Micron this week began volume production of 232-layer NAND that will push consumer SSDs into 10+ GB/sec territory. That's still not fast or low latency enough to replace Optane for large in-memory workloads, analysts tell The Register, but it's getting awfully close to the 17GB/sec offered by a single channel of low-end DDR4.
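That 17GB/sec figure is simple arithmetic: a DDR4 channel is 64 bits (8 bytes) wide, so low-end DDR4-2133 moving 2,133 mega-transfers per second peaks at roughly 17GB/sec. A quick back-of-the-envelope check:

```python
# Peak theoretical bandwidth of one DDR4-2133 channel.
transfers_per_sec = 2133 * 10**6   # 2133 MT/s
bus_width_bytes = 8                # 64-bit channel

bandwidth_gb_s = transfers_per_sec * bus_width_bytes / 10**9
print(f"DDR4-2133 peak per channel: {bandwidth_gb_s:.1f} GB/s")  # ~17.1 GB/s
```

Real-world throughput lands below that peak, which is why a 10+ GB/sec SSD starts to look uncomfortably close.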

So if NAND isn't the answer, then what? Well, there's actually an alternative to Optane memory on the horizon. It's called Compute Express Link (CXL), and Intel is already heavily invested in the technology. Introduced in 2019, CXL defines a cache-coherent interface for connecting CPUs, memory, accelerators, and other peripherals.

CXL 1.1, which will ship alongside Intel's long-delayed Sapphire Rapids Xeon Scalable and AMD's fourth-gen Epyc Genoa and Bergamo processors later this year, enables memory to be attached directly to the CPU over a PCIe 5.0 link.

Vendors including Samsung and Marvell are already planning memory expansion modules that slot into a PCIe slot like a GPU and provide a large pool of additional capacity for memory-intensive workloads.

Marvell’s Tanzanite acquisition this spring will allow the vendor to offer Optane-like tiered memory functionality as well.

What's more, because the memory is managed by a CXL controller on the expansion card, older and cheaper DDR4 or even DDR3 modules could be used alongside modern DDR5 DIMMs. In this regard, CXL-based memory tiering could be superior, as it doesn't rely on a specialized memory technology like 3D XPoint.
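On Linux, a CXL memory expander typically surfaces as a CPU-less NUMA node, so existing placement tools can already steer workloads at it. A hypothetical session – the node numbers and the binary name are assumptions for illustration:

```shell
# List the NUMA topology; a CXL Type-3 memory expander generally appears as a
# CPU-less node (say, node 1) alongside the socket-attached DRAM on node 0.
numactl --hardware

# Bind a memory-hungry app's allocations entirely to the CXL pool...
numactl --membind=1 ./bigcache

# ...or prefer local DRAM and spill into CXL memory only when DRAM runs short.
numactl --preferred=0 ./bigcache
```

That reuse of ordinary NUMA plumbing is part of CXL's appeal: no Optane-style special-purpose programming model is required for basic tiering.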

VMware is pondering software-defined memory that shares memory from one server to other boxes – an effort that will be far more potent if it uses a standard like CXL.

However, emulating some aspects of Intel's Optane persistent memory may have to wait until the first CXL 2.0-compatible CPUs – which will add support for memory pooling and switching – come to market. It also remains to be seen how software interacts with CXL memory modules in tiered memory applications. ®
