Demands from the cloud fuel hard drive innovation

The hard drive is dead. Long live the hard drive!

The cost of NAND flash has dropped precipitously in the last decade, and tape still wins in cost per bit, but hard disk drives (HDDs) continue to rule the data center.

Carl Che, CTO of Western Digital’s HDD business unit, said hard drives are used for about 90% of cloud data storage, even as more consumer devices like laptops have flash-based solid-state drives (SSDs). He said the innovation in hard drive technology over six decades remains an exciting story even as Western Digital’s customers have changed.

The focus has moved from individual PCs to scaling the data center with purpose-built hard drives, each with its own attributes, and with more consideration given to total cost of ownership (TCO). “Storage density is one of the key attributes,” he said. Energy consumption, measured in watts per terabyte, is also an important attribute because it plays into TCO and the trend toward sustainability.
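The watts-per-terabyte attribute Che describes can be expressed as a simple ratio, as in the sketch below. The drive names, power draws, and capacities here are illustrative assumptions, not vendor specifications.

```python
# Illustrative sketch: comparing drives on watts per terabyte, one of
# the TCO attributes described above. All figures are hypothetical,
# not actual product specifications.

def watts_per_tb(operating_watts: float, capacity_tb: float) -> float:
    """Energy-efficiency metric for a drive: lower is better."""
    return operating_watts / capacity_tb

# (operating watts, capacity in TB) - made-up example values
drives = {
    "20TB drive": (5.7, 20),
    "28TB drive": (6.0, 28),
}

for name, (watts, tb) in drives.items():
    print(f"{name}: {watts_per_tb(watts, tb):.3f} W/TB")
```

The point of the metric is that a higher-capacity drive can draw slightly more power yet still improve fleet-level efficiency, since fewer drives are needed per petabyte.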

Western Digital is addressing these trends with its shingled magnetic recording (SMR) technology, UltraSMR, which increases the capacity advantage compared with drives that use conventional magnetic recording (CMR). Previous generations of SMR HDDs provided a capacity advantage of 2TB relative to CMR.

The company recently announced a 28TB drive, which Che said is the highest-capacity drive shipping in the industry, increasing customers’ storage density and reducing their overall TCO.


Che said two key types of data aggregation are influencing the evolution of the hard drive and how storage is tiered: computational storage and archival storage. Hard drives handle both production data and archival storage, while some data is archived to tape. “I don't think anybody deletes any data these days,” he said.

Hard drives dominate the data storage hierarchy

There’s always a chance that “cold” data may be useful for generative AI, Che said, which is changing how data is valued. The value of data informs storage tiering, which matches an appropriate solution to how hot or cold the data is, as well as whether the associated workloads are read- or write-intensive.
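The tiering decision described above can be sketched as a small policy function: given how frequently data is accessed and whether its workload is write-intensive, pick a tier. The thresholds and tier assignments here are illustrative assumptions, not any vendor’s actual policy.

```python
# Minimal sketch of storage tiering: map data "temperature" (access
# frequency) and workload type to a storage tier. Thresholds are
# made-up assumptions for illustration only.

def pick_tier(accesses_per_day: float, write_intensive: bool) -> str:
    if accesses_per_day > 100 and write_intensive:
        return "flash SSD"   # hot, latency-sensitive data
    if accesses_per_day > 1:
        return "hard drive"  # warm production data
    return "tape"            # cold archival data

print(pick_tier(500, True))    # -> flash SSD
print(pick_tier(10, False))    # -> hard drive
print(pick_tier(0.01, False))  # -> tape
```

A real tiering engine would also weigh capacity cost, retrieval latency targets, and (per Che’s point about generative AI) the chance that cold data regains value, but the basic hot/warm/cold mapping is the same.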

The result is a segmented data center with flash SSDs, hard drives, and tape storage. Che said the high volumes of data in the AI era present both a huge opportunity and a challenge for innovation to manage zettabyte-scale growth. He said there’s no alternative technology that can scale effectively to meet that growth.

Che said Western Digital is working closely with cloud data center customers with a focus on TCO and sustainability as data volumes grow.

The next frontier for expanding hard disk capacity is heat-assisted magnetic recording (HAMR), a magnetic media technology applied to each disk. HAMR allows data bits to become smaller and more densely packed while remaining magnetically and thermally stable. A small laser diode attached to each recording head momentarily heats a tiny spot on the disk to write new data, enabling the recording head to flip the magnetic polarity of a single bit at a time. Because each bit is heated and cools in a nanosecond, the HAMR laser has no impact on drive temperature, or on the temperature, stability, or reliability of the media overall.

Thomas Coughlin, president of Coughlin Associates and incoming president of the IEEE, said all the major vendors offer hard drives with capacities exceeding 20TB with both Western Digital and Seagate having announced 24TB drives. He said 32TB HAMR drives are slated to go into production in 2024.

But advances in hard drive technology can also come from outside the drive itself: drive performance is further enhanced by the interfaces and how drives connect with the broader ecosystem.

“There's been a push to move everything to NVMe (Non-Volatile Memory Express), including the hard drives,” Coughlin said. SATA and SAS are both on their way out, he said, and there’s a push in the industry to have a unified interface so everything runs on PCIe.

“I wouldn't be surprised in the next few years if we just started to see hard disk drives with an NVMe interface on them.”

Interconnectivity just as critical as capacity

The NVMe storage protocol, which runs over the well-established Peripheral Component Interconnect Express (PCIe) standard, has enabled SSDs to fully harness the characteristics of NAND flash. But the specification was created on the premise that other non-volatile memories might be used as the storage media, and the possibility that spinning disks will connect via NVMe has been gaining traction.

The mature standard now includes NVM Express over Fabrics (NVMe-oF), which uses a transport protocol over a network to connect remote NVMe devices. Regular NVMe, by contrast, connects physical NVMe devices to a PCIe bus either directly or through a PCIe switch.

NVMe 2.0 added support for rotational storage media on NVMe, while the Open Compute Project is working to develop an NVMe specification for HDDs.

“NVMe is a protocol but it does have a fabric extension,” Mohamad El-Batal, director with the NVM Express consortium, told Fierce Electronics in an interview. He said a key goal of NVMe is to simplify storage: reducing the number of protocols reduces TCO.

The idea is that NVMe becomes ubiquitous, so that any storage device can be connected, whether it is nearline, primary, or backup storage. “It's all about modernization of the interface to the disk,” El-Batal said.

Like Coughlin, he sees SATA as becoming “archaic,” unable to keep up with the needs of AI and machine learning. NVMe makes sense for all storage because it allows the CPU or GPU processing the data direct access to that storage.

This direct access eliminates the need to traverse multiple layers of an OS stack and to perform protocol conversion from PCIe to SATA or SAS, El-Batal said. “My vision is that by the next decade we should see proliferation of NVMe becoming the ubiquitous interface to storage.”