Dept. of Energy to use world’s largest chip from Cerebras for AI

Cerebras Systems and two government labs will use the Wafer Scale Engine chip—the world's largest—to perform AI research. The WSE is shown next to the largest GPU. (Cerebras)

Cerebras Systems is collaborating with the U.S. Department of Energy to conduct massive deep learning experiments in government-run labs using the world’s largest chip for the first time, the company announced Tuesday.

Argonne National Laboratory and Lawrence Livermore National Laboratory will be the first labs in the multi-year partnership with Cerebras.

DOE and Cerebras will conduct artificial intelligence work to help “build and defend national competitive advantage” under the direction of President Trump’s executive order on AI from Feb. 11, said Dimitri Kusnezov, DOE deputy undersecretary for AI.


RELATED: Cerebras builds AI chip 56x larger than the biggest GPU

DOE’s “unmatched computing capabilities” will be combined with Cerebras’ Wafer Scale Engine (WSE), announced in August, said Cerebras CEO Andrew Feldman. The WSE is the largest chip ever built—56 times larger than the biggest GPU—and is designed to enable AI at supercomputing scale. It contains more than 1.2 trillion transistors, while the largest GPU contains 21 billion.
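As a rough sanity check, the transistor counts cited above line up with the "56 times larger" claim; a quick back-of-the-envelope calculation:

```python
# Transistor counts cited in the article.
wse_transistors = 1.2e12   # Cerebras Wafer Scale Engine: >1.2 trillion
gpu_transistors = 21e9     # largest GPU: 21 billion

# Ratio of transistor counts -- roughly consistent with the
# "56x larger" figure, which refers to silicon area.
ratio = wse_transistors / gpu_transistors
print(f"WSE has ~{ratio:.0f}x the transistors of the largest GPU")
```

The transistor ratio (about 57x) tracks the 56x area figure because both chips are built on comparable process nodes.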

The WSE will allow Lawrence Livermore to perform computation and storage at massive scale, said Bronis de Supinski, chief technology officer for Lawrence Livermore.

Because the WSE is a single very large chip, it allows faster processing: data and power don’t have to pass through nearly as many choke points at the edges of conventional chips.

Cerebras posted a lengthy blog on Sept. 6 outlining the advantages of its large WSE and how it will work in massive scale computing. The blog explains that wafer-scale chips had been tried before in the 1970s and 1980s by Texas Instruments, ITT and Trilogy, but success was out of reach. “It took gumption to build a new, more powerful chip technology…More cores, more memory and more low latency bandwidth between cores are all made possible by wafer scale…The Cerebras Wafer Scale Engine is ready to transform AI compute.”
