Dept. of Energy to use world’s largest chip from Cerebras for AI

Cerebras Systems is collaborating with the U.S. Department of Energy to conduct massive deep learning experiments in government-run labs using the world’s largest chip for the first time, the company announced Tuesday.

Argonne National Lab and Lawrence Livermore National Lab will be the first labs in the multi-year partnership with Cerebras.

DOE and Cerebras will conduct artificial intelligence work to help “build and defend national competitive advantage” under the direction of President Trump’s executive order on AI from Feb. 11, said Dimitri Kusnezov, DOE deputy undersecretary for AI.

RELATED: Cerebras builds AI chip 56x larger than the biggest GPU

DOE’s “unmatched computing capabilities” will be combined with Cerebras’ Wafer Scale Engine (WSE), announced in August, said Cerebras CEO Andrew Feldman. The WSE is the largest chip ever built, 56 times larger than the biggest GPU, enabling AI at supercompute scale. It contains more than 1.2 trillion transistors, while the largest GPU contains 21 billion.
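The article’s 56x figure refers to the chip’s physical size; the transistor counts it quotes imply a similar order-of-magnitude gap, which a quick back-of-envelope check confirms:

```python
# Back-of-envelope check of the transistor counts quoted in the article.
wse_transistors = 1.2e12   # Cerebras WSE: more than 1.2 trillion transistors
gpu_transistors = 21e9     # largest GPU at the time: 21 billion transistors

ratio = wse_transistors / gpu_transistors
print(f"transistor ratio: ~{ratio:.0f}x")  # roughly 57x
```

Note that this is a transistor-count ratio; the 56x comparison in the article is a separate measure of chip size.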

The WSE will allow Lawrence Livermore to perform computation and data storage at massive scale, said Bronis de Supinski, chief technology officer for Lawrence Livermore.

Because the WSE keeps its cores and memory on a single piece of silicon, it can process data more quickly: data and power don’t have to pass through nearly as many choke points at the edges of conventional chips.
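The effect of those choke points can be sketched with a toy bandwidth model. The figures below are purely illustrative assumptions, not Cerebras or GPU specifications; the point is only that moving a tensor over an on-wafer fabric is far cheaper than pushing it across a chip-to-chip link:

```python
# Toy model: time to move one tensor across an interconnect.
# All bandwidth numbers are hypothetical, chosen only to illustrate
# why on-wafer communication avoids the off-chip choke point.

def transfer_time_s(bytes_moved: float, bandwidth_bytes_per_s: float) -> float:
    """Seconds needed to move `bytes_moved` at the given bandwidth."""
    return bytes_moved / bandwidth_bytes_per_s

tensor = 4 * 1024**3  # a 4 GiB activation tensor (illustrative size)

on_wafer = transfer_time_s(tensor, 100e12)  # assume 100 TB/s on-wafer fabric
off_chip = transfer_time_s(tensor, 300e9)   # assume 300 GB/s chip-to-chip link

print(f"on-wafer transfer:  {on_wafer * 1e3:.3f} ms")
print(f"off-chip transfer:  {off_chip * 1e3:.3f} ms")
```

Under these assumed numbers the off-chip transfer is hundreds of times slower, which is the choke-point effect the wafer-scale design is meant to sidestep.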

Cerebras posted a lengthy blog on Sept. 6 outlining the advantages of its large WSE and how it will work in massive-scale computing. The blog explains that wafer-scale chips had been tried before, in the 1970s and 1980s, by Texas Instruments, ITT and Trilogy, but success was out of reach. “It took gumption to build a new, more powerful chip technology…More cores, more memory and more low latency bandwidth between cores are all made possible by wafer scale…The Cerebras Wafer Scale Engine is ready to transform AI compute.”