Cerebras Systems is building a huge chip to process deep learning and artificial intelligence workloads. The Cerebras Wafer-Scale Engine (WSE) is 56 times larger than the next-largest chip, delivering more compute, memory, and communication bandwidth.
The WSE is an 8.5-inch-square chip with a pattern of lines etched in silicon. It provides parallel-processing power that enables AI research at previously impossible speed and scale.
The Cerebras WSE contains 1.2 trillion transistors and 400,000 AI-optimized cores. By comparison, the largest graphics processing unit (GPU), the Nvidia V100, has 21.1 billion transistors.
Nvidia is capitalizing on the surge in demand for AI chips. But even with dozens of Nvidia’s GPUs, it can take weeks to “train” a neural network. And bundling multiple GPUs together in a computer shows diminishing returns once more than eight of the chips are combined, according to an article in Fortune.
The diminishing returns occur because the chips must pass data back and forth, and this sharing over the wires between the chips slows down processing. Putting all of the processing power on one giant wafer removes the need for the wires that connect multiple chips.
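The saturation effect described above can be sketched with a toy scaling model: each added GPU contributes compute, but inter-chip communication adds overhead that grows with chip count, so total speedup peaks and then falls. The numbers below (a 5% per-chip communication cost) are illustrative assumptions, not measured Cerebras or Nvidia figures.

```python
# Toy Amdahl-style model of multi-GPU scaling with communication overhead.
# comm_fraction is a hypothetical per-chip cost of shuttling data over
# the wires between chips; it is not based on real hardware data.

def speedup(n_gpus, comm_fraction=0.05):
    """Estimate speedup: ideal parallel compute time shrinks with more
    GPUs, while wire overhead grows with the number of chips."""
    compute_time = 1.0 / n_gpus                # ideal parallel compute
    comm_time = comm_fraction * (n_gpus - 1)   # grows as chips are added
    return 1.0 / (compute_time + comm_time)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} GPUs -> {speedup(n):.2f}x speedup")
```

Under these assumed numbers the speedup improves up to a handful of chips and then declines, mirroring the article's point that a single wafer avoids the inter-chip wires entirely.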
But chip makers have been trying to create a giant wafer for years without success. Cerebras achieved its breakthrough with proprietary software that solves problems such as chip defects and the need for redundant circuits.
Cerebras employs about 150 people in California. It’s raised $112 million to date, according to Crunchbase. The company is producing its WSEs in partnership with Taiwan Semiconductor Manufacturing.