Memristors on a chip will lessen power consumption

University of Michigan researchers developed a power-saving memristor array, shown seated inside a custom chip. (Robert Coelius, Michigan Engineering Marketing)

University of Michigan researchers have configured memristors on a chip to improve performance and lower energy consumption.

The term “memristor” is a mash-up of “memory” and “resistor.” Memristors store information as resistance levels, which allows memory and processing to happen in the same device. That in turn cuts out the data transfer delay that occurs in conventional computers, where memory is separate from the processor.

“The semiconductor industry has improved performance by making devices faster. Although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc., in a statement.


Rather than processing in 1s and 0s, memristors use resistances on a continuum. Neuromorphic computing applications can take advantage of the analog nature of memristors, but ordinary numerical calculations cannot, because the electric current passing through a memristor device isn’t precise enough.

RELATED: "AI chips advance with Intel's Pohoiki Beach"

So, Lu and his team defined current as specific bit values of 0 or 1, and then mapped large math problems into smaller blocks, which they call memory processing units. They believe these units can be used for machine learning and AI, as well as simulations used in weather forecasting.
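The blocking idea can be illustrated with a short sketch. Assuming each memristor array handles one 32 x 32 tile (the tiling scheme below is an illustration, not the team's exact method), a larger matrix-vector product decomposes into array-sized pieces whose partial results are summed:

```python
import numpy as np

# Hypothetical sketch: a problem larger than one crossbar array is
# split into array-sized tiles (loosely, "memory processing units").
# Each 32 x 32 tile would map onto one memristor array; the partial
# column results are accumulated in the output vector.

BLOCK = 32
rng = np.random.default_rng(0)
A = rng.random((64, 64))     # large matrix
x = rng.random(64)           # input vector

y = np.zeros(64)
for i in range(0, 64, BLOCK):
    for j in range(0, 64, BLOCK):
        # each tile is one array-sized matrix-vector product
        y[i:i + BLOCK] += A[i:i + BLOCK, j:j + BLOCK] @ x[j:j + BLOCK]

# the tiled result matches the full product
assert np.allclose(y, A @ x)
```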

Lu said multiplication and addition can be done in a single step, rather than multiplying and then summing separately as a conventional processor does. This is possible because the memristors are arranged to represent numbers in rows and columns, with voltage pulsed along the rows; the current measured at the end of each column gives the answer.
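The physics behind that one-step operation can be mimicked in a few lines. In this sketch (the matrix and vector values are arbitrary examples), each cell stores a number as a conductance, each row carries an input voltage, and the column currents sum automatically:

```python
import numpy as np

# Illustration of crossbar matrix-vector multiplication.
# Each cell stores a number as a conductance G[i][j]; a voltage
# pulse V[i] is applied along row i. By Ohm's law each cell passes
# current G[i][j] * V[i], and by Kirchhoff's current law the
# currents in column j add up on the shared column wire, so the
# current read at the bottom of each column is a full dot product.

G = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # stored matrix (conductances)
V = np.array([5.0, 6.0])     # input vector (row voltages)

column_currents = G.T @ V    # what the hardware would read out
print(column_currents)       # [23. 34.]
```

Multiplication (Ohm's law) and addition (Kirchhoff's law) thus happen in the same physical step, with no separate multiply and accumulate stages.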

To demonstrate their work, the team solved partial differential equations describing a plasma reactor, like those used in integrated circuit fabrication. The equations were solved on a 32 x 32 memristor array.
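Why a PDE maps well to such hardware: discretizing a differential equation turns it into repeated matrix-vector products, exactly the operation the crossbar performs in one step. As a minimal sketch (the 1-D Poisson equation and Jacobi solver here are stand-in examples, not the team's plasma-reactor model):

```python
import numpy as np

# Hedged illustration: solving u'' = f on a grid by finite
# differences reduces the PDE to the linear system A u = f, and an
# iterative solver like Jacobi repeats a matrix-vector product
# (R @ u below) that a memristor array could compute in one step.

n = 32                           # matches the 32 x 32 array size
h = 1.0 / (n + 1)
f = np.ones(n)                   # constant source term

# standard second-difference matrix for u''
A = (np.diag(np.full(n, -2.0)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

u = np.zeros(n)
D_inv = 1.0 / np.diag(A)         # inverse of the diagonal part
R = A - np.diag(np.diag(A))      # off-diagonal part
for _ in range(5000):
    u = D_inv * (f - R @ u)      # R @ u is the crossbar-friendly step

# converged solution satisfies the discretized equation
assert np.allclose(A @ u, f, atol=1e-4)
```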

In future computing systems, many such arrays could work together, potentially cutting energy consumption by a factor of 100. With that energy efficiency, memristor arrays could fit in small devices like smartphones, putting AI processing (such as voice-assistant commands) right on the phone instead of in the cloud.

The researchers published their work in Nature Electronics.
