Neuromorphic Chips: Ultimate Parallel Processors

Purdue University is working with semiconductor researchers, including Intel research scientist Charles Augustine of its Circuits Research Lab (Hillsboro, Ore.), to develop spin-based neuromorphic microchips as the ultimate parallel processors, consuming as little as one three-hundredth the power of today's circuits.

Traditional semiconductor chips use electrical charge to store information, requiring thousands of electrons to be transferred onto a storage device, like a capacitor, until its voltage exceeds a threshold. However, switching from encoding digital ones and zeros with electrical charge to using the spin-state of electrons can drastically cut the energy consumption of electronic circuits.
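The "thousands of electrons" figure is easy to check with a back-of-the-envelope calculation. The sketch below uses illustrative values I have assumed (a roughly 1 fF storage node driven to a 0.8 V threshold, not figures from the article) to estimate how many electrons a charge-based node accumulates and how much energy each switch costs.

```python
# Back-of-the-envelope sketch with assumed, illustrative values
# (~1 fF node, 0.8 V threshold) -- not figures from the article.

E_CHARGE = 1.602e-19  # elementary charge, coulombs


def electrons_to_charge(capacitance_f, threshold_v):
    """Electrons transferred to raise a capacitor to the threshold voltage (N = C*V/q)."""
    return (capacitance_f * threshold_v) / E_CHARGE


def switching_energy_j(capacitance_f, threshold_v):
    """Energy stored on the capacitor at the threshold: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * threshold_v ** 2


n = electrons_to_charge(1e-15, 0.8)   # roughly 5,000 electrons
e = switching_energy_j(1e-15, 0.8)
print(f"{n:.0f} electrons, {e:.2e} J per switch")
```

Even for these small assumed values, thousands of electrons must move per bit, which is the overhead a one-electron spin encoding would avoid.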

When bipolar spin neurons are combined with memristors (phase-change memory), input signals can program self-adaptive weights sandwiched between metal interconnects. SOURCE: Purdue

Spin states are inherent to electrons, which are constantly spinning, imparting a momentum to their electrical charge that can be oriented “up” or “down.” Such spin-polarized electrons can encode digital ones and zeros using far less energy than piling up charge on a capacitor. Ideally, a single electron could store a digital one as “up” spin and a digital zero as “down” spin, enabling the ultimate downsizing for parallel processors: one bit per electron. And for intrinsically parallel applications, such as emulating the billions of neurons in the human brain, the ultra-low power of spin-polarized encoding could enable the massively parallel processors of the future.

“We plan to progress on system-level modeling of large-scale neuromorphic architectures based on the proposed device-circuit scheme,” said Mrigank Sharad, a research fellow at Purdue. “We are discussing the prospects of prototype development with Intel and some other groups.”

In the paper authored by Intel’s Augustine and Purdue’s Sharad (along with professor Kaushik Roy and doctoral candidate Georgios Panagopoulos at Purdue), entitled “Proposal for Neuromorphic Hardware Using Spin Devices,” various parallel processing applications for modeling the neural networks of the brain were evaluated using spin encodings. Simulation results for common image-processing tasks routinely performed by neural networks, such as edge extraction and motion detection, showed at least 100-times less energy consumption than conventional parallel processors when using spin-based encodings.
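Edge extraction, one of the benchmarked workloads, is at heart a convolution: every output pixel is a small weighted sum of its neighbors, and all of those sums are independent, which is exactly what makes the task so parallel. The sketch below illustrates the workload itself (a plain Laplacian convolution in pure Python), not the spin-based circuit from the paper.

```python
# Minimal edge-extraction sketch: convolve a grayscale image with a
# Laplacian kernel. Illustrates the workload, not the spin hardware.

LAPLACIAN = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]


def extract_edges(image):
    """Convolve a 2-D grayscale image (list of lists) with the Laplacian kernel."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):          # skip the one-pixel border
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += LAPLACIAN[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc            # nonzero only where intensity changes
    return out


# A flat region gives zero response; a vertical step edge gives a nonzero band.
img = [[0, 0, 9, 9]] * 4
edges = extract_edges(img)
```

Every inner weighted sum could in principle be computed by its own tiny neuron circuit at once, which is why the paper evaluates such tasks on massively parallel spin hardware.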

And by combining spin-polarization with new materials, such as the circuit element called a “memristor” by its inventor, University of California, Berkeley professor Leon Chua, the Purdue and Intel researchers showed how associative memory, pattern matching and other inherently parallel applications can be accelerated with spin encodings.
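The reason a memristor array suits associative memory is that a crossbar of programmable conductances computes all the input-to-pattern dot products in a single analog step: applying the input as voltages and reading the column currents is a vector-matrix multiply by Ohm's and Kirchhoff's laws. The pure-Python stand-in below illustrates that idea; the function names and stored patterns are my own illustrative choices, not from the paper.

```python
# Sketch of memristor-crossbar associative recall. Conductances stand in for
# stored patterns; output currents are the parallel dot products.
# Names and values are illustrative, not from the paper.

def crossbar_currents(conductances, input_voltages):
    """Each output current is sum(G[i][j] * V[j]) -- Ohm's + Kirchhoff's laws."""
    return [sum(g * v for g, v in zip(row, input_voltages))
            for row in conductances]


def best_match(stored_patterns, probe):
    """Associative recall: index of the stored pattern with the largest overlap."""
    currents = crossbar_currents(stored_patterns, probe)
    return max(range(len(currents)), key=currents.__getitem__)


patterns = [[1, 0, 1, 0],   # pattern 0
            [0, 1, 0, 1],   # pattern 1
            [1, 1, 0, 0]]   # pattern 2
noisy_probe = [1, 0, 1, 1]  # closest to pattern 0
idx = best_match(patterns, noisy_probe)
```

In hardware the whole `crossbar_currents` loop collapses into one read cycle, which is where the claimed acceleration for pattern matching comes from.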

Funding for this research is being provided by the Semiconductor Research Corp. (SRC) and the Focus Center Research Program (FCRP) of the Defense Advanced Research Projects Agency (DARPA).

Posted by R. Colin Johnson, Geeknet Contributing Editor
11 comments
armingaud

IBM manufactured around 1995 a neuron chip called the ZISC ("zero instruction set computer"), designed and manufactured in its Corbeil-Essonnes laboratory. An excellent demonstration had it drive a (simulated) car on a (simulated) track on a PC screen (the ZISC was mounted on a pluggable card) with excellent performance.

Jaime Soto

I have been programming parallel math processes with Xilinx FPGAs since the late '80s, across all families. Today I use hierarchical block diagrams to represent functions connected by signal buses; I no longer use text as the main thing, just to place some templates at the bottom hierarchy of some blocks. In general I prefer schematics as the best way to represent truly parallel and independent math processes.

But I still have to fight against complicated team members who use sequential, text-based "programs". I just hate that.

PatrickCowett

I'm not a fan of the big dinosaur corporations, but the technology itself here looks great! For a long time, I've been looking forward to graduation from those old bottleneck processing dinosaurs that are so ridiculously popular today! I read about FPGA-based "CPUs" coming out in new laptops from a start-up company about 15 years ago - and it never happened. This technology looks far better! I'm very glad to see a vast improvement in energy efficiency too.

The old paradigm of faster and faster (and hotter) bottleneck processors just doesn't cut it any more. And these fake "parallel" processors that are just multiple "multi-core" bottlenecks connected in parallel, effectively widening the bottleneck, are a whimsical baby step in the right direction, but they're just stalling, delaying the inevitable. The old paradigm of relying on bottlenecks has long passed its time. I am quite ready for a real (completely) parallel architecture to materialize.

And like the inventor of this "memristor" technology has said, the brain is an excellent example, from nature, of a real parallel processor. It is so simple in design and it has no need to burn itself up running at 5 GHz hahaha! We've had the technology for some time now to create such a processor in ICs. It's very simple to do and we certainly can put enough FETs in there to make a good-sized one, but no one with the resources has decided to do it yet (that I'm aware of). Glad to see this is changing!

One suggestion about the name "memristor" - why not call it a "quantistor" (from quantum-physics: the spin), a "spinsitor" (or something similar) or a "neuristor"? With that last one, creating true neural nets, as it looks like is the inventor's intention, will mean moving beyond the digital, into the analog. The brain (the ultimate prototype) is an unclocked analog device.

jmaglitta

Amazing. Wondering: Can they still accurately be called "chips"?

ValentinIle

@PatrickCowett---These properties make the nerve axon capable of logic operations. In 1960 a semiconductor device called a neuristor was devised, capable of propagating a signal in one direction without attenuation and able to perform numerical and logical operations. The neuristor computer, inspired by a natural model, imitates the dynamic behaviour of natural neural information networks; each circuit can serve sequentially for different operations in a manner similar to that of the nervous system. --- http://www.britannica.com/EBchecked/topic/66160/bionics#ref84374

R Colin Johnson

@jmaglitta When this research first originated there was much speculation about building hybrid man-machines, and in the 1980s there were experimental chips that relocated real neurons (from animals) onto chips, where they grew interconnecting networks. Today, however, scientists are more interested in emulating neural networks with electronics, or at most augmenting real brains with electronic implants.