
Memristor Neural Network Ready for Machine Learning


Professors Qiangfei Xia and J. Joshua Yang of the Electrical and Computer Engineering (ECE) Department at the University of Massachusetts Amherst headed up a multidisciplinary, multi-institutional team whose latest manuscript, entitled "Efficient and self-adaptive in-situ learning in multilayer memristor neural networks," has just been published in Nature Communications. As Xia and Yang summarized the findings in the manuscript, “This work proves that the memristor neural network is ready for machine-learning applications.”

In addition to Xia and Yang and their graduate students from the UMass ECE department, the other collaborators on this research and the resulting manuscript are at Hewlett Packard Labs in Palo Alto, California, and at the Air Force Research Laboratory, Information Directorate, in Rome, New York.

“Memristors with tunable resistance states are emerging building blocks of artificial neural networks,” the authors explained in their manuscript. “However, in-situ learning on a large-scale, multiple-layer memristor network has yet to be demonstrated because of challenges in device-property engineering and circuit integration.”

As the authors explained: “Traditional CMOS-based computers that implement artificial intelligence burn several orders of magnitude more power than a human brain. Neural networks built with emerging electronic devices and circuits have been proposed to address this issue.” 

One promising approach is to use analogue computation in memristor crossbars, because their tunable resistance states can both store information and perform computation in the same location, circumventing the so-called “von Neumann bottleneck”: the throughput limit imposed by shuttling data between separate memory and processing units in conventional computer architecture.

“In addition,” the authors wrote, “the memristor crossbars are capable of performing inference and training on analogue data acquired directly from sensors, using physical laws such as Ohm’s law for multiplication and Kirchhoff’s law for summation.” Ohm’s law states that the current through a conductor is directly proportional to the voltage across it, so each memristor multiplies its input voltage by its stored conductance; Kirchhoff’s current law states that the currents flowing into a circuit node must sum to the currents flowing out, so the per-device currents add naturally along each shared column wire.
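To make the multiply-accumulate idea concrete, here is a minimal numerical sketch (not the authors’ code) of an idealized crossbar: a vector of row voltages applied to an array of stored conductances yields per-device currents via Ohm’s law, and the column wires sum those currents via Kirchhoff’s current law, producing a full vector-matrix product in a single analogue step.

```python
import numpy as np

def crossbar_vmm(voltages, conductances):
    """Column currents of an idealized memristor crossbar.

    voltages:     shape (rows,)      -- input vector, in volts
    conductances: shape (rows, cols) -- stored weights, in siemens
    returns:      shape (cols,)      -- output currents, in amperes
    """
    # Ohm's law: each device passes current G[i, j] * V[i];
    # Kirchhoff's current law: each column wire sums its devices' currents.
    return conductances.T @ voltages

rng = np.random.default_rng(0)
V = rng.uniform(0.0, 0.2, size=8)          # read voltages on 8 rows
G = rng.uniform(1e-6, 1e-4, size=(8, 4))   # 8x4 array of conductances
print(crossbar_vmm(V, G))                   # 4 column currents, one analogue step
```

Because the weights are read where they are stored, no data needs to move between a memory and a processor for this operation, which is the point of the crossbar approach.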

“The avoidance of analogue-to-digital conversion and the highly parallel analogue computing without the need of shuttling data around would lead to significant increase in the speed-energy efficiency of computation,” wrote the authors.

“In this publication,” the authors explained, “we report a leap-forward in progress in this field by experimentally demonstrating highly efficient in-situ and self-adaptive learning…This in-situ training scheme enables the network to continuously adapt and update its knowledge as more training data become available.”

As the authors explained their new process: “Here, we monolithically integrate hafnium-oxide-based memristors with a foundry-made transistor array into a multiple layer neural network. We experimentally demonstrate in-situ learning capability and achieve competitive classification accuracy on a standard machine-learning dataset, which further confirms that the training algorithm allows the network to adapt to hardware imperfections.”
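The article does not include the team’s code, but the flavor of in-situ learning can be illustrated with a toy simulation. In the sketch below, program_conductances is a hypothetical device model (write noise plus a bounded conductance range), not the published hardware’s behavior; because each forward pass reads the weights as they actually landed on the devices, the gradient updates continually correct for those imperfections, which is the self-adaptive property the authors describe.

```python
import numpy as np

rng = np.random.default_rng(1)

def program_conductances(w, noise=0.01, w_max=1.0):
    # Hypothetical device model: every write lands with a small random
    # error, and the conductance range is bounded -- a stand-in for the
    # hardware imperfections the training has to tolerate.
    return np.clip(w + noise * rng.standard_normal(w.shape), -w_max, w_max)

# Toy two-class problem for a tiny two-layer network.
X = rng.standard_normal((200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W1 = program_conductances(0.1 * rng.standard_normal((8, 16)))
W2 = program_conductances(0.1 * rng.standard_normal((16, 1)))
lr = 0.5

for epoch in range(200):
    # Forward pass reads the weights as actually stored on the devices,
    # so the gradients computed below already reflect every write error.
    h = np.tanh(X @ W1)                              # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2).ravel()))      # output probability
    # Backward pass for the logistic loss.
    d2 = (p - y)[:, None] / len(X)
    gW2 = h.T @ d2
    d1 = (d2 @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ d1
    # Each update is itself an imperfect "write" back to the array.
    W2 = program_conductances(W2 - lr * gW2)
    W1 = program_conductances(W1 - lr * gW1)

print("training accuracy:", ((p > 0.5) == y).mean())
```

The same loop also shows the online character of the scheme: nothing about it requires the training set to be fixed in advance, so new examples can simply extend the loop and the stored conductances keep adapting.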

The authors projected, through simulations using the experimental parameters, that a larger network would further increase the classification accuracy, approaching the capability of its CMOS counterpart with much improved efficiency. They also noted that the learning process and its tolerance to hardware defects resemble the way the human brain works.

In conclusion, as the authors observed, “We trained our large multilayer network with standard machine-learning algorithms and achieved competitive classification accuracy for the database but with orders of magnitude higher in speed-energy efficiency.” (June 2018)