New synthetic learning could inspire the future of neuromorphic AI
Source: Hainguynrp / Pixabay
A new peer-reviewed study published this week in PNAS shows how learning, an important aspect of human intelligence, can be recreated in synthetic matter, a discovery that may lead to new forms of artificial intelligence (AI) and neuromorphic computing in the future.
“Habituation and sensitization (non-associative learning) are among the most basic forms of learning and memory behavior present in organisms, enabling adaptation and learning in dynamic environments,” wrote the authors of the study, who are affiliated with Rutgers University, Purdue University, the University of Georgia, and Argonne National Laboratory. “The emulation of such intelligence features found in nature in the solid state may serve as inspiration for algorithmic simulations in artificial neural networks and potential use in neuromorphic computing.”
Human intelligence and the biological brain have long served as inspiration for the architecture and design of machine learning in artificial intelligence. Neuromorphic computing, also known as neuromorphic engineering, is a growing field of study that seeks to replicate aspects of human cognition in modern electronic devices such as computers. The goal of neuromorphic computing is to overcome the limitations of the von Neumann architecture with a solution that more faithfully mimics the biological brain.
The basis of most computer hardware today is known as the von Neumann architecture. In 1945, the Hungarian-born American mathematician John von Neumann (1903-1957) published a computer architecture design consisting of inputs and outputs, a memory unit, and a central processing unit (CPU) containing a control unit (CU), an arithmetic logic unit (ALU), and a variety of registers. The disadvantages of the von Neumann architecture are that it is difficult to integrate long-term memory storage and that a lot of power is required to shuttle data between the processing and memory units.
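To make the architecture described above concrete, here is a minimal sketch of the von Neumann fetch-decode-execute cycle, in which a single shared memory holds both instructions and data, and every step moves data between memory and the CPU's registers (the traffic behind the power cost mentioned above). The tiny instruction set here is invented for illustration and is not any real machine's.

```python
# Toy von Neumann machine: one memory array holds both the program and
# the data it operates on; instruction set (LOAD/ADD/STORE/HALT) is a
# made-up illustration.
memory = [
    ("LOAD", 6),   # put memory[6] into the accumulator
    ("ADD", 7),    # add memory[7] to the accumulator
    ("STORE", 8),  # write the accumulator back to memory[8]
    ("HALT", 0),
    0, 0,          # padding
    2, 3, 0,       # data: operands at addresses 6 and 7, result slot at 8
]

pc, acc = 0, 0                    # program counter and accumulator registers
while True:
    op, addr = memory[pc]         # fetch and decode from the shared memory
    pc += 1
    if op == "LOAD":
        acc = memory[addr]        # every operand crosses the memory-CPU bus
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[8])  # prints 5 (2 + 3)
```

Note how even this three-instruction program makes five trips across the memory-CPU boundary; that round-tripping is the "von Neumann bottleneck" that neuromorphic designs aim to avoid.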
The von Neumann architecture is very different from the functioning of a biological brain, where computation and memory are heavily distributed among its roughly 80 billion neurons, which act as simple processing units, and memory is not centralized but rather involves several areas of the brain. In neuroanatomy, the prefrontal cortex, amygdala, hippocampus, and cerebellum are major areas among the many parts of the brain associated with memory.
Machine learning is a method of allowing computers to “learn” without hard coding or explicit programming. Deep learning is a subset of machine learning. The deep learning artificial neural network architecture, with its artificial neurons (nodes), is an example of brain-inspired design. Since the design of AI machine learning is partly inspired by the biological brain, the von Neumann architecture presents computational challenges for deep learning.
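The brain-inspired node mentioned above can be sketched in a few lines: each artificial neuron takes a weighted sum of its inputs plus a bias and passes it through an activation function. The weights, inputs, and choice of ReLU activation below are illustrative assumptions, not details from the study.

```python
# Minimal sketch of one artificial neuron (node), the building block of
# deep learning networks. All numbers here are illustrative.

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, passed through a ReLU activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, weighted_sum)  # ReLU: clamp negative sums to zero

# Example: three inputs feeding one node
output = artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=0.1)
print(output)  # ~0.5
```

A network stacks many such nodes in layers, and "learning" consists of adjusting the weights and biases from data rather than programming them by hand.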
To demonstrate learning in synthetic matter, the researchers used nickel oxide (NiO), a quantum material whose properties are not fully explained by classical physics. NiO is a Mott insulator, belonging to a class of materials that behave like insulators when measured even though their band structure suggests they should conduct electricity.
Using gases to stimulate the quantum material at room temperature and above, the scientists found that nickel oxide exhibited habituation and sensitization like that of Aplysia, a genus of medium-sized to large sea slugs.
“Similar to biological species such as Aplysia, habituation and sensitization of NiO possess a time-dependent plasticity which relies on both the strength and the time interval between stimuli,” the researchers reported. “A combination of experimental approaches and first-principles calculations reveals that such learning behavior of NiO results from the dynamic modulation of its defect and electronic structure.”
The scientists believe that if a quantum material can recreate forms of habituation and sensitization learning, then AI could potentially be built directly into the material itself. This would reduce energy costs while increasing overall computing efficiency and performance.
“An artificial neural network model inspired by such non-associative learning is simulated to show the benefits in an unsupervised clustering task in terms of accuracy and reduction of catastrophic interference, which could help alleviate the stability-plasticity dilemma,” the researchers concluded. “Mott insulators can therefore serve as building blocks to examine learning behaviors noted in biology and inspire new learning algorithms for artificial intelligence.”
Copyright © 2021 Cami Rosso. All rights reserved.