24F Final Project: Neuromorphic Computing

By Chris Mecane

Neuromorphic computing, also known as neuromorphic engineering, is a fairly recent development in the realm of computational neuroscience. The goal of neuromorphic computing is to construct computers whose hardware and software mimic the anatomy and physiology of the human brain via replication of neural structure and emulation of synaptic communication.[1] Though still in its nascent years, the field is a source of hope for the future of computing and artificial intelligence.

Background

Though neuromorphic computing presently remains more promise than practice, it is widely agreed that the foundations of the field were laid by Caltech's Carver Mead in the late 1980s.[2] Using human neurobiology as his model, Mead contrasted the brain's hierarchical encoding of information, such as its ability to instantly integrate a sensory perception with an associated emotion, with a computer's glaring inability to encode in this way; he further noted the brain's unique capacity to combine signal processing with gain control. Mead nonetheless recognized that the brain would need to be far better understood before an accurately designed neuromorphic computer could be realized; he emphasized that it is barely understood how the brain, for a given energy budget, performs many times more computations than even the most advanced computer.[3] Mead, along with his PhD student Misha Mahowald, provided the first practical demonstration of neuromorphic computing's potential with their 1991 silicon retina, which successfully imitated the output signals of a real human retina, most notably in response to moving images.[4]

How Neuromorphic Computing Works

Neuromorphic computers are partly defined by how they differ from the widespread von Neumann computers, which have distinct CPUs and memory units and store data in binary. In contrast, neuromorphic computers, in their effort to process information analogously to the human brain, integrate memory and processing into a single mechanism regulated by its neurons and synapses[5]; neurons receive "spikes" of information, with the timing, magnitude, and shape of each spike all being meaningful attributes in the encoding of numerical information. As such, neuromorphic computers are said to be modeled using spiking neural networks (SNNs); spiking neurons behave similarly to biological neurons in that they incorporate characteristics such as threshold values for neuronal activation and synaptic weights that can change over time.[1] These features can all be found in existing, standard neural networks capable of continual learning, such as perceptrons. Neuromorphic computers, however, would surpass these traditional networks in their ability to incorporate neuronal and synaptic delay: as information flows in, "charge" accumulates in a neuron until the surpassing of some charge threshold produces a "spike," or action potential. If the charge does not exceed the threshold within some given time period, it "leaks."[1] The neuronal and synaptic makeup of neuromorphic computers additionally allows, unlike traditional computers, for many parallel operations to run in different neurons at the same time; von Neumann computers process information sequentially. Through their parallel processing, along with their integration of processing and memory, neuromorphic computers offer a glimpse of a future of vastly more energy-efficient computation.[5]

[Image: Snn vs vn.jpg — a spiking neural network contrasted with the von Neumann architecture]
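The accumulate-spike-leak behavior described above can be made concrete with a short sketch. The following Python snippet is a minimal, illustrative leaky integrate-and-fire neuron; the threshold, leak rate, and input values are made up for demonstration and do not correspond to any particular neuromorphic system.

<pre>
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """A toy leaky integrate-and-fire neuron.

    Charge accumulates from incoming input; crossing the threshold
    emits a spike and resets the neuron, while between inputs the
    stored charge decays ("leaks") toward zero.
    """
    v = 0.0          # accumulated membrane "charge"
    spikes = []
    for i in input_current:
        v = leak * v + i        # leak old charge, add new input
        if v >= threshold:      # threshold crossed: fire
            spikes.append(1)
            v = 0.0             # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Weak inputs leak away without firing; a strong burst produces a spike.
print(lif_neuron([0.3, 0.2, 0.0, 0.0, 0.6, 0.6]))  # -> [0, 0, 0, 0, 0, 1]
</pre>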


Algorithms for neuromorphic computation differ based on their specific applications. These algorithms broadly fall into one of two categories: machine-learning algorithms and manually constructed non-machine-learning algorithms. In the case of machine-learning algorithms, the spiking neural network optimizes the synaptic weights relevant to achieving a specific goal; training of the network can occur, for example, through backpropagation, through mapping a trained traditional artificial neural network onto the spiking neural network, or through reservoir computing. While these three training methods are already used with artificial neural networks, neuromorphic computers would uniquely look to use an evolutionary approach, in which the structure, connections, number of neurons, and weights of the network, like a brain's, change over time without relying on any particular network architecture, as well as an approach known as "spike-timing-dependent plasticity" (STDP): since, unlike artificial neural networks, neuromorphic computers can incorporate delay, synaptic weights can change as a function of the relative timing of spikes from the pre- and post-synaptic neurons. Machine-learning algorithms are the most common method by which spiking neural networks are trained.[5] A toy illustration of an STDP update appears below.

[Image: Various training methods for SNNs, including (a) backpropagation, (b) mapping, (c) reservoir computing, (d) evolutionary machine learning, and (e) spike-timing-dependent plasticity]
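As a rough sketch of the STDP idea, the following Python function implements the classic pair-based rule: a synapse is strengthened when the pre-synaptic neuron fires shortly before the post-synaptic one, and weakened in the opposite case. The constants (a_plus, a_minus, tau) are illustrative placeholders, not values from any of the systems discussed here.

<pre>
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based spike-timing-dependent plasticity.

    Adjusts synaptic weight w based on the timing difference between
    a pre-synaptic spike (t_pre) and a post-synaptic spike (t_post).
    """
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: potentiation (strengthen)
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fired before pre: depression (weaken)
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, w))   # keep the weight in [0, 1]

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # causal pairing: weight rises
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # anti-causal: weight falls
</pre>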

Hardware and Present Applications

While the modern lack of a complete understanding of the brain's intricacies has restricted the development of full-fledged neuromorphic computers, the past few decades have seen great strides in neuromorphic hardware. Most of this hardware, as might be expected, is built from silicon using complementary metal-oxide-semiconductor (CMOS) technology. A remarkable breakthrough has come in the form of the University of Manchester's SpiNNaker neuromorphic supercomputer, whose eventual goal is to simulate up to one billion neurons at once. SpiNNaker's half-million silicon central processing units (CPUs), first developed in 2011, communicate via a packet-switched network that mimics the dense neuronal connectivity of the human brain. The computer additionally contains routers that allow a unit of information to be sent to more than one destination. Since the hardware itself brokers all packet transmission, the computer achieves a bandwidth of up to five billion packets per second; in combination with its fifty-seven thousand nodes arranged in hexagonal arrays and its various methods of communication between cores, SpiNNaker achieves ultra-high levels of parallel processing while consuming about 90 kW of electrical power, roughly four thousand times the ~20 W consumed by the brain.[6] Current SpiNNaker research focuses on finding more efficient hardware for simulating neuronal spiking as it exists in the human brain.[7]

[Images: The SpiNNaker supercomputer; SpiNNaker architecture]
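To give a flavor of the packet-switched, multicast communication described above, here is a hypothetical Python sketch: a spike packet carries only the address of its source neuron, and a router consults a table to forward it to every core subscribed to that source. The neuron and core names, and the table itself, are invented for illustration; this is not SpiNNaker's actual routing implementation, which is done in hardware.

<pre>
# Illustrative multicast routing table: source neuron -> subscribed cores.
# All names here are hypothetical.
ROUTING_TABLE = {
    "neuron_17": ["core_A", "core_B", "core_F"],
    "neuron_42": ["core_C"],
}

def route_spike(source):
    """Deliver a spike packet from `source` to every subscribed core."""
    for core in ROUTING_TABLE.get(source, []):
        print(f"spike from {source} delivered to {core}")

route_spike("neuron_17")  # one packet fans out to three cores
</pre>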


[Image: The wafer of BrainScaleS]
Another tangible example of advancement in neuromorphic computing, BrainScaleS, provides one of the most state-of-the-art, large-scale analog spiking neural networks. Using programmable plasticity units (PPUs), a type of custom CPU, BrainScaleS simulates synaptic plasticity at one thousand times the speed observed in the human brain. The system comprises twenty silicon wafers, each with fifty million plastic synapses and two hundred thousand realistic neurons, and evolves its code based on the physical properties of the hardware. The wafers are constructed of High Input Count Analog Neural Network chips (HICANNs), which simulate actively changing neurons and synapses. The model's high speed relative to the human brain, a consequence of its circuits being physically shorter than the brain's, means that it requires much less power input than other simulated neuronal networks. Synaptic weights are determined by the current received by the associated neuron.[8][9]

Intel's Loihi 2 neuromorphic processor provides a perhaps even more impressive illustration of neuromorphic computing. Released in 2021, the research chip simulates up to one million neurons at once, using a spiking neural network that evolves its synapses through variations of backpropagation. Intel aims to commercialize the chip as neuromorphic computing enters the mainstream.[10]

Limitations and Future Research

References

2. Furber, S. (2016). Large-scale neuromorphic computing systems. Journal of Neural Engineering, 13(5), 051001. https://doi.org/10.1088/1741-2560/13/5/051001
3. SPIE Photonics Focus (Sept/Oct 2024). Inventing the integrated circuit. https://spie.org/news/photonics-focus/septoct-2024/inventing-the-integrated-circuit
4. Mahowald, M., & Mead, C. (1991). The silicon retina. Scientific American, 264(5). https://tilde.ini.uzh.ch/~tobi/wiki/lib/exe/fetch.php?media=mahowaldmeadsiliconretinasciam1991color.pdf