Spiking Neural Networks

From Psyc 40 Wiki
Revision as of 07:27, 22 October 2022 by User (talk | contribs)

Overview

Spiking Neural Networks (SNNs) are a type of artificial neural network inspired by biological neurons. In an SNN, when a neuron's membrane potential exceeds a certain threshold, the neuron generates a discrete signal known as a "spike," and these spikes drive the network's computations. Unlike traditional neural networks, which process continuous signals, SNNs convert input signals into binary spike trains (spiking or non-spiking events) and perform computations based on these discrete events. SNNs offer two significant advantages over conventional artificial neural networks.

The first advantage is that SNNs model biological neural activity in a more detailed and biologically plausible manner. Traditional neural networks typically handle information as real-valued signals, whereas the brain processes information through binary spike trains: electrical pulses transmitted between neurons. By using binary signals, SNNs more closely resemble the activity of neurons in the human brain, making them valuable for simulating brain function and conducting neuroscience experiments.

The second advantage of SNNs is their energy efficiency. Because SNNs compute with binary signals, they can describe input-output relationships using only addition operations. Compared with the multiplication-based operations used in conventional networks, addition operations generally require fewer computational resources and less power, making SNNs more power-efficient.
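The addition-only point can be illustrated with a small sketch (the weights, activations, and spike values below are made-up examples): when inputs are binary spikes, a neuron's weighted input reduces to summing the weights of the inputs that spiked, with no multiplications.

```python
# Illustrative example (values are arbitrary): with binary spike inputs,
# a weighted sum needs only additions, while real-valued activations
# require one multiplication per input.

weights = [0.5, -0.2, 0.8, 0.1]

# Conventional ANN: real-valued activations, multiplication per input.
activations = [0.9, 0.3, 0.7, 0.2]
ann_input = sum(w * a for w, a in zip(weights, activations))

# SNN: binary spike train at one time step; just add the weights
# of the inputs that spiked (here inputs 0, 2, and 3).
spikes = [1, 0, 1, 1]
snn_input = sum(w for w, s in zip(weights, spikes) if s)  # 0.5 + 0.8 + 0.1
```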


History and background

[Figure: Example of a Spiking Neural Network]

The Leaky Integrate-and-Fire (LIF) Model

The Leaky Integrate-and-Fire (LIF) model is a commonly used spiking neuron model, which describes the dynamics of the membrane potential V(t) as follows:

<math>\tau_m \frac{dV(t)}{dt} = (-V(t) + E_{\text{rest}}) + I(t)</math> (1)

where \(V(t)\) is the membrane potential, \(I(t)\) is the input current, \(E_{\text{rest}}\) is the resting membrane potential, and \(\tau_m\) is the membrane time constant.

The membrane potential \(V(t)\) changes in response to the input current \(I(t)\), as described by Equation (1). When \(V(t)\) exceeds the threshold \(V_\theta\), depolarization occurs and the membrane potential rises to the peak potential \(V_{\text{peak}}\), triggering an action potential (spike). After the spike, the neuron enters a refractory period during which the membrane potential remains unchanged for a fixed duration, denoted \(\tau_{\text{ref}}\). The times and frequencies of the spikes generated by this LIF neuron are then used as outputs to interact with other neurons. The derivation of Equation (1) can be understood by modeling the neuron as an equivalent electrical circuit.
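The dynamics above can be sketched with a forward-Euler integration of Equation (1). The parameter values below (time constant, threshold, reset and resting potentials, refractory duration) are illustrative choices, not values prescribed by the article.

```python
# Minimal forward-Euler sketch of a leaky integrate-and-fire neuron.
# All parameter values are illustrative (units: ms for time, mV for voltage).

def simulate_lif(current, dt=0.1, tau_m=10.0, e_rest=-65.0,
                 v_theta=-50.0, v_reset=-65.0, tau_ref=2.0):
    """Return spike times (ms) for an input current trace, one value per step."""
    v = e_rest
    refractory = 0.0              # time remaining in the refractory period
    spike_times = []
    for step, i_t in enumerate(current):
        t = step * dt
        if refractory > 0.0:
            refractory -= dt      # membrane potential held fixed after a spike
            continue
        # Equation (1): tau_m * dV/dt = (-V + E_rest) + I(t)
        v += (dt / tau_m) * ((-v + e_rest) + i_t)
        if v >= v_theta:          # threshold crossing -> emit a spike
            spike_times.append(t)
            v = v_reset
            refractory = tau_ref
    return spike_times

# Constant input current of 20 (arbitrary units) for 100 ms drives
# the neuron above threshold repeatedly, producing a regular spike train.
spike_times = simulate_lif([20.0] * 1000)
```

With a constant suprathreshold input, the neuron settles into regular firing whose rate depends on the input amplitude, the membrane time constant, and the refractory period.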


Challenges of Spiking Neural Networks (SNN)

SNNs are known to pose significant challenges in network training. The backpropagation algorithm commonly used in traditional neural networks cannot be applied directly to SNNs, because spikes are discrete and therefore non-differentiable; specialized training algorithms tailored to SNNs are needed instead. Additionally, because the timing of spikes and the presence or absence of firing events play a critical role in SNNs, the network is more susceptible to noise and to variations in the input data. Finally, since the timing and sequence of spikes are crucial, computations must account for temporal dynamics, so hardware capable of efficiently processing these temporal aspects is required.
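One common family of specialized training algorithms addresses the non-differentiable spike with a surrogate gradient: the true step-like spike function is kept in the forward pass, while its derivative is replaced by a smooth approximation during backpropagation. The sketch below uses a sigmoid-derivative surrogate; the surrogate shape, slope, and threshold value are illustrative choices, not specifics from the article.

```python
import math

# Sketch of the surrogate-gradient idea for SNN training.
# The sigmoid surrogate and its slope are illustrative assumptions.

def spike(v, v_theta=1.0):
    """Forward pass: the true, non-differentiable spike function."""
    return 1.0 if v >= v_theta else 0.0

def surrogate_grad(v, v_theta=1.0, slope=5.0):
    """Backward pass: derivative of a sigmoid centered at the threshold,
    used in place of the spike function's ill-defined derivative."""
    s = 1.0 / (1.0 + math.exp(-slope * (v - v_theta)))
    return slope * s * (1.0 - s)

# Near threshold the surrogate gradient is large, so error signals can
# flow through the spiking nonlinearity; far from threshold it vanishes.
g_near = surrogate_grad(1.0)    # at threshold: slope * 0.25
g_far = surrogate_grad(-2.0)    # far below threshold: near zero
```

The design choice here is that gradient descent only "sees" the smooth surrogate, so membrane potentials close to threshold receive the strongest learning signal, which matches the intuition that those neurons are the ones whose firing decisions the update can actually change.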