International Journal of Fuzzy Logic and Intelligent Systems 2021; 21(4): 317-337
Published online December 25, 2021
https://doi.org/10.5391/IJFIS.2021.21.4.317
© The Korean Institute of Intelligent Systems
Chan Sik Han and Keon Myung Lee
Department of Computer Science, Chungbuk National University, Cheongju, Korea
Correspondence to: Keon Myung Lee (kmlee@cbnu.ac.kr)
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Spiking neural networks (SNNs) have attracted attention as the third generation of neural networks for their promising characteristics of energy efficiency and biological plausibility. The diversity of spiking neuron models and architectures has led to the development of a variety of learning algorithms. This paper provides a gentle survey of SNNs, giving an overview of what they are and how they are trained. It first presents how biological neurons work and how they are mathematically modeled, especially with differential equations. Next, it categorizes the learning algorithms of SNNs into groups and presents how their representative algorithms work. Finally, it briefly describes the neuromorphic hardware on which SNNs run.
Keywords: Spiking neural network, Deep learning, Neural network, Machine learning, Learning algorithms
No potential conflict of interest relevant to this article was reported.
E-mail: chatterboy@cbnu.ac.kr
E-mail: kmlee@cbnu.ac.kr
List of figures:
Phase shifts of the membrane potential
Hodgkin-Huxley model
Leaky integrate-and-fire (LIF) model
Izhikevich model
A spike-timing-dependent plasticity (STDP) function
A synfire chain architecture
A liquid state machine architecture
Population coding
An SNN with multiple synaptic connections
An SNN for ReSuMe training
An SNN for BP-STDP training
Surrogate gradient functions
Firing rate functions for soft-LIF (dotted curve) and LIF (solid curve)
Sharpening of the bReLU function in the Whetstone method
Activation functions of ReLU, threshold ReLU, and SNN
A modified ReLU function and its derivative
Knowledge distillation-based SNN training
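Several of the figures listed above concern the leaky integrate-and-fire (LIF) model. For orientation, its standard subthreshold dynamics and reset rule, written in common textbook notation rather than the notation of any particular figure, are

\[
\tau_m \frac{dV(t)}{dt} = -\bigl(V(t) - V_{\mathrm{rest}}\bigr) + R\, I(t),
\]

and whenever \(V(t)\) reaches the threshold \(V_{\mathrm{th}}\), the neuron emits a spike and \(V(t)\) is reset to \(V_{\mathrm{reset}}\).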
Table 1. Direct training algorithms.

| Algorithm | Neuron model | Architecture | Input encoding | Output decoding | Features |
|---|---|---|---|---|---|
| SpikeProp (2000, [37]) | SRM | Shallow network | Population code | Time-to-first-spike code | Surrogate gradient; multiple delayed synaptic terminals |
| ReSuMe (2005, [38]) | Don't care | (FF, RNN, LSM) + trainable single layer | Spike train | Spike train | Trains only the weights of the last layer; STDP and anti-STDP |
| PES (2011, [40]) | IF/LIF | Two-layered network | Spike train (firing rate) | Spike train (firing rate) | MSE loss on the decoded value |
| STBP (2018, [42]) | LIF | Shallow network | Spike train (rate code) | Spike train (firing rate) | BPTT-like training over the spatial and temporal domains |
| BP-STDP (2019, [32]) | LIF | Deep network | Spike train (spike count) | Direct output (spike count) | Backpropagation + STDP |
| SBBP (2019, [43]) | IF/LIF | Deep network | Spike train (rate code) | Direct output (membrane potential) | Surrogate gradient |
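To make the "surrogate gradient" entries in Table 1 concrete, the sketch below shows the core trick in PyTorch: the forward pass keeps the non-differentiable threshold function of the LIF neuron, while the backward pass substitutes a smooth approximation of its derivative. This is an illustrative sketch under common conventions (a rectangular surrogate, soft reset), not the implementation of any specific algorithm in the table.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()  # fire when the membrane potential exceeds the threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        surrogate = (v_minus_thresh.abs() < 0.5).float()  # box-shaped stand-in for the step's derivative
        return grad_output * surrogate


def lif_step(v, x, beta=0.9, v_th=1.0):
    """One discrete-time LIF update: leak, integrate the input current, fire, soft reset."""
    v = beta * v + x              # leaky integration
    s = SpikeFn.apply(v - v_th)   # non-differentiable firing; surrogate gradient in the backward pass
    v = v - s * v_th              # soft reset by subtracting the threshold
    return v, s
```

Unrolling `lif_step` over the simulation time steps and backpropagating a loss through the recorded spikes is, in rough outline, what BPTT-style methods such as STBP do over both the spatial and temporal domains.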
Table 2. ANN-SNN conversion algorithms.

| Algorithm | Neuron model | Architecture | Input encoding | Output decoding | Features |
|---|---|---|---|---|---|
| soft-LIF (2015, [44]) | Soft-LIF (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Uses soft-LIF in the ANN in place of LIF |
| Cao et al. (2015, [45]) | ReLU (ANN) | Shallow network | Spike train (rate code) | Spike train (firing rate) | Constrained architecture; average pooling, no bias |
| Diehl et al. (2015, [46]) | ReLU (ANN) | Shallow network | Spike train (rate code) | Spike train (firing rate) | Constrained architecture; weight normalization |
| Rueckauer et al. (2017, [30]) | ReLU (ANN) | Deep network | Direct input | Spike train (firing rate) | Constrained architecture; batch normalization; softmax |
| Whetstone (2018, [47]) | bReLU (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Adaptive sharpening of the activation function |
| Sengupta et al. (2019, [48]) | ReLU (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Normalization in the SNN; Spike-Norm |
| RMP-SNN (2020, [49]) | ReLU (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | IF with soft reset; controlled threshold range; threshold balancing |
| Deng et al. (2021, [50]) | Threshold ReLU (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Conversion-loss-aware bias adaptation; threshold ReLU; shifted bias |
| Ding et al. (2021, [51]) | RNL (ANN) | Deep network | Spike train (rate code) | Spike train (rate code) | Optimal scaling factors for threshold balancing |
| Patel et al. (2021, [52]) | Modified ReLU (ANN) | Scaled-down network | Spike train (rate code) | Spike train (rate code) | Image segmentation; Loihi deployment |
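The "weight normalization" and "threshold balancing" entries in Table 2 share one idea: after training an ANN with ReLU activations, each layer is rescaled by the maximum activation observed on a calibration set so that the rate-coded IF neurons of the converted SNN neither saturate nor stay silent. The sketch below illustrates the layer-wise, data-based scaling in the style of Diehl et al. [46]; the function name and the omission of biases are simplifications for illustration, not the authors' code.

```python
import numpy as np

def normalize_weights(weights, max_activations):
    """Data-based weight normalization for ANN-to-SNN conversion (illustrative sketch).

    weights         -- list of per-layer weight matrices of the trained ReLU network
    max_activations -- list of the maximum ReLU activation recorded per layer
                       on a calibration set
    """
    normalized = []
    prev_max = 1.0  # the network input is assumed to be scaled to [0, 1]
    for W, cur_max in zip(weights, max_activations):
        normalized.append(W * prev_max / cur_max)  # scale by lambda_{l-1} / lambda_l
        prev_max = cur_max
    return normalized
```

With the weights rescaled this way, the firing rate of each IF neuron approximates the normalized ReLU activation of the corresponding ANN unit; per-layer biases, when present, are typically divided by the same layer maximum.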
Table 3. Hybrid training algorithms.

| Algorithm | Neuron model | Architecture | Input encoding | Output decoding | Features |
|---|---|---|---|---|---|
| Rathi et al. (2020, [54]) | ReLU (ANN) | Deep network | Spike train (rate code) | Direct output (membrane potential) | ANN-SNN conversion + STDB; spike-timing-based surrogate gradient |
| DIET-SNN (2020, [55]) | ReLU (ANN) | Deep network | Direct input | Direct output | Trainable leak and threshold in the LIF model |
| Takuya et al. (2021, [58]) | ReLU (ANN) | Deep network | Direct input | Direct output (membrane potential) | Knowledge distillation for conversion; fine-tuning |
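The DIET-SNN entry in Table 3, "trainable leak and threshold in the LIF model", amounts to registering those two quantities as learnable parameters and letting gradients reach them through a surrogate spike function. The following PyTorch sketch is a minimal illustration under that assumption (straight-through sigmoid surrogate, per-neuron parameters); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

def surrogate_spike(v_minus_thresh):
    """Hard threshold in the forward pass, sigmoid-shaped gradient in the backward pass."""
    soft = torch.sigmoid(5.0 * v_minus_thresh)
    hard = (v_minus_thresh > 0).float()
    return hard.detach() + soft - soft.detach()  # straight-through estimator

class TrainableLIF(nn.Module):
    """LIF layer whose leak factor and firing threshold are optimized together with the synaptic weights."""

    def __init__(self, size, init_leak=0.9, init_threshold=1.0):
        super().__init__()
        self.leak = nn.Parameter(torch.full((size,), init_leak))
        self.threshold = nn.Parameter(torch.full((size,), init_threshold))

    def forward(self, v, x):
        v = self.leak * v + x                     # leaky integration with a learned leak
        s = surrogate_spike(v - self.threshold)   # firing against a learned threshold
        v = v - s * self.threshold                # soft reset
        return v, s
```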