Original Article

International Journal of Fuzzy Logic and Intelligent Systems 2021; 21(4): 317-337

Published online December 25, 2021

https://doi.org/10.5391/IJFIS.2021.21.4.317

© The Korean Institute of Intelligent Systems

A Survey on Spiking Neural Networks

Chan Sik Han and Keon Myung Lee

Department of Computer Science, Chungbuk National University, Cheongju, Korea

Correspondence to: Keon Myung Lee (kmlee@cbnu.ac.kr)

Received: November 14, 2021; Revised: November 14, 2021; Accepted: December 7, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Spiking neural networks (SNNs) have attracted attention as the third generation of neural networks for their promising characteristics of energy efficiency and biological plausibility. The diversity of spiking neuron models and architectures has led to the development of a variety of learning algorithms. This paper provides a gentle survey of SNNs, giving an overview of what they are and how they are trained. It first presents how biological neurons work and how they are modelled mathematically, especially with differential equations. Next, it categorizes the learning algorithms of SNNs into groups and presents how representative algorithms in each group work. Finally, it briefly describes the neuromorphic hardware on which SNNs run.

Keywords: Spiking neural network, Deep learning, Neural network, Machine learning, Learning algorithms

This research was partly supported by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center support program (IITP-2021-0-01462) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation), and partly by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00708, IDE for Autonomic IoT Applications based on Neuromorphic Architecture) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (No. 2019-0-00708, IDE for Autonomic IoT Applications based on Neuromorphic Architecture).

No potential conflict of interest relevant to this article was reported.

Chan Sik Han is a Ph.D. candidate in the Department of Computer Science, Chungbuk National University, Korea. He received his bachelor’s and master’s degrees from the same department. He has been working on research related to machine learning, deep learning, and spiking neural networks.

E-mail: chatterboy@cbnu.ac.kr

Keon Myung Lee is a professor in the Department of Computer Science, Chungbuk National University, Korea. He received his B.S., M.S., and Ph.D. degrees in computer science from KAIST, Korea, and was a postdoctoral fellow at INSA de Lyon, France. He was a visiting professor at the University of Colorado at Denver and a visiting scholar at Indiana University, USA. His principal research interests are machine learning, deep learning, soft computing, data science, and intelligent service systems.

E-mail: kmlee@cbnu.ac.kr

Figure 1. Phase shifts of membrane potential [13].

Figure 2. Hodgkin-Huxley model.
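
As a concrete illustration of how a conductance-based model like this is simulated, the sketch below integrates the Hodgkin-Huxley equations with forward Euler using the textbook squid-axon constants; the step size and input current are illustrative and are not taken from the survey.

```python
import numpy as np

def simulate_hh(I, dt=0.01):
    """Forward-Euler integration of the Hodgkin-Huxley equations
    (membrane potential in mV, time in ms, currents in uA/cm^2)."""
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387

    def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
    def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
    def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

    v, m, h, n = -65.0, 0.05, 0.6, 0.32          # approximate resting state
    trace = []
    for i_ext in I:
        i_na = g_na * m ** 3 * h * (v - e_na)    # sodium current
        i_k = g_k * n ** 4 * (v - e_k)           # potassium current
        i_l = g_l * (v - e_l)                    # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return np.array(trace)

# 50 ms of a 10 uA/cm^2 step current
v_trace = simulate_hh(np.full(5000, 10.0), dt=0.01)
```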

Figure 3. Leaky integrate-and-fire (LIF) model.
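
A minimal forward-Euler sketch of the LIF dynamics: the membrane potential leaks toward the resting value, integrates the injected current, and is reset once it crosses the firing threshold. All constants below are illustrative.

```python
import numpy as np

def simulate_lif(I, dt=1e-3, tau_m=20e-3, v_rest=-65e-3, v_reset=-65e-3,
                 v_th=-50e-3, R=1e7):
    """Single LIF neuron driven by an input current sequence I (in amperes)."""
    v = v_rest
    vs, spike_times = [], []
    for t, i_t in enumerate(I):
        v += (-(v - v_rest) + R * i_t) * (dt / tau_m)   # leak + integration
        if v >= v_th:                                   # threshold crossing
            spike_times.append(t * dt)
            v = v_reset                                 # reset after the spike
        vs.append(v)
    return np.array(vs), spike_times

# 200 ms of a constant 2 nA input
trace, spikes = simulate_lif(np.full(200, 2e-9))
```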

Figure 4. Izhikevich model [18].
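
The Izhikevich model reproduces many firing patterns with only two coupled equations and four parameters. Below is a forward-Euler sketch with the regular-spiking parameter set; the step size, the 30 mV cutoff, and the input level follow common practice and are only illustrative.

```python
def izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron (regular-spiking parameters); v in mV, dt in ms."""
    v, u = c, b * c                     # membrane potential and recovery variable
    vs, spike_times = [], []
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:                   # spike cutoff
            spike_times.append(t * dt)
            v, u = c, u + d             # reset v, bump the recovery variable
        vs.append(v)
    return vs, spike_times

vs, spikes = izhikevich([10.0] * 2000)  # 1 second of constant input
```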

Figure 5. A spike-timing-dependent plasticity (STDP) function.
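
A minimal sketch of the pair-based STDP window such a figure plots: a pre-before-post spike pair (positive time difference) strengthens the synapse, the reverse order weakens it, and the magnitude decays exponentially with the time difference. Amplitudes and time constants below are illustrative.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike-time difference delta_t = t_post - t_pre (ms)."""
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau_plus),    # potentiation
                    -a_minus * np.exp(delta_t / tau_minus))  # depression

print(stdp_dw(np.array([-20.0, -5.0, 5.0, 20.0])))
```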

Figure 6. A synfire chain architecture [24].

Figure 7. A liquid state machine architecture [25].

Figure 8. Population coding.
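
A common form of population coding represents a scalar with the first-spike times of several neurons that have overlapping Gaussian receptive fields: neurons whose centers lie near the value respond strongly and fire early, while distant ones fire late. The sketch below illustrates this idea; the number of neurons, the field width, and the time window are assumptions, not values from the survey.

```python
import numpy as np

def population_encode(x, n_neurons=8, x_min=0.0, x_max=1.0, t_max=10.0):
    """Encode scalar x into first-spike times (ms) with Gaussian receptive fields."""
    centers = np.linspace(x_min, x_max, n_neurons)
    sigma = (x_max - x_min) / (n_neurons - 1)                # field width (heuristic)
    response = np.exp(-0.5 * ((x - centers) / sigma) ** 2)   # tuning in (0, 1]
    return t_max * (1.0 - response)                          # strong response -> early spike

print(population_encode(0.3))
```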

Figure 9. An SNN with multiple synaptic connections [37].

Figure 10. An SNN for ReSuMe training [38].
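
ReSuMe adjusts the trainable weights so that the output spike train moves toward a teacher spike train, combining an STDP-like term driven by the desired spikes with an anti-STDP term driven by the actually generated spikes. The discrete-time sketch below conveys only this idea; the kernel, constants, and update schedule differ from the exact rule in [38].

```python
import numpy as np

def resume_update(w, s_in, s_out, s_target, lr=0.01, a=0.05, tau=5.0, dt=1.0):
    """One pass of a ReSuMe-style update for a single synapse.

    s_in, s_out, s_target are equal-length binary spike trains (bins of dt ms).
    Target spikes strengthen the weight in proportion to recent presynaptic
    activity; actual output spikes weaken it in the same way."""
    trace = 0.0                               # low-pass filtered presynaptic activity
    for t in range(len(s_in)):
        trace = trace * np.exp(-dt / tau) + s_in[t]
        err = s_target[t] - s_out[t]          # +1: missing spike, -1: extra spike
        w += lr * err * (a + trace)
    return w
```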

Figure 11. An SNN for BP-STDP training [32]. L_i is the target spike train, G_i is the generated spike train at node i of the output layer, G_h is the generated spike train at node h of a hidden layer, and G_j is the spike train of node j at the input layer, which encodes a scalar input value.

Figure 12. Surrogate gradient functions h_i(u) satisfying lim_{a_i→0+} h_i(u) = dg(u)/du, where g(u) is an activation function [42].
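
The surrogate-gradient idea is to keep the hard threshold in the forward pass but substitute a smooth function for its zero-almost-everywhere derivative in the backward pass. The PyTorch-style sketch below uses a fast-sigmoid-shaped surrogate; it is a generic illustration, not the specific functions compared in [42].

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, smooth surrogate derivative backward."""

    @staticmethod
    def forward(ctx, u):
        # u is the membrane potential minus the threshold
        ctx.save_for_backward(u)
        return (u > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        slope = 10.0                                   # surrogate sharpness (illustrative)
        surrogate = 1.0 / (1.0 + slope * u.abs()) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply
```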

Figure 13. Firing rate functions for soft-LIF (dotted curve) and LIF (solid curve) [44].
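
The qualitative difference between the two curves can be reproduced with the sketch below: the steady-state LIF rate is zero below threshold and has a kink at threshold, while soft-LIF replaces the hard cutoff with a softplus so the rate is smooth and differentiable everywhere. The constants are illustrative and not those used in [44].

```python
import numpy as np

def lif_rate(j, tau_ref=0.002, tau_rc=0.02, v_th=1.0):
    """Steady-state LIF firing rate for constant input j (zero below threshold)."""
    j = np.asarray(j, dtype=float)
    r = np.zeros_like(j)
    above = j > v_th
    r[above] = 1.0 / (tau_ref + tau_rc * np.log1p(v_th / (j[above] - v_th)))
    return r

def soft_lif_rate(j, gamma=0.02, tau_ref=0.002, tau_rc=0.02, v_th=1.0):
    """Soft-LIF rate: the hard max(j - v_th, 0) is smoothed by a softplus."""
    j = np.asarray(j, dtype=float)
    rho = gamma * np.log1p(np.exp((j - v_th) / gamma))   # softplus with scale gamma
    return 1.0 / (tau_ref + tau_rc * np.log1p(v_th / rho))

j = np.linspace(0.5, 3.0, 6)
print(lif_rate(j), soft_lif_rate(j))
```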

Figure 14. Sharpening of the bReLU function in the Whetstone method.
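
One plausible way to parameterize such sharpening is to shrink the width of the bounded ReLU's linear ramp until it becomes a step, which is what a spiking (IF) neuron implements at inference. The sketch below is an illustrative parameterization, not the exact schedule used by Whetstone [47].

```python
import numpy as np

def sharpened_brelu(x, s):
    """Bounded ReLU whose ramp narrows as sharpness s goes from 0 to 1.

    s = 0 gives an ordinary bReLU (ramp of width 1 centered at 0.5);
    s -> 1 approaches a hard step at 0.5."""
    x = np.asarray(x, dtype=float)
    width = 1.0 - s
    if width <= 0.0:
        return (x >= 0.5).astype(float)
    return np.clip((x - 0.5) / width + 0.5, 0.0, 1.0)

print(sharpened_brelu([0.2, 0.5, 0.8], s=0.9))
```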

Figure 15. Activation functions of ReLU, threshold ReLU, and SNN.

Figure 16. A modified ReLU function and its derivative.

Figure 17. Knowledge distillation-based SNN training [58].

Table 1. Direct training algorithms.

Algorithm | Neuron model | Architecture | Input encoding | Output decoding | Features
SpikeProp (2000, [37]) | SRM | Shallow network | Population code | Time-to-first-spike code | Surrogate gradient; multiple delayed synaptic terminals
ReSuMe (2005, [38]) | Don't care | (FF, RNN, LSM) + trainable single layer | Spike train | Spike train | Trains the weights of the last layer; STDP & anti-STDP
PES (2011, [40]) | IF/LIF | Two-layered network | Spike train (firing rate) | Spike train (firing rate) | MSE loss on decoded value
STBP (2018, [42]) | LIF | Shallow network | Spike train (rate code) | Spike train (firing rate) | BPTT-like backpropagation over spatial & temporal domains
BP-STDP (2019, [32]) | LIF | Deep network | Spike train (spike count) | Direct output (spike count) | Backpropagation + STDP
SBBP (2019, [43]) | IF/LIF | Deep network | Spike train (rate code) | Direct output (membrane potential) | Surrogate gradient

Table 2. ANN-SNN conversion algorithms.

Algorithm | Neuron model | Architecture | Input encoding | Output decoding | Features
soft-LIF (2015, [44]) | soft-LIF (ANN) → LIF (SNN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Uses soft-LIF in the ANN as a stand-in for LIF
Cao et al. (2015, [45]) | ReLU (ANN) → IF (SNN) | Shallow network | Spike train (rate code) | Spike train (firing rate) | Constrained architecture; average pooling; no bias
Diehl et al. (2015, [46]) | ReLU (ANN) → IF (SNN) | Shallow network | Spike train (rate code) | Spike train (firing rate) | Constrained architecture; weight normalization
Rueckauer et al. (2017, [30]) | ReLU (ANN) → IF (SNN) | Deep network | Direct input | Spike train (firing rate) | Constrained architecture; batch normalization; softmax
Whetstone (2018, [47]) | bReLU (ANN) → IF (SNN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Adaptive sharpening of the activation function
Sengupta et al. (2019, [48]) | ReLU (ANN) → IF (SNN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Normalization in the SNN; Spike-Norm
RMP-SNN (2020, [49]) | ReLU (ANN) → IF (SNN) | Deep network | Spike train (rate code) | Spike train (firing rate) | IF with soft reset; controlled threshold range; threshold balancing
Deng et al. (2021, [50]) | threshold ReLU (ANN) → IF (SNN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Conversion-loss-aware bias adaptation; threshold ReLU; shifted bias
Ding et al. (2021, [51]) | RNL (ANN) → IF (SNN) | Deep network | Spike train (rate code) | Spike train (rate code) | Optimal scaling factors for threshold balancing
Patel et al. (2021, [52]) | modified ReLU (ANN) → IF (SNN) | Scaled-down U-Net | Spike train (rate code) | Spike train (rate code) | Image segmentation; Loihi deployment

Table 3. Hybrid training algorithms.

Algorithm | Neuron model | Architecture | Input encoding | Output decoding | Features
Rathi et al. (2020, [54]) | ReLU (ANN) → LIF (SNN) | Deep network | Spike train (rate code) | Direct output (membrane potential) | ANN-SNN conversion + STDB; spike-time-based surrogate gradient
DIET-SNN (2020, [55]) | ReLU (ANN) → IF/LIF (SNN) | Deep network | Direct input | Direct output | Trainable leakage and threshold in LIF
Takuya et al. (2021, [58]) | ReLU (ANN) → LIF (SNN) | Deep network | Direct input | Direct output (membrane potential) | Knowledge distillation for conversion; fine-tuning
