ANN-SNN conversion algorithms
Algorithm | Neuron model | Architecture | Input encoding | Output decoding | Features |
---|---|---|---|---|---|
soft-LIF (2015, [44]) | soft-LIF (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Train ANN with soft-LIF activation, then substitute LIF neurons |
Cao et al. (2015, [45]) | ReLU (ANN) | Shallow network | Spike train (rate code) | Spike train (firing rate) | Constrained arch.; avg. pooling; no bias |
Diehl et al. (2015, [46]) | ReLU (ANN) | Shallow network | Spike train (rate code) | Spike train (firing rate) | Constrained arch.; weight normalization |
Rueckauer et al. (2017, [30]) | ReLU (ANN) | Deep network | Direct input | Spike train (firing rate) | Constrained arch.; batch norm.; softmax |
Whetstone (2018, [47]) | bReLU (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Adaptive sharpening of activation function |
Sengupta et al. (2019, [48]) | ReLU (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Normalization in SNN; Spike-Norm |
RMP-SNN (2020, [49]) | ReLU (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | IF with soft-reset; control threshold range; threshold balancing |
Deng et al. (2021, [50]) | thr. ReLU (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Conversion loss-aware bias adaptation; threshold ReLU; shifted bias |
Ding et al. (2021, [51]) | RNL (ANN) | Deep network | Spike train (rate code) | Spike train (firing rate) | Optimal scaling factors for threshold balancing |
Patel et al. (2021, [52]) | mod. ReLU (ANN) | Scaled-down network | Spike train (rate code) | Spike train (firing rate) | Image segmentation; Loihi deployment |
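The rate-coding principle shared by most methods in the table can be illustrated with a minimal sketch: an integrate-and-fire (IF) neuron with soft reset (subtract-threshold reset, as used in RMP-SNN [49]) fires, over a long window, at a rate that approximates the ReLU of its constant input. The function names and parameter values below are illustrative, not taken from any of the cited implementations.

```python
def relu(x: float) -> float:
    """Reference ANN activation the SNN neuron should approximate."""
    return max(x, 0.0)

def if_neuron_rate(input_current: float, threshold: float = 1.0,
                   timesteps: int = 1000) -> float:
    """Simulate an IF neuron driven by a constant input current.

    Uses soft reset: on each spike the threshold is subtracted from
    the membrane potential, so residual charge is preserved and the
    conversion error stays bounded. Returns the empirical firing rate
    (spikes per timestep) over the simulation window.
    """
    v = 0.0
    spikes = 0
    for _ in range(timesteps):
        v += input_current        # integrate input
        if v >= threshold:
            v -= threshold        # soft reset (RMP-style)
            spikes += 1
    return spikes / timesteps

# Firing rate tracks ReLU(input) for inputs in [0, threshold]:
for x in (-0.2, 0.0, 0.3, 0.7):
    print(f"input={x:+.1f}  relu={relu(x):.1f}  rate={if_neuron_rate(x):.3f}")
```

Note that the rate saturates at one spike per timestep, which is why several of the listed methods (weight normalization [46], Spike-Norm [48], threshold balancing [49, 51]) rescale weights or thresholds so ANN activations fall inside the representable range.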