Spiking neural networks (SNNs) have been proposed as an (energy-)efficient alternative to conventional artificial neural networks. However, the envisioned benefits have not yet been realized in practice. To better understand why this gap persists, we theoretically study both discrete-time and continuous-time models of leaky integrate-and-fire (LIF) neurons. In the discrete-time model, a widely used framework owing to its compatibility with conventional deep learning software and hardware, we analyze the impact of explicit recurrent connections on the network size required to approximate continuously differentiable functions. We contrast this perspective by investigating the computational efficiency of digital systems that simulate spike-based computations in the continuous-time model. It turns out that, even in well-behaved settings, the computational complexity of this task may grow super-polynomially in the prescribed accuracy. In this way, we highlight by example the intricacies of realizing two potential strengths of spike-based computation in the biological setting, namely recurrent connections and computational efficiency, on digital systems.
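For readers unfamiliar with the discrete-time setting the abstract refers to, the following is a minimal sketch of a common discrete-time LIF layer with explicit recurrent connections. It is an illustration of the standard model class, not code or notation from the paper; the names (`beta` for the leak factor, `theta` for the threshold, `w_in`, `w_rec` for the feedforward and recurrent weights) are generic placeholders.

```python
import numpy as np

def lif_step(u, x, s_prev, w_in, w_rec, beta=0.9, theta=1.0):
    """One step of a discrete-time leaky integrate-and-fire layer.

    u      : membrane potentials, shape (n,)
    x      : input at this time step, shape (m,)
    s_prev : spikes emitted at the previous step, shape (n,)
    w_in   : feedforward weights, shape (n, m)
    w_rec  : explicit recurrent weights, shape (n, n)
    """
    # Leaky integration of feedforward input plus recurrent feedback
    # from the layer's own spikes at the previous time step.
    u = beta * u + w_in @ x + w_rec @ s_prev
    # A neuron spikes when its potential reaches the threshold.
    s = (u >= theta).astype(u.dtype)
    # Soft reset: subtract the threshold from neurons that spiked.
    u = u - theta * s
    return u, s

# Tiny usage example: drive one layer with a random input sequence.
rng = np.random.default_rng(0)
n, m, T = 4, 3, 10
u, s = np.zeros(n), np.zeros(n)
w_in = rng.normal(size=(n, m))
w_rec = 0.1 * rng.normal(size=(n, n))
for t in range(T):
    u, s = lif_step(u, rng.normal(size=m), s, w_in, w_rec)
    print(t, s)
```

Setting `w_rec` to zero recovers a purely feedforward spiking layer, which makes this formulation a convenient frame for the question the paper studies: what the recurrent term buys in terms of the network size needed to approximate a target function.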
Type: inproceedings. BibTeX key: FKB26.