Note: see also my recent Habilitation à Diriger des Recherches (HDR) manuscript here.

Computing and learning with spikes

Spike pattern learning through STDP

"Spike-timing-dependent plasticity" (STDP) is a now well-established physiological mechanism of activity-driven synaptic plasticity. According to STDP, synapses through which a presynaptic spike arrived before (respectively after) a postsynaptic one are reinforced (respectively depressed)[1]. One of my major contributions so far has been to show that neurons equipped with STDP can detect and learn repeating spike patterns in an unsupervised manner, even when those patterns are embedded in noise[2,3], and that detection can be close to optimal[4]. Importantly, the spike patterns do not need to repeat exactly: the mechanism also works when only a firing probability pattern repeats, provided this profile has narrow (10-20 ms) temporal peaks[5]. Altogether, these studies show that some of the problems anticipated for spike timing codes, in particular noise resistance, the need for a reference time, and the decoding issue, might not be as severe as once thought.
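To make the timing dependence concrete, here is a minimal sketch of a pair-based STDP rule in Python. The exponential form and the parameter values below are illustrative textbook choices, not the exact parameters used in the cited studies:

```python
import math

# Illustrative parameters (not taken from the cited papers)
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        # presynaptic spike arrived before the postsynaptic one: reinforce
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    # presynaptic spike arrived after the postsynaptic one: depress
    return -A_MINUS * math.exp(dt / TAU_MINUS)

dw_pot = stdp_dw(10.0, 15.0)  # pre before post -> positive weight change
dw_dep = stdp_dw(15.0, 10.0)  # post before pre -> negative weight change
```

Applied over many repetitions, such a rule selectively reinforces the synapses that consistently fire just before the postsynaptic neuron, which is what allows repeating patterns to be picked out of noise.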

Oscillations and phase-of-firing coding

Oscillatory brain activity has been widely reported experimentally, yet its functional roles, if any, are still under debate. Our theoretical work[6,7] suggests two things. Firstly, thanks to oscillations, even slowly changing stimuli can be encoded in precise relative spike times, decodable by downstream "coincidence detector" neurons in a feedforward manner. Secondly, the connectivity required to do so can spontaneously emerge with STDP, in an unsupervised manner. The key is that a common oscillatory drive keeps the neurons in a fluctuation-driven regime. In this regime spike time jitter does not accumulate, and can thus remain smaller than the intrinsic timescales of the stimulus fluctuations. Furthermore, the oscillatory drive packages the spikes into discrete oversampling volleys, and the relative spike times between neurons reflect the differences in their activation levels. This oversampling accelerates STDP-based learning in downstream neurons. After learning, readout takes only one oscillatory cycle.
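The phase-of-firing principle can be illustrated with a toy simulation: a common sinusoidal drive is added to each neuron's (slowly changing, here static) activation level, and the time at which the sum first crosses threshold within a cycle encodes that level, with stronger inputs firing earlier. All parameters below are hypothetical and chosen only for illustration:

```python
import math

PERIOD = 50.0     # oscillation period (ms), e.g. a gamma-range cycle
THRESHOLD = 1.0   # firing threshold (arbitrary units)
AMPLITUDE = 0.6   # amplitude of the common oscillatory drive

def first_spike_time(activation, dt=0.01):
    """Time (ms) within one cycle at which activation + oscillation
    first crosses threshold; None if the neuron stays subthreshold."""
    t = 0.0
    while t < PERIOD:
        drive = activation + AMPLITUDE * math.sin(2 * math.pi * t / PERIOD)
        if drive >= THRESHOLD:
            return t
        t += dt
    return None

# Higher activation -> earlier spike in the cycle, so the relative
# spike times across neurons encode the activation pattern.
times = {a: first_spike_time(a) for a in (0.9, 0.7, 0.5)}
```

Because the spike times fall within one oscillation cycle, a downstream coincidence detector can read out the pattern in a single cycle, as described above.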

Biological and artificial vision systems

The above-mentioned generic mechanisms are presumably at work in particular in the visual system, where they can explain how information can be encoded in spike times, and how selectivity to visual primitives emerges[8-11] (see also the demos). They are also relevant for neuromorphic engineering: we showed that they can be efficiently implemented in hardware, leading to fast systems with self-learning abilities[12-14].

In addition, we suggested a new functional role for microsaccades, that is, the small, jerk-like, involuntary eye movements made during fixation[15]. Using numerical simulations, we found that microsaccades, but not the other kinds of fixational eye movements known as drift and tremor, are sufficiently fast to synchronize certain retinal ganglion cells, namely those whose receptive fields contain contrast edges after the microsaccade. This could serve to rapidly transmit the most crucial information about a stimulus.

Finally, we are also interested in the problem of view-invariant object recognition. It is a challenging problem, which has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are remarkably good at it, and are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best algorithms for object recognition in natural images. We compared these DCNNs with humans on view-invariant object recognition. It turned out that the latest DCNNs match human performance[16], and the relative difficulty of different kinds of variations (e.g. scale, rotation, etc.) was roughly the same for humans and DCNNs[17] (this last study has been highlighted in the MIT Tech Review).

A clear limitation of current DCNNs, however, is the way they learn, which bears little resemblance to the way humans learn and is much less efficient. This brings us to my new research project: seeking inspiration from the brain to improve learning in DCNNs[18].


1. Bi GQ, Poo MM (1998) Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci 18: 10464-10472.
2. Masquelier T, Guyonneau R, Thorpe SJ (2008) Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains. PLoS One 3: e1377. doi:10.1371/journal.pone.0001377.
3. Masquelier T, Guyonneau R, Thorpe SJ (2009) Competitive STDP-Based Spike Pattern Learning. Neural Comput 21: 1259-1276. doi:10.1162/neco.2008.06-08-804.
4. Masquelier T (2017) STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons. Neuroscience. doi:10.1016/j.neuroscience.2017.06.032.
5. Gilson M, Masquelier T, Hugues E (2011) STDP allows fast rate-modulated coding with Poisson-like spike trains. PLoS Comput Biol 7: e1002231. doi:10.1371/journal.pcbi.1002231.
6. Masquelier T, Hugues E, Deco G, Thorpe SJ (2009) Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning scheme. J Neurosci 29: 13484-13493. doi:10.1523/JNEUROSCI.2207-09.2009.
7. Masquelier T (2014) Oscillations can reconcile slowly changing stimuli with short neuronal integration and STDP timescales. Network 25: 85-96. doi:10.3109/0954898X.2014.881574.
8. Masquelier T, Thorpe SJ (2007) Unsupervised learning of visual features through spike timing dependent plasticity. PLoS Comput Biol 3: e31. doi:10.1371/journal.pcbi.0030031.
9. Masquelier T, Thorpe SJ (2010) Learning to recognize objects using waves of spikes and Spike Timing-Dependent Plasticity. Proc. IEEE International Joint Conference on Neural Networks (IJCNN).
10. Masquelier T (2012) Relative spike time coding and STDP-based orientation selectivity in the early visual system in natural continuous and saccadic vision: a computational model. J Comput Neurosci 32: 425-441. doi:10.1007/s10827-011-0361-9.
11. Kheradpisheh SR, Ganjtabesh M, Masquelier T (2016) Bio-inspired unsupervised learning of visual features leads to robust invariant object recognition. Neurocomputing 205: 382-392. doi:10.1016/j.neucom.2016.04.029.
12. Zamarreño-Ramos C, Camuñas-Mesa LA, Pérez-Carrasco JA, Masquelier T, Serrano-Gotarredona T, et al. (2011) On Spike-Timing-Dependent-Plasticity, Memristive Devices, and Building a Self-Learning Visual Cortex. Front Neurosci 5: 22.
13. Serrano-Gotarredona T, Masquelier T, Prodromakis T, Indiveri G, Linares-Barranco B (2013) STDP and STDP variations with memristors for spiking neuromorphic learning systems. Front Neurosci 7: 2. doi:10.3389/fnins.2013.00002.
14. Yousefzadeh A, Masquelier T, Serrano-Gotarredona T, Linares-Barranco B (2017) Hardware Implementation of Convolutional STDP for On-line Visual Feature Learning. Proc IEEE ISCAS.
15. Masquelier T, Portelli G, Kornprobst P (2016) Microsaccades enable efficient synchrony-based coding in the retina: a simulation study. Sci Rep 6: 24086. doi:10.1038/srep24086.
16. Kheradpisheh SR, Ghodrati M, Ganjtabesh M, Masquelier T (2016) Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition. Sci Rep 6: 32672. doi:10.1038/srep32672.
17. Kheradpisheh SR, Ghodrati M, Ganjtabesh M, Masquelier T (2016) Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder. Front Comput Neurosci 10: 1-15. doi:10.3389/fncom.2016.00092.
18. Kheradpisheh SR, Ganjtabesh M, Thorpe SJ, Masquelier T (2018) STDP-based spiking deep convolutional neural networks for object recognition. Neural Networks 99: 56-67. doi:10.1016/j.neunet.2017.12.005.

Last updated Feb 5 2018