Deep convolutional neural networks (CNNs) have been largely inspired by the visual system of animals. However, major discrepancies remain between artificial CNNs and their biological counterparts: neurons in the visual system mostly communicate with noisy and sparse signals (spikes). While this grants biological neural networks high energy efficiency, noise diminishes their reliability. How can animals solve visual tasks while computing with noisy units? This project aims to combine real neuronal recordings with machine learning to study the strategies that networks of neurons in the visual system have developed to cope with this intrinsic noise.

We will address this question in the retina, a dense network of neurons with feedforward and horizontal connections, which processes the visual stimulus in a highly non-linear way and can be seen as an efficient biological implementation of a deep CNN with recurrent connections. Because of these recurrent connections, the stochastic spiking activity of neurons is correlated across cells of the same layer. The impact of this correlated noise remains unclear and is an active topic of research in neuroscience. One hypothesis is that correlated noise could be beneficial, helping to mitigate the effect of single-cell noise on information transmission.
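As a toy illustration of this hypothesis (not part of the project's data or methods), one can compare the linear Fisher information f'ᵀΣ⁻¹f' of a two-cell population under independent versus correlated noise; the tuning derivative f' and correlation coefficient ρ below are assumed values chosen only for the example. When two oppositely tuned cells share positively correlated noise, a linear readout along the signal direction cancels part of that noise, so the encoded information increases.

```python
import numpy as np

def linear_fisher_information(f_prime, cov):
    """I = f'^T Sigma^-1 f' for a population with tuning derivative f'
    and noise covariance Sigma."""
    return f_prime @ np.linalg.solve(cov, f_prime)

f_prime = np.array([1.0, -1.0])       # two cells with opposite tuning (assumed)
for rho in (0.0, 0.5):                # noise correlation between the two cells
    cov = np.array([[1.0, rho],
                    [rho, 1.0]])      # unit-variance noise, correlation rho
    print(rho, linear_fisher_information(f_prime, cov))
# rho = 0.0 -> 2.0, rho = 0.5 -> 4.0: in this toy case, correlated noise
# increases the information carried by the pair.
```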

Following the latest developments in deep learning applied to neuroscience, we will develop CNN models that faithfully describe the dynamical responses of individual retinal neurons to various stimulus ensembles. We will then build on the approach developed in our group to extend this CNN framework so that it accurately captures the correlated noise in neuronal responses.
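As an illustrative sketch only, a CNN encoding model of this kind can be written as a small convolutional network mapping a spatiotemporal stimulus clip to non-negative firing rates, fitted with a Poisson likelihood; the class name, layer sizes, and stimulus dimensions below are assumptions for the example and do not describe the group's actual architecture.

```python
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    """Minimal CNN encoder: stimulus clip -> firing rates of n_cells neurons."""
    def __init__(self, n_cells, n_lags=40, field=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_lags, 8, kernel_size=15), nn.Softplus(),
            nn.Conv2d(8, 8, kernel_size=11), nn.Softplus(),
        )
        out = field - 15 + 1 - 11 + 1                 # spatial size after the two convs
        self.readout = nn.Linear(8 * out * out, n_cells)

    def forward(self, x):                             # x: (batch, n_lags, field, field)
        h = self.features(x).flatten(1)
        return nn.functional.softplus(self.readout(h))   # non-negative firing rates

model = RetinaCNN(n_cells=60)
x = torch.randn(16, 40, 50, 50)                       # white-noise stimulus clips (assumed)
spikes = torch.poisson(torch.ones(16, 60))            # placeholder spike counts
loss = nn.PoissonNLLLoss(log_input=False)(model(x), spikes)
loss.backward()
```

Such a model is deterministic given the stimulus; capturing correlated noise, as described above, would additionally require a stochastic component shared across cells, whose specific form is part of the project and is not sketched here.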

Learning such models from recordings of retinal activity will allow us to study how noise and its correlated structure affect stimulus encoding and information transmission in the retina and beyond.


PhD student: Gabriel Mahuas 

PhD supervisor: Serge Picaud

Research laboratory: Institut de la Vision