Modélisation en neurosciences – et ailleurs / Modelling in neuroscience – and elsewhere
Learning track: Santé (Health)


Prerequisites

Basics of probability theory and dynamical systems. Concepts and tools from statistical physics and information theory will be introduced as needed. Some knowledge of neurobiology is helpful but not necessary.

Course objective

An introduction to the field of computational neuroscience: modelling of learning, adaptation and decision-making in natural neural systems, with openings to other fields (machine learning, signal processing, complex systems in the social sciences, etc.). The tools come from statistical physics, dynamical systems theory, Bayesian inference and information theory. The course focuses as much on mathematical and algorithmic aspects as on qualitative ones (historical aspects, the conceptual contributions of modelling, implications for the understanding of human and animal cognition).

Presentation: here

Course organization

20 hours of lectures.
Taught in French or English, depending on the students.

Assessment

Critical reading of an article and a small personal project based on it (numerical simulation, extension of a result…), with a written report and an oral presentation.


Bibliography

  • P. Dayan and L. F. Abbott, Theoretical Neuroscience, MIT Press, 2nd ed., 2005.
  • W. Gerstner, W. M. Kistler, R. Naud and L. Paninski, Neuronal Dynamics, Cambridge University Press, 2014.
  • T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, 2nd ed., 2006.
  • See also the course web site.

More information…

Topics covered

  • Associative memory: Hebbian plasticity; attractor neural networks. Related topics: Markov random fields; physics of disordered systems; combinatorial optimization; formation of coalitions (between countries, companies…).
  • Supervised learning: from the Perceptron to the modelling of the Cerebellum. Related topics: Support Vector Machines (SVM), deep learning.
  • Unsupervised learning: from Hebbian plasticity to Principal Component Analysis (PCA).
  • Neural coding: the efficient coding hypothesis (H. Barlow); the "infomax" principle (maximization of Shannon information); stimulus (parameter) estimation (Cramér-Rao bound and Fisher information); population coding (coding with a large number of stimulus-specific cells, such as place cells, orientation-specific cells, face-specific cells, etc.). Related topics: histogram equalization; independent component analysis (ICA); natural image statistics.
  • Categorical perception and decision making: optimal coding and reaction times; drift-diffusion models vs. attractor neural networks.
  • Categorization: artificial neural networks vs. human brain.
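To give a flavour of the first topic above (associative memory via Hebbian plasticity and attractor dynamics), here is a minimal toy sketch of a Hopfield-style network in Python/NumPy. It is not course material; the network size, number of patterns, and noise level are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                          # neurons, stored patterns (toy values)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: W_ij = (1/N) sum_mu xi_i^mu xi_j^mu, no self-coupling
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(cue, steps=50):
    """Iterate s <- sign(W s) until a fixed point (attractor) is reached."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1          # break ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# corrupt 10% of the first pattern, then let the dynamics clean it up
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
overlap = recall(cue) @ patterns[0] / N    # near 1.0 means successful recall
```

With the memory load P/N well below the classical capacity limit, the corrupted cue falls into the basin of attraction of the stored pattern and the overlap returns close to 1.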
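The unsupervised-learning topic ("from Hebbian plasticity to PCA") can also be sketched in a few lines: Oja's rule, a normalized Hebbian update, drives a single linear neuron's weight vector toward the first principal component of its inputs. The covariance matrix, learning rate and sample count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# correlated 2-D inputs whose leading principal component lies along (1, 1)
C = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.normal(size=2)                 # random initial synaptic weights
eta = 0.01                             # small learning rate
for x in X:
    y = w @ x                          # linear neuron output
    w += eta * y * (x - y * w)         # Oja's rule: Hebbian term + decay

# w converges to the leading eigenvector of C, with unit norm
w_true = np.array([1.0, 1.0]) / np.sqrt(2)
alignment = abs(w @ w_true)            # close to 1.0 after learning
```

The decay term `- eta * y**2 * w` is what keeps the weights bounded; without it, plain Hebbian learning diverges, while with it the fixed point is the normalized first principal component.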
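Finally, the decision-making topic mentions drift-diffusion models: noisy evidence accumulates toward one of two bounds, jointly producing a choice and a reaction time. A minimal Euler-discretized simulation, with purely illustrative parameter values, could look like this:

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(drift=0.3, sigma=1.0, threshold=1.0, dt=0.001, max_t=10.0):
    """Accumulate noisy evidence until one of the two bounds is reached.

    Returns (choice, reaction_time); choice is True for the upper bound.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return (x >= threshold), t

trials = [ddm_trial() for _ in range(500)]
accuracy = np.mean([choice for choice, _ in trials])   # fraction of "correct" choices
mean_rt = np.mean([t for _, t in trials])              # mean reaction time (seconds)
```

With a positive drift, the upper bound is hit more often than the lower one, and raising the threshold trades slower responses for higher accuracy, which is the speed-accuracy trade-off the course discusses.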
Instructor

Jean-Pierre Nadal


See the other courses of the 2nd semester