In this PhD project we build neural networks by connecting mathematical models of individual neurons. The dynamics of such networks can be described using tools from dynamical systems theory, and the networks are usually called attractor networks. By constraining our models with biologically plausible parameters, we can reproduce neuronal dynamics observed in real brains. These models allow us to relate the activity of single neurons to the behavioral patterns seen in monkeys and humans solving complex cognitive tasks.
In the lab, subjects solve controlled versions of these tasks by reporting a specific characteristic of a previously presented visual or auditory stimulus. For example, a popular visual stimulus consists of a cloud of moving dots. During stimulus presentation, a percentage of the dots move coherently in one direction while the rest move completely at random. In pure decision-making tasks, subjects must report the direction in which the coherent dots were moving. In working-memory versions of these tasks, a delay separates the end of the stimulus from the response period, forcing subjects to hold their response in memory throughout the delay.
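The random-dot stimulus described above can be sketched in a few lines of code. This is an illustrative toy (the function name, parameters, and the convention that the signal direction is rightward are my own assumptions, not part of any specific experimental setup): a fraction `coherence` of dots steps in the signal direction, and the remaining dots step in random directions.

```python
import numpy as np

def dot_displacements(n_dots=100, coherence=0.3, speed=0.01, seed=0):
    """One update step of a random-dot stimulus (illustrative sketch).

    A fraction `coherence` of the dots moves rightward; the rest move
    in directions drawn uniformly at random. Returns an (n_dots, 2)
    array of (dx, dy) displacements.
    """
    rng = np.random.default_rng(seed)
    n_coherent = int(round(coherence * n_dots))
    angles = np.empty(n_dots)
    angles[:n_coherent] = 0.0  # coherent dots: rightward (angle 0)
    angles[n_coherent:] = rng.uniform(0.0, 2.0 * np.pi, n_dots - n_coherent)
    return speed * np.column_stack([np.cos(angles), np.sin(angles)])
```

Applied frame by frame, these displacements produce the cloud of dots whose net motion direction the subject must report.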
So-called attractor models are biologically plausible networks that can solve both kinds of task by accumulating stimulus evidence, holding the decision in memory, and finally reporting it. Using techniques from dynamical systems and stochastic processes, we will explain the key mechanisms by which attractor networks solve decision-making and working-memory tasks. Beyond solving these tasks, the models make specific behavioral and neurophysiological predictions that have already been tested experimentally. We will present experimental evidence supporting these models as a powerful framework for studying real brains.
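The mechanism described above can be sketched with a minimal rate model. The following is a toy two-population network with self-excitation and mutual inhibition (all parameter values and the saturating tanh transfer function are illustrative assumptions, not a fitted biophysical model): during the stimulus, each population receives input proportional to the evidence for its preferred direction; recurrent dynamics amplify the difference until one population wins; and after stimulus offset the winner's activity persists in a high-rate attractor state, holding the decision in memory through the delay.

```python
import numpy as np

def simulate_trial(coherence=0.2, dt=1e-3, T=2.0, t_stim=1.0, seed=0):
    """Toy two-population attractor network (illustrative sketch).

    Two selective populations with rates r[0], r[1] excite themselves
    and inhibit each other. During the stimulus (t < t_stim), population 0
    receives slightly stronger input when coherence > 0. Returns the
    (n_steps, 2) array of firing-rate trajectories.
    """
    rng = np.random.default_rng(seed)
    r = np.zeros(2)           # firing rates of the two selective populations
    tau = 0.1                 # membrane/population time constant (s)
    w_exc, w_inh = 1.6, 1.0   # recurrent self-excitation, cross-inhibition
    noise = 0.02              # noise amplitude (illustrative)
    rates = []
    for step in range(int(T / dt)):
        t = step * dt
        # stimulus input biased by motion coherence, off during the delay
        stim = 0.3 * np.array([1 + coherence, 1 - coherence]) if t < t_stim else 0.0
        inp = w_exc * r - w_inh * r[::-1] + stim
        drive = np.tanh(np.clip(inp, 0.0, None))  # saturating transfer function
        r = r + dt / tau * (-r + drive) + noise * np.sqrt(dt) * rng.standard_normal(2)
        r = np.clip(r, 0.0, None)                 # rates cannot be negative
        rates.append(r.copy())
    return np.array(rates)
```

Running a trial shows the two signatures discussed in the text: winner-take-all competition during evidence accumulation, and a persistent high-activity state after the stimulus is removed, which is the model's working memory of the decision.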