Publications of year 2019
Theses
Abstract: As human beings, we can understand spoken language, recognize the opening bars of Beethoven's 5th Symphony, notice the tide-induced fluctuations of the level of the ocean, predict the color of traffic lights, and identify many more of the ubiquitous temporal regularities that characterize our daily environment. How does the human brain detect, identify, process and leverage those regularities in spite of their striking diversity? In this dissertation, I studied the mechanisms through which the human brain acquires knowledge of sequences and of the regularities they may entail. To do so, I recorded behavioural and neural responses to auditory binary sequences characterized by various types of regularities. In parallel, I derived mathematical models of sequence processing that rest upon normative principles of probabilistic inference yet differ in their computational architectures, and used human data to arbitrate among them. Using this general approach, I investigated three facets of the human sensitivity to sequences. Firstly, I demonstrated that a simple machinery for inferring transition structures between sequence items supports various aspects of the human perception of sequences encountered in seemingly disparate studies. Secondly, I found that this learning algorithm is implemented in distinct brain systems which extract statistical trends over different timescales, thereby providing a mechanistic explanation for the human sensitivity both to global statistical biases and to the recent history of observations. Thirdly, since humans also possess the ability to quickly detect and identify deterministic rules in addition to statistical regularities, I showed that statistics and rules correspond to two distinct hypothesis spaces rather than a continuum, and that human subjects rationally arbitrate between them given the observed sequence. Altogether, my investigations of the cognitive foundations, computational principles and neural architectures supporting sequence processing suggest that the human brain is equipped with several systems that conform to normative principles of probabilistic inference but are specialized in different aspects of sequences, thereby providing a putative explanation for the human perception of a vast repertoire of temporal regularities.
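As a concrete illustration of the class of models the dissertation compares, the sketch below estimates the transition probabilities of a binary sequence with Bayesian updating and exponential forgetting, so that the estimates track the recent history of observations. This is a minimal stand-in, not the dissertation's actual models: the Beta(1, 1) prior, the leak time constant, and the toy sequence are all illustrative assumptions.

```python
import numpy as np

def transition_posterior(seq, leak=20.0):
    """Leaky Bayesian estimate of the transition probabilities of a
    binary sequence (items coded 0/1). Counts of observed transitions
    decay exponentially with time constant `leak` (in observations),
    so recent history dominates; with a Beta(1, 1) prior, the
    posterior mean of p(next = 1 | prev) is (count + 1) / (total + 2)."""
    counts = np.zeros((2, 2))              # leaky counts[prev, next]
    decay = np.exp(-1.0 / leak)
    estimates = []
    for prev, nxt in zip(seq[:-1], seq[1:]):
        counts *= decay                    # forget old evidence
        counts[prev, nxt] += 1.0
        estimates.append((counts[:, 1] + 1.0) / (counts.sum(axis=1) + 2.0))
    return np.array(estimates)             # (n-1, 2): p(1|0), p(1|1) over time

# Toy sequence: mostly repetitions at first, then mostly alternations
rng = np.random.default_rng(0)
alternate = np.concatenate([rng.random(200) < 0.2, rng.random(200) < 0.8])
seq = np.cumsum(alternate.astype(int)) % 2
print(transition_posterior(seq)[-1])       # late estimates reflect the recent regime
```

With a short leak constant the estimates adapt quickly to the alternation-heavy second half; a long leak constant would instead converge on the global statistics, mirroring the two timescales contrasted in the dissertation.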
Book chapters
Articles in journals
Abstract: This study examines memory retrieval and syntactic composition using fMRI while participants listen to a book, The Little Prince. These two processes are quantified using methods from computational linguistics. Memory retrieval is quantified via multi-word expressions that are likely to be stored as a unit, rather than built up compositionally. Syntactic composition is quantified via bottom-up parsing that tracks the tree-building work needed in composed syntactic phrases. Regression analyses localise these processes to spatially distinct brain regions. Composition mainly correlates with bilateral activity in the anterior temporal lobe and inferior frontal gyrus. Retrieval of stored expressions drives right-lateralised activation in the precuneus. Less cohesive expressions activate well-known nodes of the language network implicated in composition. These results help to detail the neuroanatomical bases of two widely assumed cognitive operations in language processing.
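The bottom-up complexity measure can be illustrated with a short sketch: for each word of a bracketed parse, count the constituents that close at that word, i.e. the "reduce" operations a bottom-up parser performs there. This is a minimal reconstruction of the general idea, not the paper's pipeline; the Penn-style tree format and the example sentence are assumptions.

```python
import re

def bottomup_counts(tree):
    """For each word of a Penn-style bracketed parse, count how many
    constituents close at that word -- the 'reduce' steps a bottom-up
    parser performs there, usable as a word-by-word fMRI regressor."""
    tokens = re.findall(r"\(|\)|[^()\s]+", tree)
    words, counts = [], []
    prev = None
    for tok in tokens:
        if tok == "(":
            pass
        elif tok == ")":
            if counts:
                counts[-1] += 1           # a constituent closes on the last word
        elif prev == "(":
            pass                          # node label, e.g. NP, VP
        else:
            words.append(tok)             # terminal word
            counts.append(0)
        prev = tok
    return list(zip(words, counts))

print(bottomup_counts("(S (NP (DT The) (NN prince)) (VP (VBD slept)))"))
# [('The', 1), ('prince', 2), ('slept', 3)]
```

Phrase-final words like "prince" and "slept" accumulate several reduce operations at once, which is what lets the regressor separate composition-heavy moments from the rest of the narration.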
Abstract: A central goal in cognitive science is to parse the series of processing stages underlying a cognitive task. A powerful yet simple behavioral method that can resolve this problem is finger trajectory tracking: by continuously recording the finger's position and speed as a participant chooses a response, and by analyzing which stimulus features affect the trajectory at each time point during the trial, we can estimate the absolute timing and order of each processing stage, and detect transient effects, changes of mind, serial versus parallel processing, and real-time fluctuations in subjective confidence. We suggest that trajectory tracking, which provides considerably more information than mere response times, may yield a comprehensive understanding of the fast temporal dynamics of cognitive operations.
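A minimal sketch of the per-timepoint analysis described above: regress the finger's position on the stimulus features separately at each time point, and read off when each feature's regression weight departs from zero. The synthetic data, the two-feature design, and the ordinary least-squares estimator are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def timepoint_betas(trajectories, features):
    """Regress finger position on stimulus features separately at each
    time point. `trajectories`: (n_trials, n_timepoints) x-coordinates;
    `features`: (n_trials, n_features). Returns (n_timepoints, n_features)
    regression weights: the moment a feature's weight departs from zero
    estimates when that feature begins to drive the movement."""
    n_trials = trajectories.shape[0]
    X = np.column_stack([np.ones(n_trials), features])   # add an intercept
    betas, *_ = np.linalg.lstsq(X, trajectories, rcond=None)
    return betas[1:].T                                   # drop the intercept

# Toy data: the second feature influences the trajectory only after t = 50
rng = np.random.default_rng(1)
feats = rng.standard_normal((200, 2))
traj = 0.1 * rng.standard_normal((200, 100))
traj[:, 50:] += feats[:, [1]] * np.linspace(0.0, 1.0, 50)
b = timepoint_betas(traj, feats)
print(b[40], b[90])   # second feature's weight: near 0 early, large late
```

The onset latency of each feature's weight is exactly the kind of stage-timing information the abstract argues cannot be recovered from response times alone.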
Abstract: Memory for spatial sequences does not depend solely on the number of locations to be stored, but also on the presence of spatial regularities. Here, we show that the human brain quickly stores spatial sequences by detecting geometrical regularities at multiple time scales and encoding them in a format akin to a programming language. We measured gaze-anticipation behavior while spatial sequences of variable regularity were repeated. Participants' behavior suggested that they quickly discovered the most compact description of each sequence in a language comprising nested rules, and used these rules to compress the sequence in memory and predict the next items. Activity in dorsal inferior prefrontal cortex correlated with the amount of compression, while right dorsolateral prefrontal cortex encoded the presence of embedded structures. Sequence learning was accompanied by a progressive differentiation of multi-voxel activity patterns in these regions. We propose that humans are endowed with a simple "language of geometry" which recruits a dorsal prefrontal circuit for geometrical rules, distinct from but close to areas involved in natural language processing.
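The idea of compressing a sequence with nested rules can be made concrete with a toy interpreter. The primitives below ('item', 'seq', and 'rep' with rotation) and the symbol-counting description length are simplified stand-ins for the paper's language of geometry, chosen only to show how a nested rule yields a shorter description of the same octagonal sequence.

```python
def expand(prog):
    """Expand a nested 'geometry program' into positions on an octagon
    (0-7). Toy primitives: ('item', p) is one position, ('seq', ...)
    concatenates sub-programs, and ('rep', n, k, body) repeats body
    n times, rotating it by +k positions each time."""
    op = prog[0]
    if op == "item":
        return [prog[1]]
    if op == "seq":
        return [p for sub in prog[1:] for p in expand(sub)]
    if op == "rep":
        _, n, k, body = prog
        base = expand(body)
        return [(p + i * k) % 8 for i in range(n) for p in base]
    raise ValueError(op)

def length(prog):
    """Toy description length: the number of symbols in a program."""
    return 1 + sum(length(p) if isinstance(p, tuple) else 1 for p in prog[1:])

# Two programs generating the same 8-location sequence 0, 1, ..., 7
flat = ("seq",) + tuple(("item", i) for i in range(8))
nested = ("rep", 8, 1, ("item", 0))
assert expand(flat) == expand(nested) == list(range(8))
print(length(flat), length(nested))   # 17 vs 5: the nested rule compresses
```

On this toy metric, a regular sequence admits a short program while an irregular one does not, which is the sense in which compressibility can predict memory load and gaze anticipation.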
Conference proceedings
Abstract: Recent work has shown that LSTMs trained on a generic language modeling objective capture syntax-sensitive generalizations such as long-distance number agreement. However, we have no mechanistic understanding of how they accomplish this remarkable feat. Some have conjectured that it depends on heuristics that do not truly take hierarchical structure into account. We present here a detailed study of the inner mechanics of number tracking in LSTMs at the single-neuron level. We discover that long-distance number information is largely managed by two "number units". Importantly, the behaviour of these units is partially controlled by other units independently shown to track syntactic structure. We conclude that LSTMs are, to some extent, implementing genuinely syntactic processing mechanisms, paving the way to a more general understanding of grammatical encoding in LSTMs.
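The single-neuron methodology can be sketched as follows: track the cell state of a candidate "number unit" token by token, and compare the network's output with and without that unit ablated. The model below is randomly initialised and `unit = 13` is an arbitrary placeholder, so this only illustrates the procedure, which the paper applies to a pretrained language model.

```python
import torch

torch.manual_seed(0)
vocab, hidden, unit = 50, 64, 13                 # `unit`: hypothetical number unit
emb = torch.nn.Embedding(vocab, hidden)
lstm = torch.nn.LSTM(hidden, hidden)
readout = torch.nn.Linear(hidden, vocab)

tokens = torch.randint(vocab, (12, 1))           # stand-in tokenised sentence
with torch.no_grad():
    h = c = ha = ca = torch.zeros(1, 1, hidden)
    trace = []
    for t in tokens:
        x = emb(t).unsqueeze(0)                  # (seq=1, batch=1, dim)
        _, (h, c) = lstm(x, (h, c))              # intact network
        trace.append(c[0, 0, unit].item())       # follow the unit's cell state
        _, (ha, ca) = lstm(x, (ha, ca))          # ablated network
        ca[0, 0, unit] = 0.0                     # clamp the unit to zero
    shift = (readout(h[0]) - readout(ha[0])).abs().max().item()

print("unit cell-state trace:", [round(v, 2) for v in trace])
print("max logit shift caused by ablation:", round(shift, 4))
```

In a trained model, the analogous trace would flip sign when a singular versus plural subject is encountered, and ablating the unit would selectively degrade long-distance agreement; the random network here merely demonstrates the bookkeeping.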
Miscellaneous