Abstracts

Steven Piantadosi – The Language of Thought as a modern psychological theory

I’ll provide an overview of recent work on the “language of thought,” including computational models and experimental studies. I’ll discuss several foundational questions the LoT attempts to address concerning symbolic representations, nativism and empiricism, and the nature of thought. I’ll also highlight how language of thought approaches connect to exciting tools in deep learning, large language models, and computational neuroscience, with an emphasis on the way in which meaning may arise for all of these systems.


Fausto Carcassi (joint work w/ Danaja Rutar and George Deane) – Structure learning in the probabilistic Language of Thought and Active Inference

The active inference account of cognition (Nave et al., 2022) suggests that the human brain’s primary function is to maintain generative world models. These models are constantly updated to reduce prediction errors, either by modifying the brain’s internal states (perception) or by acting on the external world. While this account has mainly focused on how the continuous parameters of the world model are updated, it is less clear how the model’s structure should be revised (Rutar et al., 2022). By contrast, the probabilistic Language of Thought (pLoT) framework places structural model revision at its core, but it lacks a clear algorithmic-level story for integrating action and perception. The two approaches thus have complementary strengths, yet combining them faces real obstacles. We explore possible ways to integrate them and consider the primary challenges to doing so.
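To make the idea of structural model revision in a pLoT concrete, here is a minimal, purely illustrative Python sketch (my own construction, not material from the talk): candidate world-model structures are Boolean expressions over two features, a simplicity prior penalizes larger expressions, and revision is a stochastic search that keeps whichever structure best trades off simplicity against fit to observations. The grammar, the noise parameter, and the search procedure are all assumptions made for illustration.

```python
import math
import random

# Toy probabilistic Language of Thought over two Boolean features x and y.
# A "structure" is an expression tree; structural revision here means proposing
# a new tree and keeping it when it explains the observations at least as well.

def sample_expr(depth=0):
    """Sample a random expression from a simple grammar (acts as the prior)."""
    if depth > 2 or random.random() < 0.4:
        return random.choice(["x", "y"])
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", sample_expr(depth + 1))
    return (op, sample_expr(depth + 1), sample_expr(depth + 1))

def evaluate(expr, x, y):
    if expr == "x":
        return x
    if expr == "y":
        return y
    if expr[0] == "not":
        return not evaluate(expr[1], x, y)
    a, b = evaluate(expr[1], x, y), evaluate(expr[2], x, y)
    return (a and b) if expr[0] == "and" else (a or b)

def size(expr):
    return 1 if isinstance(expr, str) else 1 + sum(size(e) for e in expr[1:])

def log_posterior(expr, data, noise=0.05):
    """Simplicity prior exp(-size) combined with a noisy-match likelihood."""
    log_prior = -size(expr)
    log_lik = sum(
        math.log(1 - noise) if evaluate(expr, x, y) == label else math.log(noise)
        for (x, y), label in data
    )
    return log_prior + log_lik

# Observations generated by a concept unknown to the learner: "x and not y".
data = [((x, y), x and not y) for x in (True, False) for y in (True, False)]

# Structural revision as stochastic search over expression trees.
current = sample_expr()
for _ in range(2000):
    proposal = sample_expr()
    if log_posterior(proposal, data) >= log_posterior(current, data):
        current = proposal

print("best structure found:", current)
```

In this toy picture, adjusting continuous parameters (here, only the noise level) and revising discrete structure (the expression tree) are clearly separable operations; how to interleave the two within a perception–action loop is the kind of integration question at stake here.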


Nina Kazanina – Neural implementation of a Language of Thought (joint work w/ David Poeppel)

Since its appearance in 1975, Fodor’s Language of Thought (LoT) hypothesis has been influential in cognitive psychology and linguistics, but it has not gained traction in cognitive neuroscience. The reason for this scepticism lies in the perception that a neural implementation of a LoT is untenable. We disagree and demonstrate that the critical ingredients needed for a neural implementation of a LoT have in fact been found in the hippocampal spatial navigation system of rodents and other animals. We argue that the cell types found in spatial navigation – border cells, object cells, head-direction cells, etc. – provide exactly the types of representations and computations that the LoT calls for.


Isabelle Dautriche – Compositionality in non-linguistic thought and in early language acquisition

A defining feature of language of thought models is compositionality: complex mental representations are functions of simpler elements and their mode of combination. In this talk, I will evaluate whether compositionality arises in phylogeny only with the evolution of natural language and in ontogeny only with the acquisition of language. First, I will review a suite of experimental results suggesting that language is not necessary to entertain some forms of compositional mental representation, both in ‘pre-linguistic’ human infants and in non-human primates. Second, I will present recent results on the development of compositional language in infancy which suggest that compositional language that is both systematic and productive may arise much later than previously thought. Finally, I will bring these findings together to discuss differences in the compositional architectures of thought and the role that language could play in shaping them.


Rachel Dudley – Understanding of negation in infancy

Abstract combinatorial thought supports adult human reasoning. But it is unknown whether such thought is available to infants who are in the process of acquiring their native language. Taking logical operators as a defining hallmark of abstract combinatorial thought, I will report on a series of experiments testing whether pre-verbal infants are able to solve reasoning problems that require negation. 


Jean-Remy Hochmann – Incomplete Language of Thought in infancy

The view that infants possess a full-fledged propositional Language of Thought (LoT) is appealing, providing a unifying account for infants’ precocious reasoning skills in several domains. However, careful appraisal of empirical evidence suggests that infants may lack a crucial component of a propositional LoT: discrete representations of abstract relations. 


Sophie Moracchini – Semantic priming in a non-verbal learning task

The question of how Language and the Language of Thought (LoT) relate has been experimentally approached by studying the cognitive abilities of preverbal infants and infraverbal animal species. This talk tackles the question of how to experimentally probe the relation between Language and the LoT, but from the perspective of the adult human mind. Specifically, using a non-verbal learning task with adults, we examine whether the learnability of (Boolean) operators is facilitated when the learner is linguistically primed.


The Best Game in Town: The Re-Emergence of the Language of Thought Hypothesis Across the Cognitive Sciences

What is the structure of thought? Many philosophers and cognitive scientists think we’ve moved past the language of thought (LoT). They believe that instead of symbolic, logical, abstract cognition, we can simply posit deep neural nets, associative models, sensory representations, embodied/extended/etc. cognition, or some other more fashionable approach. However, experimental evidence from the study of perception, infant and animal reasoning, automatic cognition in adults, and cutting-edge computational modeling tells a different story: the LoT hypothesis now enjoys more robust empirical support than ever before. In the course of defending this claim, we’ll also outline six properties that are suggestive of a LoT. The presence and absence of these properties open up possibilities for taxonomizing differences in minds and mental processes in terms of their expressive power, thus giving us a guide for mapping the many languages of thought, both within and across species.

Jake Quilty-Dunn – The Best Game in Town, Part: Perception

Eric Mandelbaum – The Best Game in Town, Part: Logic

Nicolas Porot – The Best Game in Town, Part: Infants and Animals


Véronique Izard – Abstraction at birth

Newborns are not limited to representing narrow, modality- and object-specific aspects of their environment. Rather, they sometimes prove sensitive to abstract properties shared by stimuli of very different kinds: for example, approximate numerosity, magnitude, or rate. These findings suggest that neonates represent their environment in broad strokes, in terms of its most abstract properties. In line with this picture, I will present studies revealing the existence of yet another type of abstract representation in newborns, one that applies to small sets.


Mathias Sablé-Meyer – Using a Language of Thought formalism to account for the mental representation of geometric shapes in humans

In various cultures, across history, and over a wide range of spatial scales, humans have displayed an ability to produce a rich variety of geometric shapes such as lines, circles, or spirals. I will present a formalization of the hypothesis that humans possess an inner compositional language of thought that produces geometric shapes by combining elementary primitives such as lines and numbers. I will explore the consequences of this hypothesis, both from a theoretical standpoint, by drawing a link between human perception and program induction, and from an empirical standpoint, by showing that behavioral markers of shape complexity follow additive laws of repetition, concatenation, and embedding. Finally, I will make this hypothesis concrete by proposing a specific language and showing that it indeed provides a good model of geometric shape complexity in humans.
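As a purely illustrative sketch of what such a formalism might look like (the specific grammar and complexity measure below are my assumptions, not the language proposed in the talk), the following Python defines a toy shape language: a program is either a unit line followed by a turn, a concatenation of programs, or a repetition of a program, and a shape’s complexity is the number of symbols in its program. A square, which compresses into a single repeated move, comes out simpler than an irregular quadrilateral that must be spelled out move by move.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple, Union

# Toy compositional language for planar shapes: primitives are unit lines with
# turns, combined by concatenation and repetition. Complexity is measured as
# the number of symbols in a program (a crude stand-in for description length).

@dataclass
class Move:              # draw a unit line, then turn by `angle` degrees
    angle: float

@dataclass
class Concat:            # run subprograms in sequence
    parts: List["Program"]

@dataclass
class Repeat:            # run a subprogram `n` times
    n: int
    body: "Program"

Program = Union[Move, Concat, Repeat]

def description_length(p: Program) -> int:
    """Count the symbols needed to write the program down."""
    if isinstance(p, Move):
        return 2                                   # 'move' + angle
    if isinstance(p, Repeat):
        return 2 + description_length(p.body)      # 'repeat' + n + body
    return 1 + sum(description_length(q) for q in p.parts)

def trace(p: Program) -> List[Tuple[float, float]]:
    """Unfold a program into the sequence of vertices it draws."""
    points = [(0.0, 0.0)]
    def run(prog, heading, pos):
        if isinstance(prog, Move):
            x, y = pos
            pos = (x + math.cos(math.radians(heading)),
                   y + math.sin(math.radians(heading)))
            points.append(pos)
            return heading + prog.angle, pos
        if isinstance(prog, Repeat):
            for _ in range(prog.n):
                heading, pos = run(prog.body, heading, pos)
            return heading, pos
        for q in prog.parts:
            heading, pos = run(q, heading, pos)
        return heading, pos
    run(p, 0.0, points[0])
    return points

square = Repeat(4, Move(90))                                   # compressible
irregular = Concat([Move(95), Move(72), Move(118), Move(80)])  # not compressible
print(description_length(square), description_length(irregular))  # 4 vs 9
```

On this toy measure, complexity is additive under concatenation and grows only modestly under repetition and nesting, which is the spirit of the additive laws mentioned above; the grammar and the symbol-counting metric are, again, illustrative choices rather than the formalism presented in the talk.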