
Systems, Models and Signature Limits

by Stephen A. Butterfill


"Two puzzles about mindreading, and in particular about the nature of belief ascription, require resolution. Can infants ascribe false beliefs in their first or second year of life? Some measures indicate that they can, others that they cannot. Is belief ascription automatic? Some findings suggest that it is, others that it is not. Reflection on these puzzling patterns of findings suggests that we should step back and ask: What is it to ascribe beliefs? More generally, what is it to be a mindreader? Mindreading involves representing mental states; this requires having some model of the mental, much as representing physical states requires having some model of the physical. The history of science reveals that there are multiple models of the physical. Some models are relatively easy to acquire or apply but limited in accuracy; others are harder to acquire and use but also more accurate. The first part of this talk will show that there are also multiple models of the mental. To say that someone represents beliefs or other mental states leaves open the question of which model of the mental she is using. Just as humans use multiple models of the physical (an expert physicist will probably leave quantum mechanics behind when putting up a garden fence, in favour of a model she can more efficiently deploy), so it is likely that they use multiple models of the mental. But how can we distinguish among hypotheses about which model is used in a particular task? In the physical case, such hypotheses can be distinguished by identifying signature limits. To illustrate, impetus mechanics makes incorrect predictions about certain trajectories. If a certain group of individuals use impetus mechanics on a particular task, they should make incorrect predictions about these trajectories. Testing whether they make such predictions can therefore yield evidence about which model of the physical they are using.
The second part of this talk will show that certain models of the mental also have signature limits. In particular, simpler models of the mental are limited in that they yield incorrect predictions in cases where beliefs essentially involve identity. Other talks in this symposium provide evidence that, in certain cases, mindreading exhibits this signature limit. A conjecture about how mindreaders variously model minds does not by itself solve the puzzles about the automaticity and development of mindreading. The third part of this talk therefore concerns systems; that is, the different ways in which a model of the mind can be implemented cognitively in an actual mindreader. It is a familiar idea that different models of the physical are implemented in different cognitive systems; representing physical states can involve core systems or modules, for example. Representing mental states may similarly involve multiple systems. Conjectures about multiple systems are linked to conjectures about the models they implement. If mindreading involves multiple systems which implement multiple models of the mind, we can discover this by exploiting the relations between systems, models, and their signature limits."