Theory of Mind

Theory of Mind is the branch of cognitive science that investigates how we ascribe mental states to other persons and how we use such states to explain and predict their actions. More precisely, it is the branch that investigates mindreading, mentalizing, or mentalistic abilities. These skills are shared by almost all human beings beyond early childhood. They are used to treat other agents as bearers of unobservable psychological states and processes, and to anticipate and explain those agents’ behavior in terms of such states and processes. These mentalistic abilities are also called “folk psychology” by philosophers, and “naïve psychology” and “intuitive psychology” by cognitive scientists.

It is important to note that Theory of Mind is not an appropriate term to characterize this research area (nor to denote our mentalistic abilities), since it seems to assume from the start the validity of a specific account of the nature and development of mindreading, that is, the view that it depends on the deployment of a theory of the mental realm, analogous to our theories of the physical world (“naïve physics”). But this view—known as theory-theory—is only one of the accounts offered to explain our mentalistic abilities. In contrast, theorists of mental simulation have suggested that what lies at the root of mindreading is not any sort of folk-psychological conceptual scheme, but rather a kind of mental modeling in which the simulator uses her own mind as an analog model of the mind of the simulated agent.

Both theory-theory and simulation-theory are actually families of theories. Some theory-theorists maintain that our naïve theory of mind is the product of a scientific-like exercise of a domain-general theorizing capacity. Other theory-theorists defend a quite different hypothesis, according to which mindreading rests on the maturation of a mental organ dedicated to the domain of psychology. Simulation-theory also shows different facets. According to the “moderate” version of simulationism, mental concepts are not completely excluded from simulation. Simulation can be seen as a process through which we first generate and self-attribute pretend mental states that are intended to correspond to those of the simulated agent, and then project them onto the target. By contrast, the “radical” version of simulationism rejects the primacy of first-person mindreading and contends that we imaginatively transform ourselves into the simulated agent, interpreting the target’s behavior without using any kind of mental concept, not even ones referring to ourselves.

Finally, the claim—common to both theory-theorists and simulation theorists—that mindreading plays a primary role in human social understanding was challenged in the early 21st century, mainly by phenomenology-oriented philosophers and cognitive scientists.

Table of Contents
Theory-Theory
The Child-Scientist Theory
The Modularist Theory-Theory
First-Person Mindreading and Theory-Theory
Simulation-Theory
Simulation with and without Introspection
Simulation in Low-Level Mindreading
Social Cognition without Mindreading
References and Further Reading
Recommended Further Reading
References
1. Theory-Theory

Social psychologists have investigated mindreading since at least the 1940s. In Heider and Simmel’s (1944) classic studies, participants were presented with animated events involving interacting geometric shapes. When asked to report what they saw, the participants almost invariably treated these shapes as intentional agents with motives and purposes, suggesting the existence of an automatic capacity for mentalistic attribution. Pursuing this line of research would lead to Heider’s The Psychology of Interpersonal Relations (1958), a seminal book which is one of the main historical referents of the scientific inquiry into our mentalistic practice. In this book Heider characterizes “commonsense psychology” as a sophisticated conceptual scheme that has an influence on human perception and action in the social world comparable to that which Kant’s categorical framework has on human perception and action in the physical world (see Malle & Ickes 2000: 201).

Heider’s visionary work played a central role in the origination and definition of attribution theory, that is, the field of social psychology that investigates the mechanisms underlying ordinary explanations of our own and other people’s behavior. However, attribution theory approaches our mentalistic practice in a quite different way. Heider took commonsense psychology seriously as a genuine body of knowledge, arguing that scientific psychology has a good deal to learn from it. In contrast, most research on causal attribution has been faithful to behaviorism’s methodological lesson and has focused on the epistemic inaccuracy of commonsense psychology.

Two years before Heider’s book, Wilfrid Sellars’ (1956) Empiricism and the Philosophy of Mind had suggested that our grasp of mental phenomena does not originate from direct access to our inner life, but is the result of a “folk” theory of mind, which we acquire through some form or other of enculturation. Sellars’ speculation turned out to be philosophically very productive and in agreement with social-psychology research on self-attribution, coming to be known as “Theory-Theory” (a term coined by Morton 1980—henceforth “TT”).

During the 1970s one or other form of TT was seen as a very effective antidote to Cartesianism and philosophical behaviorism. In particular, TT was coupled with Nagel’s (1961) classic account of intertheoretic reduction as deduction of the reduced from the reducing theory via bridge principles in order to turn the ontological problem of the relationship between the mental and the physical into a more tractable epistemological problem concerning the relations between theories. Thus it became possible to take a notion—intertheoretic reduction—rigorously studied by philosophers of science; to examine the relations between folk psychology, as a theory including the commonsense mentalistic ontology, and its scientific successors (scientific psychology, neuroscience, or some other form of science of the mental); and to let ontological/metaphysical questions be answered by (i) focusing on questions about explanation and theory reduction first and foremost, and then (ii) depending on how those first questions were answered, drawing the appropriate ontological/metaphysical conclusions based on a comparison with how similar questions about explanation and reduction were answered in other scientific episodes and the ontological conclusions philosophers and scientists drew in those cases (this strategy is labelled “the intertheoretic-reduction reformulation of the mind-body problem” in Bickle 2003).

In this context, TT was taken as the major premise in the standard argument for eliminative materialism (see Ramsey 2011: §2.1). In its strongest form, eliminativism predicts that part or all of our folk-psychological theory will vanish into thin air, just as happened in the past when scientific progress led to the abandonment of the folk theory of witchcraft or the protoscientific theories of phlogiston and caloric fluid. This prediction rests on an argument which moves from considering folk psychology as a massively defective theory to the conclusion that—just as with witches, phlogiston, and caloric fluid—folk-psychological entities do not exist. Thus philosophy of mind joined attribution theory in adopting a critical attitude toward the explanatory adequacy of folk psychology (see, for example, Stich’s 1983 eliminativist doubts about the folk concept of belief, motivated inter alia by the experimental social psychology literature on dissonance and self-attribution).

Notice, however, that TT can be differently construed depending on whether we adopt a personal or subpersonal perspective (see Stich & Ravenscroft 1994: §4). The debate between intentional realists and eliminativists favored David Lewis’ personal-level formulation of TT. According to Lewis, the folk theory of mind is implicit in our everyday talk about mental states. We entertain “platitudes” regarding the causal relations of mental states, sensory stimuli, and motor responses that can be systematized (or “Ramsified”). The result is a functionalist theory that gives the terms of mentalistic vocabulary their meaning in the same way as scientific theories define their theoretical terms, namely “as the occupants of the causal roles specified by the theory…; as the entities, whatever those may be, that bear certain causal relations to one another and to the referents of the O[bservational]-terms” (Lewis 1972: 211). On this perspective, mindreading can be described as an exercise in reflective reasoning, which involves the application of general reasoning abilities to premises including ceteris paribus folk-psychological generalizations. A good example of this conception of mindreading is Grice’s schema for the derivation of conversational implicatures:

“He said that P; he could not have done this unless he thought that Q; he knows (and knows that I know that he knows) that I will realize that it is necessary to suppose that Q; he has done nothing to stop me thinking that Q; so he intends me to think, or is at least willing for me to think, that Q” (Grice 1989: 30-1; cit. in Wilson 2005: 1133).
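
The Ramsification move mentioned above can be made concrete with a toy logical sketch. The single “platitude” below is invented for illustration and omits the ceteris paribus qualifications that real folk-psychological generalizations carry:

% A toy Ramsification in the spirit of Lewis (1972); the platitude is invented.
% Folk theory T, with the mental term "pain" and observational terms "Damage" and "Wince":
\[
  T[\mathrm{pain}] \;=\; \forall x\,(\mathrm{Damage}(x) \rightarrow \mathrm{pain}(x))
  \;\wedge\; \forall x\,(\mathrm{pain}(x) \rightarrow \mathrm{Wince}(x))
\]
% The Ramsey sentence existentially generalizes over the mental term, so that "pain"
% names whatever inner state occupies this causal role:
\[
  \exists s\,\big(\forall x\,(\mathrm{Damage}(x) \rightarrow s(x))
  \;\wedge\; \forall x\,(s(x) \rightarrow \mathrm{Wince}(x))\big)
\]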

Since the end of the 1970s, however, primatology, developmental psychology, cognitive neuropsychiatry and empirically-informed philosophy have been contributing to a collaborative inquiry into TT. In the context of this literature the term “theory” refers to a “tacit” or “sub-doxastic” structure of knowledge, a corpus of internally represented information that guides the execution of mentalistic capacities. But then the functionalist theory that fixes the meaning of mentalistic terms is not the theory implicit in our everyday mentalistic talk, but the tacit theory (in Chomsky’s sense) subserving our thought and talk about the mental realm (see Stich & Nichols 2003: 241). On this perspective, the inferential processes that depend on the theory have an automatic and unconscious character that distinguishes them from reflective reasoning processes.

In developmental psychology part of the basis for the study of mindreading skills in children had already been laid by Jean Piaget’s seminal work on egocentrism in the 1930s to 1950s, and by the work on metacognition (especially metamemory) in the 1970s. But developmental research on mindreading took off only under the thrust of three discoveries in the 1980s (see Leslie 1998). First, normally developing 2-year-olds are able to engage in pretend play. Second, normally developing children undergo a deep change in their understanding of the psychological states of other people somewhere between the ages of 3 and 4, as indicated especially by the appearance of their ability to solve a variety of “false-belief” problems (see immediately below). Lastly, children diagnosed with autism spectrum disorders are especially impaired in attributing mental states to other people.

In particular, Wimmer & Perner (1983) provided theory-of-mind research with a seminal experimental paradigm: the “false-belief task.” In the most well-known version of this task, a child watches two puppets interacting in a room. One puppet (“Sally”) puts a toy in location A and then leaves the room. While Sally is out of the room, the other puppet (“Anne”) moves the toy from location A to location B. Sally returns to the room, and the child onlooker is asked where she will look for her toy, in location A or in location B. Now, 4- and 5-year-olds have little difficulty passing this test, judging that Sally will look for her toy in location A although it really is in location B. These correct answers provide evidence that the child realizes that Sally does not know that the toy has been moved, and so will act upon a false belief. Many younger children, typically 3-year-olds, fail such a task, often asserting that Sally will look for the toy in the place to which it was moved. Dozens of versions of this task have now been used, and while the precise age of success varies between children and between task versions, in general we can confidently say that children begin to successfully perform the (“verbal”) false-belief tasks at around 4 years (see the meta-analysis in Wellman et al. 2001; see also below, the reference to “non-verbal” false-belief tasks).
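
The contrast between the two answer patterns can be made explicit with a small sketch. This is a toy illustration only, not a published cognitive model; the dictionary keys and function names are ours:

# Toy model of the Sally-Anne false-belief task (illustrative only).
# Sally last saw the toy in location A; Anne moved it to location B in her absence.

def reality_based_answer(scenario):
    """Answer pattern typical of children who fail the task: report where the toy actually is."""
    return scenario["toy_location"]

def belief_based_answer(scenario):
    """Answer pattern typical of children who pass the task: report where Sally last saw the toy."""
    return scenario["sally_last_saw"]

scenario = {"toy_location": "B", "sally_last_saw": "A"}
print(reality_based_answer(scenario))  # "B" -- the typical 3-year-old error
print(belief_based_answer(scenario))   # "A" -- the correct prediction, from around 4 years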

Wimmer and Perner’s false-belief task set off a flood of experiments concerning children’s understanding of the mind. In this context, the first hypotheses about the process of acquisition of the naïve theory of mind were suggested. The finding that mentalistic skills emerge very early, in the first 3-4 years, and in a way relatively independent of the development of other cognitive abilities, led some scholars (for example, Simon Baron-Cohen, Jerry Fodor, Alan Leslie) to conceive of them as the end-state of the endogenous maturation of an innate theory-of-mind module (or system of modules). This contrasted with the view of other researchers (for example, Alison Gopnik, Josef Perner, Henry Wellman), who maintained that the intuitive theory of mind develops in childhood in a manner comparable to the development of scientific theories.

a. The Child-Scientist Theory

According to a first version of TT, the “child (as little) scientist” theory, the body of internally-represented knowledge that drives the exercise of mentalistic abilities has much the same structure as a scientific theory, and it is acquired, stored, and used in much the same way that scientific theories are: by formulating explanations, making predictions, and then revising the theory or modifying auxiliary hypotheses when the predictions fail. Gopnik & Meltzoff (1997) put forward this idea in its most radical form. They argue that the body of knowledge underlying mindreading has all the structural, functional and dynamic features that, on their view, characterize most scientific theories. One of the most important features is defeasibility. As happens in scientific practice, the child’s naïve theory of mind can also be “annulled,” that is, replaced when an accumulation of counterevidence to it occurs. The child-scientist theory is, therefore, akin to Piaget’s constructivism insofar as it depicts cognitive development in childhood and early adolescence as a succession of increasingly sophisticated naïve theories. For example, Wellman (1990) has argued that around age 4 children become able to pass false-belief tests because they move from an elementary “copy” theory of mind to a fully “representational” theory of mind, which allows them to acknowledge the explanatory role of false beliefs.

The child-scientist theory inherits from Piaget not only the constructivist framework but also the idea that cognitive development is a process that depends on a domain-general learning mechanism. A domain-general (or general-purpose) psychological structure is one that can be used for problem solving across many different content domains; it contrasts with a domain-specific psychological structure, which is dedicated to solving a restricted class of problems in a restricted content domain (see Samuels 2000). Now, Piaget’s model of cognitive development posits an innate endowment of reflexes and domain-general learning mechanisms, which enable the child to set up sensorimotor interactions with the environment that yield a steady improvement in problem-solving capacity in any cognitive domain—physical, biological, psychological, and so on. Analogously, Gopnik & Schulz (2004, 2007) have argued that the learning mechanism that supports all of cognitive development is a domain-general Bayesian mechanism that allows children to extract causal structure from patterns of data.
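
As a rough illustration of the kind of computation such a domain-general Bayesian learner performs, consider the toy example below. The hypotheses, priors, and likelihoods are invented for the illustration and are not Gopnik and Schulz’s own model:

# Toy Bayesian update over two invented hypotheses about why an agent keeps reaching
# toward a box: H1 = "she wants the toy inside", H2 = "she is reaching at random".
# All numbers are arbitrary and purely illustrative.

priors = {"wants_toy": 0.5, "random_reach": 0.5}

# Likelihood of observing "reaches for the box on 3 consecutive trials" under each hypothesis.
likelihoods = {"wants_toy": 0.9 ** 3, "random_reach": 0.5 ** 3}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: value / total for h, value in unnormalized.items()}

print(posteriors)  # posterior probability shifts strongly toward "wants_toy"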

Another theory-theorist who endorses a domain-general conception of cognitive development is Josef Perner (1991). On his view, it is the appearance of the ability to metarepresent that enables 4-year-olds to shift from a “situation theory” to a “representation theory,” and thus pass false-belief tests. Children are situation theorists by the age of around 2 years. At 3 they possess a concept, “prelief” (or “betence”), in which the concepts of pretend and belief coexist undifferentiated. The concept of prelief allows the child to understand that a person can “act as if” something was such and such (for example, as if “this banana is a telephone”) when it is not. At 4 children acquire a representational concept of belief which enables them to understand that, like public representations, inner representations can also misrepresent states of affairs (see Perner, Baker & Hutton 1994). Thus Perner suggests that children first learn to understand the properties of public (pictorial and linguistic) representations; only later do they extend, through a process of analogical reasoning, these characteristics to mental representations. On this perspective, then, the concept of belief is the product of a domain-general metarepresentational capacity that includes but is not limited to the metarepresentation of mental states. (But for criticism, see Harris 2000, who argues that pretence and belief are very different and are readily distinguished by context by 3-year-olds.)

b. The Modularist Theory-Theory

According to the child-scientist theory, children learn the naïve theory of mind in much the same way that adults learn scientific theories. By contrast, the modularist version of TT holds that the body of knowledge underlying mindreading lacks the structure of a scientific theory, being stored in one or more innate modules, which gradually become functional (“mature”) during infant development. Inside the module the body of information can be stored as a suite of domain-specific computational mechanisms; or as a system of domain-specific representations; or in both ways (see Simpson et al. 2005: 13).

The notion of modularity as domain-specificity, whose paradigm is Noam Chomsky’s module of language, informs the so-called “core knowledge” hypothesis, according to which human cognition builds on a repertoire of domain-specific systems of knowledge. Studies of children and adults in diverse cultures, human infants, and non-human primates provide evidence for at least four systems of knowledge that serve to represent significant aspects of the environment: inanimate objects and their motions; agents and their goal-directed actions; places and their geometric relations; and sets and their approximate numerical relations. These are systems of domain-specific, task-specific representations, which are shared by other animals, persist in adults, and show little variation by culture, language or sex (see Carey & Spelke 1996; Spelke & Kinzler 2007).

And yet a domain-specific body of knowledge is an “inert” psychological structure, which gives rise to behavior only if it is manipulated by some cognitive mechanism. The question arises, then, whether the domain-specific body of information that subserves mentalistic abilities is the database of a domain-specific or a domain-general computational system. In some domains, a domain-specific computational mechanism and a domain-specific body of information can form a single mechanism (for example, a parser is very likely to be a domain-specific computational mechanism that manipulates a domain-specific data structure). But in other domains, as Samuels (1998, 2000) has noted, domain-specific systems of knowledge might be computed by domain-general rather than domain-specific algorithms (but for criticism, see Carruthers 2006, §4.3).

The existence of a domain-specific algorithm that exploits a body of information specific to the domain of naïve psychology has been proposed by Alan Leslie (1994, 2000). He postulated a specialized component of social intelligence, the “Theory-of-Mind Mechanism” (ToMM), which receives as input information about the past and present behavior of other people and utilizes this information to compute their probable psychological states. The outputs of ToMM are descriptions of psychological states in the form of metarepresentations or M-representations, that is, agent-centered descriptions of behavior, which include a triadic relation that specifies four kinds of information: (i) an agent, (ii) an informational relation that specifies the agent’s attitude (pretending, believing, desiring, and so on), (iii) an aspect of reality that grounds the agent’s attitude, and (iv) the content of the agent’s attitude. Therefore, in order to pretend and understand others’ pretending, the child’s ToMM is supposed to output an M-representation of the form “Mother PRETENDS (of) this banana (that) ‘it is a telephone’”. Similarly, in order to predict Sally’s behavior in the false-belief test, ToMM is supposed to output an M-representation of the form “Sally BELIEVES (of) the toy (that) ‘it is in location A’”. (Note that Leslie coined the term “M-representation” to distinguish his own concept of metarepresentation from Perner’s (1991). For Perner uses the term at a personal level to refer to the child’s conscious theory of representation, whereas Leslie utilizes the term at a subpersonal level to designate an unconscious data structure computed by an information-processing mechanism. See Leslie & Thaiss 1992: 231, note 2.)
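
Leslie’s four-place description can be pictured as a simple data structure. The sketch below is only an illustrative paraphrase; the field names are ours, and the example fillers follow the cases discussed in the text:

# Illustrative rendering of a Leslie-style M-representation as a four-slot record.
from dataclasses import dataclass

@dataclass
class MRepresentation:
    agent: str      # (i) the agent
    attitude: str   # (ii) the informational relation (PRETENDS, BELIEVES, ...)
    anchor: str     # (iii) the aspect of reality that grounds the attitude
    content: str    # (iv) the content of the attitude

pretend_play = MRepresentation("Mother", "PRETENDS", "this banana", "it is a telephone")
false_belief = MRepresentation("Sally", "BELIEVES", "the toy", "it is in location A")
print(pretend_play)
print(false_belief)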

In the 1980s, Leslie’s ToMM hypothesis was the basis for the development of a neuropsychological perspective on autism. Children suffering from this neurodevelopmental disorder exhibit a triad of impairments: social incompetence, poor verbal and nonverbal communicative skills, and a lack of pretend play. Because social competence, communication, and pretending all rest on mentalistic abilities, Baron-Cohen, Frith & Leslie (1985) speculated that the autistic triad might be the result of an impaired ToMM. This hypothesis was investigated in an experiment in which typically developing 4-year-olds, children with autism (12 years; IQ 82), and children with Down syndrome (10 years; IQ 64) were tested on the Sally and Anne false-belief task. Eighty-five percent of the normally developing children and 86% of the children with Down syndrome passed the test; but only 20% of the autistic children predicted that Sally would look in the location where she had left the toy. This is one of the first examples of psychiatry driven by cognitive neuropsychology (followed by Christopher Frith’s 1992 theory of schizophrenia as late-onset autism).

According to Leslie, the ToMM is the specific innate basis of basic mentalistic abilities, which matures during the infant’s second year. In support of this hypothesis, he cites inter alia his analysis of pretend play, which would show that 18-month-old children are able to metarepresent the propositional attitude of pretending. This analysis, however, runs into an immediate empirical problem. If the ToMM is fully functional at 18 months, why are children unable to successfully perform false-belief tasks until they are around 4 years old? Leslie’s hypothesis is that although the concept of belief is already in place in children younger than 4, in the false-belief tasks this concept is masked by immaturity in another capacity that is necessary for good performance on the task—namely inhibitory control. Since, by default, the ToMM attributes a belief with content that reflects current reality, to succeed in a false-belief task this default attribution must be inhibited and an alternative nonfactual content for the belief selected instead. This is the task of an executive control mechanism that Leslie calls the “Selection Processor” (SP). Thus 3-year-olds fail standard false-belief tasks because they possess the ToMM but not yet the inhibitory SP (see Leslie & Thaiss 1992; Leslie & Polizzi 1998).
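
A schematic way to picture this division of labor is sketched below. It is purely illustrative; the function names and the representation of candidate contents are ours, not Leslie’s:

# Toy sketch of the ToMM/SP proposal: ToMM's default candidate is a belief whose
# content matches current reality; the Selection Processor (SP) must inhibit that
# default and select the nonfactual alternative instead. Illustrative only.

def tomm_candidates(reality_content, nonfactual_content):
    # The default (true-belief) candidate is listed first to mark its privileged status.
    return [reality_content, nonfactual_content]

def attribute_belief(candidates, sp_available):
    if sp_available:
        return candidates[1]  # SP inhibits the default and selects the nonfactual content
    return candidates[0]      # without SP, the reality-matching default wins out

candidates = tomm_candidates("the toy is in B", "the toy is in A")
print(attribute_belief(candidates, sp_available=False))  # "the toy is in B": 3-year-old pattern
print(attribute_belief(candidates, sp_available=True))   # "the toy is in A": 4-year-old pattern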

The ToMM/SP model seems to find support in a series of experiments that test the understanding of false mental and public representations in normal and autistic children. Leslie & Thaiss (1992) found that normal 3-year-olds fail both the standard false-belief tasks and two non-mental metarepresentational tests, the false-map task and Zaitchik’s (1990) outdated-photograph task. In contrast, autistic children are at or near ceiling on the non-mental metarepresentational tests but fail false-belief tasks. Normal 4-year-olds can succeed in all these tasks. According to Leslie and Thaiss, the ToMM/SP model can account for these findings: normal 3-year-olds possess the ToMM but not yet SP; autistic children are impaired in ToMM but not in SP; normal 4-year-olds possess both the ToMM and an adequate SP. By contrast, these results appear to be counterevidence to Perner’s idea that children first understand public representations and only then apply that understanding to mental states. If this were right, then autistic children should have difficulty with both kinds of representations. And in fact Perner (1993) suggests that the autistic deficit is due to a genetic impairment of the mechanisms that subserve attention shifting, a damage that interferes with the formation of the database required for the development of a theory of representation in general. But what autistic individuals’ performance in mental and non-mental metarepresentational tasks seems to show is a dissociation between understanding false maps and outdated photographs, on one hand, and understanding false beliefs, on the other. This finding is easily explained within Leslie’s domain-specific approach to mindreading, according to which children with autism have a specific deficit in understanding mental representation but not representation in general. In support of this interpretation, fMRI studies showed that activity in the right temporo-parietal junction is high while participants are thinking about false beliefs, but no different from resting levels while participants are thinking about outdated photographs or false maps or signs. This suggests a neural substrate for the behavioral dissociation between pictorial and mental metarepresentational abilities (see Saxe & Kanwisher 2003; for a critical discussion of the domain-specificity interpretation of these behavioral and neuroimaging data, see Gerrans & Stone 2008; Perner & Aichhorn 2008; Perner & Leekam 2008).

Leslie (2005) recruits new data to support his claim that mental metarepresentational abilities emerge from a specialized neurocognitive mechanism that matures during the second year of life. Standard false-belief tasks are “elicited-response” tasks in which children are asked a direct question about an agent’s false belief. But investigations using “spontaneous-response” tasks (Onishi & Baillargeon 2005) seem to suggest that the ability to attribute false beliefs is present much earlier, at the age of 15 months (even at 13 months in Surian, Caldi & Sperber 2007). However, Leslie’s mentalistic interpretation of these data has been challenged by Ruffman & Perner (2005), who have proposed an explanation of Onishi and Baillargeon’s results on which the infants might be employing a non-mentalistic behavior-rule such as “people look for objects where last seen” (for replies, see Baillargeon et al. 2010).

The ToMM has been considered, contra Fodor, as one of the strongest candidates for central modularity (see, for example, Botterill & Carruthers 1999: 67-8). However, Samuels (2006: 47) has objected that it is difficult to establish whether or not the ToMM’s domain of application really is central cognition. He suggests that the question is still more controversial in light of Leslie’s proposal to model the ToMM as a relatively low-level mechanism of selective attention, whose functioning depends on SP, a non-modular mechanism that is penetrable to knowledge and instruction (see Leslie, Friedman & German 2004).

c. First-Person Mindreading and Theory-Theory

During the 1980s and 1990s most of the work in Theory of Mind was concerned with the mechanisms that subserve the attribution of psychological states to others (third-person mindreading). In the last decade, however, an increasing number of psychologists and philosophers have also proposed accounts of the mechanisms underlying the attribution of psychological states to oneself (first-person mindreading).

For most theory-theorists, first-person mindreading is an interpretative activity that depends on mechanisms that capitalize on the same theory of mind used to attribute mental states to other agents. Such mechanisms are triggered by information about mind-external states of affairs, essentially the target’s behavior and/or the situation in which it occurs/occurred. The claim, then, is that there is a functional symmetry between first-person and third-person mentalistic attribution—the “outside access” view of introspection in Robbins (2006: 619); the “symmetrical” or “self/other parity” account of self-knowledge in Schwitzgebel (2010, §2.1).

The first example of a symmetrical account of self-knowledge is Bem’s (1972) “self-perception theory.” With reference to Skinner’s methodological guidance, but from a position that reveals affinities with symbolic interactionism, Bem holds that one knows one’s own inner states (for example, attitudes and emotions) through a process completely analogous to that involved in knowing other people’s inner states, that is, by inferring them from the observation/recollection of one’s own behavior and/or the circumstances in which it occurs/occurred. The TT version of the symmetrical account of self-knowledge develops Bem’s approach by claiming that observations and recollections of one’s own behavior and the circumstances in which it occurs/occurred are the input of mechanisms that exploit theories that apply to the same extent to ourselves and to others.

In the well-known social-psychology experiments reviewed by Nisbett & Wilson (1977), the participants’ attitudes and behavior were caused by motivational factors inaccessible to consciousness—such factors as cognitive dissonance, the number of bystanders in a public crisis, positional and “halo” effects, subliminal cues in problem solving and semantic disambiguation, and so on. However, when explicitly asked about the motivations (causes) of their actions, the subjects did not hesitate to state, sometimes with great eloquence, their very reasonable motives. Nisbett and Wilson explained this pattern of results by arguing that the subjects did not have any direct access to the real causes of their attitudes and behavior; rather, they engaged in an activity of confabulation, that is, they exploited a priori causal theories to develop reasonable but imaginary explanations of the motivational factors behind their attitudes and behavior (see also Johansson et al. 2006, where Nisbett and Wilson’s legacy is developed through a new experimental paradigm for studying introspection, the “choice blindness” paradigm).

Evidence for the symmetrical account of self-knowledge comes from Nisbett & Bellows’ (1977) utilization of the so-called “actor-observer paradigm.” In one experiment they compared the introspective reports of participants (“actors”) to the reports of a control group of “observers” who were given a general description of the situation and asked to predict how the actors would react. Observers’ predictions were found to be statistically identical to—and as inaccurate as—the reports by the actors. This finding suggests that “both groups produced these reports via the same route, namely by applying or generating similar causal theories” (Nisbett & Wilson 1977: 250-1; see also Schwitzgebel 2010: §§2.1.2 and 4.2.1).

In developmental psychology Alison Gopnik (1993) has defended a symmetrical account of self-knowledge by arguing that there is good evidence of developmental synchronies: children’s understanding of themselves proceeds in lockstep with their understanding of others. For example, since TT assumes that first-person and third-person mentalistic attributions are both subserved by the same theory of mind, it predicts that if the theory is not yet equipped to solve certain third-person false-belief problems, then the child should also be unable to perform the parallel first-person task. A much discussed instance of parallel performance on tasks for self and other is in Gopnik & Astington (1988). In the “Smarties Box” experiment, children were shown the candy container for the British confection “Smarties” and were asked what they thought was in the container. Naturally they answered “Smarties.” The container was then opened to reveal not Smarties, but a pencil. Children were then asked a series of questions, including “What will [your friend] say is in the box?”, and then “When you first saw the box, before we opened it, what did you think was inside it?”. It turned out that the children’s ability to answer the question concerning themselves was significantly correlated with their ability to answer the question concerning another. (See also the above-cited Wellman et al. 2001, which offers meta-analytic findings to the effect that performance on false-belief tasks for self and for others is virtually identical at all ages.)

Data from autism have also been used to motivate the claim that first-person and third-person mentalistic attribution have a common basis. An intensely debated piece of evidence comes from a study by Hurlburt, Happé & Frith (1994), in which three people with Asperger syndrome were tested with the descriptive experience sampling method. In this experimental paradigm, subjects are instructed to carry a random beeper, pay attention to the experience that was ongoing at the moment of the beep, and jot down notes about that now-immediately-past experience (see Hurlburt & Schwitzgebel 2007). The study showed marked qualitative differences in introspection in the autistic subjects: unlike normal subjects, who report several different phenomenal state types—including inner verbalisation, visual images, unsymbolised thinking, and emotional feelings—the first two autistic subjects reported visual images only, and the third subject could report no inner experience at all. According to Frith & Happé (1999: 14), this evidence strengthens the hypothesis that self-awareness, like other-awareness, depends on the same theory of mind.

So, evidence from social psychology, developmental psychology and cognitive neuropsychiatry makes a case for a symmetrical account of self-knowledge. As Schwitzgebel (2010: §2.1.3) rightly notes, however, no one advocates a thoroughly symmetrical conception, because some margin is always left for some sort of direct self-knowledge. Nisbett & Wilson (1977: 255), for example, draw a sharp distinction between “cognitive processes” (the causal processes underlying judgments, decisions, emotions, sensations) and mental “content” (those judgments, decisions, emotions, sensations themselves). Subjects have “direct access” to this mental content, and this allows them to know it “with near certainty.” In contrast, they have no access to the processes that cause behavior. However, insofar as Nisbett and Wilson do not propose any hypothesis about this alleged direct self-knowledge, their theory is incomplete.

In order to offer an account of this supposedly direct self-knowledge, some philosophers have made a more or less radical return to various forms of Cartesianism, construing first-person mindreading as a process that permits access to at least some mental phenomena in a relatively direct and non-interpretative way. On this perspective, introspective access does not appeal to theories that serve to interpret “external” information, but rather exploits mechanisms that can receive information about inner life through a relatively direct channel—the “inside access” view of introspection in Robbins (2006: 618); the “self-detection” account of self-knowledge in Schwitzgebel (2010: §2.2).

The inside access view comes in various forms. Mentalistic self-attribution may be realized by a mechanism that processes information about the functional profile of mental states, or their representational content, or both kinds of information (see Robbins 2006: 618; for a “neural” version of the inside access view, see below, §2a). A representationalist-functionalist version of the inside access view is Nichols & Stich’s (2003) account of first-person mindreading in terms of “monitoring mechanisms.” The authors begin by drawing a distinction between detection and inference. It is one thing to detect mental states, it is another to reason about mental states, that is, to use information about mental states to predict and explain one’s own or other people’s mental states and behavior. Moreover, both the attribution of a mental state and the inferences that one can make about it can be referred to oneself or to other people. So we get four possible operations: first- and third-person detection, and first- and third-person reasoning. Now, Nichols and Stich’s hypothesis is that whereas third-person detecting and first- and third-person reasoning are all subserved by the same theory of mind, the mechanism for detecting one’s own mental states is quite independent of the mechanism that deals with the mental states of other people. More precisely, the Monitoring Mechanism (MM) theory assumes the existence of a suite of distinct self-monitoring computational mechanisms, including one for monitoring and providing self-knowledge of one’s own experiential states, and one for monitoring and providing self-knowledge of one’s own propositional attitudes. Thus, for example, if X believes that p, and the proper MM is activated, it copies the representation p from X’s “Belief Box”, embeds the copy in a representation schema of the form “I believe that___”, and then places this second-order representation back in X’s Belief Box.
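
The copy-and-embed operation described here can be pictured with a small sketch. This is only a toy rendering of the idea; the data structures and function names are ours, not Nichols and Stich’s:

# Toy rendering of a Monitoring Mechanism for beliefs: the "Belief Box" is modeled
# as a set of sentences; the MM copies a first-order representation, embeds it in
# "I believe that ___", and places the second-order representation back in the box.

def monitoring_mechanism(belief_box, p):
    if p in belief_box:                        # X believes that p
        belief_box.add(f"I believe that {p}")  # second-order self-attribution
    return belief_box

box = {"it is raining"}
print(monitoring_mechanism(box, "it is raining"))
# the box now contains both "it is raining" and "I believe that it is raining"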

Since the MM theory assumes that first-person mindreading does not involve mechanisms of the sort that figure in third-person mindreading, it implies that the first capacity should be dissociable, both diachronically and synchronically, from the second. In support of this prediction Nichols & Stich (2003) cite developmental data to the effect that, on a wide range of tasks, instead of the parallel performance predicted by TT, children exhibit developmental asynchronies. For example, children are capable of attributing knowledge and ignorance to themselves before they are capable of attributing those states to others (Wimmer et al. 1988). Moreover, they suggest—on the basis, inter alia, of a reinterpretation of the aforementioned data from Hurlburt, Happé & Frith (1994)—that there is some evidence of a double dissociation between schizophrenic and autistic subjects: the MMs might be intact in autistic individuals despite their impairment in third-person mindreading, while in schizophrenics the pattern might be reversed.

The MM theory provides a neo-Cartesian reply to TT—and especially to its eliminativist implications—inasmuch as the mentalistic self-attributions based on MMs are immune to the potentially distorting influence of our intuitive theory of psychology. However, the MM theory faces at least two difficulties. To start with, the theory must tell us how an MM establishes which attitude type (or percept type) a given mental state belongs to (Goldman 2006: 238-9). One possibility is that there is a separate MM for each propositional attitude type and for each perceptual modality. But then, as Engelbert and Carruthers (2010: 246) remark, since any MM can be selectively impaired, the MM theory predicts a multitude of dissociations—for example, subjects who can self-attribute beliefs but not desires, or visual experiences but not auditory ones, and so on. However, the hypothesis of such massive dissociability has little empirical plausibility.

Moreover, Carruthers (2011) has offered a book-length argument against the idea of direct access to propositional attitudes. His neurocognitive framework is Bernard Baars’ Global Workspace Theory model of consciousness (see Gennaro 2005: §4c), in which a range of perceptual systems “broadcast” their outputs (for example, sensory data from the environment, imagery, somatosensory and proprioceptive data) to a complex of conceptual systems (judgment-forming, memory-forming, desire-forming, decision-making systems, and so on). Among the conceptual systems there is also a multi-componential “mindreading system,” which generates higher-order judgments about the mental states of others and of oneself. By virtue of receiving globally broadcast perceptual states as input, the mindreading system can easily recognize those percepts, generating self-attributions of the form “I see something red,” “It hurts,” and so on. But the system receives no input from the systems that generate propositional attitude events (like judging and deciding). As a result, the mindreading system cannot directly self-attribute propositional attitude events; it must infer them by exploiting the perceptual input (together with the outputs of various memory systems). Thus, Carruthers (2009: 124) concludes, “self-attributions of propositional attitude events like judging and deciding are always the result of a swift (and unconscious) process of self-interpretation.” On this perspective, therefore, we do not introspect our own propositional attitude events. Our only form of access to those events is via self-interpretation, turning our mindreading faculty upon ourselves and engaging in unconscious interpretation of our own behavior, physical circumstances, and sensory events like visual imagery and inner speech. Carruthers bases his proposal on considerations to do with the evolution of mindreading and metacognition, on the rejection of the above-cited data that, according to Nichols & Stich (2003), suggest developmental asynchronies and dissociations between self-attribution and other-attribution, and on evidence about the confabulation of attitudes. Thus Carruthers develops a very sophisticated version of the symmetrical account of self-knowledge in which the theory-driven mechanisms underlying first- and third-person mindreading can count not only on observations and recollections of one’s own behavior and the circumstances in which it occurs/occurred, but also on the recognition of a multitude of perceptual and quasi-perceptual events.

2. Simulation-Theory

Until the mid-1980s the debate on the nature of mindreading was a debate between different variants of TT. But in 1986, TT as a whole was impugned by Robert Gordon and, independently, by Jane Heal, giving rise to an alternative approach termed “simulation-theory” (ST). In 1989 Alvin Goldman and Paul Harris began to contribute to this new approach to mindreading. In 2006, Goldman provided the most thoroughly developed, empirically supported defense of a simulationist account of our mentalistic abilities.

According to ST, our third-person mindreading ability does not consist in implicit theorizing but rather in representing the psychological states and processes of others by mentally simulating them, that is, by attempting to generate similar states and processes in ourselves. Thus the same resources that are engaged in our own psychological states and processes are recycled—usually, but not only, in imagination—to provide an understanding of the psychological states and processes of the simulated target. This has often been compared to the method of Einfühlung exalted by the theorists of Verstehen (see Stueber 2006: 5-19).

In order for a mindreader to engage in this process of imaginative recycling, various information-processing mechanisms are needed. The mindreader simulates the psychological etiology of the target’s actions in essentially two steps. First, the simulator generates pretend or imaginary mental states in her own mind which are intended to (at least partly) correspond to those of the target. Second, the simulator feeds the imaginary states into a suitable cognitive mechanism (for example, the decision-making system) that is taken “offline,” that is, disengaged from the motor control systems. If the simulator’s decision-making system is similar to the target’s, and the pretend mental states that the simulator introduces into the decision-making system (at least partly) match the target’s, then the output of the simulator’s decision-making system might reliably be attributed or assigned to the target. On this perspective, there is no need for an internally represented knowledge base and no need for a naïve theory of psychology. The simulator exploits a part of her cognitive apparatus as a model for a part of the simulated agent’s cognitive apparatus.
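
The two-step process can be pictured schematically as follows. This is a toy sketch under our own simplifying assumptions (the decision rule and the pretend inputs are invented), not a commitment of any particular simulation theorist:

# Toy sketch of offline simulation for behavior prediction: generate pretend inputs
# meant to match the target's states, run one's own decision-making routine on them
# "offline" (decoupled from action), and attribute the output to the target.

def my_decision_system(beliefs, desires):
    # The simulator's own decision-making routine, reused as a model of the target's.
    if "the toy is in A" in beliefs and "get the toy" in desires:
        return "search location A"
    return "search location B"

def simulate_target(pretend_beliefs, pretend_desires):
    # Run offline: the output is attributed to the target rather than acted on.
    return my_decision_system(pretend_beliefs, pretend_desires)

prediction = simulate_target({"the toy is in A"}, {"get the toy"})
print(prediction)  # "search location A" -- attributed to Sally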

Hence follows one of the main advantages ST is supposed to have over TT—namely its computational parsimony. According to advocates of ST, the body of tacit folk-psychological knowledge which TT attributes to mindreaders imposes too heavy a burden on mental computation. However, such a load diminishes radically if, instead of computing the body of knowledge posited by TT, mindreaders need only co-opt mechanisms that are primarily used online, when they experience a kind of mental state, to run offline simulations of similar states in the target (the argument is suggested by Gordon 1986 and Goldman 1995, and challenged by Stich & Nichols 1992, 1995).

In the early years of the debate over ST, a main focus was on its implications for the controversy between intentional realism and eliminative materialism. Gordon (1986) and Goldman (1989) suggested that by rejecting the assumption that folk psychology is a theory, ST undercuts eliminativism. Stich & Ravenscroft (1994: §5), however, objected that ST undermines eliminativism only if the latter adopts the subpersonal version of TT. For ST does not deny the evident fact that human beings have intuitions about the mental, nor does it rule out that such intuitions might be systematized by building, as David Lewis suggests, a theory that implies them. As a result, ST does not refute eliminativism; it instead forces the eliminativist to include among the premises of her argument Lewis’ personal-level formulation of TT, together with the observation/prediction that the theory implicit in our everyday talk about mental states is or will turn out to be seriously defective.

One of the main objections that theory-theorists raise against ST is the argument from systematic errors in prediction. According to ST, errors in prediction can arise either (i) because the predictor’s executive system is different from that of the target, or (ii) because the pretend mental states that the predictor has introduced into the executive system do not match the ones that actually motivate the target. However, Stich & Nichols (1992, 1995; see also Nichols et al. 1996) describe experimental situations in which the participants systematically fail to predict the behavior of targets, and in which it is unlikely that either (i) or (ii) is the source of the problem. Now, TT can easily explain such systematic errors in prediction: it is sufficient to assume that our naïve theory of psychology lacks the resources required to account for such situations. It is no surprise that a folk theory that is incomplete, partial, and in many cases seriously defective often causes predictive failures. But this option is obviously not available to ST: simulation-driven predictions are “cognitively impenetrable,” that is, they are not affected by the predictor’s knowledge or ignorance about psychological processes (see also Saxe 2005; and the replies by Gordon 2005 and Goldman 2006: 173-4).

More recently, however, a consensus seems to be emerging to the effect that mindreading involves both TT and ST. For example, Goldman (2006) grants a variety of possible roles for theorizing in the context of what he calls “high-level mindreading.” This is the imaginative simulation discussed so far, which is subject to voluntary control, is accessible to consciousness, and involves the ascription of complex mental states such as propositional attitudes. High-level simulation is a species of what Goldman terms “enactment imagination” (a notion that builds on Currie & Ravenscroft’s 2002 concept of “recreative imagination”). Goldman contrasts high-level mindreading with “low-level mindreading,” which is unconscious, hard-wired, involves the attribution of structurally simple mental states such as face-based emotions (for example, joy, fear, disgust), and relies on simple imitative or mirroring processes (see, for example, Goldman & Sripada 2005). Now, theory definitely plays a role in high-level mindreading. In a prediction task, for example, theory may be involved in the selection of the imaginary inputs that will be introduced into the executive system. In this case, Goldman (2006: 44) admits, mindreading depends on the cooperation of simulation and theorizing mechanisms.

Goldman’s blend of ST and TT (albeit with a strong emphasis on the simulative component) is not the only “hybrid” account of mindreading: for other hybrid approaches, see Botterill & Carruthers (1999), Nichols & Stich (2003), and Perner & Kühberger (2006). Indeed, the debate now aims first of all to establish to what extent, and in which processes, theory or simulation prevails.

a. Simulation with and without Introspection

There is one aspect, however, that makes Goldman’s (2006) account of ST different from other hybrid theories of mindreading, namely the neo-Cartesian priority that he assigns to introspection. On his view, first-person mindreading both ontogenetically precedes and grounds third-person mindreading. Mindreaders need to introspectively access the offline products of simulation before they can project them onto the target. And this, Goldman claims, is a form of “direct access.”

In 1993 Goldman put forward a phenomenological version of the inside access view (see above, §1c), arguing that introspection is a process of detection and classification of one’s (current) psychological states that does not depend at all on theoretical knowledge, but rather operates on information about the phenomenological properties of such states. But in light of criticism (Carruthers 1996; Nichols & Stich 2003), in his 2006 book Goldman considerably reappraised the relevance of the qualitative component for the detection of psychological states, pointing instead to the centrality of neural properties. Building on Craig’s (2002) account of interoception, as well as Marr’s and Biederman’s computational models of visual object recognition, Goldman now maintains that introspection is a perception-like process that involves a transduction mechanism that takes neural properties of mental states as input and outputs representations in a proprietary code (the introspective code, or “I-code”). The I-code represents types of mental categories and classifies mental-state tokens in terms of those categories. Goldman also suggests some possible primitives of the I-code. Thus, for example, our coding of the concept of pain might combine a “bodily feeling” parameter (a certain raw feeling) with a “preference” or “valence” parameter (a negative valence toward the feeling). The neural version of the inside access view is thus an attempt to solve the problem of the recognition of the attitude type, which proved problematic for Nichols and Stich’s representationalist-functionalist approach (see above, §1c). However, since different percept and attitude types are presumably realized in different cerebral areas, each percept or attitude type will depend on a specific informational channel to feed the introspective mechanism. As a result, Goldman’s theory also seems to be open to the objection of massive dissociability raised against the MM theory (see Engelbert and Carruthers 2010: 247).
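
Goldman’s idea of classifying a mental-state token by the combination of a few introspectible parameters might be pictured as follows. The parameters, thresholds, and category labels are invented for the illustration and carry no commitment about the actual primitives of the I-code:

# Toy illustration of classification in an introspective code: a mental-state token
# is represented by a few parameter values, and its category is read off the combination.
# All parameters and thresholds are invented for the example.

def classify_icode(bodily_feeling, valence):
    """bodily_feeling: intensity of a raw bodily feeling (0 to 1);
    valence: preference toward the feeling, from -1 (negative) to +1 (positive)."""
    if bodily_feeling > 0.5 and valence < 0:
        return "pain"
    if bodily_feeling > 0.5 and valence > 0:
        return "pleasure"
    return "unclassified"

print(classify_icode(bodily_feeling=0.8, valence=-0.9))  # "pain"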

Goldman’s primacy of first-person mindreading is, however, rejected by other simulationists. According to Gordon’s (1995, 1996) “radical” version of ST, simulation can occur without introspective access to one’s own mental states. The simulative process begins not with my pretending to be the target, but rather with my becoming the target. As Gordon (1995: 54) puts it, simulation is not “a transfer but a transformation.” “I” changes its referent and the equivalence “I=target” is established. In virtue of this de-rigidification of the personal pronoun, any introspective step is ruled out: one does not first assign a psychological state to oneself in order to transfer it to the target. Since the simulator becomes the target, no analogical inference from oneself to the other is needed. Still more radically, simulation can occur without having any mentalistic concepts. Our basic competence in the use of utterances of the form “I believe that p” involves not direct access to the propositional attitudes, but only an “ascent routine” through which we express our propositional attitudes in this new linguistic form (see Gordon 2007).

Carruthers has raised two objections to Gordon’s radical ST. First, it is a “step back” to a form of “quasi-behaviorism” (Carruthers 1996: 38). Second, Gordon problematically assumes that our mentalistic abilities are constituted by language (Carruthers 2011: 225-27). In developmental psychology de Villiers & de Villiers (2003) have put forward a constitution-thesis similar to Gordon’s: thinking about mental states comes from internalizing the language with which these states are expressed in the child’s linguistic environment. More precisely, mastery of the grammatical rules for embedding tensed complement clauses under verbs of speech or cognition provides children with a necessary representational format for dealing with false beliefs. However, the correlation between linguistic exposure and mindreading does not depend on the use of specific grammatical structures. In a training study Lohman & Tomasello (2003) found that performance on a false-belief task is enhanced simply by using perspective-shifting discourse, without any use of sentential-complement syntax. Moreover, syntax is not constitutive of the mentalistic capacities of adults. Varley et al. (2001) and Apperly et al. (2006) provided clear evidence that adults with profound grammatical impairment show no impairments on non-verbal tests of mindreading. Finally, mastery of sentence complements is not even a necessary condition of the development of mindreading in children. Perner et al. (2005) have shown that such mastery may be required for statements about beliefs but not about desires (as in English), for both beliefs and desires (as in German), or for neither beliefs nor desires (as in Chinese); and yet children who learn each of these three languages all understand and talk about desire significantly earlier than belief.

b. Simulation in Low-Level Mindreading

Another argument for a (predominantly) simulationist approach to mindreading consists in pointing out that TT is confined to high-level mindreading (essentially the attribution of propositional attitudes), whereas ST is also well equipped to account for forms of low-level mindreading such as the perception of emotions or the recognition of facial expressions and motor intentions (see Slors & Macdonald 2008: 155).

This claim finds its main support in the interplay between ST and neuroscience. In the early 1990s mirror neurons were first described in the ventral premotor cortex and inferior parietal lobe of macaque monkeys. These visuomotor neurons activate not only when the monkey executes motor acts (such as grasping, manipulating, holding, and tearing objects), but also when it observes the same, or similar, acts performed by the experimenter or a conspecific. Although there is only one study that seems to offer direct evidence for the existence of mirror neurons in humans (Mukamel et al. 2010), many neurophysiological and brain-imaging investigations support the existence of a human action-mirroring system. For example, fMRI studies using action observation or imitation tasks demonstrated activation in areas of the human ventral premotor and parietal cortices assumed to be homologous to the areas in the monkey cortex containing mirror neurons (see Rizzolatti et al. 2002). It should be emphasized that most of the mirror neurons that discharge when a certain type of motor act is performed also activate when the same act is perceived, even though it is not performed with the same physical movement—for example, many mirror neurons that discharge when the monkey grasps food with the hand also activate when it sees a conspecific grasp food with the mouth. This seems to suggest that mirror neurons code or represent an action at a high level of abstraction, that is, they are receptive not merely to a movement but to an action.

In 1998, Vittorio Gallese and Goldman published a very influential article in which mirror neurons were proposed as the basis of the simulative process. When the mirror neurons in the simulator’s brain are externally activated in observation mode, their activity matches (simulates or resonates with) that of mirror neurons in the target’s brain, and this resonance process retrodictively outputs a representation of the target’s intention from a perception of her movement.

More recently a number of objections have been raised against the “resonance” ST advocated by researchers who have built on Gallese and Goldman’s hypothesis. Some critics, although admitting the presence of mirror neurons in both non-human and human primates, have drastically reappraised their role in mindreading. For example, Saxe (2009) has argued that there is no evidence that mirror neurons represent the internal states of the target rather than some relatively abstract properties of observed actions (see also Jacob & Jeannerod 2005; Jacob 2008). On the other hand, Goldman himself has moderated his original position. Unlike Gallese, Keysers & Rizzolatti (2004), who propose mirror systems as the unifying basis of all social cognition, Goldman (2006) now considers mirror-neuron activity, or motor resonance in general, as merely a possible component of low-level mindreading. Nevertheless, it is fair to say that resonance phenomena are at the forefront of the field of social neuroscience (see Slors & Macdonald 2008: 156).

3. Social Cognition without Mindreading

By the early 21st century, the primacy that both TT and ST assign to mindreading in social cognition had been challenged. One line of attack has come from philosophers working in the phenomenological tradition, such as Shaun Gallagher, Matthew Ratcliffe, and Dan Zahavi (see Gallagher & Zahavi 2008). Others working more within the analytic tradition, such as José Luis Bermúdez (2005, 2006b), Dan Hutto (2008), and Heidi Maibom (2003, 2007), have made similar points. Let us focus on Bermúdez’s contribution, because he offers a very clear account of the kind of cognitive mechanisms that might subserve forms of social understanding and coordination without mindreading (for a brief overview of this literature, see Slors & Macdonald 2008; for an exhaustive examination, see Herschbach 2010).

Bermúdez (2005) argues that the role of high-level mindreading in social cognition needs to be drastically re-evaluated. We must rethink the traditional nexus between intelligent behavior and propositional attitudes, realizing that much social understanding and social coordination are subserved by mechanisms that do not capitalize on the machinery of intentional psychology. For example, a mechanism of emotional sensitivity such as “social referencing” is a form of low-level mindreading that subserves social understanding and social coordination without involving the attribution of propositional attitudes (see Bermúdez 2006a: 55).

Up to this point Bermúdez is on the same wavelength as simulationists and social neuroscientists in drawing our attention to forms of low-level mindreading that have been largely neglected by philosophers. However, Bermúdez goes a step beyond them and explores cases of social interaction that point in a different direction, that is, situations that involve mechanisms that can no longer be described as mindreading mechanisms. He offers two examples.

(1) In game theory there are social interactions that are modeled without assuming that the agents involved are engaged in explaining or predicting each other’s behavior. In social situations that have the structure of the iterated prisoner’s dilemma, the so-called “tit-for-tat” heuristic simply says: “start out cooperating and then mirror your partner’s move for each successive move” (Axelrod 1984). Applying this heuristic simply requires understanding the moves available to each player (cooperation or defection), and remembering what happened in the last round. So we have here a case of social interaction that is conducted on the basis of a heuristic strategy that looks backward to the results of previous interactions rather than to their psychological etiology. We do not need to infer other players’ reasons; we only have to coordinate our behavior with theirs.
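
The behavioral character of this heuristic can be made concrete with a minimal Python sketch (the names Move, tit_for_tat, and partner_moves are illustrative choices made here, not part of Axelrod’s original tournament programs):

```python
from enum import Enum

class Move(Enum):
    COOPERATE = "cooperate"
    DEFECT = "defect"

def tit_for_tat(partner_history):
    """Start out cooperating; thereafter mirror the partner's previous move."""
    if not partner_history:          # first round: no behavioral record yet
        return Move.COOPERATE
    return partner_history[-1]       # copy whatever the partner did last round

# Example: the partner cooperates, defects once, then cooperates again.
partner_moves = [Move.COOPERATE, Move.DEFECT, Move.COOPERATE]
for n in range(len(partner_moves) + 1):
    print("round", n + 1, tit_for_tat(partner_moves[:n]).name)
```

Nothing in the sketch represents the partner’s beliefs, desires, or intentions; the only input is the behavioral record of previous rounds.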

(2) There is another important class of social interactions that involve our predicting and/or explaining the actions of other participants, but in which the relevant predictions and explanations seem to proceed without our having to attribute propositional attitudes. These social interactions rest on what social psychologists call “scripts” (“frames” in artificial intelligence), that is, complex information structures that allow predictions to be made on the basis of the specification of the purpose of some social practice (for example, eating a meal at a restaurant), the various individual roles, and the appropriate sequence of moves.
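
A script’s predictive structure can likewise be sketched in a few lines (the Script data structure and its field names are assumptions made for illustration, not a canonical formalism): prediction proceeds from one’s position in a stereotyped sequence of moves, without any attribution of beliefs or desires to the participants.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Script:
    purpose: str           # the point of the social practice
    roles: List[str]       # the individual roles involved
    sequence: List[str]    # the appropriate order of moves

restaurant = Script(
    purpose="eating a meal at a restaurant",
    roles=["customer", "waiter", "cook", "cashier"],
    sequence=["enter", "be seated", "order", "eat", "pay", "leave"],
)

def predict_next(script: Script, observed: List[str]) -> Optional[str]:
    """Predict the next move purely from the position reached in the script."""
    position = len(observed)
    return script.sequence[position] if position < len(script.sequence) else None

print(predict_next(restaurant, ["enter", "be seated", "order"]))  # -> eat
```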

According to Bermúdez, then, much social interaction is enabled by a suite of relatively simple mechanisms that exploit purely behavioral regularities. It is important to notice that these mechanisms subserve central social cognition (in Fodor’s sense). Nevertheless, they implement relatively simple processes of template matching and pattern recognition, that is, processes that are paradigmatic cases of perceptual processing. For example, when a player A applies the tit-for-tat rule, A must determine what the other player B did in the preceding round. This can be implemented by a form of template matching in which A verifies whether B’s behavioral pattern matches A’s prototype of cooperation or of defection. Detecting the social roles implicated in a script-based interaction is likewise a case of template matching: one verifies whether the perceived behavior matches one of the templates associated with the script (or the prototype represented in the “frame”).
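
A toy sketch of this template-matching idea may help (the feature vectors and prototypes below are invented for illustration and are not drawn from Bermúdez’s or Churchland’s own models): player A classifies B’s observed behavior by finding the nearest stored prototype.

```python
# Behavioral prototypes as simple feature vectors:
# (proportion of resources shared, proportion of resources withheld)
PROTOTYPES = {
    "cooperation": (1.0, 0.0),
    "defection": (0.0, 1.0),
}

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(observed_behavior):
    """Label the observed behavior with the nearest prototype."""
    return min(PROTOTYPES,
               key=lambda label: squared_distance(observed_behavior, PROTOTYPES[label]))

print(classify((0.9, 0.2)))  # -> cooperation
print(classify((0.1, 0.8)))  # -> defection
```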

Bermúdez (2005: 223) notes that the idea that much of what we intuitively identify as central processing is actually implemented by mechanisms of template matching and pattern recognition has been repeatedly put forward by advocates of connectionist computationalism, especially Paul M. Churchland. But unlike the latter, Bermúdez does not carry the reappraisal of the role of propositional attitudes in social cognition to the point of their elimination; he argues that social cognition does not involve high-level mindreading when the social world is “transparent” or “ready-to-hand,” as he says quoting Heidegger’s zuhanden. However, when we find ourselves in social situations that are “opaque,” that is, situations in which all the standard mechanisms of social understanding and interpersonal negotiation break down, it seems that we cannot help but appeal to the type of metarepresentational thinking characteristic of intentional psychology (2005: 205-6).

4. References and Further Reading
a. Suggested Further Reading
Apperly, I. (2010). Mindreaders: The Cognitive Basis of “Theory of Mind.” Hove, East Sussex, Psychology Press.
Carruthers, P. and Smith, P. K. (eds.) (1996). Theories of Theories of Mind. Cambridge, Cambridge University Press.
Churchland, P. M. (1994). “Folk Psychology (2).” In S. Guttenplan (ed.), A Companion to the Philosophy of Mind, Oxford, Blackwell, pp. 308–316.
Cundall, M. (2008). “Autism.” In The Internet Encyclopedia of Philosophy.
Davies, M. and Stone, T. (eds.) (1995a). Folk Psychology: The Theory of Mind Debate. Oxford, Blackwell.
Davies, M. and Stone, T. (eds.) (1995b). Mental Simulation: Evaluations and Applications. Oxford, Blackwell.
Decety, J. and Cacioppo, J. T. (2011). The Oxford Handbook of Social Neuroscience. Oxford, Oxford University Press.
Doherty, M. J. (2009). Theory of Mind. How Children Understand Others’ Thoughts and Feelings. Hove, East Sussex, Psychology Press.
Dokic, J. and Proust, J. (eds.) (2002). Simulation and Knowledge of Action. Amsterdam, John Benjamins.
Gerrans, P. (2009). “Imitation and Theory of Mind.” In G. Berntson and J. T. Cacioppo (eds.), Handbook of Neuroscience for the Behavioral Sciences. Chicago, University of Chicago Press, vol. 2, pp. 905–922.
Gordon, R. M. (2009). “Folk Psychology as Mental Simulation.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2009 Edition).
Hutto, D., Herschbach, M. and Southgate, V. (eds.) (2011). Special Issue “Social Cognition: Mindreading and Alternatives.” Review of Philosophy and Psychology 2(3).
Kind, A. (2005). “Introspection.” In The Internet Encyclopedia of Philosophy.
Meini, C. (2007). “Naïve psychology and simulations.” In M. Marraffa, M. De Caro and F. Ferretti (eds.), Cartographies of the Mind. Dordrecht, Kluwer, pp. 283–294.
Nichols, S. (2002). “Folk Psychology.” In Encyclopedia of Cognitive Science. London, Nature Publishing Group, pp. 134–140.
Ravenscroft, I. (2010). “Folk Psychology as a Theory.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2010 Edition).
Rizzolatti, G., Sinigaglia, C. and Anderson, F. (2007). Mirrors in the Brain. How Our Minds Share Actions, Emotions, and Experience. Oxford, Oxford University Press.
Saxe, R. (2009). “The happiness of the fish: Evidence for a common theory of one’s own and others’ actions.” In K. D. Markman, W. M. P. Klein and J. A. Suhr (eds.), The Handbook of Imagination and Mental Simulation. New York, Psychology Press, pp. 257–266.
Shanton, K. and Goldman, A. (2010). “Simulation theory.” Wiley Interdisciplinary Reviews: Cognitive Science 1(4): 527–538.
Stich, S. and Rey, G. (1998). “Folk psychology.” In E. Craig (ed.), Routledge Encyclopedia of Philosophy. London, Routledge.
Von Eckardt, B. (1994). “Folk Psychology (1).” In S. Guttenplan (ed.), A Companion to the Philosophy of Mind. Oxford, Blackwell, pp. 300–307.
Weiskopf, D. Ein. (2011). “The Theory-Theory of Concepts.” In The Internet Encyclopedia of Philosophy.
b. References
Apperly, I.A., Samson, D., Carroll, N., Hussain, S. and Humphreys, G. (2006). “Intact first- and second-order false belief reasoning in a patient with severely impaired grammar.” Social Neuroscience 1(3-4): 334–348.
Axelrod, R. (1984). The Evolution of Cooperation. New York, Basic Books.
Baillargeon, R., Scott, R.M. and He, Z. (2010). “False–belief understanding in infants.” Trends in Cognitive Sciences 14(3): 110–118.
Bem, D. J. (1972). “Self-Perception Theory.” In L. Berkowitz (ed.), Advances in Experimental Social Psychology. New York, Academic Press, vol. 6, pp. 1–62.
Bermúdez, J. L. (2005). Philosophy of Psychology: A Contemporary Introduction. London, Routledge.
Bermúdez, J. L. (2006a). “Commonsense psychology and the interface problem: Reply to Botterill.” SWIF Philosophy of Mind Review 5(3): 54–57.
Bermúdez, J. L. (2006b), “Arguing for eliminativism.” In B. L. Keeley (ed.), Paul Churchland. Cambridge, Cambridge University Press, pp. 32–65.
Bickle, J. (2003). Philosophy and Neuroscience: A Ruthlessly Reductive Account. Dordrecht, Kluwer.
Botterill, G. and Carruthers, P. (1999). The Philosophy of Psychology. Cambridge, Cambridge University Press.
Carey, S. and Spelke, E. (1996). “Science and core knowledge.” Philosophy of Science 63: 515–533.
Carruthers, P. (1996). “Simulation and self-knowledge.” In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 22–38.
Carruthers, P. (2006). The Architecture of the Mind. Oxford, Oxford University Press.
Carruthers, P. (2009). “How we know our own minds: The relationship between mindreading and metacognition.” Behavioral and Brain Sciences 32: 121–138.
Carruthers, P. (2011). The Opacity of Mind: The Cognitive Science of Self-Knowledge. Oxford, Oxford University Press.
Craig, Ein. D. (2002). “How do you feel? Interoception: The sense of the physiological condition of the body.” Nature Reviews Neuroscience 3: 655–666.
Currie, G. and Ravenscroft, ICH. (2002). Recreative Minds: Imagination in Philosophy and Psychology. Oxford, Oxford University Press.
de Villiers, J. G. and de Villiers, P. A. (2003). “Language for thought: Coming to understand false beliefs.” In D. Gentner and S. Goldin-Meadow (eds.), Language in Mind. Cambridge, MA, MIT Press, pp. 335–384.
Engelbert, M. and Carruthers, P. (2010). “Introspection.” Wiley Interdisciplinary Reviews: Cognitive Science 1: 245–253.
Fogassi, L. and Ferrari P. F. (2010). “Mirror systems.” Wiley Interdisciplinary Reviews: Cognitive Science 2(1): 22–38.
Frith, C. (1992). Cognitive Neuropsychology of Schizophrenia. Hove, Erlbaum.
Frith, U. and Happé, F. (1999). “Theory of mind and self-consciousness: What is it like to be autistic?” Mind & Language 14(1): 1–22.
Gallagher, S. and Zahavi, D. (2008). The Phenomenological Mind. London, Routledge.
Gallese, V. and Goldman, A. (1998). “Mirror neurons and the simulation theory of mind-reading.” Trends in Cognitive Sciences 2(12): 493–501.
Gallese, V., Keysers, C. and Rizzolatti, G. (2004). “A unifying view of the basis of social cognition.” Trends in Cognitive Sciences 8: 396–403.
Gennaro, R. J. (2005). “Consciousness.” In The Internet Encyclopedia of Philosophy.
Gerrans, P. and Stone, V. E. (2008). “Generous or parsimonious cognitive architecture? Cognitive neuroscience and Theory of Mind.” British Journal for the Philosophy of Science 59: 121–141.
Goldman, A. I. (1993). “The psychology of folk psychology.” Behavioral and Brain Sciences 16: 15–28.
Goldman, A. I. (1989). “Interpretation psychologized.” Mind and Language 4: 161–185; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 74–99.
Goldman, A. I. (1995). “In defense of the simulation theory.” In M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, pp. 191–206.
Goldman, A. I. (2006). Simulating Minds. Oxford, Oxford University Press.
Goldman, A. I. and Sripada, C. (2005). “Simulationist models of face-based emotion recognition.” Cognition 94: 193–213.
Gopnik, A. (1993). “How we read our own minds: The illusion of first-person knowledge of intentionality.” Behavioral and Brain Sciences 16: 1–14.
Gopnik, A. and Astington, J. W. (1988). “Children’s understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction.” Child Development 59: 26–37.
Gopnik, A. and Meltzoff, A. (1997). Words, Thoughts, and Theories. Cambridge, MA, MIT Press.
Gopnik, A. and Schulz, L. (2004). “Mechanisms of theory-formation in young children.” Trends in Cognitive Sciences 8(8): 371–377.
Gopnik, A. and Schulz, L. (eds.) (2007). Causal Learning: Psychology, Philosophy, and Computation. New York, Oxford University Press.
Gordon, R. M. (1986). “Folk psychology as simulation.” Mind and Language 1: 158–171; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 60–73.
Gordon, R. M. (1995). “Simulation without introspection or inference from me to you.” In M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications. Oxford, Blackwell, pp. 53–67.
Gordon, R. M. (1996). “Radical simulationism.” In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 11–21.
Gordon, R. M. (2005). “Simulation and systematic errors in prediction.” Trends in Cognitive Sciences 9: 361–362.
Gordon, R. M. (2007). “Ascent routines for propositional attitudes.” Synthese 159: 151–165.
Grice, H. P. (1989). Studies in the Way of Words. Cambridge, MA, Harvard University Press.
Harris, P. L. (1989). Children and Emotion: The Development of Psychological Understanding. Oxford, Blackwell.
Harris, P. L. (2000). The Work of the Imagination. Oxford: Blackwell.
Heider, F. (1958). The Psychology of Interpersonal Relations, New York, Wiley.
Heider, F. and Simmel, M. (1944). “An experimental study of apparent behavior.” American Journal of Psychology 57: 243–259.
Herschbach, M. (2010). Beyond Folk Psychology? Toward an Enriched Account of Social Understanding. PhD dissertation, University of California, San Diego.
Hurlburt, R., Happé, F. and Frith, U. (1994). “Sampling the form of inner experience in three adults with Asperger syndrome.” Psychological Medicine 24: 385–395.
Hurlburt, R. T. and Schwitzgebel, E. (2007). Describing Inner Experience? Proponent Meets Skeptic. Cambridge, MA, MIT Press.
Hutto, D. D. (2008). Folk Psychological Narratives: The Sociocultural Basis of Understanding Reasons. Cambridge, MA, MIT Press.
Jacob, P. (2008). “What do mirror neurons contribute to human social cognition?” Mind and Language 23: 190–223.
Jacob, P. and Jeannerod, M. (2005). “The motor theory of social cognition: A critique.” Trends in Cognitive Science 9: 21–25.
Johansson, P., Hall, L., Sikström, S., Tärning, B. and Lind, A. (2006). “How something can be said about telling more than we can know: On choice blindness and introspection.” Consciousness and Cognition 15: 673–692.
Leslie, A.M. (1998). “Mind, child’s theory of.” In E. Craig (ed.), Routledge Encyclopedia of Philosophy. London, Routledge.
Leslie, A. M. (1994). “ToMM, ToBy, and agency: Core architecture and domain specificity.” In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge, Cambridge University Press, pp. 119–148.
Leslie, A. M. (2000). “‘Theory of mind’ as a mechanism of selective attention.” In M. Gazzaniga (ed.), The New Cognitive Neurosciences. Cambridge, MA, MIT Press, 2nd edition, pp. 1235–1247.
Leslie, A. M. (2005). “Developmental parallels in understanding minds and bodies.” Trends in Cognitive Sciences 9(10): 459–462.
Leslie, A. M., Friedman, O. and German, T. P. (2004). “Core mechanisms in ‘theory of mind’.” Trends in Cognitive Sciences 8(12): 528–533.
Leslie, A. M. and Polizzi, P. (1998). “Inhibitory processing in the false belief task: Two conjectures.” Developmental Science 1: 247–254.
Leslie, A.M. and Thaiss, L. (1992). “Domain specificity in conceptual development: Neuropsychological evidence from autism.” Cognition 43: 225–251.
Lewis, D. (1972). “Psychophysical and theoretical identifications.” Australasian Journal of Philosophy, 50: 249–258.
Lohman, H. and Tomasello, M. (2003). “The role of language in the development of false belief understanding: A training study.” Child Development 74: 1130–1144.
Maibom, H. L. (2003). “The mindreader and the scientist.” Mind & Language 18(3): 296–315.
Maibom, H. L. (2007). “Social systems.” Philosophical Psychology 20(5): 557-578.
Malle, B. F. and Ickes, W. (2000). “Fritz Heider: Philosopher and psychologist.” In G. A. Kimble and M. Wertheimer (eds.), Portraits of Pioneers in Psychology. Washington (DC), American Psychological Association, vol. IV, pp. 195–214.
Morton, A. (1980). Frames of Mind. Oxford, Oxford University Press.
Mukamel, R., Ekstrom, A.D., Kaplan, J., Iacoboni, M. and Fried, I. (2010). “Single-neuron responses in humans during execution and observation of actions.” Current Biology 20: 750–756.
Nagel, E. (1961). The Structure of Science. New York, Harcourt, Brace, and World.
Nichols, S. and Stich, S. (2003). Mindreading. Oxford, Oxford University Press.
Nichols, S., Stich, S., Leslie, A. and Klein, D. (1996). “Varieties of off-line simulation.” In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind. Cambridge, Cambridge University Press, pp. 39–74.
Nisbett, R. E. and Bellows, N. (1977). “Verbal reports about causal influences on social judgments: Private access versus public theories.” Journal of Personality and Social Psychology, 35: 613–624.
Nisbett, R. and Wilson, T. (1977). “Telling more than we can know: Verbal reports on mental processes.” Psychological Review 84: 231–259.
Onishi, K. H. and Baillargeon, R. (2005). “Do 15-month-old infants understand false beliefs?” Science 308: 255–258.
Perner, J. (1991). Understanding the Representational Mind. Cambridge, MA, MIT Press.
Perner, J. and Aichhorn, M. (2008). “Theory of Mind, language, and the temporo-parietal junction mystery.” Trends in Cognitive Sciences 12(4): 123–126.
Perner, J., Baker, S. and Hutton, D. (1994). “Prelief: The conceptual origins of belief and pretence.” In C. Lewis and P. Mitchell (eds.), Children’s Early Understanding of Mind. Hillsdale, NJ, Erlbaum, pp. 261–286.
Perner, J. and Kühberger, A. (2005). “Mental simulation: Royal road to other minds?” In B. F. Malle and S. D. Hodges (eds.), Other Minds. New York, Guilford Press, pp. 166–181.
Perner, J. and Leekam, S. (2008). “The curious incident of the photo that was accused of being false: Issues of domain specificity in development, autism, and brain imaging.” The Quarterly Journal of Experimental Psychology 61(1): 76–89.
Perner, J., Zauner, P. and Sprung, M. (2005). “What does ‘that’ have to do with point of view? Conflicting desires and ‘want’ in German.” In J. W. Astington and J. A. Baird (eds.), Why Language Matters for Theory of Mind. Oxford, Oxford University Press, pp. 220–244.
Ramsey, W. (2011). “Eliminative Materialism.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2011 Edition).
Rizzolatti, G., Fogassi, L. and Gallese V. (2002). “Motor and cognitive functions of the ventral premotor cortex.” Current Opinion in Neurobiology 12:149–154.
Robbins, P. (2006). “The ins and outs of introspection.” Philosophy Compass 1(6): 617-630.
Ruffman, T. and Perner, J. (2005). “Do infants really understand false belief?” Trends in Cognitive Sciences 9(10): 462-463.
Samuels, R. (1998). “Evolutionary psychology and the massive modularity hypothesis.” The British Journal for the Philosophy of Science 49: 575–602.
Samuels, R. (2000). “Massively modular minds: Evolutionary psychology and cognitive architecture.” In P. Carruthers and A. Chamberlain (eds.), Evolution and the Human Mind. Cambridge, Cambridge University Press, pp. 13–46.
Samuels, R. (2006). “Is the mind massively modular?” In R. J. Stainton (ed.), Contemporary Debates in Cognitive Science. Oxford, Blackwell, pp. 37–56.
Saxe, R. (2005). “Against simulation: The argument from error.” Trends in Cognitive Science 9: 174–179.
Saxe, R. (2009). “The neural evidence for simulation is weaker than I think you think it is.” Philosophical Studies 144: 447-456.
Saxe, R. and Kanwisher, N. (2003). “People thinking about thinking people: The role of the temporo-parietal junction in ‘theory of mind’.” NeuroImage 19: 1835–1842.
Schwitzgebel, E. (2010). “Introspection.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2010 Edition).
Sellars, W. (1956). “Empiricism and the philosophy of mind.” In Science, Perception and Reality. London and New York, Routledge & Kegan Paul, 1963, pp. 127–196.
Simpson, T., Carruthers, P., Laurence, S. and Stich, S. (2005). “Introduction: Nativism past and present.” In P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind: Structure and Contents. Oxford, Oxford University Press, pp. 3–19.
Slors, M. and Macdonald, C. (2008). “Rethinking folk-psychology: Alternatives to theories of mind.” Philosophical Explorations 11(3): 153–161.
Spelke, E.S. and Kinzler, K.D. (2007). “Core knowledge.” Developmental Science 10: 89–96.
Stich, S. (1983). From Folk Psychology to Cognitive Science: The Case Against Belief. Cambridge, MA, MIT Press.
Stich, S. and Nichols, S. (1992). “Folk Psychology: Simulation or Tacit Theory?” Mind & Language 7(1): 35–71; reprinted in M. Davies and T. Stone (eds.), Folk Psychology. Oxford, Blackwell, 1995, pp. 123–158.
Stich, S. and Nichols, S. (1995). “Second Thoughts on Simulation.” In M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications. Oxford, Blackwell, pp. 87–108.
Stich, S. and Nichols, S. (2003). “Folk Psychology.” In S. Stich and T. A. Warfield (eds.), The Blackwell Guide to Philosophy of Mind. Oxford, Blackwell, pp. 235–255.
Stich, S. and Ravenscroft, ICH. (1994). “What is folk psychology?” Cognition 50: 447–468.
Stueber, K. R. (2006). Rediscovering Empathy: Agency, Folk Psychology, and the Human Sciences. Cambridge, MA, MIT Press.
Surian, L., Caldi, S. and Sperber, D. (2007). “Attribution of beliefs by 13-month-old infants.” Psychological Science 18(7): 580–586.
Wellman, H. M. (1990). The Child’s Theory of Mind, Cambridge, MA, MIT Press.
Wellman, H. M., Cross, D. and Watson, J. (2001). “Meta-analysis of theory-of-mind development: The truth about false belief.” Child Development 72: 655–684.
Wilson, D. (2005). “New directions for research on pragmatics and modularity.” Lingua 115: 1129–1146.
Wimmer, H., Hogrefe, G. and Perner, J. (1988). “Children’s understanding of informational access as a source of knowledge.” Child Development 59: 386-396.
Wimmer, H. and Perner, J. (1983). “Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception.” Cognition 13: 103–128.
Varley, R., Siegal, M. and Want, S.C. (2001). “Severe impairment in grammar does not preclude theory of mind.” Neurocase 7: 489–493.
Zaitchik, D. (1990). “When representations conflict with reality: The preschooler’s problem with false beliefs and ‘false’ photographs.” Cognition 35: 41–68.

Author Information

Massimo Marraffa
E-Mail: [email protected]
University Roma Tre
Italy
