Resource Bounded Agents
Resource bounded agents are persons who have information processing limitations. All persons and other cognitive agents who have bodies are such that their sensory transducers (such as their eyes and ears) have limited resolution and discriminatory ability; their information processing speed and power is bounded by some threshold; and their memory and recall are imperfect in some way. While these general facts are not controversial, it is controversial whether and to what degree these facts should shape philosophical theorizing.
Arguably, resource bounded agents pose the most serious philosophical challenges to normative theories in a number of domains, and especially to theories of rationality and moral action. If a normative theory endorses a standard for how an agent ought to act or think, or if a normative theory aims to provide recommendations for various kinds of conduct, such a theory will have commitments regarding the descriptive facts about the agent’s cognitive limitations. There are two major responses. These theories may either (1) argue to dismiss these descriptive facts as irrelevant to the normative enterprise (see section 2) or, instead, (2) attempt to accommodate these facts in some way (see section 3). Historically, normative theories that have attempted to accommodate facts about cognitive limitations have done so by either (i) augmenting the proposed normative standard, or (ii) using facts about cognitive limitations to show that agents cannot meet the proposed normative standard.
After a brief discussion of some empirical work addressing human cognitive limitations, this article will discuss idealization in philosophy and the status of the normative bridge principle “ought implies can,” which suggests that “oughts” are constrained by descriptive limitations of the agent. Next, the article explores several theories of rationality that have attempted to accommodate facts about cognitive limitations.
As an introductory and motivating example, consider the claim that human agents ought not to believe inconsistent propositions. Initially, such a claim seems perfectly reasonable. Perhaps this is because a collection of inconsistent propositions is guaranteed to include at least one false proposition. But Christopher Cherniak (1986) has pointed out that when one has as few as 140 (logically independent) beliefs, there are approximately 1.4 tredecillion (a number with 43 digits) combinations of truth values to survey in a brute-force check for potential inconsistency. No human could ever check that many items for consistency. Indeed, an ultra-fast supercomputer would take 20 billion years to complete such a task. Thus, for some epistemologists, the empirical fact of the impossibility of a complete consistency-check of a human’s belief corpus has provided reason for thinking that complete consistency of belief is not an appropriate normative standard. Whether such a response is ultimately correct, however, turns on the status of resource bounded agents in normative theorizing.
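As a quick check on the arithmetic, and assuming (as the digit count suggests) that the figure counts the joint truth-value combinations over 140 logically independent propositions, the number can be reconstructed as follows:

```latex
% Each of 140 independent propositions may be true or false, so a
% brute-force consistency check surveys every joint truth assignment:
\[
2^{140} \approx 1.39 \times 10^{42}
\]
% "Tredecillion" names 10^{42}, so this is roughly 1.4 tredecillion,
% a 43-digit number, matching the figure quoted above.
```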
Table of Contents
Cognitive Limitations and Resource Bounds
Limitations of Memory
Limitations of Visual Perception
Limitations of Attentional Resources
Idealization
Idealization Strategies
Problems with the Idealization Strategy
Ought Implies Can
Accommodating Cognitive Limitations
Changing the Normative Standard
Simon’s “Satisficing View” of Decision Making
Pollock’s “Locally Global” View of Planning
Cherniak’s “Minimal Rationality” and “Feasible Inferences”
Gigerenzer’s “Ecological Rationality”
Failing to Meet the Standard
Kahneman and Tversky’s “Heuristics and Biases” Program
References and Further Reading
References
Further Reading
1. Cognitive Limitations and Resource Bounds
Every known cognitive agent has resource and cognitive limitations. Christopher Cherniak refers to this necessary condition as the “finitary predicament”: because agents are embodied, localized, and operate in physical environments, they necessarily face informational limitations. While philosophers have acknowledged this general fact, the precise details of these resource and cognitive limitations are not widely discussed, and the precise details could matter to normative theorizing. Revisiting the example from above, it is obvious that humans cannot check 1.4 tredecillion combinations of truth values for consistency. But it is not obvious how many beliefs a human agent can check. If it could be experimentally demonstrated that humans could not occurrently check twelve beliefs for consistency, even this minimal consistency check might not be rationally required. Thus, the precise details of cognitive limitations need to be addressed.
Before turning to the details of cognitive limitations, it is important to note that there are two senses of the term ‘limitation’. To see the distinction, consider a simple example. Very young children are limited in their running abilities. This limitation can be described in two ways: (i) young children cannot run a mile in under four minutes, and (ii) young children are not excellent runners. The important difference in these (true) descriptions is that way (i) uses non-normative language and way (ii) uses normative language. This distinction is crucial when the main objective is an evaluation of the normative standard itself. For example, challenging whether (i) is true involves non-normative considerations while challenging whether (ii) is true fundamentally involves normative considerations. As such, the kinds of cognitive limitation under discussion in this article will primarily concern non-normative limitations.
In what follows, this article will survey some findings from cognitive psychology to illustrate various attempts to measure human cognitive limitations. These findings are not exhaustive and should be thought of as representative examples.
a. Limitations of Memory
Memory is the general process of retaining, accessing, and using stored information. Short-term memory is the process of storing small amounts of information for short periods of time. In 1956 George Miller published a paper that helped measure the limitations of human short-term memory. This paper was an early example of the field that would later be known as cognitive psychology. In “The Magical Number Seven, Plus or Minus Two”, Miller argued that short-term memory is limited to approximately seven items (plus or minus two). That is, Miller argued that for typical adult humans, short-term memory is bounded above by about nine items. Later work such as Cowan (2001) has suggested that the capacity of short-term memory might be smaller than previously thought, perhaps as small as four items.
In some ways, Miller’s result should be puzzling. Humans are often able to recite long sentences immediately after reading them, so how does this ability square with Miller’s experimental results? Miller also introduced the idea of “chunking” in his famous 1956 paper. To “chunk” items is to group them together as a unit (often by a measure of similarity or meaningfulness). This is an information compression strategy. For example, suppose the task is to remember the following eight words: catching, dog, apples, city, red, frisbees, park, yellow. Likely, this would be somewhat difficult. Instead, suppose the task was to remember the four phrases: yellow dog, red apples, catching frisbees, city park. This should be less difficult, even though the task still involves eight words. The explanation is that the eight items have been “chunked” down to four informational items (to be “uncompressed” later when needed). Still, the existence of chunking strategies does not mean that short-term memory is unbounded. Typical humans cannot remember more than seven (plus or minus two) chunks, nor is it the case that just any string of information can be chunked. For many subjects, it would be exceedingly difficult to chunk the following eight strings of letters: rucw, mxzq, exef, cfiw, uhss, xohj, mnwf, ofhn.
Long-term memory is the process of storing information for long periods of time. Long-term memory also features kinds of limitation. It may be tempting to think that stored memories are like photographs or video, which may be retrieved and then reviewed as an unaltered representation of an event. But this is not how human memory works. Psychologists have known for a long time that many aspects of memory are “constructive”. That is, factors such as expectation, experience, and background knowledge can alter memories. Humans are prone to omit details of events and even add details that never occurred. Consider the classic example of Bartlett’s “War of the Ghosts” experiment. In 1932 Frederic Bartlett read British subjects a story from aboriginal Canadian folklore. He then asked the subjects to recall the story as accurately as they were able. This established a baseline of subject performance. Next, Bartlett used the experimental technique of “repeated reproduction” and had subjects retell the story after longer and longer periods of time. Bartlett found that as more time passed, subjects’ retellings of the story became shorter and more and more details were omitted. Also, many subjects added details to the story that reflected their own culture, rather than the cultural setting of the story. As one example, instead of recalling the canoes that were mentioned in the story, many subjects retold the story as concerning boats, which would be more familiar to a British participant.
It has also been demonstrated that for some kinds of information, retrieving an item from memory can reduce the likelihood of successfully retrieving a competing or related item. As a simple example, trying to remember where one last put one’s keys would be much more difficult if competing memories such as where one put the keys two days ago or three days ago were just as likely to be recalled. Instead, it appears as though there is an inhibitory mechanism that suppresses the recall of competing memories (in this case, the older “key location” memories). While potentially beneficial in some respects, this “retrieval-induced forgetting” effect might be harmful in some academic settings. Macrae and MacLeod (1999) gave subjects 20 “facts” about a fictional island. Next, subjects were evenly divided into two groups: group one practiced memorizing only a select 10 of the 20 facts and group two did not practice memorizing any of the 20 facts. Unsurprisingly, group one had better recall than group two on the select 10 facts. But, interestingly, group two had better recall than group one on the other 10 facts. That is, by attempting to memorize some subset of the 20 facts, group one had impoverished recall of the unpracticed subset of facts. This result might have implications for students who attempt to cram for an exam: in cramming for an exam, students may reduce their performance on unstudied material.
In addition to the above limitations, humans also suffer from age-related performance decreases in memory. Humans also typically have difficulty in remembering the source of their information (that is, how they initially learned the information). Further, misinformation and suggestion can alter subjects’ memories and even create “false memories”. Eyewitness reports of a crime scene may omit relevant information when a gun is present (known as “weapon focus”), due to the narrow attentional focus on the gun. Also, subtle feedback to an eyewitness report (for example, a police officer says “thanks for helping identify the perpetrator”) can strengthen the eyewitness’s feeling of confidence, but not their reliability.
b. Limitations of Visual Perception
Humans are able to visually detect wavelengths between roughly 400 and 700 nanometers, corresponding to colors from violet to red. Thus, unaided human vision cannot detect much of the information in the electromagnetic spectrum, including infrared and ultraviolet radiation. Under ideal conditions, humans can discriminate between wavelengths in the visible spectrum that differ by only a few nanometers.
It is a mistake to think that, for humans, the entire visual field is uniformly detailed. This is surprising, because it seems (phenomenologically, at least) that most of the visual field is detail rich. Recall the experience of studying the brushstrokes of an artwork at approximately five feet of distance. The uncritical experience suggests that vision always provides highly detailed information—perhaps this is because everywhere one looks there appears to be detail. Yet, there is a sense in which this is an illusion. In the human eye, the fovea is responsible for providing highly detailed information, but the fovea is only a small part of the retina. Eye movements, called saccades, change the location of foveal vision to areas of interest, so details can be extracted where they are wanted. Much of the visual field in humans does not provide detail-rich information, and might be described in lay terms as being similar to “peripheral vision”. This non-foveal part of the visual field has limited acuity and results in impoverished perceptual discriminatory ability.
Just as it is incorrect to think that memory works like a photograph, human color vision does not simply provide the color of an object in the way a “color picker” does in an image-editing computer program. The color an object appears is often highly sensitive to the amount of light in the environment. Color judgments in humans can be highly unreliable in low-light environments, such as when distinguishing green from purple. Human vision is also subject to color constancy in some circumstances. Color constancy occurs when objects appear to stay the same color despite changing conditions of illumination (which change the wavelengths of light that are reflected) or because of their proximity to other objects. For example, the green leaves of a tree may appear to stay the same color as the sun is setting. Color constancy may be helpful for the tracking or re-identification of an object through changing conditions of illumination, but it may also increase the unreliability of color judgments.
c. Limitations of Attentional Resources
Attention is the capacity to focus on a specific object, stimulus, or location. Many occurrent cognitive processes require attentional resources. Lavie (1995, 2005) has proposed a model that helps explain the relationship between the difficulty of various tasks and the ability to successfully deploy attentional resources. Lavie’s idea is that total cognitive resources are finite, and difficult cognitive tasks take up more of these resources. A direct implication is that comparatively easier tasks allow for available cognitive resources to process “task-irrelevant” information. Processing task-irrelevant information can be distracting and can even reduce task performance. For an example of this phenomenon, consider the difference between taking an important final exam and casually reading at a coffee shop. Applying Lavie’s model, taking an important final exam will often use all of one’s cognitive resources, and thus, no task-irrelevant information (such as the shuffling of papers in the room or the occasional cough) will be processed. In this particular instance, the task-irrelevant stimuli cannot be distracting. In contrast, casually reading at a coffee shop typically is not a “high-load” task and does not require most of a subject’s cognitive resources. While reading casually one can still overhear a neighboring conversation or the sound of the espresso machine, sometimes hindering the ability to concentrate on one’s book.
As an example of competition from task-irrelevant stimuli, consider the well-known Stroop effect. First conducted by J.R. Stroop in 1935, the task is to name as quickly as possible the color of ink used to print a series of words. For words such as ‘dog’, ‘chair’ and ‘house’, each printed in a different color, the task is relatively easy. But Stroop also had subjects perform the task with words such as ‘green’, ‘blue’, and ‘red’ printed in non-matching colors (so ‘red’ might be printed in blue ink). This version of the task is much more challenging, often taking twice as much time as the version without color words. One explanation of this result is that the task-irrelevant information of the color word is difficult to ignore, perhaps because linguistic processing of words is often automatic.
Attentional resources are also deployed in tracking objects in the environment. Object-based attention concerns representing and tracking objects. Xu et al. (2009) report that due to limits on processing resources, the visual system is able to individuate and track about four objects. Sears and Pylyshyn (2000) also cite limits on the capacity to process visual information and have shown that subjects are able to track about five identical objects in a field of ten objects.
2. Idealization
This section will discuss one dismissive response to problems posed by resource bounded agents. The basic idea behind this response is that descriptive facts about cognitive limitations are irrelevant to the normative enterprise.
a. Idealization Strategies
In drafting various normative theories (concerning, for example, rational belief or moral action), some philosophers have claimed to be characterizing “ideal” agents, rather than “real” or “non-ideal” agents like humans (where real or non-ideal agents are those agents that have cognitive limitations). This strategy can be defended on a number of lines, but one defense appeals to theory construction in the physical sciences. In drafting physical theories it is often helpful to first begin with theoretically simple constraints and add in complicating factors later. For example, many introductory models about forces omit mention of complicating factors such as friction, air resistance, and gravity. Likewise, a philosopher might claim that the proper initial subject of normative theorizing is the ideal agent. As such, descriptive details of the cognitive limitations of non-ideal agents are simply not relevant to initial theorizing about normative standards, because ideal agents do not have cognitive limitations. Still, the thought is, theories of ideal agents might nonetheless be useful for evaluating non-ideal agents. Continuing with the analogy with scientific models, the proposed strategy would be to first determine the normative standard for ideal agents, and then evaluate non-ideal human agents as attempting to approximate this standard.
As one example of this strategy, return to the issue of believing inconsistent propositions. Because ideal agents do not have memory or computational limitations, these agents are able to check any number of beliefs for inconsistency. It then seems that these agents ought not to believe inconsistent propositions. Perhaps the reason for this is that one ought not to believe false propositions, and a set of inconsistent propositions is guaranteed to have at least one false member. This result might serve as one dimension of the normative standard. Now, turning attention to resource bounded agents such as humans, it might be thought that these agents ought to try to approximate this standard, however imperfectly. That is, the best reasoners imaginable will not believe inconsistent propositions, so humans ought to try to approximate the attitudes or behaviors of these reasoners. On this view, better human reasoners believe fewer inconsistent propositions.
A second defense of the idealization strategy appeals directly to the kinds of concepts addressed by normative theories. Many normative concepts appear to admit of degrees. It might be thought that there can be better and worse moral decisions and better and worse epistemic attitudes (given a collection of evidence). If this is correct then, plausibly, ideal agents might be thought to be the best kind of agent and correspondingly the proper subject for normative theorizing. Consider the following example. Suppose a person witnesses an unsupervised child fall off a pier into a lake. In a real case, the human observer might feel paralyzing stress or anxiety about the proper response and thus momentarily postpone helping the child. Such a response may seem less than optimal—it would be better if the agent responded immediately. Considering these optimal responses might necessarily involve imagining ideal agents, because (plausibly) every real agent will have some amount of stress or anxiety. Because ideal agents do not have psychological limitations, an ideal agent would not become paralyzed by stress or anxiety and would respond immediately to the crisis. In this regard, after abstracting away from complicating factors arising from human psychology, ideal agents might help reveal better moral responses.
As briefly mentioned above, idealization strategies often offer a bridge principle, linking the proposed normative standard to real human action and judgment. Of course, human agents are not ideal agents, so how do ideal normative standards apply to real human agents? One common answer is that human agents ought to try to approximate the ideal standards, and better agents more closely approximate these standards. For example, it is clear that no human agent could achieve a pairwise check of all of their beliefs for logical consistency. But it still might be the case that better agents check more of their beliefs for consistency. Plausibly, young children check few of their beliefs for consistency whereas reflective adults are careful to survey more of the claims they endorse for consistency and coherence. On this measure it is not obviously unreasonable to judge the reflective adult as more rational than the young child.
b. Problems with the Idealization Strategy
One potential problem with the idealization strategy is the threat of incoherence. If every cognitive agent is physically embodied, then every cognitive agent will face some kinds of resource limitation. Thus, it is unclear that ideal agents are either physically possible or even conceivable. What kind of agents are ideal cognizers anyway? Do ideal cognizers even reason or make inferences, given the immediate availability of their information? Should we really think of them as reasoners or agents at all? Ideal cognizers are certainly unlike any cognitive agent with which we have ever had any experience. As such, the thought is that little weight should be placed on claims such as “ideal agents are able to check any number of beliefs for inconsistency”, because it is not clear such agents are understandable.
An idealization theorist might respond by leaning on the analogy with model construction in the physical sciences. Introductory models of forces that omit friction, say, may describe or represent physically impossible scenarios, but these models nonetheless help reveal actual structural relationships between force, mass, and acceleration (for example). Perhaps the same holds for normative theorizing about ideal agents.
A second potential problem with the idealization strategy concerns possible disanalogies between theorizing in philosophy and the physical sciences. Introductory models of forces in the physical sciences do not yield ultimate conclusions. That is, the general relationship between force and mass that is established in idealized models is later refined and improved upon with the addition of realistic assumptions. These updated models are thought to be superior, at least with respect to accuracy. In contrast, however, many philosophers who claim to be theorizing about ideal agents take their results to be either final or ultimate. As previously mentioned, some epistemologists take belief consistency to be a normative ideal, and adding realistic assumptions to the model does not produce normatively better results. If such a stance is taken, then this weakens the analogy with theory construction in the physical sciences.
A third potential problem with the idealization strategy is that it is not clear that there are unique ideal agents or even unique idealized normative standards. Why should we think that there is one unique ideally rational agent or one unique ideally moral agent, rather than a continuum of better agents (perhaps just as there is no possible fastest ideal marathon runner)? The worry is clear in this respect: if there are only better and better agents (with no terminally best agent) then the study of any particular idealized agent cannot yield ultimate normative standards. It is also not clear that there are always unique idealized normative standards. For example, it is often assumed that there are optimal decisions or optimal plans for ideal agents to choose. Yet, John Pollock (2006) has argued that there is “no way to define optimality so that it is reasonable to expect there to be optimal plans”. The consequence of this result, if it can be maintained, is that there is no unique optimal plan or set of plans that an ideal agent could choose. Thus, an idealization strategy, one that abstracts away from time and resource constraints on the agent, could not represent ideal plans. It is more controversial as to whether there are optimal belief states that ideal reasoners would converge to, given unbounded time and unbounded cognitive resources.
c. Ought Implies Can
A fourth potential problem with the idealization strategy concerns the well-known and controversial “ought implies can” principle. If true, this principle states that the abilities of the agent constrain normative demands or requirements on the agent. Consider an example from the moral domain. Suppose that, after an accident, a ten-ton truck has pinned Abe to the ground and is causing him great harm. Ought a fellow onlooker, Beth, lift the truck and free Abe? Many would claim that because Beth is unable to lift the truck, she has no duty or obligation to lift the truck. In other words, it might seem reasonable to think that Beth must be able to lift the truck for it to be true that she ought to lift the truck. There may well be other things that Beth ought to do in this situation (perhaps make a phone call or comfort Abe), but the idea is that these are all things that Beth could possibly do.
If “ought implies can” principles are true in various normative domains such as ethics or epistemology, then the corresponding idealization strategy would face the following problem. Idealization strategies, by definition, abstract away from the actual abilities of agents (including facts about memory, reasoning, perception, and so on). Thus, these strategies will not produce normative conclusions that are sensitive to the actual abilities of agents, as “ought implies can” principles require. Hence, idealization strategies are defective.
Said differently, “ought implies can” principles suggest that descriptive facts matter to normative theorizing. As Paul Thagard (1982) has said, epistemic principles “should not demand of a reasoner inferential performance which exceeds the general psychological abilities of human beings”. Of course, idealization strategies necessarily disagree with this claim. If “ought implies can” principles are true, then we have reason to reject idealization strategies.
Are “ought implies can” principles true? Intuitively, the Abe and Beth case above seems plausible and reasonable. This provides prima facie evidence that there is something correct about a corresponding “ought implies can” principle in the moral domain. But, in epistemology, there are reasons to think that “epistemic oughts” do not always imply “epistemic cans”.
In defending evidentialism, Richard Feldman and Earl Conee (1985) have argued that cognitive limits do not always constrain theories of epistemic justification. As they say, “some standards are met only by going beyond normal human limits”. Feldman and Conee give three examples. The first concerns a human agent whose doxastic attitude a best fits her evidence e, but forming a is beyond the agent’s “normal cognitive limits”. To fill in the details, suppose that the doxastic attitude that best fits Belinda’s evidence is believing that her son is guilty of the crime, but also suppose that Belinda is psychologically unable to appropriately assess her evidence (given its disturbing content). Feldman and Conee think that the intuitive response to such a case would be to think that (believing in guilt) “would still be the attitude justified by the person’s evidence”, even though in this case Belinda faces the impossible task of assessing her evidence. Indeed, it seems that this is a standard response one might have toward family members of guilty defendants: given the evidence, they ought to believe that their loved one is guilty, despite its impossibility. If such a response is correct, then “ought implies can” principles are not always true in epistemic domains.
The second and third examples Feldman and Conee give are the following:
Standards that some teachers set for an “A” in a course are unattainable for most students. There are standards of artistic excellence that no one can meet, or at least standards that normal people cannot meet in any available circumstance.
These latter examples are surely weaker than the first. It would be completely unreasonable for a teacher to adopt a standard for an “A” that was impossible for any student to satisfy (“to get an ‘A’ a student must show that 0 = 1”). But, part of the difficulty here is that the relevant notion of “can” is either vague or ambiguous. Does “can” mean some students could satisfy the standard some of the time? Or does “can” mean that at least one student could satisfy the standard once? It would not be unreasonable for a teacher to adopt a standard for an “A” that one particular class of students could not attain. The art example is even more difficult. First, the art example is unlike the example of Abe pinned under the truck. In that case it was physically impossible for Beth to lift the truck. The art example, however, contains a standard that “normal people cannot meet in any available circumstance”, with the implication that some humans can meet the standard. The difference between these examples is that one is indexed to Beth’s abilities and the other is indexed to human artistic abilities, generally. The worry is that some standards might be “community standards” and hence the relevant counterexample would be a case where no one in the community could meet the standard. Indeed, it would be an odd artistic standard such that no possible human could ever satisfy it.
Lastly, it is unclear whether Feldman and Conee’s remarks can be generalized to other normative domains. Even if Feldman and Conee are correct in thinking that various “epistemic oughts” do not imply “epistemic cans”, it is not obvious whether similar considerations hold in the domain of morality or rational action.
3. Accommodating Cognitive Limitations
The second major kind of response to resource bounded agents is to accommodate the descriptive facts of cognitive limitations into one’s normative theory. Proponents of this response claim that facts about cognitive limitations matter for normative theories. To continue with the example of believing inconsistent propositions, a theorist who adopted a version of this response might attempt to argue that resource bounded agents ought not to believe “feasibly reached” or, alternatively, “obvious” inconsistent propositions. This response would accommodate facts about cognitive limitations by relaxing the standard “never believe any set of inconsistent propositions”.
There are two ways in which one might attempt to accommodate cognitive limitations into one’s normative theorizing. First, similar to the above example, one might “change the normative standard” and argue that resource bounded agents show that normative standards should be relaxed in some way. Versions of this response will be discussed in section 3a. Second, one might instead argue that cognitive limitations show that the agents being investigated cannot meet the proposed normative standard, and thus, are inherently defective in some dimension. This response will be discussed in section 3b.
a. Changing the Normative Standard
In this subsection, the article discusses several prominent views that accommodate descriptive facts about cognitive limitations by augmenting or changing normative standards.
i. Simon’s “Satisficing View” of Decision Making
One way to accommodate the cognitive limitations that agents face is to relax the traditional normative standards. In the domain of rational decision making, Herbert Simon (1955, 1956) replaced the traditional “optimization” view of the rationality of action with the more relaxed “satisficing” view. To illustrate the difference between optimization procedures and satisficing procedures, consider the well-known “apartment finding problem”. Presumably, when searching for an apartment one values several attributes (perhaps cost, size, distance from work, quiet neighborhood, and so on). How ought one to choose? The optimization procedure recommends maximizing some measure. For example, one way to proceed would be to list every available apartment, assess each apartment’s total subjective value under the various attributes, determine the likelihoods of obtaining each apartment, and then calculate this “weighted average” and choose the apartment that optimizes or maximizes this measure. Simon noticed that such an optimization procedure is typically not feasible for humans: it is too computationally demanding. For one, complete information about apartment availability or even complete information about apartment attributes is often unavailable. Second, the relevant probabilities are crucial to an optimization strategy, but these probabilities are too cognitively demanding for typical human agents. For example, what is the probability that apartment B will still be available if the initial offer for apartment A gets rejected? How would one calculate this probability? Instead, Simon suggests that humans ought to make decisions by “satisficing”, or deciding to act when some threshold representing a “good enough”, but not necessarily best or optimal, outcome is achieved. To satisfice in the apartment finding problem, one determines some appropriate threshold or aspiration level of acceptability (representing “good enough”), and then one searches for apartments until this threshold is reached. A satisficer picks the first apartment that surpasses this threshold.
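The contrast between the two procedures can be made concrete with a minimal sketch. The apartments, the single aggregated score, and the aspiration level below are illustrative assumptions, not values from Simon’s work:

```python
# Toy comparison of optimizing vs. satisficing on the apartment problem.
# The apartments, scores, and threshold are illustrative assumptions.

apartments = [
    {"name": "A", "score": 0.62},   # aggregated subjective value in [0, 1]
    {"name": "B", "score": 0.81},
    {"name": "C", "score": 0.74},
    {"name": "D", "score": 0.90},
]

def optimize(options):
    """Score every option and return the one with maximal value."""
    return max(options, key=lambda o: o["score"])

def satisfice(options, threshold=0.75):
    """Return the first option whose score meets the aspiration level."""
    for option in options:              # search stops once one suffices
        if option["score"] >= threshold:
            return option
    return None                         # nothing was "good enough"

print(optimize(apartments)["name"])     # -> D (all four options scored)
print(satisfice(apartments)["name"])    # -> B (search stops after two)
```

The structural point of the sketch is that the optimizer must evaluate every option before choosing, while the satisficer’s search terminates as soon as any option clears the threshold.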
It is important to note that, under a common interpretation, Simon is not recommending the satisficing procedure as a next-best alternative to the optimization procedure. Instead, Simon is suggesting that the satisficing procedure is the standard by which to judge rational action. Accordingly, human agents who do not optimize in the sense described above are not normatively defective qua rational decision makers.
One claimed advantage of satisficing over optimization concerns computational costs. A satisficing strategy is thought to be less computationally intensive than an optimization strategy. Optimization strategies require the computation of “expected values” based on a network of probabilities and subjective values, and also the computational resources to store and compare these values. Satisficing strategies, in contrast, only require that an agent is able to compare a possible choice with a threshold value, and there is no need to store past assessments (other than the fact that a past choice was assessed). A second advantage of satisficing is that it seems to come close to describing how humans actually solve many decision problems and, as such, appears to be predictively successful. For better or worse, humans do seem to pick apartments, cars, and perhaps even mates that are “good enough” rather than optimal (and note that someone like Simon would say this is “for the better”).
Two criticisms of satisficing concern its stability over time and the setting of satisficing thresholds or aspiration levels. A benefit of the optimization procedure is that an agent can be confident that her decision is the best in a robust sense—in comparison with any other alternative, the optimal option will be superior to this alternative. But, if one picks option a under a satisficing procedure, one cannot be confident that option a will be superior to any other future alternative option b. Indeed, one cannot be confident that the next alternative option is not better than the current option. This is potentially problematic in the following sense. If one sets one’s satisficing threshold too low, one may quickly find a choice that surpasses this threshold, but is nonetheless unacceptable in a more robust sense. For example, buying the first car one sees on the sales lot is often not recommended, however easy this strategy is to follow. In this example the threshold for “good enough” is clearly too low. This leads to the second broad criticism. When factoring in the calculation needed to determine how low or high to set the satisficing threshold, it is not obvious whether satisficing procedures retain their computational advantage. As previously mentioned, a satisficing threshold that recommends buying the first car one sees on the sales lot is too low. But what threshold should count as representing a “good enough” car? In most cases this is a difficult question. Intuitively, a “good enough” car is one that has some or many desirable features. But is this a probabilistic measure—must these desirable features be known to obtain with the choice selection, or are they merely judged to be probable? Further, how does one compute the relationship between some particular feature of the car and its desirability? The worry is that setting appropriate satisficing thresholds is as difficult as optimizing. Serious concern with these kinds of issues puts pressure on the claim that satisficing procedures have clear computational advantages.
ii. Pollock’s “Locally Global” View of Planning
John Pollock is also critical of optimization strategies for theories of rational decision making, for reasons concerning cognitive limitations. But, rather than focusing on the rationality of individual decision problems (such as the apartment finding problem or the car buying problem mentioned above), Pollock’s view concerns rational planning. To see the difference between individual decision problems and planning problems, consider the following example. In deciding what to do with one’s afternoon, one might decide to go to the bank and go to the grocery store. By deciding, one has solved an individual decision problem. But, there are two important issues that are still unresolved for the decision: (1) how to implement the decisions “go to the bank” and “go to the grocery store” (go by car or by bus or walk?) and (2) how to structure the order of individual decisions (go to the bank first, then go to the grocery store second?). Planning generally concerns the implementation and ordering issues illustrated in both (1) and (2). When agents engage in planning they attempt to determine what things to do, how to do them, and how to order them.
Planning is often regarded as broader than the field of “decision theory”, which typically focuses on the rationality of individual actions. Research in artificial intelligence concerning action almost exclusively focuses on planning. One reason for this focus is that many AI researchers want to build agents that operate in the world, and operating in the world requires more than just deciding whether to perform some particular action. As illustrated above, there are often many ways to perform the same action (one may “go to the bank” by traveling by car or by boat or by jet pack). Also, actions are performed in temporal sequence with other actions, some of which potentially conflict (for example, if the bank closes at 4pm, then it is impossible to go to the bank after one goes to the grocery store).
Now, how ought rational agents to plan? One suggestion is that rational agents choose optimal plans, in a way similar to the optimization procedure mentioned in section 3ai above. An optimal plan is a plan that maximizes some measure (such as expected utility, for example). A simple version of a plan-based optimization procedure might include the following: (i) survey all possible plans and (ii) choose the plan that maximizes expected utility. Many of the claimed virtues of the optimization procedure for individual decisions discussed in section 3ai above also count as virtues of the plan-based optimization procedure.
John Pollock has argued that real, non-ideal agents ought not to use plan-based optimization procedures. Part of his argument shares reasons given by Herbert Simon: resource bounded agents such as humans cannot survey and manage the information required to optimize. Further, Pollock responds to this situation in a way similar to Simon. Rather than claim that informational resource limitations show that humans are irrational, Pollock argues that the correct normative standard is actually less demanding and can be satisfied by human agents.
One feature of Pollock’s argument is similar to Christopher Cherniak’s (1986) observation about the inherent informational complexity of a complete consistency check on one’s belief corpus. Pollock argues that because plans are constructed by adding parts or “sub-plans”, the resulting complexity is such that it is almost always impossible to survey the set of possible plans. For example, suppose an agent considers what plan to adopt for the upcoming week. In a week, an agent might easily make over 300 individual decisions, and a plan will specify which decision to implement at each time. Further, suppose that there are only 2 alternative options for each individual decision. This entails that there are 2^300 possible plans for the week to consider, or approximately 10^90 plans, a number greater than the estimated number of elementary particles in the universe. Obviously, human agents cannot survey or even construct or represent the set of possible plans for a week of decisions. In fact, the situation is much worse. Rational planning includes what things to do, how to do them, and how to order them, and additionally what may be called “contingency plans”. One might adopt a plan to drive to the airport on Sunday, but this plan might also include the contingency plan “if the car won’t start, call a taxi”. Optimization procedures would require selecting the maximally best contingency plans for a given plan (it would typically not be recommended to try to swim to the airport if one’s car won’t start), but additionally surveying and constructing the set of all possible contingency plans only furthers the computational complexity problem with the optimization procedure.
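The arithmetic behind these figures is easy to verify, using the common approximation that 2^10 is roughly 10^3:

```latex
% 300 binary decisions generate 2^300 distinct weekly plans:
\[
2^{300} = \left(2^{10}\right)^{30} \approx \left(10^{3}\right)^{30} = 10^{90}
\]
% Adding even one more binary decision point doubles the count again,
% so the plan space grows exponentially in the length of the plan.
```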
Instead of optimization, Pollock argues that non-ideal human agents should engage in “locally global” planning. Locally global planning involves beginning with a “good enough” master plan (an idea Pollock acknowledges is reminiscent of Simon’s satisficing view), but continually looking for and making small improvements to the master plan. As Pollock claims, “the only way resource bounded agents can efficiently construct and improve upon master plans reflecting the complexity of the real world is by constructing or modifying them incrementally”. The idea is that resource bounded agents ought to defeasibly adopt a master plan which is “good enough”, but continually seek improvements as new information is obtained or new reasoning is conducted.
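The structure of locally global planning (though not Pollock’s own algorithm) can be rendered as a simple improvement loop. The plan representation, value function, and proposal mechanism below are hypothetical placeholders:

```python
# A schematic sketch of "locally global" planning: adopt a good-enough
# master plan, then keep any small local modification that improves it.
# Plan, value, and propose_local_change are hypothetical placeholders.
from typing import Callable, TypeVar

Plan = TypeVar("Plan")

def locally_global_planning(master: Plan,
                            value: Callable[[Plan], float],
                            propose_local_change: Callable[[Plan], Plan],
                            rounds: int = 1000) -> Plan:
    """Incrementally improve a defeasibly adopted master plan."""
    for _ in range(rounds):
        candidate = propose_local_change(master)  # modify one sub-plan
        if value(candidate) > value(master):      # keep improvements,
            master = candidate                    # else retain the plan
    return master
```

The point is structural: the agent never enumerates the 2^300 possible plans; at any moment it only compares the current master plan with one local variant.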
iii. Cherniak’s “Minimal Rationality” and “Feasible Inferences”
Christopher Cherniak’s (1986) Minimal Rationality is a seminal work in the study of resource bounded agents, and it discusses the general issue of the relationship between cognitive limitations and normative standards. He begins by arguing against both idealized standards of rationality (“finitary” agents such as humans could never satisfy these conditions) and a “no standards” view of rationality (unlike agents we recognize, such agents would never generate any predictions about their behavior). The third alternative, that of “minimal rationality”, suggests “moderation in all things, including rationality”. Cherniak claims that many of the minimal rationality conditions can be derived from the following principle:
(MR) If A has a particular belief-desire set, A would undertake some, but not necessarily all, of those actions that are apparently appropriate.
For example, Cherniak is clear in suggesting that rational agents need not eliminate all inconsistent beliefs. This generates the following “minimal consistency condition”:
(MC) If A has a particular belief-desire set, then if any inconsistencies arose in the belief set, A would sometimes eliminate some of them.
In support of (MC), Cherniak argues that non-minimal, ideal views of rationality (ones that suggest agents ought to eliminate all inconsistencies) would actually entail that humans are irrational. As he claims, “there are often epistemically more desirable activities for [human agents] than maintaining perfect consistency”. The idea is that given the various cognitive limitations that humans face (the “finitary predicament”), it would be irrational for any human to attempt to satisfy the Sisyphean task of maintaining a consistent belief corpus.
There are two prominent objections to Cherniak’s minimal consistency condition. First, as Daniel Dennett and Donald Davidson have pointed out in various works, it is difficult to understand or ascribe any beliefs to agents that have inconsistent beliefs. For example, suppose that Albert believes that p, and that p entails q, but also suppose that Albert believes that q is false. What is Albert’s view of the world? In one sense, it may be argued that Albert has no view of the world (and hence no beliefs) because, in the end, Albert might be interpreted as believing both q and ¬q, and there is no possible world that could satisfy such conditions. In response, Cherniak invokes an “ought implies can” principle. He suggests that once an agent meets a threshold of minimal rationality, “the fact that a person’s actions fall short of ideal rationality need not make them in any way less intelligible to us”. As such, Cherniak’s response could be understood in a commonsense way: typical human agents have some inconsistent beliefs, but we nonetheless ascribe beliefs to them.
A second objection to Cherniak’s minimal consistency condition concerns the permissiveness of the condition. As Appiah (1990) has worried, “are we left with constraints that are sufficiently rich to characterize agency at all?” As an example, an agent that eliminates a few inconsistent beliefs only on Tuesdays would satisfy (MC). Yet there is something intuitively defective about such a reasoner. Instead, it seems that what is wanted is a set of constraints on reasoners, reasoning, and agency that are more strict and more demanding than Cherniak’s minimal rationality conditions. Perhaps anticipating objections similar to Appiah’s, Cherniak developed what he calls a theory of “feasible inferences”. A theory of feasible inferences recruits descriptive facts about cognitive limitations to provide more restrictive normative requirements. For example, a theory of “human memory structure” describes what information is cognitively available to human agents, given various background conditions. In general terms, when information is cognitively available to an agent, more normative constraints are placed on the agent. Accordingly, conditions such as (MC) would thereby be strengthened.
But, it is unclear whether a theory of human memory structure will provide enough detail to propose a “rich structure of constraints” on rationality or agency. For one, Cherniak’s theory of human memory structure describes typical humans. There is even a sense in which “typical human” is an idealized notion, since no individual is a typical human. Given that there are individual differences in memory abilities between humans, which constraints should be adopted? If an inference to q is obvious for Alice but would not be obvious for a typical human, is Alice required to believe q (on pain of irrationality) or is it merely permissible for her to believe q? Note that proponents of idealization strategies (as discussed in section 2) are able to provide a rich structure of constraints and do not have to worry about individual differences in cognitive performance.
iv. Gigerenzer’s “Ecological Rationality”
Gerd Gigerenzer views rationality as fundamentally involving considerations of the agent’s environment and the agent’s cognitive limitations. Similar to many of the theorists discussed above, Gigerenzer also cites Herbert Simon as an influence. Many aspects of Gigerenzer’s view may be understood as responding to the influential project of psychologist Daniel Kahneman, to which this article will turn next.
Gigerenzer (2006) is clear in his rejection of “optimization” views of rationality, which he sometimes calls “unbounded rationality”. As he claims,
. . . it is time to rethink the norms, such as the ideal of omniscience. The normative challenge is that real humans do not need. . . unlimited computational power.
In place of optimization procedures, Gigerenzer argues that resource bounded agents ought to use “heuristics” which are computationally inexpensive and are tailored to the environment and abilities of the agent (and are, thus, “fast and frugal”). Rationality, for Gigerenzer, consists in the deployment of numerous, however disparate, fast and frugal heuristics that “work” in an environment.
To understand Gigerenzer’s view, it is helpful to consider several of his proposed heuristics. For the first example, consider the question of who will win the next Wimbledon tennis championship. One way to answer this question, perhaps in line with the optimality view of rationality, would be to collect vast amounts of player performance data and make statistical predictions. Surely, such a strategy is computationally intensive. Instead, Gigerenzer suggests that in some cases it would be rational to use the following heuristic:
Recognition Heuristic: If you recognize one player but not the other, then infer that the recognized player will win the particular Wimbledon match.
First, the recognition heuristic is obviously computationally cheap—it does not require informational search, deep database calculations, or the storage of large amounts of data. Second, the recognition heuristic is incredibly fast to deploy. Third, this heuristic is not applicable in all environments. Some agents will not be able to use this heuristic because they do not recognize any tennis player, and some agents will not be able to use this heuristic because they recognize every tennis player. Fourth, it is essential to note that proper use of the recognition heuristic, in Gigerenzer’s view, results in a normatively sanctioned belief or judgment. That is, when agents use the recognition heuristic in the appropriate environment, the resulting belief is rational. For example, if Mary only recognizes Roger Federer in the upcoming match between Federer and Rafael Nadal, then it is rational for her to believe that Federer will win.
Some may find this last result surprising or counterintuitive—after all, Mary may know very little about tennis, so how can she have a rational belief that a particular player will win? Gigerenzer would reply that such surprise or counterintuitiveness probably results from holding an optimality view of rationality. Gigerenzer’s project is an attempt to argue that rationality does not consist in gathering large amounts of information and making predictions on this basis. Rather, Gigerenzer thinks that rationality consists in using limited amounts of information in efficient or strategic ways, with the caveat that the proper notions of efficiency and strategy are not idealized notions, but concern the agent’s cognitive limitations and environment.
Now turn to the important question: does the recognition heuristic work? Gigerenzer (2007) found that in approximately 70% of Wimbledon matches, the recognition heuristic predicted the winning player. That is, for agents who are “partially ignorant” about tennis (those who know something about tennis but are not experts), the recognition heuristic gives better-than-chance predictive success.
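The rule itself is simple enough to state as a short function. A minimal sketch follows; the set of recognized names is an illustrative assumption standing in for a “partially ignorant” fan’s knowledge:

```python
# Minimal sketch of the recognition heuristic for a two-player match.
# The `recognized` set is an illustrative assumption.

recognized = {"Roger Federer"}  # what a partially ignorant fan knows

def recognition_heuristic(player_a: str, player_b: str) -> str | None:
    """Predict a winner only when exactly one player is recognized."""
    a_known = player_a in recognized
    b_known = player_b in recognized
    if a_known and not b_known:
        return player_a
    if b_known and not a_known:
        return player_b
    return None  # not applicable: both or neither player recognized

print(recognition_heuristic("Roger Federer", "Rafael Nadal"))
# -> "Roger Federer" (Nadal is unrecognized in this toy knowledge set)
```

Note that the heuristic declines to answer when it does not apply, which mirrors the point above that full experts and complete novices cannot use it.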
Consider another heuristic. Humans need to track objects in the environment, such as potential threats and sources of food. One way to track an object would be to calculate its trajectory using properties of force, mass, and velocity and a series of differential equations. Some AI systems attempt to do just this. It is clear that humans do not explicitly solve differential equations to track objects, but it is also not obvious that humans do this even at a subconscious or automatic level. Gigerenzer (2007) proposes that humans use a “gaze heuristic” in specific situations. For example, consider the problem of tracking an oncoming plane while flying an airplane. One way to infer where an approaching plane will be is to use a series of mathematical formulae involving trajectories and time. A second way would be to use the following gaze heuristic:
Gaze Heuristic: Find a scratch or mark in your airplane windshield. If the other plane does not move relative to this mark, dive away immediately.
As with the recognition heuristic, the gaze heuristic is computationally cheap and fast. Further, this heuristic is not liable to induce calculation errors (as may be the case with the strategy of using mathematical equations).
Gigerenzer has also argued that a version of the gaze heuristic is used by outfielders when attempting to catch fly balls. This heuristic consists of the following instructions: fix your gaze on the ball, start running, and adjust your running speed so that the image of the ball rises at a constant rate. Interestingly, Shaffer et al. (2004) attached a small camera to dogs while they were fetching thrown frisbees, and it appears that dogs, too, may use the gaze heuristic. If so, a plausible explanation fits with Gigerenzer’s proposal: in the face of resource limitations, many agents use inference strategies that are fast and frugal and work in their environment.
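One rough way to render the outfielder’s rule as code is as a single feedback correction per time step; everything below (angle units, target rate, gain, and sign convention) is an illustrative assumption, not a model from Gigerenzer or Shaffer et al.:

```python
# Schematic sketch of the outfielder's gaze heuristic as a feedback
# loop: nudge running speed so that the ball's image rises at a
# constant rate. No trajectory equations are solved anywhere.

def adjust_running_speed(speed: float,
                         gaze_angle_now: float,
                         gaze_angle_before: float,
                         target_rate: float,
                         gain: float = 0.5) -> float:
    """Return a corrected speed based only on how fast the gaze angle
    to the ball is changing, compared with the desired constant rate."""
    observed_rate = gaze_angle_now - gaze_angle_before  # per time step
    error = observed_rate - target_rate
    return speed + gain * error  # sign and gain are illustrative

print(adjust_running_speed(3.0, 0.42, 0.40, target_rate=0.015))
# -> 3.0025: the image rose slightly too fast, so speed is nudged up
```

The design point matches the prose: the controller needs only two successive gaze readings, not the ball’s mass, force, or velocity.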
One initial worry for Gigerenzer’s project of finding fast and frugal heuristics is that it is not clear there are enough heuristics to explain humans’ general rationality. If a non-expert correctly infers that an American will hit the most aces during Wimbledon, was this an inference based on the recognition heuristic (it is not obvious that it must be), or is there an additional heuristic that is used (perhaps a new heuristic that only concerns aces hit in a tennis match)? Gigerenzer is clear in his rejection of “abstract” or “content-blind” norms of reasoning that are general-purpose reasoning strategies, but his alternative view may be forced to posit a vast number of heuristics to explain humans’ general rationality. Further, a cognitive system that is able to correctly deploy and track a vast number of heuristics does not obviously have a clear computational advantage.
A second worry concerns the “brittleness” of the proposed heuristics. For example, referencing the above-mentioned recognition heuristic, what ought one to infer in the case of a tennis match where the recognized player becomes injured on court? Of course, the recognition heuristic is not adaptable enough to handle this additional information (with the idea being that injured players, however excellent, are typically unlikely to win). So, there may be instances in which it is rational to override the use of a heuristic. But positing a cognitive system that monitors relevant additional information and judges whether and when to override the use of a specific heuristic might erase much of the alleged computational advantage that heuristics seem to provide.
b. Failing to Meet the Standard
This article will now address the remaining way in which theorists accommodate the facts of cognitive limitations in their normative theorizing. Some philosophers and psychologists have used facts about cognitive limitations to argue that humans fail to meet various normative standards. For example, one might argue that humans’ inherent memory limitations and corresponding inability to check beliefs for logical consistency entail that humans are systematically irrational. One might argue that humans’ inherent inability to survey all relevant information in a domain entails that all humans are systematically deluded in that domain. Or, concerning morality, one might attempt to argue that cognitive limitations entail that humans must be systematically immoral, because no human could ever make the required utility calculations (of course, under the assumption of a particular consequentialist moral theory).
Though all of the example positions in the above paragraph are somewhat simplistic, they roughly share the following features: (i) the claim of a somewhat idealized or “difficult to attain” normative standard, and (ii) the claim that facts about cognitive limitations are relevant to the normative enterprise and show that agents cannot meet this standard. As a quick review of material covered in previous sections, theorists such as Herbert Simon, John Pollock, Christopher Cherniak, and Gerd Gigerenzer would reject feature (i) because, in very general terms, they have argued that cognitive limitations provide reason for thinking that the relevant normative standards are neither idealized nor “difficult to attain”. Proponents of the idealization strategy, such as many Bayesians in epistemology, would reject (ii), because they view the cognitive limitations of particular cognitive agents as irrelevant to the normative enterprise.
i. Kahneman and Tversky’s “Heuristics and Biases” Program
Daniel Kahneman and Amos Tversky are responsible for one of the most influential research programs in cognitive psychology. Their basic view is that human agents reason and make judgments by using cognitive heuristics, and that these heuristics sometimes produce systematic errors. Hence the label “heuristics and biases”. Though Kahneman and Tversky have taken a nuanced position regarding the overall rationality of humans, others such as Piattelli-Palmarini (1994) have argued that work done in the heuristics and biases program shows that humans are systematically irrational.
Before discussing some of Kahneman and Tversky’s findings, it is important to note two things. First, though both Gigerenzer and Kahneman and Tversky use the term “heuristics”, these theorists plausibly mean to describe different mechanisms. For Gigerenzer, reasoning heuristics are content-specific and are typically tied to a particular environment. For Kahneman and Tversky, heuristics are understood more broadly as “shortcut” procedures for reasoning, or as reasoning strategies that exclude some kinds of information. Notoriously, Gigerenzer is critical of Kahneman and Tversky’s characterization of heuristics, claiming that their notion is too vague to be useful. Second, Gigerenzer and Kahneman and Tversky evaluate heuristics differently. For Gigerenzer, heuristics are normatively good (in situations where they are “ecologically rational”), and they are an essential component of rationality. Kahneman and Tversky, however, typically view heuristics as normatively suspect, since they are liable to lead to error.
To begin, consider Kahneman and Tversky’s heuristic of “representativeness”. As they say, “representativeness is an assessment of the degree of correspondence between a sample and a population, an instance and a category, an act and an actor or, more generally, between an outcome and a model”. Using the representativeness heuristic, for example, a subject might infer that a typical summer day is warm and sunny because such days are common and frequent, and thus representative.
Kahneman and Tversky claim that the representativeness heuristic drives some proportion of human probability judgments. They also claim that the use of this heuristic for probability judgments leads to systematic error. In one experiment, Tversky and Kahneman (1983) gave subjects the following description of a person and then asked them a probability question about it. This is the well-known “Linda the bank teller” description: “Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations”. Next, Kahneman and Tversky asked subjects which of two statements was more probable (given the truth of the above description): (T) Linda is a bank teller, or (T&F) Linda is a bank teller and is active in the feminist movement. Kahneman and Tversky report that approximately 85% of subjects judged (T&F) more probable than (T). Before discussing the alleged incorrectness of this judgment, why might subjects make it? The thought is that, given the description of Linda as an activist in social justice movements and perhaps a philosophy major, (T&F) is more representative of Linda than (T). If Kahneman and Tversky are right in thinking that representativeness drives judgments about probability, then their model could explain the result of the Linda case.
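A toy model can illustrate how a similarity-driven judgment ranks the conjunction above the lone conjunct. The feature sets and the overlap score below are invented for the example; this is an illustration of the general idea, not Kahneman and Tversky’s own model.

```python
# Invented feature sets: what the description says about Linda, and
# crude stereotypes for the two statements being judged.
linda = {"outspoken", "bright", "philosophy", "social_justice", "anti_nuclear"}

stereotypes = {
    "(T) bank teller": {"conventional", "quiet", "finance"},
    "(T&F) feminist bank teller": {"conventional", "finance", "outspoken",
                                   "social_justice", "anti_nuclear"},
}

def representativeness(description, stereotype):
    # Similarity as feature overlap: the fraction of the stereotype's
    # features that match the description.
    return len(description & stereotype) / len(stereotype)

for label, features in stereotypes.items():
    print(label, round(representativeness(linda, features), 2))
# Adding "feminist" adds matching features, so the conjunction scores
# HIGHER on similarity even though it can never be more probable.
```

Because the conjunction shares more features with the description, a judge who scores statements by representativeness ranks (T&F) above (T), which is exactly the majority response Kahneman and Tversky report.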
But ought agents to judge that (T&F) is more probable than (T), given the description of Linda? This is the important normative question. Kahneman and Tversky rely on the probability calculus as providing the normative standard. According to many versions of the probability calculus, prob(a) ≥ prob(a&b), regardless of the chosen a or b. This may be called “the conjunction rule” for probabilities. The basic idea is that a narrower or smaller class of objects is never more probable than a larger class, and that the overlap of two classes cannot be larger than either of the individual classes. For example, which class is larger, the class of all trucks (Tr) or the class of all white trucks (W&Tr)? Clearly, the class of all trucks, because every white truck is also a truck. So, which is more probable, that there is a truck parked in front of the White House right now (Tr) or that there is a white truck parked in front of the White House right now (W&Tr)? Plausibly, it is more likely that there is a truck parked in front of the White House (Tr), because any white truck is also a truck, and hence would also count toward the likelihood of there being a truck parked there.
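The conjunction rule can also be checked mechanically on any concretely modeled population, since the cases satisfying a&b are by construction a subset of the cases satisfying a. A minimal sketch, with a randomly generated toy population whose proportions are invented for the illustration:

```python
import random

random.seed(0)
# A toy population of 100,000 vehicles; each is (is_truck, is_white).
world = [(random.random() < 0.3, random.random() < 0.5)
         for _ in range(100_000)]

p_truck = sum(is_truck for is_truck, _ in world) / len(world)
p_white_truck = sum(is_truck and is_white
                    for is_truck, is_white in world) / len(world)

print(f"prob(Tr)   = {p_truck:.3f}")
print(f"prob(W&Tr) = {p_white_truck:.3f}")
# Every white truck is a truck, so this can never fail:
assert p_truck >= p_white_truck
```

No way of filling in the features can make the white-truck frequency exceed the truck frequency; that subset relation is all the conjunction rule asserts.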
Kahneman and Tversky appeal to the probability calculus as providing the normatively correct rule of reasoning for the Linda case. Because 85% of subjects responded that (T&F) was more probable than (T), against the conjunction rule, Kahneman and Tversky claim that most subjects made an incorrect judgment. So, on their view, this is a case where resource limitations cause human agents to use shortcut procedures such as the representativeness heuristic, and the representativeness heuristic gets the wrong answer. Thus, the representativeness heuristic is responsible for a cognitive bias.
The alleged cognitive bias in the Linda case is just one part of Kahneman and Tversky’s overall heuristics and biases program. They have argued that human subjects make errors involving insensitivity to prior probabilities, insensitivity to sample size, misconceptions of chance, and misconceptions of regression. Importantly, these claims rely on the probability calculus as providing the correct normative standard. But should we think that the probability calculus provides the correct normative standard for rationality?
One straightforward reason for thinking that the probability calculus provides the correct normative standard for rational belief concerns logical consistency. Violating the standard axioms of the probability calculus entails holding a set of inconsistent probabilistic statements. Accordingly, degrees of belief that satisfy the probability calculus are often called “coherent” degrees of belief. For reasons similar to those given in the introduction to this article, it is often thought that it is not rational to believe a set of inconsistent propositions. Thus, it seems rational to obey the probability calculus.
However, there are significant worries with thinking that the probability calculus provides the correct normative standard for rationality. First, following the rules of the probability calculus is computationally demanding. Independent of Kahneman and Tversky’s experimental results, we should anticipate that few humans would be able to maintain coherent degrees of probabilistic belief, for reasons of computational complexity alone. This observation would entail that humans are not rational, yet this goes against our commonsense view that humans are often quite rational. Indeed, it might be difficult to explain how we are able to predict human behavior without the corresponding view that humans are usually rational. Insofar as our commonsense view of human rationality is worth preserving, we have reason to think that the probability calculus does not provide a correct normative standard.
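The complexity point can be made concrete. Full probabilistic coherence requires that one’s degrees of belief be consistent with some joint distribution over every combination of one’s logically independent propositions, and the number of combinations doubles with each added proposition. A back-of-the-envelope sketch, with proposition counts and processing speed chosen arbitrarily for illustration:

```python
# Full coherence over n logically independent binary propositions means
# a consistent joint distribution over 2**n elementary possibilities.
OPS_PER_SECOND = 1e9  # a generously fast reasoner (an assumption)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for n in (10, 50, 100):
    states = 2 ** n
    years = states / OPS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n:>3} propositions -> {states:.2e} possibilities, "
          f"~{years:.1e} years to enumerate once")
```

Even at a billion checks per second, one hundred ordinary beliefs put a single exhaustive pass thousands of times beyond the age of the universe.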
A second worry concerns tautologies. According to standard interpretations of probability, every tautology is assigned probability 1. But if the probability calculus provides a normative standard for belief, then it is rational for us to believe every tautology (for any set of evidence e). This seems wrong. There are many complex propositions that are difficult to parse, interpret, or even understand, but that are nonetheless tautologies. Until one recognizes such a proposition as a tautology, it does not seem rational to believe it.
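Even recognizing a tautology is computationally nontrivial: the brute-force method checks every row of the formula’s truth table, and the table doubles with each propositional variable. A small sketch using Peirce’s law, ((p → q) → p) → p, a classical tautology that few readers identify as such at a glance (the helper names are invented for the example):

```python
from itertools import product

def implies(a, b):
    # Material conditional: "a -> b" is false only when a is true, b false.
    return (not a) or b

def is_tautology(formula, num_vars):
    """Brute-force check: evaluate the formula on all 2**n truth-table rows."""
    return all(formula(*row) for row in product((True, False), repeat=num_vars))

# Peirce's law: ((p -> q) -> p) -> p.
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_tautology(peirce, 2))  # True
```

The probability calculus assigns such a formula probability 1 whether or not anyone has run the check; the worry is that, before the check, it does not seem irrational to withhold belief.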
A third and final worry concerns the psychological nature and phenomenology of belief. If the probability calculus provides the correct normative standard for belief, then most of our contingent beliefs (for example, “the coffee cup is on the desk”) will have a precise numerical probability assignment, and this number will be less than 1. Call beliefs with probability less than 1 but greater than 0.5 “likely beliefs”. Many of our familiar contingent beliefs will be likely beliefs (thus receiving some number assignment such as 0.99785), but it is unclear that our cognitive systems would be able to store, let alone compute, such vast amounts of probabilistic information. Belief does not seem to work this way. There are, of course, projects in artificial intelligence that attempt to model similar probabilistic systems, but their results have not been universally convincing. Second, the phenomenology of belief suggests that many of our contingent beliefs are not “graded” entities that admit of some number, but are binary or “full” beliefs. When one believes that “the coffee cup is on the desk”, it often feels like one “fully” believes it, rather than merely “partially” believing it (as would be required if the belief were assigned probability 0.99785). When reasoning about contingent matters of fact, we often treat our beliefs as full beliefs. Thus, the following reasoning seems both commonplace and acceptable, and does not require probabilities: “I think the coffee cup is in the office, so I should walk there to get the cup”. Hence, the phenomenology of belief gives a possible reason to doubt that the probability calculus provides the correct normative standard for belief.
4. References and Further Reading
a. References
Appiah, Anthony. (1990). “Minimal Rationality by Christopher Cherniak.” The Philosophical Review, 99 (1): 121–123.
Bartlett, Frederic C. (1932). Remembering: A Study in Experimental and Social Psychology, Cambridge, Cambridge University Press.
Cherniak, Christopher. (1986). Minimal Rationality, Cambridge, MIT Press.
An important work in the study of resource bounded agents. Discusses idealization in theories of rationality and conditions for agenthood.
Cowan, N. (2001). “The Magical Number 4 in Short-Term Memory: A Reconsideration of Mental Storage Capacity.” Behavioral and Brain Sciences, 24: 87–185.
Feldman, Richard and Conee, Earl. (1985). “Evidentialism.” Philosophical Studies, 48: 15–34.
Contains a discussion of “ought implies can” principles in epistemology.
Gigerenzer, Gerd. (2006). “Bounded and Rational.” In Stainton, Robert J. (ed.) Contemporary Debates in Cognitive Science, Oxford, Blackwell.
Gigerenzer, Gerd. (2007). Gut Feelings: The Intelligence of the Unconscious, New York, Viking.
Summarizes and illustrates Gigerenzer’s program of “fast and frugal” heuristics, and is intended for a wide audience.
Lavie, N. (1995). “Perceptual Load as a Necessary Condition for Selective Attention.” Journal of Experimental Psychology: Human Perception and Performance, 21: 451–468.
Lavie, N. (2005). “Distracted and Confused? Selective Attention Under Load.” Trends in Cognitive Sciences, 9: 75–82.
Macrae, C.N. and MacLeod, M.D. (1999). “On Recollections Lost: When Practice Makes Imperfect.” Journal of Personality and Social Psychology, 77: 463–473.
Miller, George A. (1956). “The Magical Number Seven, Plus or Minus Two: Some Limits On Our Capacity For Processing Information.” The Psychological Review, 63 (2): 81–97.
Classic paper on memory limitations and an early example of work in cognitive science and cognitive psychology.
Piattelli-Palmarini, Massimo. (1994). Inevitable Illusions: How Mistakes of Reason Rule Our Minds, New York, John Wiley and Sons.
Applies elements of the “heuristics and biases” program and argues that these results help reveal common errors in judgment.
Pollock, John. (2006). Thinking About Acting: Logical Foundations for Rational Decision Making, Oxford, Oxford University Press.
Applying work from epistemology and cognitive science, Pollock proposes a theory of rational decision making for resource bounded agents.
Sears, Christopher R. and Pylyshyn, Zenon. (2000). “Multiple Object Tracking and Attentional Processing.” Canadian Journal of Experimental Psychology, 54 (1): 1–14.
Shaffer, Dennis M., Krauchunas, Scott M., Eddy, Marianna, and McBeath, Michael K. (2004). “How Dogs Navigate to Catch Frisbees.” Psychological Science, 15 (7): 437–441.
Simon, Herbert A. (1955). “A Behavioral Model of Rational Choice.” The Quarterly Journal of Economics, 69 (1): 99–118.
Simon, Herbert A. (1956). “Rational Choice and the Structure of the Environment.” Psychological Review, 63 (2): 129–138.
An early description of the satisficing procedure.
Stroop, J.R. (1935). “Studies of Interference In Serial Verbal Reactions.” Journal of Experimental Psychology, 18: 643–662.
Thagard, Paul. (1982). “From the Descriptive to the Normative in Psychology and Logic.” Philosophy of Science, 49 (1): 24–42.
Tversky, Amos and Kahneman, Daniel. (1983). “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.” Psychological Review, 90 (4): 293–315.
Contains the well-known “Linda” example of the conjunction fallacy in probabilistic judgment.
Xu, Yaoda and Chun, Marvin. (2009). “Selecting and Perceiving Multiple Visual Objects.” Trends in Cognitive Sciences, 13 (4): 167–174.
b. Further Reading
Bishop, Michael A. and Trout, J.D. (2005). Epistemology and the Psychology of Human Judgment, Oxford, Oxford University Press.
Discusses and offers critiques of various epistemic norms, often citing important work in cognitive science and cognitive psychology.
Christensen, David. (2005). Putting Logic in its Place, Oxford, Oxford University Press.
Provides discussion about the use of idealized models. Argues that the unattainability of idealized normative standards in epistemology does not undermine their normative force.
Gigerenzer, Gerd and Selten, Reinhard (eds.). (2001). Bounded Rationality: The Adaptive Toolbox, Cambridge, MIT Press.
An influential collection of papers on bounded rationality.
Goldstein, E. Bruce. (2011). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. Belmont, Wadsworth.
Introductory text in cognitive psychology. Some of the examples of cognitive limitations from section 1 were drawn from this text.
Kahneman, Daniel. (2011). Thinking, Fast and Slow. New York, Farrar, Straus, and Giroux.
Provides an overview of the “heuristics and biases” program and the two-system model of judgment.
Morton, Adam. (2012). Bounded Thinking: Intellectual Virtues for Limited Agents, Oxford, Oxford University Press.
A virtue-theoretic account of bounded rationality and bounded thinking. Addresses how agents should manage limitations.
Rubinstein, Ariel. (1998). Modeling Bounded Rationality, Cambridge, MIT Press.
Provides examples of formal models for resource bounded agents.
Rysiew, Patrick. (2008). “Rationality Disputes — Psychology and Epistemology.” Philosophy Compass, 3 (6): 1153–1176.
Good discussion and overview of the “rationality wars” debate in cognitive science and epistemology.
Simon, Herbert A. (1982). Models of Bounded Rationality, Vol. 2, Behavioral Economics and Business Organization. Cambridge, MIT Press.
Collection of some of Simon’s influential papers on bounded rationality and procedural rationality.
Weirich, Paul. (2004). Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances, Oxford, Oxford University Press.
Argues for principles of decision making that apply to realistic, non-ideal agents.
Author Information
Jacob Caton
Email: [email protected]
Arkansas State University
U. S. A.