The Safety Condition for Knowledge
A number of epistemologists have defended a necessary condition for knowledge that has come to be labeled the “safety” condition. Timothy Williamson, Duncan Pritchard, and Ernest Sosa are the foremost defenders of safety. According to these authors, an agent S knows a true proposition P only if S could not easily have falsely believed P. Disagreement arises, however, with respect to how each captures the notion of a safe belief.
Unlike Pritchard and Sosa, who have gone on to incorporate the safety condition into a virtue account of knowledge, Williamson distances himself from the project of offering reductive analyses of knowledge. Williamson’s project can best be thought of as an illumination of the structural features of knowledge by way of safety. The maneuvers of Pritchard and Sosa into the domain of virtue epistemology are not discussed here.
This article is a treatment of the different presentations and defenses of the safety condition for knowledge. Special attention is first paid to an elucidation of the various aspects or features of the safety condition. Following a short demonstration of the manner in which the safety condition handles some rather tough Gettier-like cases in the literature, some problems facing safety conclude this article.
Table of Contents
Historical Background
The Safety Condition as a Necessary Condition for Knowledge
Timothy Williamson
Duncan Pritchard
Ernest Sosa
Elucidating the Safety Condition
What Counts as a Close World?
The Time Factor
What Type of Reliability does Safety Require?
Methods
Skepticism
How do the Safety and Sensitivity Conditions Differ?
The Semantics of Safety
Safety in Action
Gettier and Chisholm
Fake Barns
Matches
Problems for Safety
Knowledge of Necessarily True Propositions
Knowledge of the Future
Williamson’s Response
Pritchard’s Response
Safety and Determinism
References and Further Reading
1. Historical Background
Knowledge is incompatible with accidentally true belief. That is, if an agent S is lucky that her belief P is true, S does not know P. This feature of knowledge was made explicit by Bertrand Russell (1948: 170) and, more famously, by Edmund Gettier (1963), who demonstrated that a justified true belief (JTB) is insufficient for knowledge. Gettier provided us with cases in which there is strong intuitive pull towards the judgment that S can have a justified true belief P yet not know P because S is lucky that S’s belief P is true. To use Russell’s case, suppose S truly believes it’s noon as a result of looking at a clock that correctly reads noon. However, unbeknownst to S this clock broke exactly twelve hours prior. Even though S has good reasons to believe it’s noon and S’s belief is true, S does not know it’s noon since S is lucky that her belief is true.
Several notable attempts were made to improve the JTB analysis of knowledge; in particular, some were attracted to the idea that a stronger justification condition would resolve Gettier problems (Shope 1983: 45-108). Thus began the vast literature on the nature of epistemic justification. Others, though disagreeing among themselves about the place of justification in an account of knowledge, sought a solution to the Gettier problem in a new anti-luck condition for knowledge. (The majority of these accounts dropped the justification requirement.) One of these attempts is particularly relevant here. Fred Dretske (1970) and Robert Nozick (1981) proposed accounts of knowledge central to which was a counterfactual condition, Nozick’s being the more popular of the two. Nozick proposed the following counterfactual as a necessary condition for knowledge (1981: 179): S knows P via a method M only if, were P false, S would not believe P via M [¬P ☐→ ¬B(P)]. This came to be termed the sensitivity condition for knowledge. To satisfy this condition it must be the case that in the closest world in which P is false, S does not believe P. That is, S must track the truth of P to know P (where possible worlds are ordered as per their similarity to the actual world).
Nozick’s account enjoyed widespread popularity because of its anti-skeptical capabilities. Following Nozick, I count as knowing that there is a tree in my garden since I would not believe that if none were planted there; that is, in the closest world in which there is no tree in my garden (for example, when none is planted there), I do not believe that there is a tree in my garden. Worlds where radically skeptical scenarios are true count as further off since those worlds are more dissimilar to the actual world than the world in which no tree is planted in my garden. That I would believe falsely in those worlds is thus irrelevant. In other words, that I would falsely believe in such a far-off world is inconsequential to whether I believe truly in the actual world.
Nozick’s account came with two significant costs, however. First, it cannot accommodate the very intuitive principle that knowledge is closed under known entailment. Roughly, this principle states that if S knows P and S knows that P entails Q, then S knows Q. It follows, then, that if I know that I have hands, and I know that my having hands entails that I am not a handless brain in the vat, then I know that I am not a handless brain in the vat. However, I fail to know that I am not a handless brain in the vat since I would falsely believe I was not a handless brain in the vat in the closest world in which the proposition “I am not a handless brain in the vat” is false (that is, the world in which I am a handless brain in the vat). In other words, the sensitivity condition for knowledge cannot be satisfied when it comes to the denial of radically skeptical hypotheses. Seeing no way to redeem his account from this problem, Nozick (1981: 198ff) was forced into the rather unorthodox position of having to deny the universal applicability of closure as a feature of knowledge.
Second, Nozick admits that the sensitivity condition cannot feature as a condition for knowledge of necessarily true propositions, as there is no world in which such propositions are false since, by definition, necessarily true propositions are true in every possible world. The scope of the sensitivity condition is thus limited to knowledge of contingently true propositions. That the sensitivity condition cannot, for example, illuminate the nature of our mathematical or logical knowledge makes it less preferable, ceteris paribus, than a condition that can.
At the end of the twentieth century and the beginning of the twenty-first, several authors proposed a novel and relatively similar condition for knowledge that has come to be known as the safety condition, the elucidation of which is the objective here. As the relevant features of the safety condition are presented and explained, the following salient points will emerge. The safety condition is similar to the sensitivity condition in that it too is a modal condition for knowledge. That’s where any significant similarity ends. As shall be demonstrated at length, safety differs from sensitivity in the following ways. First, and most importantly, safety permits knowing the denial of a radically skeptical hypothesis in a manner that maintains the closure principle. This advantage by itself acts as a strong point in favor of the safety condition. Second, most formulations of the safety condition are not in the form of a counterfactual. Third, the safety condition is more expansive than the sensitivity condition in that its scope includes knowledge of both necessarily true and contingently true propositions. Lastly, epistemologists since then generally believe the safety condition opens the way to a more enlightened response to skepticism.
2. The Safety Condition as a Necessary Condition for Knowledge
The literature on the safety condition is challenging for even the seasoned philosopher. Seeing that Williamson, Pritchard, and Sosa have developed their thoughts over a lengthy period of time and in a large number of publications, it has become quite a task to keep track of the epicycles in the conceptual development and defense of the safety condition. In addition, each of its advocates is motivated to formulate the safety condition in a distinct way, where even slight differences in formulation make for significant conceptual divergence. In light of these considerations, it is best to begin with a separate treatment of each author’s presentation of the safety condition before proceeding to an overall elucidation of the safety condition.
a. Timothy Williamson
Williamson (2000) is involved in the project of illuminating several structural features of knowledge. His safety condition is both the result of this project and an integral part of it.
Williamson, in stark opposition to the standard practice in the post-Gettier period, resists being drawn into offering a conceptual analysis of knowledge in terms of non-trivial and non-circular necessary and jointly sufficient conditions for knowledge, a project he thinks is futile given its history of repetitive failures. Knowledge, for Williamson, requires avoidance of error in similar cases. This is to be taken as a schema by which to elucidate the structural features of knowledge. The basic idea, then, is that S knows P only if S is safe from error; that is, there must be no risk or danger that S believes falsely in a similar case. The relevant modal notions of safety, risk, and danger are cashed out in terms of possible worlds such that there is no close world surrounding the actual world in which S falls into error: “safety is a sort of local necessity” (Williamson 2009d: 14). These possible worlds in which S truly believes P act as a kind of “buffer zone” from error and thereby prevent the type of epistemic luck that characterizes Gettier cases. In Russell’s case, for example, S does not know that it’s noon since there is a close world in which S falsely believes that it’s noon, for example, one in which S looks at the clock slightly before or after noon.
Despite Williamson’s opposition to the project of analyzing knowledge into non-trivial necessary and sufficient conditions, it seems clear enough that Williamson should be read as putting forward safety as a necessary condition for knowledge, given that he presents the safety condition as a conditional using the appropriate locution “only if.” And this is how his critics typically interpret him. As Williamson formulates the safety condition in a number of different ways on different occasions, it is impossible to pin down one formulation as representing Williamson’s view. Here is one formulation that will suffice for the time being:
(SF) If one knows, one could not easily have been wrong in a similar case (2000: 147)
Nevertheless, the manner in which Williamson expresses the safety condition still separates him from those who offer necessary conditions for knowledge. As Williamson emphasizes time and again, the safety condition, as he states it, is to be taken as a circular necessary condition in that whether or not a case β counts as a relevantly similar case to α is in part determined by whether we are inclined to attribute knowledge to the agent in α; safety (reliability in similar enough cases) cannot be defined without reference to knowledge, and knowledge without reference to safety (2000: 100; 2009d: 9-10). Safety, by Williamson’s lights, is thus not to be taken as a recipe by which to determine whether or not the agent is to be attributed knowledge in each and every case. Rather, it is a model by which to illuminate the structural features of knowledge and by which we can begin talking about knowledge in individual cases. As Williamson observes, “the point of the safety conception of knowing is not to enable one to determine whether a knowledge attribution is true in a given case by applying one’s general understanding of safety, without reference to knowing itself. If one tried to do that, one would very likely get it wrong” (2009d: ibid.).
b. Duncan Pritchard
Pritchard is attracted to the idea that a conceptual analysis of our concept “luck” will yield sufficient insight from which to build a satisfactory anti-luck condition for knowledge. In other words, with a conceptual mastery of the nature of luck in hand, the problem of epistemic luck can, in theory, be overcome. Pritchard’s safety condition is thus formulated in a manner which reflects his work on the conceptual analysis of luck.
For Pritchard (2005: 128) an event E counts as lucky for an agent S if and only if E is significant to S and E obtains in the actual world but does not obtain in a wide class of nearby possible worlds in which the relevant initial conditions for that event are the same as in the actual world. For example, one is lucky not to be killed by a sniper’s bullet since in the actual world the bullet misses but it does not miss in the close worlds. Importantly, luck comes in degrees. One naturally counts as lucky if the sniper missed by a meter. But one counts as luckier if the sniper missed by a centimeter. The agent counts as luckier in the latter case, claims Pritchard (2009b: 39), because the world in which one is killed in that case is much closer to the actual world than the world in which one is killed in the former case (since the second sniper is better). Close worlds, then, can be roughly divided into two classes—near and non-near—and are to be weighted accordingly: the near close worlds count for more than the non-near close worlds.
The foregoing analysis of luck motivates the following analysis of epistemic luck: S is very lucky that her belief P is true in the actual world if P is false in at least one of the near close worlds. And S is lucky, but not as lucky, that her belief P is true if P is true in the actual world but false in at least one of the non-near close worlds. Stated otherwise, false belief in a very close world is incompatible with knowledge while false belief in a non-near close world is compatible with knowledge. Here is a formulation of the safety condition by Pritchard (2007, 2008, 2009a) as a non-circular necessary condition in a standard account of knowledge:
(SF*) S’s belief is safe if and only if in most nearby possible worlds in which S continues to form her belief about the target proposition in the same way as in the actual world, and in all very close nearby possible worlds in which S continues to form her belief about the target proposition in the same way as in the actual world, the belief continues to be true.
That is to say, as long as S truly believes P using the same method in all of the very close worlds and in nearly all of the non-near close worlds, S’s belief P is safe. Consequently, the agent in Russell’s case fails to know that it’s noon since there is a very close world in which she falsely believes that it’s noon, for example, the very close world in which that clock stops slightly before or after noon.
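For readers who find it helpful to see the two-tier structure of (SF*) spelled out mechanically, here is a minimal sketch. It assumes, purely for illustration, that the close worlds have already been sorted into “very close” and “non-near close” and that each entry records whether S, believing P via the actual-world method, believes truly there; the 0.9 cutoff for “most” is likewise an illustrative assumption, not Pritchard’s.

```python
# Illustrative sketch of (SF*)'s two-tier test. Inputs are lists of
# booleans: True means S's belief P (formed via the actual-world method)
# is true in that world. The 0.9 threshold for "most" is an assumption.

def sf_star_safe(very_close, non_near, most=0.9):
    if not all(very_close):          # zero tolerance in very close worlds
        return False
    if non_near and sum(non_near) / len(non_near) < most:
        return False                 # "most" suffices in non-near close worlds
    return True

# Russell's clock case: one very close world (the clock stopped slightly
# earlier) contains a false belief, so the belief is unsafe.
print(sf_star_safe(very_close=[True, False], non_near=[True, True]))  # False
```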
Pritchard’s focus is knowledge of contingently true propositions. However, Pritchard (2009a: 34) claims that to extend the safety condition to handle knowledge of a necessarily true proposition P, which is true in all possible worlds, his safety condition can easily be augmented in such a way as to require that S not falsely believe a different proposition Q in any very close world, nor in most non-near close worlds, using the same method that S used in the actual world.
Of the many ways in which Pritchard’s condition differs from Williamson’s, it is evident that they differ in one very important respect: Pritchard’s condition permits relatively few cases of falsely believing P in non-near close worlds. Pritchard is motivated to make this concession since his safety condition is informed by his account of luck, which counts S lucky vis-à-vis an event E even though E occurs in some non-near close worlds. Williamson’s condition, by contrast, has a zero tolerance policy for false belief in any close world.
c. Ernest Sosa
Sosa arrives at his formulation of the safety condition as a necessary condition for knowledge by way of working through some of the fundamental shortcomings he identifies in Goldman’s relevant alternatives condition and Nozick’s sensitivity condition. As Sosa puts it, both Goldman and Nozick failed to adequately capture the way in which the proposition believed must be modally related to the truth of that proposition. For Sosa, an agent S counts as knowing P only if S believes P by way of a safe method or, in Sosa’s words, a safe “indication” or “deliverance.”
Sosa’s formulation of the safety condition differs from both Williamson’s and Pritchard’s in that it is stipulated in the form of the following counterfactual (1999a: 146):
(SF**) If S were to believe P, P would be true [B(P) ☐→ P]
The following short note on counterfactuals helps explain the logic of (SF**). First, according to Lewis’s semantics for counterfactuals (1973), a counterfactual of the form P ☐→ Q is true at a world W only if some world in which P and Q are true is closer to W than any world in which P is true but Q false. Since Lewis thinks that the closest world to W is W itself, the counterfactual P ☐→ Q is trivially true at W if P and Q are both true at W. Consequently, when the antecedent of a counterfactual is true, it follows that P & Q entails P ☐→ Q. Nozick (1981: 176, 680 n.8) finds this result untenable and rejects Lewisian semantics for counterfactuals with true antecedents. Sosa concurs with Nozick. On their semantics for counterfactuals with true antecedents, the counterfactual B(P) ☐→ P is true at a world W if and only if S truly believes P by method M in W and in all close worlds in which S believes P by method M, P is true. It follows, then, that like (SF) and unlike (SF*), Sosa’s condition entails zero tolerance for false belief in any close world.
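Schematically, and only as a gloss on the two semantics just described (not a quotation from Lewis, Nozick, or Sosa), the contrast can be put as follows:

\[
\begin{aligned}
\text{Lewis:}\;\; & P \mathbin{\Box\!\!\rightarrow} Q \text{ is (non-vacuously) true at } W \text{ iff some } (P \wedge Q)\text{-world is closer to } W\\
& \text{than any } (P \wedge \neg Q)\text{-world;}\\[4pt]
\text{Nozick/Sosa:}\;\; & B(P) \mathbin{\Box\!\!\rightarrow} P \text{ is true at } W \text{ iff } S \text{ truly believes } P \text{ via } M \text{ in } W \text{ and } P \text{ is true}\\
& \text{in every close world in which } S \text{ believes } P \text{ via } M.
\end{aligned}
\]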
Second, though remarkably similar to the sensitivity condition, (SF**) is not logically equivalent to the sensitivity condition [¬P ☐→ ¬B(P)] since contraposition is invalid for counterfactuals. The following example, from Lewis (1973: 35), demonstrates that contraposition [(A ☐→ B) ↔ (¬B ☐→ ¬A)] is invalid for counterfactuals. Consider the following counterfactual: (A) If Boris had gone to the party, then Olga would still have gone. It should be clear that (A) is not equivalent to its contrapositive (B) If Olga had not gone to the party, then Boris would still not have gone, because although (A) is true, (B) is false, since Boris would have gone had Olga been absent from the party.
In light of these considerations about counterfactuals, Sosa’s formulation of safety explains why the agent in Russell’s case lacks knowledge. Since there is a close world in which he uses the same method as he does in the actual world at a slightly earlier or later time, namely consulting a broken clock, and thereby comes to falsely believe it to be noon, his belief in the actual world is not safe.
Sosa has since moved on from defending a “brute” safety condition as a necessary condition for knowledge. Sosa (2007, 2009) argues that an agent’s belief must be apt and adroit to count as knowledge, where such virtues differ from safety considerations.
3. Elucidating the Safety Condition
The presentation of the safety condition thus far has been intentionally bare-boned for introductory purposes. This section is devoted to spelling out the finer details or characteristics of the condition, which is a rather challenging task given the presence of some vague patches in the safety literature.
It goes without saying that for epistemic purposes possible worlds W1, …, WN count as relevantly closer to or further from the world W in which S believes P at time T on a case-by-case basis relative to most or all of the following factors: the belief P, the time T, the agent S, and the method M by which S formed the belief P at T in W. In other words, the conditions of belief formation, represented by the set {S, P, T, M}, play a constitutive, though not exclusive, role in a determination of closeness. (With respect to safety, one can either think of these possible worlds as branching possibilities à la Hawthorne and Lasonen-Aarnio (2009) or as concentric circles surrounding a subject-centered world, as Lewis (1973: 149) does in his semantics for counterfactuals.) It follows, then, that the adequacy of the safety condition will turn on, among other things, how close worlds are to be specified, the time of the belief formation, what type of reliability is at play, and how safety theorists understand methods. A foray into these important questions follows.
a. What Counts as a Close World?
Before attempting to answer this important question, four points must be made. First, to satisfy the safety condition it is not the case that in every close world the agent must truly believe the relevant proposition; that is, S can safely believe P in W even though there are close worlds in which the agent does not form the belief P, for example, where S does not believe the target proposition in several of the close worlds because S is distracted or preoccupied. For example, in world W S comes to believe that a car is approaching when S sees a car coming down the road. There may very well be a close world in which S is standing in exactly that same position at that very time but does not form the belief that a car is approaching because S turns her head in the opposite direction to look at a squirrel in a tree. The lack of belief about the approaching car in these close worlds does not prevent S from safely believing in W that a car is approaching. In light of such considerations, it is useful to consider close worlds as divided into two broad categories—relevant and irrelevant—a distinction which will prove important in the discussion on skepticism below.
Second, as Williamson points out (2000: 100), the safety condition is notoriously vague owing to knowledge and reliability being vague concepts. As such it is unlikely that we will arrive at a very determinate answer as to exactly which worlds count as close; our expectations must be lowered. The problems this vagueness generates will become evident in section 4.
Third, as Hawthorne (2004: 56) notes, closeness, as it pertains to safety, cannot be cashed out in terms of the notion of similarity found in counterfactuals. A counterfactual of the form P ☐→ Q is non-vacuously true at a world W only if some world in which P and Q are true is closer to W than any world in which P is true but Q false. When determining the truth conditions for counterfactuals, the history of both the actual world and the close world in which the antecedent is true are held fixed. When it comes to safety, possible worlds with a different history to W can nevertheless count as close, as will become evident below. In addition, unlike the similarity of two worlds for epistemic purposes, the similarity of worlds for the purposes of counterfactuals need not be agent-relative.
Lastly, it is unclear whether believing a truth-valueless proposition (for example, one that fails to refer) in a close world should count as a knowledge-denying error possibility. Hawthorne (2004: 56) thinks it should since these count as “failed attempts at a true belief.” None of the safety theorists discuss this type of case.
Now on to the four main determinants of closeness:
i. The Time Factor
As far as safety goes, two worlds W and W* may count as similar at a time T with respect to the set {S, P, M} yet count as distant from one another, with respect to that same set, at a time prior to or following T. Consequently, if S falsely believes P in W* at T, then S’s belief P in W at T is unsafe. The following two cases illustrate that for the purposes of safe belief closeness must be understood as indexed to a point in time.
Cases concerning knowledge of the future demonstrate that similarity between two worlds at the time of belief formation trumps dissimilarity at a later time. Suppose, for the sake of illustration, that in a world W at time T (sometime in May 2009) an agent S truly believes that London will host the 2012 Olympics as a result of reading so in a local newspaper. In many possible worlds S similarly believes as much from reading the paper. Yet in some of these worlds things in 2012 may be radically different from the way things will be in W in 2012 when the Olympics indeed take place in London. For example, in one of these worlds W* the British economy collapses and no Olympics take place in London in 2012. In W* S’s belief at T is thus false. Nevertheless, W and W* may count as close at T despite these significant differences between these two worlds in 2012, given how similar these two worlds are with respect to the set {S, P, T, M}, that is, the details of S’s belief episode at T in which she comes to believe that London will host the 2012 Olympics as a result of reading so in the newspaper.
It is not the case, however, that for a world W* to be close to world W at T it must share a complete history with W up to and including time T. The following case elucidates this point. It is taken for granted that if in W Sally walks into a showroom displaying red shoes under red overhead lights, she does not know that there are red shoes on display if there is a close world W* in which there are white shoes on display but which look red under red lights. Notice that W* counts as close to W at T even though they do not share an identical history: at T-N, where N is some duration, the factory owner in W* is placing white shoes on the display shelves and turning red lights on.
In addition, insisting on shared histories would make safety trivially true in some cases where the target proposition believed is true and concerns the present or the past; namely, were close worlds only those worlds which share complete histories with the actual world until the moment at which the belief is formed, then it would follow that in some cases the proposition believed would be trivially safe, which is an unacceptable consequence. Consequently, if I recall going to the gym yesterday, then I know I went to the gym yesterday only if there is no close world which differs from the actual world with respect to my going to the gym yesterday and in which I falsely believe I went to the gym yesterday.
There is room to think that the conceptual content of “could easily have falsely believed” permits playing around with the time of the belief formation itself. It stands to reason, then, that cases of belief formation in a possible world W* which occur shortly before or after the belief formation in W should be factored into knowledge determinations as well. If S forms a false belief in those cases then S’s belief in W is unsafe. The motivation for permitting this flexibility with the time factor is that it allows safety to handle a wide variety of cases in which time is part of the content of the proposition believed, as exemplified by the Russell case. For example, S looks at two people kissing at a new year’s party and forms the true belief that it’s the new year. S does not count as knowing that it’s the new year if there is a close world in which these two people begin kissing slightly before midnight, as a result of which S falsely believes it’s the new year.
ii. What Type of Reliability does Safety Require?
Reliability, as a property of a belief-forming method, comes in different kinds, two of which are important for the purpose at hand—local and global. The latter refers to a method M’s reliability in producing a range of token output beliefs in different propositions P, Q, R, …, and so forth. A method M is globally reliable if and only if it produces sufficiently more true beliefs than false beliefs in a range of different propositions. For example, M could be the visual process and P the proposition that there is a pencil on the desk, Q the proposition that there are clouds in the sky, and R the proposition that the bin is black. If a sufficiently high number of P, Q, R, … are true, then method M is globally reliable. A method M is locally reliable with respect to an individual target belief P if and only if M produces a sufficiently high ratio of true to false beliefs in that very proposition P.
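A toy model can make the local/global contrast vivid. The sketch below assumes, for illustration only, that each close world is represented by the token beliefs method M produces there together with their truth values; the world contents and the idea of a “sufficient ratio” are stand-ins, not anything the safety theorists themselves specify.

```python
# Each close world lists the (proposition, is_true) token beliefs that
# method M produces in it. Contents are illustrative placeholders.
close_worlds = [
    [("pencil on desk", True), ("clouds in sky", True)],
    [("pencil on desk", True), ("bin is black", False)],
    [("pencil on desk", False)],
]

def local_reliability(worlds, target):
    """Ratio of true token beliefs in the one target proposition."""
    verdicts = [t for w in worlds for (p, t) in w if p == target]
    return sum(verdicts) / len(verdicts) if verdicts else None

def global_reliability(worlds):
    """Ratio of true token beliefs across the whole range of propositions."""
    verdicts = [t for w in worlds for (_, t) in w]
    return sum(verdicts) / len(verdicts)

print(local_reliability(close_worlds, "pencil on desk"))  # 2 of 3 true
print(global_reliability(close_worlds))                   # 3 of 5 true
```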
Accounts of knowledge in the post-Gettier period differ with regard to which type of reliability is necessary for knowledge. Nozick and Dretske think only local reliability is needed, McGinn (1999) requires global reliability to the exclusion of local reliability, and Goldman (1986: 47) requires both. Where do Williamson, Pritchard, and Sosa fall on this spectrum? With respect to (SF**), it is evident from the manner in which Sosa formulates his safety condition that he thinks that only local reliability is necessary for knowledge in so far as (SF**) concerns truly believing a specific proposition P; that is, no mention is made of not falsely believing a different proposition Q. Notice, however, that as far as safety goes, Sosa requires that the agent exhibit perfect local reliability; that is, there can be no close world in which S falsely believes P.
Unlike Sosa, Pritchard, in order to handle knowledge of necessarily true propositions, requires global reliability, but a nuanced version thereof. Recall that Pritchard permits some false beliefs in non-near close worlds but has zero tolerance for false beliefs in the nearer close worlds. Therefore, Pritchard can be classified as requiring perfect global reliability in the near close worlds and regular global reliability in the non-near close worlds. In addition, both Pritchard and Sosa permit falsely believing P in a close world via a different method than the one used in the actual world.
With regard to Williamson, it is much harder to pin down the type of reliability at work in (SF). As mentioned, Williamson formulates the safety condition in different ways in different places. Some of these formulations clearly advocate for local reliability only, while others incorporate global reliability. And, further still, others push for subtler versions of both. Starting with local reliability, consider this formulation:
(SF1) “[I]n a case α one is safe from error in believing that [a condition] C obtains if and only if there is no case close to α in which one falsely believes that C obtains” (2000: 126-7).
A condition, for Williamson (ibid.: 52), is specified by a “that” clause relative to an agent and a time. Thus, “S believes that the tree is i inches tall” counts as S believing that a certain condition obtains. According to Williamson (ibid.: 114ff), a typical agent who looks at a tree and believes “that the tree is i inches tall” does not know “that the tree is i inches tall” because there is a close world in which the agent uses that same method and comes to falsely believe “that the tree is i inches tall” when in fact it is i+1 inches tall. Most people are unreliable about the height of trees to the nearest inch because our eyesight is not that powerful; we cannot tell the height of a tree to the nearest inch just by looking at it. This case demonstrates that Williamson requires local reliability since this is a case in which the agent lacks knowledge because there is a close world in which he falsely believes the same proposition using the same method as that used in the actual world. Given that for Williamson safe belief entails zero tolerance for false belief in a close world, Williamson requires perfect local reliability.
Here is another formulation of the safety condition by Williamson (2000: 124):
(SF2) “One avoids false belief reliably in α if and only if one avoids false belief in every case similar enough to α.”
This formulation seems to rule out knowledge in the following case. Pat is pulling cards, on which sentences are written, out of a hat. Pat pulls the first out and upon reading it truly believes that oranges are fruits. Pat then pulls a second card out and upon reading the sentence written on it falsely believes that America is a province of Australia. Pat’s true belief that oranges are fruits is unsafe because Pat does not avoid false belief in a similar case; that is, Pat could easily have falsely believed a different proposition using the same method in a close world. Because Pat uses a globally unreliable method, she lacks knowledge. Given that for Williamson safe belief entails zero tolerance for false belief in a close world, Williamson therefore also requires perfect global reliability.
Yet further formulations of safety by Williamson advocate for subtler versions of local and global reliability. Recall that as Pritchard and Sosa present the safety condition, knowing P is compatible with falsely believing P via a different method in a close world. Williamson agrees, but with a caveat:
(SF3) “P is required to be true only in similar cases in which it is believed on a similar basis” (2009: 364-5).
So for S to safely believe P via M not only must S not falsely believe P in any close world via M, S must also not falsely believe P using a relevantly similar method to M. Williamson extends this principle in a way that results in a non-standard version of global reliability:
(SF4) If in a case α one knows P on a basis B, then in any case close to α in which one believes a proposition P* close to P on a basis [B*] close to B, P* is true (2009: 325).
In other words, to safely believe P via M in α it must also be the case that one does not falsely believe P* via M* in a close case. For ease of reference, here is a gloss in the vicinity of Williamson’s conception of a safe belief:
(SF!) S safely believes P via a method M in world W if and only if there is no close world to W in which:
(i) S falsely believes P via M or a relevantly similar method M*; or
(ii) S falsely believes any proposition via M; or
(iii) S falsely believes a relevantly similar proposition P* using a relevantly similar method M*.
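The clauses of (SF!) can also be displayed as a bare checklist. The sketch below presupposes that the set of close worlds and the “relevantly similar” relations over methods and propositions are simply handed to us as inputs, which is precisely what Williamson denies can be done independently of knowledge; it exhibits the logical shape of the condition, nothing more.

```python
# Structural rendering of (SF!). Each close world is a list of
# (proposition, method) pairs recording S's *false* beliefs there.
# sim_m and sim_p stand in for "relevantly similar" method/proposition.

def sf_bang_safe(close_worlds, P, M, sim_m, sim_p):
    """True iff no close world contains a false belief matching (i)-(iii)."""
    for world in close_worlds:
        for (q, m) in world:                         # a false belief
            if q == P and (m == M or sim_m(m, M)):   # clause (i)
                return False
            if m == M:                               # clause (ii)
                return False
            if sim_p(q, P) and sim_m(m, M):          # clause (iii)
                return False
    return True

# Russell's clock case: a close world contains the false belief "it is
# noon" formed by the very same clock-reading method, so clause (i)
# (indeed clause (ii) as well) fails.
print(sf_bang_safe(
    close_worlds=[[("it is noon", "reading the hall clock")]],
    P="it is noon", M="reading the hall clock",
    sim_m=lambda a, b: False, sim_p=lambda a, b: False,
))  # False
```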
Williamson is thus committed to S knowing P in W at T only if S (SF!)-safely believes P. Since Williamson’s picture of “could easily have falsely believed” is richer than Pritchard’s or Sosa’s, more is needed to be safe from error for Williamson than for the latter two.
There are reasons independent of any of these three authors that suggest that knowledge should require both global and local reliability. First, the problem of vagueness supports a global reliability formulation of safety as follows. Some vague concepts may have different meanings in different worlds. It follows, then, that sentences with the same words can express different propositions in different worlds even when these worlds are very close (Williamson 1994: 230-4). For example, the property expressed by “bald” in the actual world might be having fewer than twenty hairs on one’s head, while the property expressed by “bald” in a close world W might be having fewer than eighteen hairs on one’s head. If this is the case, then the sentence “Pollock is bald” expresses different propositions in these two worlds. Hence if Jackson, in the actual world, believes of Pollock that he is bald (Pollock having nineteen hairs on his head), then his belief will turn out to be unsafe, as there is a close world, namely W, in which Jackson falsely believes of Pollock that he is bald. In cases such as these, for an agent to know P via M it must be the case that the agent could not easily have falsely believed P* via M (where P* counts as a different proposition in that close world).
Knowledge of propositions with singular content requires safety to be formulated in a globally reliable way. Consider the case in which Jones, looking at a real barn surrounded by fake barns, forms the true belief that “that is a barn.” The intuition is to deny Jones knowledge despite the fact that there is no close world in which that very barn is not a barn (assuming that a barn is essentially a barn). Since Jones could easily have falsely believed of a fake barn that “that is a barn,” which expresses a different and false proposition, Jones is denied knowledge.
iii. Methods
Methods can be individuated in a variety of ways: internally or externally, and in a coarse-grained or fine-grained way.
A way of individuating methods is internal if it respects the constraint that agents who form a belief P and who are internal duplicates share the same method, and external if it does not respect that constraint. Alternatively put, if method individuation supervenes solely on brain states, then methods are internally individuated; if two agents can be in the same brain state yet be using different methods, then methods are individuated externally.
A way of individuating methods is coarse-grained if methods are described broadly or generally, for example, the visual method. On the other hand, a way of individuating methods is fine-grained if methods are described in detail, for example, the visual method for large objects at close range under favorable lighting conditions. As the degree of detail to which a method can be described is a parameter along a continuous spectrum, fine-grained and coarse-grained individuation permit of a wide range of generality or detail. Specifying the relevant detail for each method is known as the generality problem for reliabilism. Given that reliably believing is part of safety, safety faces the generality problem, something Williamson acknowledges (2009: 308).
Nozick (1981: 233) argues for an accessibility constraint on method individuation; that is, regardless of how methods are individuated, a difference in methods must always be accessible to the agent. It is evident, then, that an accessibility constraint is in tension with both external and fine-grained individuation since, ex hypothesi, neither the difference between seeing and hallucinating nor the difference between two finely-grained methods would be detectable by the typical agent.
Williamson and Pritchard deny such an accessibility constraint, thereby opening the way for external, fine-grained individuation of methods. For Williamson the accessibility constraint assumes that methods are a luminous condition, where a luminous condition is defined as a condition such that whenever it obtains the agent is in a position to know that it obtains (Williamson 2000: 95). But, as Williamson (ibid.: 96-8) argues, no non-trivial condition is luminous. Therefore the accessibility condition should be disregarded.
Pritchard (2005: 153) argues that safety will get the wrong result in some cases unless the accessibility condition is dropped, because agents are fallible when it comes to determining which methods they use. For example, S might incorrectly think that she believes P via method M when in fact she believes it via M*. In some cases M delivers safe belief while M* does not. Were the relevant method for a determination of safety the method the agent considers to be the one by which she believes, safety would get the wrong result in such cases.
One further argument against the accessibility condition is that it generates an infinite regress: S must be aware of which method she uses to believe P, the method she used to determine that, the method she used to determine that, and so on. Although these three arguments do not entail that internal and coarse-grained individuation are unsustainable, they do show that one reason in favor of such positions is unpromising.
We typically talk about methods or bases of belief in a coarse-grained way. Williamson, however, adopts a fine-grained, external individuation of bases. For example, Williamson (2009b: 307, 325 n.13) thinks that, all else being equal, seeing a dachshund and seeing a wolf count as different bases; believing that one is drinking pure, unadulterated water on the basis of drinking pure, unadulterated water from a glass is not the same basis as believing as much when drinking water from a glass that has been doctored with undetectable toxins by conniving agents; believing that one was shown x number of flashes after drinking regular orange juice does not count as the same basis as believing that one was shown x number of flashes after drinking a glass of orange juice with a tasteless mind-altering drug; and, finally, believing that S1 is married by looking at S1’s wedding ring and believing that S2 is married by looking at S2’s wedding ring count as different methods if S1 reliably wears her ring and S2 does not.
Williamson is inclined towards external, (super) fine-grained individuation of methods owing to his position vis-à-vis luminosity and skepticism. Regarding the former, in some cases the circumstances of a case can change in very gradual ways that the agent fails to detect, such that at the start of the case the basis of belief is reliable while unreliable at the end of the case. Consider, for example, a case in which I see a pencil on a desk in front of me under favorable conditions. Presumably I know that there is a pencil on the desk. I then begin to gradually walk backwards from the desk, all the while keeping my eyes on the pencil, until I reach a point at which it appears as a mere blur in the distance. At that point beliefs I form based on vision are no more than guesses. At each point in my growing distance from the desk my visual abilities deteriorate slowly, such that at some indiscernible point my eyesight no longer counts as reliable with respect to the pencil. Were bases of belief individuated in an internal, coarse-grained manner such that my looking at the pencil close-up and my looking at the pencil at a distance count as the same method, then I would fail to know that there is a pencil on the desk even when close to the table, since there is a close world in which I look at it from a distance and form a false belief that there is a pen on the desk, which is intuitively the incorrect result. Consequently, minimal changes in the external environment result in a difference in the basis of belief formation.
iv. Skepticism
One of the selling points of safety is that it, unlike the relevant alternatives and sensitivity conditions, permits one to know the denial of skeptical hypotheses, thereby maintaining closure. Here is the skeptical argument from closure:
(1) I know I have hands.
(2) If I know I have hands then I know I am not a brain in the vat.
(3) I don’t know that I am not a brain in the vat.
This triad is inconsistent because, claims the skeptic, one cannot know the denials of skeptical hypotheses; that is, one cannot know that one is not in the bad case (the denial of (3)). In other words, if I know I have hands, then by closure I should know I am not a handless brain in the vat. But, claims the skeptic, one is never in a position to know that one is not a handless brain in the vat. It follows, then, that I do not know that I have hands.
Pritchard (2005: 159) claims that if one is in the good case then one sees that one has hands based on perception. In the bad case one does not see that one has hands; rather, one is fed images of hands. As a result of this difference in method, the bad case automatically counts as irrelevant since only those cases in which one forms beliefs based on veridical perception count as relevant: “only those nearby possible worlds in which the agent formed her belief in the same way as in the actual world are at issue” (ibid. 161). Since, by definition of the cases, the brain in the vat is not using the same method as the agent in the good case, one can consequently know the denial of the skeptical hypothesis entailed by one’s knowledge of everyday propositions, since there is no close world to the good case in which one falsely believes the denial of the skeptical hypothesis.
Williamson resists skepticism by exposing and undermining those claims that tempt us towards (3), namely the idea that a brain in the vat and the agent in the good case have exactly the same evidence. According to Williamson (2000: 9) “one’s total evidence is simply one’s total knowledge.” Since the agent in the good case has good evidence and the brain in the vat has bad evidence, this constitutes a sufficient dissimilarity between the cases. Therefore, the false belief in the bad case counts as irrelevant to true belief in the good case. Alternatively, Williamson can be read as saying that individuating methods externally and in a fine-grained manner leads to the conclusion that believing truly on the basis of good evidence is sufficiently dissimilar to believing falsely on the basis of bad evidence (ibid.: 169). The epistemic impoverishment of the brain in the vat is thus irrelevant. Williamson (2009d: 21) has made the following further claim:
The idea is that when one knows p “on basis b,” worlds in which one does not believe p “on basis b” do not count as close; but knowing “on basis b” requires p to be true in all close worlds in which one believes p “on basis b;” thus p is true in all close worlds. In this sense, the danger from which one is safe is p’s being false, not only one’s believing p when it is false.
Thus the bad case counts as far off because in the bad case P is false. This difference between the good and bad cases constitutes a sufficient dissimilarity to permit one to know in the good case.
Since Sosa is not as explicit about how he builds methods into his safety condition, all three strategies are compatible with what he says. For example, he sometimes talks as if the bad case is far off (1999a: 147; 2000: 15), while at other times (1999b: 379) he can be read as thinking that even if it were close it would be irrelevant because the agent is using a different method in that case.
There are thus three different strategies a safety theorist can employ to oppose skepticism:
(i) Since the agent in the bad case uses a different method from the agent in the good case, the bad case is sufficiently dissimilar from the good case and thus does not count as close;
(ii) The bad case counts as close to the good case yet is irrelevant given that the agent in the bad case uses a different method from the agent in the good case;
(iii) While the agents in the good and bad cases use the same method, the bad case counts as far off given the overall dissimilarities between it and the good case.
The safety condition is therefore a powerful tool against skepticism. For skepticism to be an appealing theory the skeptic would have to provide some reason for thinking that in every case α involving an agent S, method M, time T, and proposition P, there is a close and relevant case β in which a skeptical hypothesis is true such that S could easily have failed to be locally or globally reliable in α with respect to P at T (where the definitions of local and global reliability differ depending on which safety theorist is in question).
b. How do the Safety and Sensitivity Conditions Differ?
Given that the sensitivity condition for knowledge enjoyed such prominence, it is important to determine how the safety condition differs from it. Such a comparison will shed light on some virtues of the safety condition relative to the sensitivity condition.
In some cases sensitivity is the more stringent condition, while in others safety is. The following two points of logic elicit the difference between the safety and sensitivity conditions. When it comes to cases concerning knowledge of the denial of skeptical hypotheses, the safety principle is less demanding than the sensitivity principle. The latter principle requires that the agent not believe P in the nearest possible world in which P is false. As such no agent can know the denial of a skeptical hypothesis by the simple sensitivity test, for example, that I am not a brain in the vat, because in the nearest possible world in which the agent is a brain in the vat the agent continues to believe (falsely) that he is not a brain in the vat. So while agents typically satisfy the sensitivity condition with respect to everyday propositions and thus count as knowing many everyday propositions, they cannot satisfy the sensitivity condition with respect to the denial of skeptical hypotheses. Hence the incompatibility of the sensitivity condition and single-premise closure, for knowledge of everyday propositions entails knowledge of the denial of skeptical hypotheses incompatible with those propositions.
The safety principle, however, is compatible with single-premise closure, for it permits knowing the denial of skeptical hypotheses. By the safety principle I count as knowing the everyday proposition P “that I have hands” by method M only if I safely believe P. It follows, then, that if I safely believe P then there is no close world in which I am a brain in the vat and am led to falsely believe that I have hands by M (as explained in the previous section). Consequently, if I know that I have hands and I know that that entails that I am not a brain in the vat, then I know that I am not a brain in the vat.
D'autre part, cases can be constructed in which safety is more demanding than sensitivity. Consider the following case: S truly believes P via M in the actual world but (J’ai) in the closest world in which P is false S does not believe P, et (Ii) there is a close world in which S falsely believes P via M. In this case S satisfies the sensitivity condition but fails to satisfy the safety condition. A case by Goldman (1986: 45) can be used to illustrate this point. Mary has an unreliable thermometer in her medicine cabinet which she uses to measure her temperature. It just so happens to correctly read her temperature of 38°C in this case. Toutefois, in the nearest world in which her temperature is not 38°C and she uses this thermometer to take her temperature, she is distracted by her son and she doesn’t form any belief about her temperature. She accordingly satisfies the sensitivity condition for knowledge since she does not believe P in the nearest world in which P is false. Toutefois, there is some other close world in which she uses this thermometer to take her temperature and forms a false belief thereby. Mary thus fails to satisfy the safety condition. It follows, alors, that the following pair of conditionals are false:
If S safely believes P then S sensitively believes P.
If S sensitively believes P then S safely believes P.
The logic of these conditionals makes explicit the respects in which safety is similar to and different from the sensitivity condition.
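A toy encoding of Goldman’s thermometer case makes the asymmetry concrete; the world descriptions below are illustrative stand-ins for the case as told above, not Goldman’s own apparatus.

```python
# Mary's case: sensitivity holds, safety fails. P is "my temperature is
# 38 degrees C"; worlds record whether P is true and whether Mary believes
# P via the faulty thermometer there.

# Closest world in which P is false: distracted by her son, Mary forms no
# belief, so sensitivity [not-P box-arrow not-B(P)] is satisfied.
closest_not_P_world = {"P": False, "believes_P": False}

# Some other close world: she reads the faulty thermometer and believes P
# falsely, which is all it takes for safety to fail.
other_close_worlds = [{"P": False, "believes_P": True}]

sensitive = not closest_not_P_world["believes_P"]
safe = not any(w["believes_P"] and not w["P"] for w in other_close_worlds)
print(sensitive, safe)  # True False: sensitively but not safely believed
```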
c. The Semantics of Safety
In a non-epistemic context it is easy to see that “safe” can function as a gradable adjective. For example, if S has three paths to choose from to get to her destination, it is perfectly acceptable to say that although path X is safe, path Y is safer, and path Z is the safest of the three. “Similarity” also comes in degrees: London is more similar to Manchester than to Kabul. Possible worlds can thus be closer to or further from the actual world on a sliding scale of similarity. S’s belief P, therefore, can be safer than S’s belief Q. Although “safe” is a gradable adjective, the safety condition is not presented within the framework of a contextualist semantics for “knowledge,” where, roughly speaking, contextualism about “knowledge” is the claim that the truth conditions of the proposition “S knows P” depend on the context of the attributor. In other words, “knowledge” picks out different relations in the different contexts of attribution, where said contexts are a function of the varying interests of the attributor, not the possessor, of knowledge. Contextualism has gained its popularity through, among other things, its proposed solution to the skeptical challenge from closure. Sosa, Williamson, and Pritchard are all standard invariantists about the semantics of knowledge, invariantism being the denial of contextualism. (See Williamson (2009d: 18) for two different ways in which the gradability of safety can be accommodated without adopting a contextualist semantics for “know.”) If one’s main concern is skepticism, then the safety theorist has no need for a contextualist semantics for “knowledge” given the three strategies available for opposing skepticism (listed above). Nevertheless, it is easy enough to see how one could model the safety condition along contextualist lines if one had independent reasons for adopting a contextualist semantics for “knowledge”—those factors that weigh in on the similarity measure of close worlds will be those salient to the attributor, not the agent.
4. Safety in Action
To get a better feel for how the safety condition works, it proves beneficial to undertake an exercise in seeing how safety handles some of the troubling cases in the literature. Obviously each case can be modified in such a way as to make things harder or easier for the safety theorist. For present purposes such modifications will be ignored.
a. Gettier and Chisholm
Jones is told by his boss that Smith will get the promotion. Jones then sees Smith putting ten coins in his pocket. Jones accordingly infers that the man who will get the promotion has ten coins in his pocket. However, Jones (not Smith) gets the job and Jones just so happens to have ten coins in his pocket. According to Gettier (1963) Jones’s belief does not amount to knowledge. How does the safety condition handle this case? Jones’s belief is unsafe because there are close worlds in which (a) Carter gets the job but has no coins in his pocket, or (b) Jones gets the job but has nine coins in his pocket.
The same reasoning applies to Chisholm’s (1977) case in which Jones believes that there is a sheep in the field upon seeing a fluffy white animal in the distance. However, while what Jones sees is a white dog, there is indeed a sheep in the field lying behind a rock hidden from Jones’s sight. According to Chisholm, Jones doesn’t count as knowing that there is a sheep in the field. The safety condition captures this intuition. Jones’s belief is unsafe because there is a close world in which there is no sheep behind the rock and Jones falsely believes that there is a sheep in the field; that is, the method of inferring the presence of sheep by seeing dogs is unreliable.
b. Fake Barns
Jones is in an area with many fake barns. Jones sees a real barn in the field and forms the belief that there is a barn in the field. Does Jones know that there is a barn in the field? At first glance, Jones’s belief counts as unsafe as there is a close world in which he looks at a fake barn and falsely believes that it is a (real) barn.
However, this case turns out to be a little harder to explain because the details of the case can be manipulated into yielding bizarre intuitions in similarly structured cases (Hawthorne and Gendler 2005). What if, for example, Jones would not have come across such a fake barn because he wasn’t within walking distance of it? The permutations of the standard setup of this case abound (see, for example, Peacocke (1999: 324), Neta and Rohrbaugh (2004: 399), Comesaña (2005: 396), and Lackey (2006)). Similar permutations can be made for the Gettier and Chisholm cases, for example, where circumstances are such that the person who gets the job in all close worlds has ten coins in his or her pocket or that in all close worlds there is a sheep behind the rock.
This is one of those cases that manifests the vagueness present in the safety condition. As Williamson (2000: 100; 2009b: 305) indicates, there will be cases in which whether or not one thinks that there is a close world in which the agent falsely believes depends on whether or not one is inclined to attribute knowledge to that agent in that case; the vagueness in “relevantly similar,” “reliable,” and “knowledge” makes knowledge determinations in some cases notoriously difficult. Consequently, the direction of one’s intuitions about whether or not Jones knows in each permutation of these cases will influence whether or not one thinks Jones has a false belief in a close world, and vice versa.
There is one significant permutation of this case that requires attention. Suppose the details of the case remain identical except that instead of forming the belief P that there is a barn in the field, Jones forms the belief Q that that is a barn (Hawthorne 2004: 56). Recall that Q is a singular proposition but P is not, where, roughly, a singular proposition is one that is constitutively about some particular. Sosa would have to find other reasons to deny Jones knowledge in this case, if he thinks Jones lacks it, given that his safety condition requires local reliability only and true singular propositions about that very barn are true in all close worlds. According to Williamson and Pritchard, Jones lacks knowledge in this case because there is a close world in which Jones looks at a fake barn and his belief that that is a barn expresses a different and false proposition.
c. Matches
Jones is about to light a match and forms the belief that the match will light once struck, since every dry match of this brand that he has struck in the past has lit. However, the match lights not because it was struck but because of some rare burst of radiation (adapted from Skyrms 1967: 383). Stipulate further that in all close worlds the match lights by friction. Is Jones’s belief safe?
The safety theorist seems drawn into denying knowledge in this case because there is a sense in which Jones is still lucky, in an epistemically malignant way, that his belief is true. When described in this way, this case is a stronger version of many of the Gettier cases mentioned so far because Jones’s belief is true by luck in the actual world yet true in every close world. Such cases would demonstrate that safety is not sufficient for knowledge.
One way around this difficulty would be via Williamson’s claim that worlds which differ with respect to trends count as far off (see 5.b.i below). Therefore, only worlds in which the match lights by a freak burst of radiation count as close. If worlds are ordered in this way, the example as stipulated is flawed and does not in fact present a problem for safety. Since the match lights in all those close worlds via radiation, Jones knows that his match will light.
5. Problems for Safety
As epistemologists ponder the details of the safety condition, it is to be expected that some will identify what they perceive to be its weaknesses or its failures. This section is devoted to three problem areas for safety.
a. Knowledge of Necessarily True Propositions
A necessarily true proposition is one which is true in all possible worlds. One might think, therefore, that knowledge of such propositions presents a problem for safety since there can be no close world in which S falsely believes such a proposition. It should be clear at this point that this is a problem primarily for Sosa since his condition requires local reliability only; that is, not falsely believing P in close worlds. In other words, the counterfactual B(P) ☐→ P will be trivially true for any proposition P which is necessarily true. So knowledge of necessarily true propositions is going to be a problem for any account of knowledge that requires local reliability only.
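To make the triviality explicit, recall the standard Lewis–Stalnaker truth condition for counterfactuals (a sketch of the familiar semantics, supplied for illustration rather than drawn from Sosa’s own texts): A ☐→ C is true at a world w just in case C holds at all of the closest A-worlds to w. If P is necessarily true, then P holds at every world whatsoever; a fortiori it holds at the closest worlds in which S believes P, and so B(P) ☐→ P comes out true regardless of how S’s belief was formed.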
Williamson and Pritchard have no such problems with knowledge of necessary truths since both require global reliability. There are cases that demonstrate that the method used to believe a necessarily true proposition can be globally unreliable. For example, suppose I use a coin to decide whether to believe 42 x 17 = 714 or to believe 32 ÷ 0.67 = 40, where I have no idea which is true without the use of a calculator. If the coin lands in such a way as to indicate that I should believe the first, which is necessarily true, then I am lucky to believe the necessary truth and not the necessary falsehood. I consequently do not know that 42 x 17 = 714, as I could just as easily have falsely believed the different proposition expressed by 32 ÷ 0.67 = 40.
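For the record, the arithmetic is easily checked (a routine verification, not part of the original example): 42 x 17 = 714 is true, while 32 ÷ 0.67 ≈ 47.76, so the proposition that 32 ÷ 0.67 = 40 is necessarily false. The coin therefore selects between a necessary truth and a necessary falsehood, and only chance settles which of the two the agent comes to believe.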
b. Knowledge of the Future
The following lottery puzzle is particularly troublesome for safety. On the assumption that a proposition about a future state of affairs is either true or false, we take ourselves to know many things about the future, for example, that the Lakers game is next Tuesday, or that the elections will be held next month. This being the case, intuitively at least, Suzy knows that she won’t be able to afford to buy a new house this year. On the other hand, we deny that Suzy knows that her lottery ticket will lose (even if the draw has already taken place and Suzy has not yet learned of the result). This state of affairs, however, presents the following puzzle: assuming single-premise closure is true, if Suzy knows that she won’t be able to afford to buy a new house this year, and knows that this entails that her ticket is a loser, then Suzy should be in a position to know that her ticket will lose (by deduction). But it is commonly held that agents do not know that their lottery tickets will lose. (The aptness of this intuition is often demonstrated by the impropriety of flatly asserting that one knows that one’s ticket will lose, or of selling one’s ticket for a penny before learning of the draw results.) The intuitive pull of single-premise closure is thus in tension with intuitions about what can be known about the future and about lottery tickets.
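For clarity, the closure principle at issue can be stated schematically (a standard single-premise schema from the literature, supplied here for illustration; “K” abbreviates “S knows that”): if K(p), and K(p → q), and S competently deduces q from p while retaining her knowledge of p, then S is in a position to know q. In Suzy’s case, p is the proposition that she won’t be able to afford a new house this year and q the proposition that her ticket will lose; denying her knowledge of q while granting the rest amounts to rejecting the schema.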
Problems involving lotteries generalize (Hawthorne 2004: 3). For example, we are willing to say that Peter knows that (P) he will be living in Sydney this coming year. Yet we are hesitant to say that Peter knows that (Q) he won’t be one of those unfortunate few to be involved in a fatal car accident in the coming months. Assuming single-premise closure, if we are willing to attribute to Peter knowledge of P, and Peter knows that P entails Q, we should then be willing to attribute to Peter knowledge of Q.
One way of explaining why agents do not know that their lottery tickets will lose or that they won’t die in unexpected accidents is that both events have a non-zero objective probability of occurring. That is, events with a non-zero probability of occurring can occur in close worlds. Naturally, then, one might think that the world in which one’s lottery ticket wins or in which one dies in an unexpected motor accident is close, and that therefore one’s beliefs that one will lose the lottery or that one will not die in an accident are unsafe.
This line of thinking is devastating for safety, however, as it would effectively rule out knowledge of any proposition about the future since, assuming indeterminism, there is a non-zero probability that any proposition about the future will be false; that is, for any true proposition P about the future there will be a close world in which P is false and one believes P. If safety leads directly to skepticism about knowledge of the future, this would be a good reason to give up safety.
One line of thought for a safety theorist to pursue in response to this problem is to support the following high-chance-close-world principle (HCCW): if there is a high objective chance at T1 that the proposition P believed by S at T1 will be false at T2, given the state of the world at T1 and the laws of nature, then S does not know P at T1 as S’s belief P is unsafe (even if P is true). The thinking behind this response is that if there is a high chance of some event occurring then that event could easily have occurred, which indicates that there is a natural connection between high chance and danger. For example, if there is a high objective chance that the tornado will move in the direction of Kentucky, then it seems natural to say that Kentucky’s inhabitants are in danger.
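Put schematically (a compressed restatement of the principle just given, not an addition to it): where Ch-T1 is objective chance at T1, HCCW says that if Ch-T1(P is false at T2) is high, then some world in which S falsely believes P counts as close, and so S’s belief P is unsafe and fails to be knowledge at T1 even if P turns out true. So construed, HCCW calibrates closeness of worlds directly by objective chance.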
Hawthorne and Lasonen-Aarnio (2009) demonstrate that HCCW presents some rather unwelcome problems for the safety theorist. First, HCCW is in tension with knowledge by multi-premise closure. Suppose, by way of example, that at T1 a subject S knows a range of chancy propositions P, Q, R, … about the future; that is, there is no close world in which any of those propositions is false. That said, while there may be a low probability for each proposition in that set that it will be false, for a sufficiently high number of propositions the probability at T1 that the conjunction of {P, Q, R, …} will be true at T2 will be very low. Consequently, the probability of the negation of that conjunction is very high at T1. By the lights of HCCW there will then be a close world in which the conjunction is false. Thus, while an agent may know each conjunct in a set of chancy propositions about the future, the safety theorist who is committed to HCCW must deny that the agent knows the conjunction of those propositions. HCCW is therefore incompatible with multi-premise closure.
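The arithmetic behind this tension is easy to verify. The following minimal sketch (Python, assuming for illustration that the propositions are probabilistically independent and that each has a 1-in-1000 chance of falsity; neither assumption comes from Hawthorne and Lasonen-Aarnio) shows how individually negligible risks accumulate across a conjunction:

    # Illustrative chance that any single chancy proposition turns out false.
    chance_false_each = 0.001

    # Chance that a conjunction of n independent such propositions is true.
    for n in (1, 100, 1000, 5000):
        chance_conjunction_true = (1 - chance_false_each) ** n
        print(f"n = {n:>4}: chance conjunction true = {chance_conjunction_true:.4f}")

    # Prints approximately:
    # n =    1: chance conjunction true = 0.9990
    # n =  100: chance conjunction true = 0.9048
    # n = 1000: chance conjunction true = 0.3677
    # n = 5000: chance conjunction true = 0.0067

Each conjunct is safe by HCCW’s lights, yet for large n the conjunction has a high chance of falsity, and so, by HCCW, there is a close world in which it is false.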
HCCW also creates problems for single-premise closure. Consider Plumpton, who is about to begin a significantly long series of deductions from a true premise P1 towards a true conclusion PN. Suppose that at every step there is a very low objective probability that Plumpton’s deductive faculty will misfire, leading him towards a false belief. If the chain is sufficiently long then there will be a high enough probability that the belief at the end of Plumpton’s deductive chain will be false, in which case, by HCCW, such a possibility counts as close. If closeness of worlds is cashed out in terms of HCCW, then Plumpton does not know PN even if he deduced it from PN-1, which is effectively a denial of single-premise closure, for whenever the accumulated chance that the belief formed at the next step will be false is high enough (for example, at the step leading from PN-1 to PN in Plumpton’s case), the deduction from the previous step will be ruled out as unsafe. The same problem arises for knowing a proposition at the end of a very long testimony or memory chain when there is a non-zero objective probability that the process will go astray at any given link of the chain.
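The accumulation here is the same as in the conjunction case, only spread over deductive steps (the figures below are illustrative, not from the original). If each of N steps carries an independent misfire chance ε, the chance that the chain reaches PN without error is (1 − ε)^N; with ε = 0.001 and N = 5000 that is roughly 0.0067, so the chance of an error somewhere in the chain exceeds 99 percent, and by HCCW a world containing such an error counts as close.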
Moreover, HCCW struggles to explain why in some cases we attribute knowledge to agents concerning events with substantially low probabilities of occurring while in other cases we do not. For example, we are happy to say, following Greco (2007) and Vogel (1999), that a veteran cop knows that his rookie partner will fail to disarm the mugger by shooting a bullet down the barrel of the mugger’s gun, or that not all sixty golfers will score a hole-in-one on the par-three hole, or that this monkey will not type out a copy of War and Peace if placed in front of a computer. Yet it is common to deny knowledge in the lottery case, where the chance of being wrong is substantially lower.
The safety theorist, therefore, owes us some story about how closeness of worlds is calibrated in cases involving objective chance.
i. Williamson’s Response
Williamson denies that there is a straightforward correlation between safety and objective probability. When it comes to knowledge there are two conceptions of safety that one can have: a no risk conception or a small risk conception. Williamson (2009d) rejects the latter owing to the way we use the concepts of safety and danger in ordinary, non-epistemic contexts. By way of argument, Williamson (ibid.: 11) asks us to consider the following two valid arguments that involve the use of our ordinary, non-epistemic concept “safe,” where the context is held fixed between premises and conclusion:
Argument A (safety)
S was shot
───────────────────────
S was not safe from being shot
Argument B (safety)
S was safe from being shot by X
S was safe from being shot by Y
S was safe from being shot by Z
S was safe from being shot other than by X, Y, or Z
───────────────────────────
S was safe from being shot
Williamson then asks us to consider which of the two competing conceptions of safety (the “small risk” or the “no risk” conception) secures the validity of these arguments, by plugging each conception into the relevant premises and conclusions:
Argument A (small risk)
S was shot
─────────────────────────
S’s risk of being shot was not small
Argument B (small risk)
S’s risk of being shot by X was small
S’s risk of being shot by Y was small
S’s risk of being shot by Z was small
S’s risk of being shot other than by X, Y, or Z was small
───────────────────────────────────────
S’s risk of being shot was small
Argument A (no risk)
S was shot
───────────────────────────
S was at some risk of being shot
Argument B (no risk)
S was at no risk of being shot by X
S was at no risk of being shot by Y
S was at no risk of being shot by Z
S was at no risk of being shot other than by X, Y, or Z
──────────────────────────────────────
S was at no risk of being shot
With regard to the “small risk” conception of safety, Argument A (small risk) is invalid since even events with a small risk of occurring in a world W sometimes do occur in W, for example, lottery wins. Argument B (small risk) is invalid because small risks add up to large ones. On the other hand, the “no risk” conception of safety fares much better, for the following reasons. Since S was shot in the actual world W, and W is the closest world to itself, S was shot in a close world and was therefore at some risk of being shot, which demonstrates the validity of Argument A (no risk); this explains why S was not safe from being shot in W at a time T. Similarly, Argument B (no risk) is valid since if S was not shot by X in any world close to W at T, and so on with respect to Y, Z, and anyone else, then there is no close world in which S was shot. This exercise demonstrates that the ordinary conception of safety is not framed in terms of small risk or probability. (Peacocke (1999: 310-11) likewise understands the concept of safety in this way: “The relevant kind of possibility is one under which something’s not being possible means that in a certain way one can rely on its not obtaining” (original emphasis).) Therefore, argues Williamson, the notion of a safe belief is not one correlated with probability.
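The failure of Argument B (small risk) can be given a simple numerical gloss (the figures are illustrative, not Williamson’s). The risk of being shot is at most the sum of the risks from each source; so if the risks from X, Y, Z, and everyone else are each a “small” 0.05, the total risk may be as large as 0.2, and with enough sources such sums approach certainty. No analogous accumulation afflicts the no risk conception, since a sum of zeros is still zero.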
In light of this divergence between safety and probability, one counts as safely believing a conjunction, by Williamson’s lights, if and only if one safely believes the conjunction on a basis that includes safely believing each conjunct. Similarly, if one safely believes P and safely believes P → Q, then one safely believes Q if and only if the basis on which one believes Q includes the bases on which one believes P and P → Q, for in that case there will be no close world in which one believes Q and Q is false. It stands to reason, then, that there will be cases in which S safely believes P and safely believes P → Q yet does not safely believe Q, since the basis on which S believes the latter does not include the bases of the former two beliefs. One must safely derive that which is entailed by what one already safely believes before one counts as safely believing the entailed proposition: “We might say that safe derivation means that one makes a ‘knowledgeable’ connection from premises to conclusion, rather than that one knows the connection” (Williamson 2009d: 27).
Given these arguments, Williamson (ibid.: 19) demonstrates that in some cases knowing and objective probability dramatically diverge. For example, suppose I designate the winning lottery ticket “Lucky” and then believe that Lucky will win the lottery (where “Lucky” is a rigid designator). Nevertheless, I count as knowing in advance that Lucky will win despite each ticket having the very same low probability of winning.
For these reasons the cases involving knowledge of risky propositions do not bother a no risk conception of safety, so long as one safely believes the conjunction on a basis that includes the bases on which one safely believes each conjunct. The same applies to very lengthy derivations. It stands to reason, then, that Plumpton knows PN despite there being a very high objective probability that PN is false. And so long as there is no close world in which one falsely believes a proposition P about the future, one safely believes P despite there being a non-zero probability that P is false, for example, that no monkey will type out War and Peace, that not all sixty golfers will score a hole-in-one, or that the rookie will not disarm the mugger. With respect to knowledge of the future, Williamson (2009c: 327) writes that “the occurrence of an event in β that bucks a relevant trend in α may be a relevant lack of closeness between α and β, even though the trend falls well short of being a strict law.” Trends are further indicators of closeness between cases. So in a case α an agent S can be in a position to know a proposition P about the future even though there is a non-zero probability that P will be false, since the case β in which P is false is sufficiently distant from α owing to P’s falsity in β bucking a trend in α.
Matters involving lottery puzzles remain troublesome for Williamson, however. In cases where the known proposition entails a risky proposition about the future (for example, that one will be healthy for the rest of the year), Williamson is happy to admit that one does safely believe that risky proposition, given the divergence between safety and small risk explained above. However, this seems to indicate that Williamson is happy to permit that one can safely infer that one’s lottery ticket will lose, which is problematic since it contradicts a widely held intuition and goes against Williamson’s prior commitment in print that one does not know that one’s ticket will lose (2000: 117, 247). In conversation Williamson has made two salient remarks in response to these points. First, he still maintains that one does not know one’s ticket will lose when this belief is formed on the basis of reflecting on the low odds of its winning. He is open to one’s knowing that one’s ticket will lose by other bases of belief, for example, safe derivation from known propositions about the future. So in some lottery puzzles Williamson will concede that one can know that one’s ticket will lose. Second, Williamson has emphasized that lottery puzzles are unstable since one readily attributes knowledge about the future only to retract it when the lottery entailment becomes salient. Since Williamson’s concerns are the structural features of knowledge, he is not overly perturbed by problems generated by specific cases which rest on very unstable intuitions.
ii. Pritchard’s Response
Pritchard (2008: 41; 2009: 29), like Williamson, argues that the relationship between objective probability and safety is not one of direct correlation, but he motivates this claim using intuitions from a different lottery case. We say that S does not know, by reflecting on the extremely long odds of winning a lottery, that her ticket will lose (even if the draw has already occurred and S is unaware of the results), but that S does know that it lost from reading the result in the newspaper. This is a somewhat surprising result given that the objective probability of being wrong in the former case is lower than in the latter case, since newspapers do sometimes print errors. Were closeness determined according to the HCCW principle, the intuitions should be the converse. Safety, argues Pritchard, captures the intuitions in this case: the world in which one wins a lottery is very much like the actual world, since all that differentiates the two worlds in this context is a bunch of balls falling differently. That is why one cannot know that one’s ticket will lose. However, given the copious editing processes at newspapers, quite a bit would have to go wrong for there to be a printing error.
Using this understanding of closeness, Pritchard believes he can answer the lottery puzzles Hawthorne raises. Pritchard contends that we are mistaken in thinking that these are puzzles because our intuitions are being misled by a lack of detail in the presentation of the cases (ibid. 43-8). If S has a lottery ticket in his pocket for a draw taking place tomorrow, Pritchard claims that we ought to resist attributing to S the knowledge that he won’t have enough money to go on a safari this year, since the world in which he wins a major prize in the forthcoming lottery is close. In this case, argues Pritchard, the agent also does not know that his ticket will lose. Conversely, if S does not have a lottery ticket in hand, then S knows he won’t go on safari and knows that he won’t win the lottery. Either way closure is preserved.
In a similar fashion Pritchard argues away some of the other lottery-like puzzles. If we are told that S is a healthy person, then we are prone to affirm both that S knows that S will be living in Wyoming this coming year and that S knows that S won’t have a heart attack, since the world in which S, a healthy person, drops dead from a totally unexpected heart attack is far off. Likewise, if we are told that S has very high cholesterol, then we will deny both that S knows that S will be living in Wyoming this year and that S knows that S won’t have a heart attack. Closure is maintained in both cases.
Some might have reservations about the adequacy of Pritchard’s response, however. It is a matter of differing intuitions whether or not there is a relevant difference between the actual world and worlds in which perfectly healthy people die from a sudden and unexpected heart attack or are involved in a fatal car accident. If such worlds are relevantly similar to the actual world, then they should count as close on Pritchard’s line of thought. Therefore, contrary to Pritchard, such agents should be denied knowledge of their future whereabouts. The same line of reasoning can be applied to the lottery and newspaper case; the world in which the typesetting machine prints an error owing to a technical glitch is much closer to the actual world than the world in which the seven balls corresponding to the numbers on one’s lottery ticket fall into the dedicated slots, because much less has to change in the former case than in the latter. If closeness of worlds is determined by how much the two worlds actually differ on the details of the case, then one ought to be unable to come to know things by reading the newspaper, which is an untenable result. Finally, it is also unclear how Pritchard’s strategy can handle the troublesome cases involving multi-premise closure that Hawthorne and Lasonen-Aarnio describe. The world in which Plumpton makes a mistake in the very long deductive chain he is about to embark upon seems very similar to the actual world. A natural reading of the stipulation that at each step the chance of a mistake is exceedingly low, while the chance of a mistake somewhere overall is exceedingly high, is that the two worlds are very similar; not much has to change for Plumpton to make a mistake somewhere along the way.
Despite these concerns, the disparity between closeness and objective probability that Pritchard urges does seem to handle the Vogel and Greco cases quite well. Events in the actual world would have to change significantly for all sixty golfers to score holes-in-one, or for the rookie to disarm the mugger, or for the monkey to type War and Peace. The angle of the club face, timing, ball spin, wind speed, strength of swing, and so forth would all have to fall together in just the right way on sixty different occasions for all sixty golfers to succeed in scoring holes-in-one. Similar thoughts apply to the rookie and monkey cases.
c. Safety and Determinism
The safety theorist argues that if S knows P then S could not easily have been wrong. Suppose, for the sake of argument, that our world is a deterministic world in the sense that the state of the world at TN is determined by the state of the world at TN-1 plus the laws of nature. In what sense, then, could S easily have gone wrong since, if determinism is true, S could not but have believed P truly? Williamson (2000: 124; 2009c: 325) argues that “determinism does not trivialize safety.” Williamson demonstrates this point by way of an example of a ball balanced on the tip of a cone. Such a ball, even in a deterministic world, is not safe from falling because, argues Williamson, the initial conditions could easily have been different such that the ball falls. By the “initial conditions” he means those relating to “the time of the case, not to the beginning of the universe” (2000: 124).
The suggestion, then, seems to be that in a case α in a deterministic world W, S safely believes P if and only if, had the initial conditions of the case been slightly different, S would still have truly believed P. What remains unclear, however, is why Williamson says that only the initial conditions of the case need to be changed and not the initial conditions of the universe, for, after all, altering the initial conditions of the case in a deterministic world can only be achieved by altering the initial conditions of the universe itself. So altering the initial conditions of the case necessitates altering the initial conditions of the universe. Moreover, on some conceptions of determinism, small-scale changes at the beginning of the universe generate large-scale changes further down the chain of events. Consequently, it is unclear whether altering the initial conditions of the universe will generate sufficiently similar cases in which S falsely believes P. Finally, it seems somewhat odd to say that, even if the actual world is a deterministic world, then even though I am currently typing in Oxford I could just as easily have been hunting bears in Mongolia, since a slight alteration in the initial conditions of the universe would have resulted in my being a bear hunter.
One maneuver a safety theorist can make in response to the foregoing difficulties is to adopt a move Lewis makes in his work on the semantics of counterfactuals. Suppose a world W is a deterministic world and that in a case α in W, S truly believes P at T. The safety theorist could argue that S safely believes P at T in W if and only if, had there been a small miracle at T or some time shortly before T such that different conditions prevailed in a case β very similar to α, S would still have truly believed P. (Williamson has raised such an option in conversation.)
Some might be wary of such a metaphysics on the assumption that miracles are not the kind of things we want in our ontology or epistemology. So it appears that unless the safety theorist is willing to adopt a somewhat unorthodox metaphysics, safety, despite Williamson’s insistence to the contrary, is hostage to determinism. In the safety theorist’s defense, however, our best physics seems to provide a better case for indeterminism than determinism. It remains the case, nevertheless, that the safety theorist needs to be more forthcoming about the relationship between the physical conditions of the world and the modality of the safety condition.
6. References and Further Reading
Armstrong, D. 1973. Belief, Truth, and Knowledge. Cambridge: Cambridge University Press.
An important early contribution to the study of knowledge in the post-Gettier period, particularly the idea that knowledge requires reliability.
Chisholm, R. 1977. Theory of Knowledge. 2nd edition. New Jersey: Prentice Hall.
One of the notable works in the early period of contemporary analytic epistemology.
Comesaña, J. 2005. “Unsafe Knowledge.” Synthese 146: 395-404.
This author argues that safe belief is not necessary for knowledge.
Dretske, F. 1971. “Conclusive Reasons.” Australasian Journal of Philosophy 49: 1-22.
In this paper Dretske argues for a sensitivity condition for knowledge.
Gettier, E. 1963. “Is Justified True Belief Knowledge?” Analysis 23: 121-3.
Here the famous counterexamples to the justified true belief account of knowledge are presented.
Goldman, A. 1967. “A Causal Theory of Knowing.” The Journal of Philosophy 64 (12): 357-372.
Goldman presents a causal account of knowledge, which was an early attempt at solving the Gettier problem. Goldman later abandoned this account in favor of his relevant alternatives account, a position he still maintains today.
Goldman, A. 1976. “Discrimination and Perceptual Knowledge.” The Journal of Philosophy 73: 771-91.
The relevant alternatives condition for knowledge is explicated and defended.
Goldman, A. 1986. Epistemology and Cognition. Cambridge, MA: Harvard University Press.
A contemporary classic that presents Goldman’s epistemics—his multidisciplinary project of bringing the developments in cognitive psychology to bear on questions in individual and social epistemology.
Goldman, A. 2007. “Philosophical Intuitions: Their Target, Their Source, and Their Epistemic Status.” Grazer Philosophische Studien 74: 1-26.
In this paper Goldman discusses the place of intuition in philosophy and the epistemic status of such intuitions, which is currently a hot topic in epistemology.
Greco, J. 2007. “Worries about Pritchard’s Safety.” Synthese 158: 299-302.
Problems for Pritchard’s safety condition are raised.
Hawthorne, J. 2004. Knowledge and Lotteries. Oxford: Oxford University Press.
This is a masterful treatment of the lottery problem and includes a helpful comparative assessment of the various semantic solutions proposed to this problem.
Hawthorne, J. & Gendler, T. 2005. “The Real Guide to Fake Barns.” Philosophical Studies 124: 331-352.
A humorous and pointed display of how some Gettier cases can be manipulated into yielding even tougher cases for accounts of knowledge to handle.
Hawthorne, J. & Lasonen-Aarnio, M. 2009. “Knowledge and Objective Chance.” In: Greenough, P. & Pritchard, D. (eds.). Williamson on Knowledge. Oxford: Oxford University Press, pp. 92-108.
In this piece the problematic relationship between safety and probability is identified.
Lackey, J. 2006. “Pritchard’s Epistemic Luck.” Philosophical Quarterly 56: 284-9.
This author argues for inadequacies in Pritchard’s work on safety.
Lewis, D. 1973. Counterfactuals. Oxford: Blackwell.
Here Lewis presents his modal semantics for counterfactuals.
McGinn, C. 1999. “The Concept of Knowledge.” In: McGinn, C. Knowledge and Reality: Selected Essays. Oxford: Oxford University Press, pp. 7-35.
In this collection of his essays, McGinn defends his favored account of knowledge.
Neta, R. & Rohrbaugh, G. 2004. “Luminosity and the Safety of Knowledge.” Pacific Philosophical Quarterly 85: 396-406.
Arguments against safety are presented.
Nozick, R. 1981. Philosophical Explanations. Oxford: Oxford University Press.
Chapter 3 contains Nozick’s defense of his sensitivity condition for knowledge.
Peacocke, C. 1999. Being Known. Oxford: Oxford University Press.
In §7.5 Peacocke presents a useful elaboration of the notion of “could easily have.”
Pritchard, D. 2005. Epistemic Luck. Oxford: Oxford University Press.
A masterful exposition of the place luck plays in epistemology.
Pritchard, D. 2007. “Anti-Luck Epistemology.” Synthese 158: 277-98.
An argument for a refined safety condition for knowledge.
Pritchard, D. 2008. “Knowledge, Luck, and Lotteries.” In: Hendricks, V. & Pritchard, D. (eds.). New Waves in Epistemology. London: Palgrave Macmillan, pp. 28-51.
Here Pritchard discusses the lottery problem for safety at length.
Pritchard, D. 2009a. “Safety-Based Epistemology: Whither Now?” Journal of Philosophical Research 34: 33-45.
Further refinements of the safety condition.
Pritchard, D. 2009b. Knowledge. London: Palgrave Macmillan.
A general and accessible introduction to knowledge.
Russell, B. 1948. Human Knowledge: Its Scope and Limits. London: Allen & Unwin.
Here Russell lays out a general treatment of human knowledge, part of which discusses his famous clock case.
Sainsbury, R.M. 1997. “Easy Possibilities.” Philosophy and Phenomenological Research 57(4): 907-919.
A discussion of easy possibility with respect to S not easily falsely believing P.
Shope, R. 1983. An Analysis of Knowing: A Decade of Research. Princeton: Princeton University Press.
A helpful overview of the early post-Gettier literature.
Skyrms, B. 1967. “The Explication of ‘X knows that P’.” Journal of Philosophy 64: 373-89.
An early work in epistemology in the post-Gettier period.
Sosa, E. 1999a. “How to Defeat Opposition to Moore.” Philosophical Perspectives 13: 141-54.
A discussion of safety in the context of skepticism.
Sosa, E. 1999b. “How must knowledge be modally related to what is known?” Philosophical Topics 26 (1&2): 373-384.
Sosa argues that a contraposition of Nozick’s sensitivity condition of knowledge is a superior condition for knowledge.
Sosa, E. 2007. A Virtue Epistemology: Apt Belief and Reflective Knowledge, Volume I. Oxford: Oxford University Press.
Sosa, E. 2009. Reflective Knowledge: Apt Belief and Reflective Knowledge, Volume II. Oxford: Oxford University Press.
In these two publications Sosa develops and refines his work on safety into a virtue account of knowledge that does not lean so heavily on a “brute” safety condition for knowledge. Consequently, Sosa (and Pritchard) can no longer be strictly labeled safety theorists. Indeed, Sosa is open to there being cases of lucky knowledge.
Vogel, J. 1999. “The New Relevant Alternatives Theory.” Philosophical Perspectives 13: 155-80.
Here one finds, among other things, some interesting cases involving luck.
Williamson, T. 1994. Vagueness. London: Routledge.
Here Williamson presents a case for his epistemic conception of vagueness.
Williamson, T. 2000. Knowledge and its Limits. Oxford: Oxford University Press.
A contemporary classic in epistemology in which Williamson argues for some rather iconoclastic positions about knowledge and evidence, among other important questions.
Williamson, T. 2009a. “Reply to Cassam.” In: Greenough, P. & Pritchard, D. (eds.). Williamson on Knowledge. Oxford: Oxford University Press, pp. 285-292.
This is a collection of concerns several authors raise about various aspects of Williamson’s work in epistemology. The book concludes with Williamson’s replies.
Williamson, T. 2009b. “Reply to Goldman.” In: Greenough, P. & Pritchard, D. (eds.). Williamson on Knowledge. Oxford: Oxford University Press, pp. 305-312.
Williamson, T. 2009c. “Reply to John Hawthorne and Maria Lasonen-Aarnio.” In: Greenough, P. & Pritchard, D. (eds.). Williamson on Knowledge. Oxford: Oxford University Press, pp. 313-29.
Williamson, T. 2009d. “Probability and Danger.” The Amherst Lecture in Philosophy.
A clarification of the relationship between safe belief and probability.
Author Information
Dani Rabinowitz
Email: [email protected]
Oxford University
United Kingdom