Metaphysics of Science
Metaphysics of Science is the philosophical study of key concepts that figure prominently in science and that, prima facie, stand in need of clarification. It is also concerned with the phenomena that correspond to these concepts. Exemplary topics within Metaphysics of Science include laws of nature, causation, dispositions, natural kinds, possibility and necessity, explanation, reduction, emergence, grounding, and space and time.
Metaphysics of Science is a subfield of both metaphysics and the philosophy of science—that is, it can be allocated to either, but it exhausts neither. Unlike metaphysics simpliciter, Metaphysics of Science is not primarily concerned with metaphysical questions that may already arise from everyday phenomena such as what makes a thing (a chair, a desk) the very thing it is, what its identity criteria are, out of which parts is it composed, whether it remains the same if we exchange a couple of its parts, and so forth. Nor is it concerned with the concrete entities (superstrings, molecules, genes, and so forth) postulated by specific sciences; these issues are the subject matter of the special philosophies of science (for example, of physics, of chemistry, of biology).
Metaphysics of Science is concerned with more abstract and general concepts that inform all of these sciences. Many of these concepts are interwoven with each other. For example, metaphysicians of science inquire whether dispositionality, lawhood, and causation can be accounted for in nonmodal terms; whether laws of nature presuppose the existence of natural kinds; and whether the properties of macrolevel objects supervene on dispositional or nondispositional properties.
This article surveys the scope (section 1), historical origin (section 2), exemplary subject matters (section 4), and methodology (section 5) of Metaphysics of Science, as well as the motivation that drives it (section 3).
Table of Contents
What Is Metaphysics of Science?
Metaphysics and Metaphysics of Science
Philosophy of Science and Metaphysics of Science
Explication
Metaphysics of Science in the 20th (and Early 21st) Century
The Logical Empiricist Critique of Metaphysics
The Return to Metaphysics
Naturalized Metaphysics and Inductive Metaphysics
Why Do We Need Metaphysics of Science?
Sample Topics in Metaphysics of Science
Dispositions
Counterfactuals and Necessities
Laws of Nature
Causation
Natural Kinds
Reduction, Emergence, Supervenience, and Grounding
Space and Time
The Methodology of Metaphysics of Science
Theoretical Virtues
Inference to the Best Explanation
Indispensability and Serviceability Arguments
Extensional Adequacy and the Canberra Plan
References and Further Reading
1. What Is Metaphysics of Science?
Metaphysics of Science is a subdiscipline of philosophy concerned with philosophical questions that arise at the intersection of science, metaphysics, and the philosophy of science. The term “Metaphysics of Science,” which combines the names of these disciplines, is of 20th century coinage. In order to fully understand what Metaphysics of Science is, it is helpful to clarify how it differs from both metaphysics simpliciter and philosophy of science.
a. Metaphysics and Metaphysics of Science
Metaphysics simpliciter seeks to answer questions about the existence, nature, and interrelations of different kinds of entities—that is, of existents or things in the broadest sense of the term. It enquires into the fundamental structure of the world. For example, it asks what properties are, how they are connected to the entities which have them, and how the similarity of objects can be explained in terms of their properties. The subject matter of metaphysics is somewhat heterogeneous: topics include the composition of complex entities (such as tables, turtles, and angry mobs), the identity and persistence of objects, problematic kinds of entities (that is, entities about which it is unclear whether or in what sense they exist at all, like numbers and fictional objects such as unicorns), and many more. Metaphysics is usually understood as working at an abstract and general level: it is not concerned with concrete individual things or particular relations but rather with kinds of things and kinds of relations.
Metaphysics of Science is not completely disjoint from metaphysics simpliciter. Not only does it draw on the pool of methodological tools employed in metaphysics, but there is also substantial overlap regarding subject matter. Metaphysicians have their own reasons, independently of science, to investigate causation, modality, and dispositional properties, for example. Like space and time, these concepts pertain also to everyday phenomena. Although Metaphysics of Science, too, is usually attentive to our everyday intuitions and opinions about such phenomena, it engages in a specific investigation of the roles these concepts play in scientific contexts.
Metaphysicians of science often take scientific realism for granted—that is, they hold the philosophical stance that the sciences are apt to find out what the world is really like, that they track the truth, and that the entities they postulate exist. Antirealism about science, on the other hand, often coincides with a skeptical or agnostic attitude towards metaphysics. In the context of some broader metaphysical inquiries, scientific endeavors might well be seen as but one way to the truth. A mainly science-guided metaphysics might even be seen as mistaken (as, for example, in phenomenological approaches (compare Husserl 1936; 1970)).
Moreover, metaphysicians of science demand of themselves that they pay attention to discourses within the sciences. For example, some physicists like Richard Feynman (1967) speak of fundamental symmetry principles and conservation laws as being constraints on other, less fundamental laws of nature (they are the laws of laws, so to speak), rather than being laws about what is going on in the world. Metaphysicians working to develop a philosophical theory of nomicity (lawhood), therefore, should allow for the possibility of there being laws of nature as well as laws of laws.
In short, Metaphysics of Science is that part of metaphysics that enquires into the existence, nature, and interrelations of general kinds of phenomena that figure most prominently in science. Also, Metaphysics of Science grants the sciences authority in their categorization of the world and in their empirical findings.
In terms of content, the transition between Metaphysics of Science and science might well be smooth with no clear border, so the distinction might be one that can only be made sociologically, regarding the departmental structure of universities or focusing on the practitioners and their methods of inquiry. Whereas many physicists (although perhaps not all: theoretical physicists are a notable exception) engage in experimental work, metaphysicians are content merely to consult the findings of their empirically working colleagues from the science departments.
b. Philosophy of Science and Metaphysics of Science
On the other hand, Metaphysics of Science may just as well be called a part of the philosophy of science. Philosophy of science consists of the philosophical reflection on the preconditions, practices, and results of science in general and of the particular sciences (such as physics, biology, mathematics, sociology, and so forth). Many philosophers of science are engaged in debates surrounding science as a (putative) source of knowledge: what makes scientific results especially reliable? That is, what distinguishes science from non- or pseudoscience, everyday knowledge, and philosophy? Which kinds of methods do and should scientists employ? What is scientific progress? Are scientific theories true (despite being fallible)? Are we ever justified in advocating a particular scientific theory, given that most scientific theories of the past have been replaced by others (as, for example, Newtonian mechanics was replaced by relativistic mechanics)? Can the sciences be unified into one big Theory of Everything? Together, these questions constitute the epistemology of science, that part of the philosophy of science which studies scientific knowledge.
Metaphysics of Science complements the epistemology of science. Whereas the latter asks questions of the sort, “How do we know of x?” Metaphysics of Science enquires, “What is the nature of x?” where “x” is a placeholder for some (kind of) entity, state of affairs, or fact discovered or postulated by science.
The task of Metaphysics of Science is not simply to list these entities or facts. Rather, it operates at a higher level of abstraction. For example, whereas the particular sciences inquire into specific causal relations—or, differently put, into some particular relation that holds between two particular measurable quantities, like the concentration of a drug and the soothing effect it has on headaches—Metaphysics of Science attempts to say what causation is in general. That is, it asks exactly which features a relation must have in order to count as a causal relation (like regular occurrence or modal force), and what the respective relata are. In short, Metaphysics of Science enquires into the key concepts of science not at the empirical but at a more abstract and general level.
c. Explication
Philosophers disagree about which key concepts constitute the subject matter of Metaphysics of Science. Some (like Mumford and Tugby 2013, 6) argue for a narrow interpretation of the term and claim that Metaphysics of Science is primarily concerned with concepts which are relevant to all branches of science, because without these central concepts, science would not be possible. For example, they suggest (16) that kindhood, lawhood, and causation are concepts of this kind. Others, for example the Society for the Metaphysics of Science, are more permissive: they also include in the domain of Metaphysics of Science issues that arise in only some branches of science, such as problems regarding species (biology), intentionality and consciousness (psychology), and social kinds (social science). Probably due to the emphasis that 20th century philosophy of science placed on physics, the larger part of debates within Metaphysics of Science revolves around topics that occur most prominently within the realm of physics, but which also figure in, or bear connections to, the other sciences:
laws of nature, causation, and dispositions
necessity, possibility, and probability
(natural) kinds and essences
reduction, emergence, and grounding
space and time.
Regardless of whether philosophers defend a narrow or a more permissive notion of Metaphysics of Science, they agree that the concepts in question are in need of explication. At the very least, such an explication must show how the concepts cohere. Some metaphysicians take one or more of the concepts they discuss (alongside their related phenomena) to be primitive, meaning that these concepts cannot be analyzed in terms of other concepts and their related phenomena cannot be subsumed under other phenomena. Typically, they then proceed to show that other concepts (alongside their related phenomena) can be explicated in terms of these primitive concepts. (For an exemplary account of some potentially primitive concepts and how they cohere, see parts a through d in section 4.)
As a discipline in its own right, Metaphysics of Science is still relatively young, especially when compared to other areas of philosophy (such as epistemology and ethics). Its topics, however, are not. For as long as science has existed, there has been metaphysical reflection on central scientific concepts. Metaphysics of Science of the 21st century differs from natural philosophy of the past in that the aspiration of natural philosophy was to speculatively describe the world as it is, whereas Metaphysics of Science is more concerned with what the world would be like if our best scientific theories were to turn out true (compare Carrier 2007, 42).
2. Metaphysics of Science in the 20th (and Early 21st) Century
a. The Logical Empiricist Critique of Metaphysics
Of the many historical roots of modern philosophy of science, Logical Empiricism (often interchangeably called “Logical Positivism”) stands out. The Logical Empiricists and their sympathizers (especially Rudolf Carnap, Moritz Schlick, Otto Neurath, Hans Reichenbach, Alfred Ayer, and Carl Gustav Hempel) were the progenitors of a new kind of philosophy (that directly relates to the philosophical work of Gottlob Frege, Bertrand Russell, and Ludwig Wittgenstein, which later came to be known as “analytic philosophy”). They influenced many of the most prominent philosophers of the late 20th century (among them Karl Popper and Willard Van Orman Quine). In a sense, it is with them and their themes (laws of nature, causation, counterfactuals) that modern Metaphysics of Science begins, although they would have rejected much that currently goes by that name. Their ideas sparked many of the debates central to Metaphysics of Science.
In the 1930s, the Logical Empiricists proposed an empiricist, positivist program. They held that experience is our only source of nondefinitional knowledge (hence the "Empiricism") and that the task of philosophy is logical analysis; that is, analysis of the logical features of and relations between sentences (hence the "Logical"). According to the Logical Empiricists, all the empirical propositions we believe can be reduced to so-called protocol sentences, which are direct renderings of our perceptual experience, or "the given." Only if we know how a sentence could in principle be verified—that is, which possible observations would result in our accepting it as true—can we say that the sentence is meaningful. This so-called verifiability criterion of meaning has one purpose in particular, namely, to exclude metaphysical speculation from the realm of meaningful discourse. For example, the metaphysical sentence "every thing has an immaterial substance" cannot be empirically verified; hence, according to the verifiability criterion of meaning, it is meaningless. A radical antimetaphysical stance was one of the key tenets of Logical Empiricism. Note that verificationism recasts the Empiricists' epistemic doctrine that all factual knowledge comes from sense perception as a semantic doctrine. Indeed, if we believe that what we know is expressed (or at least expressible) in meaningful sentences, then the transition from Empiricist epistemology to semantics is straightforward: all factual knowledge is expressed in meaningful sentences and only those sentences for which we are able to give a method of verification in observation are meaningful.
It soon became apparent, however, that Logical Empiricism, and especially the verifiability criterion of meaning, harbors serious flaws. Two major blows came from Willard Van Orman Quine's seminal paper "Two Dogmas of Empiricism" (1951), which argued that two assumptions the principle of verification presupposes are untenable: the first is that there is a clear distinction between analytically true and synthetically true sentences; the second is that each meaningful sentence faces the tribunal of sense experience on its own for its verification or falsification (rather than holistically, in concert with other sentences).
Logical Empiricism faces further problems. Clearly, the Logical Empiricists held the sciences in high esteem. Usually, it is taken for granted that the sciences aim to discover natural laws and that they research properties such as the electrical conductivity of different materials, the reactivity of chemical compounds, and the fertility of organisms. Prima facie, it seems that many laws of nature can be expressed as general statements, that is, as statements of the form "any particular thing x which has property F also has property G" (in logical notation: ∀x(Fx → Gx)). For example, we say that all samples of metal expand when heated. But universal generalizations of this kind cannot ever be proven true by actual empirical observations (because they have far more instances, maybe infinitely many, than could ever be observed and confirmed), so the verifiability criterion rules out (at least some) laws of nature as meaningless. Even if this consequence could be avoided, what the laws of nature say is often taken not to be merely accidentally true, but to hold with modal force. Empirically, we cannot account for modality: we can only observe what is actually the case, not what else is possibly or necessarily true.
Similarly, Logical Empiricism runs into problems regarding dispositional properties. Everyday properties such as solubility and scientific properties like conductivity cannot easily be reduced to the observable qualities of soluble or conductive objects. For example, a sugar cube is a somewhat solid object, much like a matchstick, but if we were to place the sugar cube in water, it would dissolve, whereas the matchstick would not. Its manifest properties such as solidity, color, and taste provide no clue as to what will happen to the sugar cube if placed in water. What is more, even if a particular sugar cube (or even all the sugar cubes in the world) were never placed in water at all (or if it were placed in water but the water was already supersaturated with sugar so that the sugar cube would not dissolve in that particular situation), it would nevertheless retain its dispositional property of being soluble, although there is nothing about it that we observe which hints at its solubility. An analogous case can be made regarding dispositional properties discussed in the sciences, like conductivity or chemical bonding propensity, and similarly, regarding science’s theoretically postulated, not directly observable, entities like quarks or superstrings. Because dispositional properties, theoretical entities, and universally generalized laws of nature appear to belong to the conceptual inventory of the sciences, Logical Empiricism, which fails to adequately account for them, quickly became an unattractive option. (For more on laws of nature and dispositions, see section 4c and 4a.)
b. The Return to Metaphysics
The failure of Logical Empiricism to cope with some of the key concepts of science eventually led to the development of Metaphysics of Science. Philosophers realized that if concepts such as law of nature and necessity could not be eliminated by reduction to observation terms, it must then be legitimate to examine them thoroughly, by whatever means seem fit. The most likely candidate to fulfill this task is metaphysics. (For an overview of methods commonly applied in Metaphysics of Science, see section 5.)
The development of Metaphysics of Science occurred simultaneously with the revival of metaphysics in the analytic tradition of philosophy, a tradition that was rooted in Logical Empiricism (as well as in the linguistic turn, manifested by the ideal and ordinary language philosophies of the late 19th and mid-20th centuries). Analytic philosophers were initially hostile towards metaphysical questions. They rejected questions which transcended empirical observation or fell outside of the scope of the sciences. However, philosophers like Willard Van Orman Quine (most famously in his essay "On What There Is" (1948)) and Peter Strawson (especially in his monograph Individuals (1959)) soon realized that there is a supposedly innocent way of practicing metaphysics by describing human conceptual schemes rather than by speculatively conjuring up grand metaphysical edifices. Instead of laying claim to knowledge of the unobservable, they focused on finding out how humans in fact conceptualize reality, whether in their everyday language (Strawson) or in their scientific theories (Quine), where, if stronger authority is given to the sciences, the latter may revise the commitments of the former. Quineans favor the revision and are, hence, closer to the attitude of Metaphysics of Science, whereas Strawsonians also give much credibility to the folk's general metaphysical background assumptions.
Encouraged by the failure of Logical Empiricism and the fact that metaphysical questions were once again beginning to be the subject of philosophical discussion, philosophers developed a renewed interest in metaphysics. They gradually grew confident in talking not merely about observations, semantics, and language, but also about reality.
Another significant step towards the return to metaphysics was the development of modal logic. Begun by Carnap—for example, in his Meaning and Necessity (1947)—the logic of necessity, possibility, and counterfactuality was refined considerably by Ruth Barcan Marcus (1947), Saul Kripke (1963), and David Lewis (1973a). Later, with Kripke’s Naming and Necessity (1980) and Hilary Putnam’s “The Meaning of ‘Meaning’” (1975), the formalisms were given ontological interpretations and the belief in necessity in nature gained new justifications. Building on these developments further still, even (Aristotelian) essences saw their revival: see Kit Fine’s work (1994) and its application within Metaphysics of Science by, for example, Brian Ellis (2001) and Alexander Bird (2007).
The return to metaphysics in the 20th century was not merely a trailblazing event for the development of modern Metaphysics of Science; rather, the two evolved alongside each other. For example, when it became acceptable for metaphysicians to speak of necessities in nature and discuss statements like “Water is necessarily H2O,” this paved the way for a realistic reading of other modalities, like nomological necessity or counterfactuality. These are, as we will see (in section 4b and 4c), central notions in debates on the nature and status of laws of nature in Metaphysics of Science.
c. Naturalized Metaphysics and Inductive Metaphysics
In the early 21st century, some philosophers argued for a naturalization of metaphysics. Their argument typically rests on the fact that the sciences appear to surpass metaphysics in many respects. The sciences, they claim, have a shared stock of accepted theories, a pool of respected methods and institutionalized standards, and they have predictive and technological successes to show for themselves. In contrast, there is long-lasting dissent over positions and methods in metaphysics that rarely gets resolved, and it is unclear what would even count as criteria for metaphysical success. As some metaphysical questions—such as "What is the world ultimately made of?" and "What is life?"—also belong to the domain of the sciences (physics and biology, respectively), naturalists insist that we must draw upon scientific findings to properly answer them.
Naturalistic metaphysicians come in all shapes and sizes. Some naturalists wish to prohibit any metaphysics that is not scientifically evaluable (compare Ladyman and Ross 2007). Some suggest that we should take our clues from scientific practice. For example, Tim Maudlin (2007) argues that lawhood is primitive, as working scientists see no need to analyze the concept. (For more on Maudlin’s position, see section 4c.) Others still allow for the possibility of relevant questions which may not have straightforwardly scientific answers. For example, consider the question “What is it for a thing to persist through time?” Imagine we take a ship out to sea and, little by little, replace every single part of it until none of the original parts remain. Certainly, science can describe how the ship changes, but it will not tell us whether the ship we sail home is still the same as the ship that put out to sea. The latter becomes a pressing, genuinely metaphysical problem, especially when we ask an analogous question about a person’s change and persistence through time.
What is important to remember is that although a naturalized metaphysics may, in a sense, also be called a “Metaphysics of Science,” its proponents may have a very different sort of metaphysics in mind than that presented in section 4.
In the 21st century, some philosophers have stressed that Metaphysics of Science could well be an inductive/abductive enterprise that, just as the sciences do, generalizes empirical data and builds explanatory models on that basis (Paul 2012; Williamson 2016; Schurz 2016; the research group Inductive Metaphysics). (Interestingly, precursors of the idea of an inductive/abductive metaphysics developed simultaneously with Logical Empiricism (Scholz 2018).) If so, metaphysical hypotheses might turn out to be fallible, only approximately true, and contingent.
3. Why Do We Need Metaphysics of Science?
In section 1 it was said that Metaphysics of Science examines the key concepts of science. But why do philosophers even bother to argue over issues in Metaphysics of Science? Is it not relatively clear what the basic concepts in science are and what they mean? Surely scientists know very well what they mean to say when they talk about the solubility of sugar, the second law of thermodynamics, and the relativity of space-time?
What inspires Metaphysics of Science is, of course, the idea that there is more to know about these phenomena and the concepts involved than science can say. Think of causation, for example. The concept of causation is commonsensical: we encounter causal processes in everyday life, like when we hit a golf ball with a putter and the ball begins to move, or when we drop a glass and it shatters. We intuitively distinguish these causal processes from noncausal processes. For example, if somebody in the next room sneezes as you raise your arm, you just know that raising your arm was not the cause of the other person's sneezing. Still, it is quite complicated to say what establishes a causal connection between two events and what exactly distinguishes the putter-and-golf-ball scenario from the raise-arm-and-sneeze incident. Science records measurements and reveals statistical correlations between phenomena. Scientists also have apt intuitions about whether two events are indeed causally connected or whether they merely co-occur accidentally, albeit regularly. Yet science is rarely interested in a general overall theory (detached from particular, concrete cause-effect relations) of what exactly distinguishes causes from accidents. Concepts such as causation or laws of nature, although relevant for science, are rarely the subject matter of science itself.
Science and Metaphysics of Science have different but complementary approaches to reality: the scientist’s work in this respect is predominantly empirical and consists in finding instantiations—describing particular causal interactions, listing things which are disposed in certain ways, pinning down particular laws of nature, and so on—while the metaphysician’s focus is on understanding and clarifying general concepts or the corresponding phenomena (like causation, disposition, and law of nature).
Still, the critic may object that even if the metaphysician's and the scientist's approaches to reality are indeed complementary, we can do perfectly well without Metaphysics of Science. For example, if science manages to find out the different variables and constants that determine how things in the world hang together, why do we also need to know what the general characteristics of a law of nature are or how that notion can be analyzed in terms of other notions? Isn't this superfluous information? Clearly, scientists do not need metaphysicians to tell them about causation or dispositions in order to perform their research. Nevertheless, metaphysicians of science believe that questions regarding the existence and nature of causation, natural kinds, and necessity are valuable in their own right. At the very least, they are pressing questions that cannot be ignored by those who yearn to thoroughly understand the world we live in. By way of example, consider the dispute between defenders of Humean supervenience and anti-Humeans, which revolves around the question of whether there are necessities in nature or not. (See section 4a for a brief account of the debate.) Clearly, this is not a question that can be answered by purely scientific methods, but it is one that metaphysicians will nevertheless take to be meaningful and profound.
Some of the issues discussed in Metaphysics of Science are also relevant for practical contexts. For example, failure to render assistance (in case of an accident, a medical emergency, or the like) can lead to legal prosecution or to social repercussions for immoral behavior. However, you can only be held legally and morally responsible for events you are also causally responsible for. Accordingly, both ethics and law require a concept of causality that accounts not just for positive but also for negative causation, that is, causation by the absence of an event or act. If you pass an unconscious person lying on train tracks and fail to alert the authorities or pull him off the tracks, then you are (partly) causally responsible for his death if he is later killed by a train. Thus, although many questions within Metaphysics of Science are primarily aimed at complementing science, its debates may have far-reaching consequences in other fields as well.
To more fully understand the difference between the scientific and the metaphysical approach to the key scientific concepts that constitute the subject matter of Metaphysics of Science, it is helpful to consider samples of actual work in Metaphysics of Science (section 4) and to take a closer look at the methodology employed (section 5).
4. Sample Topics in Metaphysics of Science
As Metaphysics of Science is the study of the key concepts of science, its subject matter depends directly on what the sciences study and which concepts they employ. Because there are many different branches of science, there are also many potential topics for metaphysicians to discuss. It is impossible to name them all in a survey article, much less discuss them in detail. However, it is practically impossible to fully grasp what Metaphysics of Science is from general definitions only. (The same is true of metaphysics in general. No layperson will understand what metaphysicians do from hearing that metaphysics is the study of the fundamental structure of reality.)
In order to give the reader an idea of both the scope of Metaphysics of Science and its practice, this section briefly and tentatively introduces seven debates which have preoccupied metaphysicians of science in the past: dispositions, counterfactuals and necessities, laws of nature, causation, natural kinds, reduction and related concepts, and space and time. (See the respective articles for more information on modal logic and modality, laws of nature, reductionism, emergence, and time.)
a. Dispositions
Some objects have dispositional properties. For example, sugar is soluble, matchsticks are inflammable, and porcelain vases are fragile. Properties like solubility or fragility are often conceived of as becoming manifest only under so-called “triggers” or “stimulus conditions,” which set off the manifestation of the dispositional property. For example, for a sugar cube to manifest its solubility by dissolving, it must be placed in water.
Not all properties are like that. So-called categorical properties need no stimulus; they are always manifest. Just think of the properties of being solid, having a certain molecular structure (for example, being H2O), being rectangular, and so on. The distinction between categorical and dispositional properties is often drawn with the following three features in mind:
(i) Untriggered dispositions are not directly observable, whereas many categorical properties are. For example, from looking at some sort of powder, we cannot tell whether it is soluble or not. Looking at a football, we immediately see that it is round.
(ii) Because dispositional properties endow objects with possibilities (of behaving in certain ways under certain circumstances), they are said to be modal properties: they imply, by their very nature, what can, might, or (given certain circumstances) must be the case. Categorical properties are not usually conceived of in this way.
(iii) Dispositional properties are often identified with productive powers. For example, scratching a match is not enough for it to light up; the match’s inflammability, too, is causally responsible for the flame. Usually, no such productive, causal force is directly associated with categorical properties.
Dispositional properties are not just a phenomenon we encounter in everyday contexts, but in science as well. For example, the property of being charged appears to fit this profile: it is not directly observable, it determines how objects would behave under certain conditions, and an object’s charge can be a vital factor in causal processes. Dispositionality has hence been of interest to Metaphysics of Science since its very beginning. In fact, the failure of Logical Empiricism to properly account for dispositional properties played a seminal role in the emergence of the discipline (see section 2a).
Because of their shared belief that all of our knowledge ultimately reduces to observational experience, Logical Empiricists like Rudolf Carnap (1936) attempted to account for dispositional properties in terms of observational properties using a simple conditional to connect the trigger to the manifestation: to say that a sugar cube is soluble just means that if we put it in water, it will dissolve. This and similar attempts at reduction fail, however, as they do not account for the modal behavior of disposed objects. For example, they do not supply a basis on which to ascribe (or not to ascribe) solubility to objects which have never been placed in water. This strikes us as odd, as it does not correspond to our everyday practice.
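To see where the simple conditional analysis goes wrong, it helps to write it out schematically in the logical notation used elsewhere in this article (the rendering and the predicate letters are merely illustrative glosses, not Carnap's own formulation):

Sx ↔ (Wx → Dx), where Sx: x is soluble, Wx: x is placed in water, Dx: x dissolves.

Because the material conditional Wx → Dx is automatically true whenever its antecedent is false, any object that is never placed in water satisfies the right-hand side vacuously. The analysis thus counts a never-tested matchstick or coin as soluble just as readily as a sugar cube, and so it provides no principled basis for ascribing or withholding solubility in such cases.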
In order to adequately capture the modal nature of dispositions, philosophers soon suggested that we employ a counterfactual connective instead of the simple conditional. To say that some object has a disposition, they argued, means just that if the object were exposed to the trigger conditions, the disposition would manifest. This approach faces at least two problems. First, it requires a theory that specifies truth conditions for counterfactual conditionals (see section 4b). Second, there are some interesting counterexamples to the effect that under certain conditions we would intuitively ascribe dispositions to objects for which the proposed analysis fails (as in Charles Martin’s 1994 electro-fink example).
Although early attempts at reducing dispositions to categorical properties have failed, problems like the above have convinced some philosophers that we should strive for a reductive analysis after all. The philosophical position that holds that all properties are categorical and that supposedly dispositional properties can somehow be reduced to categorical properties is called "categoricalism." For many categoricalists, a large part of their motivation comes not from Logical Empiricism but from a fundamental insight of classical empiricism. David Hume famously observed that necessary connections, like those between causes and their effects, cannot be detected empirically. Hence, Hume concluded, we have no reason to assume that any sort of productive, necessary, or modal connection of events in nature exists. (This has come to be known as Hume's Dictum.) Twenty-first century Humeans, too, claim that there are no necessary connections in nature. Consequently, they deny that there are irreducible, metaphysically fundamental dispositional properties that seem to imply some sort of necessary or modal connection between the trigger and the manifestation.
However, as reduction proves to be notoriously complicated, other philosophers opt for dispositionalism instead, which is, in its most radical form (pan-dispositionalism), the view that all properties are of a dispositional nature. Both categoricalism and pan-dispositionalism are monistic theories, as both claim that there is, at the fundamental level, only one type of property. It is also possible for philosophers to hold a neutral or dualistic view, according to which there are both categorical and dispositional properties at the fundamental level of reality.
The debate over dispositions has had substantial impact on other debates within Metaphysics of Science and vice versa. For example, some philosophers argue that laws of nature and causation are grounded in dispositional properties: a law of nature like “Like-charged objects repel each other” could well be true because of the dispositional nature of charge, and causal successions of events could be determined by the dispositional properties of objects involved (for example, wood paneling can be a partial cause of a house fire because it is inflammable). Other philosophers see the direction of dependence exactly the other way around: dispositions depend on laws of nature, because if the laws of nature were different, objects might have different dispositions. For example, if the laws of ionic bonding were different, salt might not dissolve in water. Similarly for causation: maybe salt has its disposition to dissolve because its ionic structure is a potential cause of dissolving. Hence, the debate over dispositions should not be viewed in isolation.
b. Counterfactuals and Necessities
We learned above that a central feature of dispositions is that they establish a modal relationship between the disposed object’s being in the trigger condition(s) and the disposition’s manifestation. A plausible candidate for understanding the nature of this modal relationship is counterfactual dependence. The standard notation for counterfactual dependence reads p □→ q: if p were the case, then it would be the case that q. If a sugar cube is soluble, then that means, at least in part, that if it were placed in water, it would dissolve.
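Written in this notation, the counterfactual analysis of dispositions sketched in section 4a takes the following schematic form (again, the rendering is illustrative):

Sx ↔ (Wx □→ Dx), that is, x is soluble if and only if x would dissolve were it placed in water.

Unlike the material conditional, the counterfactual on the right-hand side is not made true merely by the fact that x is never placed in water, so the analysis can in principle distinguish untested soluble objects from untested insoluble ones, provided we have an account of when such counterfactuals are true.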
The sentential connective □→ is an intensional connective, which means that the truth value of the entire conditional cannot simply be read off the truth values of the antecedent and the consequent. The reason is easily understood: counterfactual conditionals describe counterfactual situations, which means that both the antecedent and the consequent are usually not currently true. Yet some such counterfactuals with a (currently) false antecedent and a (currently) false consequent are true (the above one capturing solubility, for example) and some such counterfactuals are false (such as "If I were to say 'abracadabra,' a rabbit would appear"). How then can we evaluate the truth of counterfactual conditionals, given that the truth or falsity of their components is not decisive?
An idea proposed by Nelson Goodman (1947, 1955) and Roderick Chisholm (1946) is to have the truth of a counterfactual conditional depend on both the laws of nature and the background conditions on which they operate. On this account, a counterfactual conditional p □→ q is true if and only if there are true laws of nature L and background conditions C which hold, such that p, L, and C jointly imply q. (Some further conditions must be met, like that the background conditions must be logically compatible with p.) Obviously, if the laws of nature or the background conditions were different, p □→ q might turn out not to be the case.
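Schematically, and leaving aside the further conditions Goodman and Chisholm discuss, the proposal can be put as follows:

p □→ q is true if and only if p, together with true laws of nature L and true background conditions C (where C is logically compatible with p), logically implies q.

In the solubility case, p is "the sugar cube is placed in water," L includes the relevant chemical laws, C includes the fact that the water is not already supersaturated, and together these imply q, "the sugar cube dissolves."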
An alternative way of thinking about counterfactuals called “possible world semantics” was introduced by David K. Lewis (1973a). Lewis’s most important tool is the concept of a possible world. According to Lewis, our actual world is only one among a multitude of possible worlds. A possible world is best thought of as one way (of many) the actual world could have been: all other things being equal, the word “multitude” in the last sentence could have been misspelled, Lewis could never have been born, or atoms could have been made of chocolate. Robert Stalnaker (1968) proposed a similar account but without defending modal realism (that is, realism regarding possible worlds). To him, possible worlds are tools, and as such no more than descriptions of worlds that do not exist.
Some possible worlds are more similar to ours than others. For example, a world which is like ours in every respect except that “multitude” is misspelled in the preceding paragraph is more similar to the actual world than a world with chocolate atoms. In evaluating a counterfactual’s truth value, this fact plays a seminal role. Consider, for example, the sentence “If David had not overslept, he would not have been late for work.” In a world where all vehicles miraculously disappeared that morning, where the floor of David’s bedroom was covered in super strong instant glue, or where the laws of nature suddenly changed so that movement is no longer possible, he would not have made it into work in time, even if he had gotten up early. But these worlds do not interest us; this is clearly not what we mean by saying that had David not overslept, he would have made it in time. To judge whether the counterfactual conditional is true regarding our world, we need to consider only worlds where the laws of nature remain the same and everything else is rather normal—that is, similar to what actually did happen—except for the fact that David did not oversleep (and maybe some minor differences).
Lewis and Stalnaker suggest that an ordering of worlds with respect to similarity to our world is possible. Naturally, worlds where many facts are different from the facts of our world, and worlds with different laws of nature, count as particularly dissimilar. Counterfactual truth can then be determined as follows: of all the possible worlds where p is the case (for short, the p-worlds), some will be q-worlds and others non-q-worlds (that is, worlds where q is true or not true, respectively). To determine whether the counterfactual conditional p □→ q is true for our world, we need to check whether the p-worlds that are also q-worlds are more similar to our world than the p-worlds that are non-q-worlds. So to find out whether it is true that David would have gotten to work in time had he not overslept, we look at possible worlds where David did not oversleep and check whether the worlds where he makes it into work are more similar to the actual world than worlds where he does not (because, say, all buses disappear or the floor is sticky).
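Stated compactly, and setting aside refinements Lewis discusses (such as whether there always is a single closest antecedent world), the truth condition comes to this:

p □→ q is true at the actual world if and only if either there are no possible p-worlds at all, or some p-world in which q holds is more similar to the actual world than any p-world in which q fails.

On this clause, the David example comes out as intended: the most similar worlds in which David does not oversleep are ordinary worlds in which he arrives on time, whereas worlds with vanishing vehicles or glue-covered floors are far more dissimilar and therefore do not bear on the counterfactual's truth.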
According to this analysis, the consequent need not be true in all possible worlds (but only in similar p-worlds) in order for a counterfactual to be true. For example, had David overslept in a world where objects can be transported via beaming, he might still have made it to work in time. But as it is doubtful whether this technology will ever be available in our world (as it is not clear whether it is compatible with our laws of nature), the world where beaming has been invented is not relevant for the evaluation of the counterfactual conditional.
Related to what has just been said, we can point out a welcome feature of counterfactual conditionals: it can be true both that if David had not overslept, he would not have been late for work; and that if David had not overslept, yet the bus had had an accident, he would (still) have been late for work. This is a feature that strict (necessitated) conditionals and mere material implications cannot easily accommodate (or can accommodate only with the undesirable implication that it cannot be the case both that David does not oversleep and that the bus is involved in an accident).
In addition to providing a way of understanding counterfactual conditionals, possible world semantics allows us to spell out the modal notions of necessity and possibility in terms of quantification over possible worlds. Thus, a sentence p is necessarily true (in logical notation: □p) if and only if it is true in all possible worlds. If p is necessarily true, there is no way that p could be false; that is, there is no possible world where p is false. Similarly, p is possibly true (in logical notation: ◊p) if and only if it is true in at least one possible world.
Necessity is thus expressed in terms of universal quantification over (all) possible worlds, whereas possibility is existential quantification over (all) possible worlds. Like the universal and existential quantifiers, necessity and possibility, too, are interdefinable: p is necessary if and only if it is impossible that non-p, and p is possible if and only if it is not necessarily the case that non-p.
Note that there are different sorts of necessity which can be easily accounted for if we conceive of necessity and possibility in terms of quantification over possible worlds: Logical, metaphysical, and nomological necessity can be defined by restricting the scope of worlds over which we quantify. For nomological necessity, for example, we restrict quantification to all and only worlds where our laws of nature hold.
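Put schematically, the definitions of the last three paragraphs line up as follows (the subscript "N" is only an illustrative label, not standard notation):

□p is true if and only if p is true in all possible worlds; ◊p is true if and only if p is true in at least one possible world; hence □p is equivalent to ¬◊¬p, and ◊p to ¬□¬p.

□N p (nomological necessity) is true if and only if p is true in all possible worlds in which the actual laws of nature hold.

So, for example, "nothing is accelerated beyond the speed of light" (see section 4c) comes out as nomologically necessary but not logically necessary: it holds in every world that shares our laws, yet there are logically possible worlds with different laws in which it fails.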
Possible world semantics faces several problems, however. For example, it is unclear just how we can know about what is or is not the case in other possible worlds. How do we gain access to possible worlds that are not our own? However, possible world semantics is a valuable tool for understanding some of the most central issues in Metaphysics of Science, such as dispositions and causation. In addition, necessity is a crucial element in theories of laws of nature, essences, and properties. The modalities of necessity, possibility, and counterfactuality are also important in their own right: after all, knowing what would happen if something else were the case or what can or must happen is key to scientific understanding.
c. Laws of Nature
Here are some intuitions philosophers have about laws of nature: laws are true or idealized, objective, universal statements. Laws of nature support counterfactuals, are confirmable by induction, and are explanatorily valuable as well as essential for predictions and retrodictions. Laws have modal power in that they force certain events to happen or forbid them from occurring. Any analysis of the concept will attempt to account for at least some of these features. Roughly, there are five types of theories of laws of nature: regularity accounts, necessitation accounts, counterfactual accounts, dispositional essentialist accounts, and accounts which take laws to be ontological primitives.
The basic idea of early regularity accounts is that a law of nature is a true, lawlike universal generalization (usually of the form “All F are G,” or in formal notation: ∀x (Fx → Gx)). Whether a given generalization is true is, of course, an empirical matter and must be determined by the sciences, but what it means for a statement to be lawlike is left for metaphysics to define. Not all general statements are lawlike. For example, some general statements state logical truths which clearly are not laws of nature (like “All ravens are ravens”). The main challenge for regularity theories is figuring out what makes a universal statement lawlike without appealing to any sort of connection between events other than regularity.
The Best Systems Account (Lewis 1973a) is an example of a sophisticated regularity theory. It asks us to imagine that all facts about the world are known, such that you know of every space-time point what natural properties are instantiated at it. There are many different ways of systematizing this knowledge by using different sets of generalizations. These generalizations make up competing deductive systems. Defenders of the Best Systems Account hold that a (contingent) generalization is a law of nature if and only if it is a theorem within the best such system. Which system is the best is determined by appeal to certain criteria: simplicity, strength (or informational content), and fit.
The Best Systems Account has been criticized for not taking seriously the intuitions that laws of nature are objective, have explanatory value, and hold with modal force. The Best Systems Account yields regularities, but it does not explain why they obtain. Opponents of regularity theories stress that laws do not merely state what is the case, but enforce or produce what happens.
Necessitation accounts are alternatives to the Best Systems Account that endorse this idea. Such accounts have been proposed by David Armstrong (1983), Fred Dretske (1977), and Michael Tooley (1977). For Armstrong, a law of nature is a necessitation relation N between natural properties. (Armstrong speaks of universals.) For two natural properties to be related by necessitation means that one of them gives rise to and must be accompanied by the other (hence necessitation). To give a coarse-grained example: Coulomb's law (which states, very roughly, that charges exert forces on other charges) is a true law statement if and only if necessitation holds between the properties of having a certain charge (C) and exerting a certain force (F): N(C, F).
Necessitation accounts have some advantages over regularity theories. For example, they can more easily allow for uninstantiated laws. But how exactly do we know which properties are related by the necessitation relation, and why should we even assume that it exists? Armstrong argues that necessitation can be experienced insofar as it manifests in causal processes. However, not all laws are causal laws. Defenders of necessitation accounts must work out these issues.
The counterfactuals account focuses on a feature related to necessity, namely, the fact that laws of nature are stable under counterfactual perturbations. For example, that nothing can be accelerated beyond the speed of light is a law of nature because it is a fact that no matter what fantastical interventions we were to devise, we still couldn’t travel faster than the speed of light. Versions of the counterfactuals account of laws of nature have been proposed by James Woodward (1992), John Roberts (2008), and Marc Lange (2009).
A bullet that counterfactual accounts have to bite is that the intuitive order of explanation regarding laws and counterfactuals is upside down: whereas the counterfactual theory of laws says that it is a law that all bodies fall down to earth because it is fundamentally true that “were some arbitrary massive body dropped it would fall,” we intuitively believe that “were we to drop this body it would fall” is true because the law of gravitation holds. In other words, it is more intuitive to hold that the laws of nature support counterfactuals rather than that counterfactuals support the laws.
Another prominent way to account for laws of nature is to appeal to dispositional essentialism. Dispositionalists, like Brian Ellis (2001), Alexander Bird (2007), or Stephen Mumford and Rani Lill Anjum (2011), believe that some or even all properties are essentially dispositional. For example, if an object has the property of being electrically charged, that just means that it has the dispositional property of being attracted or repelled by other charged objects nearby. In this sense, the property of being electrically charged is essentially dispositional, because no object is electrically charged unless it is disposed to be attracted or repelled in this way.
Now, if natural properties bestow dispositions on their bearers, then that means it is always true that if something has a given natural property (Px), it also has a certain disposition (Dx) and thus it will manifest in a certain way (Mx), given that the disposition's corresponding trigger occurs (Sx). (In formal notation: □∀x((Px ∧ Sx) → Mx).) This is precisely what many metaphysicians ask of laws: that they bring about or make necessary what happens when something else is the case. Dispositional essentialists thus claim that dispositions ground nomological facts: laws arise from the dispositions things have.
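A schematic reconstruction along the lines of Bird (2007) shows how this generalization is supposed to fall out of the dispositional essence; the derivation below is simplified and ignores complications such as the finkish cases mentioned in section 4a:

(1) □(Px → (Sx □→ Mx)): having P essentially involves being disposed to manifest M when triggered by S.
(2) Suppose some x has P and the trigger occurs: Px ∧ Sx.
(3) From (1) and (2), Sx □→ Mx; and since Sx actually obtains, Mx follows (by modus ponens for the counterfactual).
(4) Hence ∀x((Px ∧ Sx) → Mx); and because (1) holds necessarily, the generalization holds in every possible world, yielding the law-like statement □∀x((Px ∧ Sx) → Mx).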
Obviously, the dispositional essentialist account of lawhood hinges on non-trivial premises, which must be evaluated in their own right—for example, the premise that dispositions are basic.
If analyzing lawhood is so complicated an affair that it requires elaborate theories and intricate tools, why not assume that lawhood is conceptually and ontologically primitive—that is, that the concept of lawhood cannot be defined in terms of other concepts, and that it cannot be reduced to underlying phenomena? Tim Maudlin (2007) argues that scientists do not seek to analyze laws, but rather accept their existence as a brute fact in their daily practice, and that philosophers should do likewise.
To Maudlin, a law of nature is that which governs a system’s evolution through time and determines what future states can be produced from the current state of the system. As lawhood is a primitive concept for Maudlin, he attempts to utilize it in defining other notions, like causation and counterfactual truth. Whether Maudlin’s approach is viable or not depends to a large part on whether these definitions of causation and counterfactual dependence by means of laws of nature work out or not.
d. Causation
Causation is obviously intimately connected to the laws of nature, as we would expect at least some laws to govern some causal relationships. Causation, however, is not a straightforward notion. For example, philosophers disagree over which kinds of entities are the proper relata in causal relationships, some potential candidates being substances, properties, facts, or events. There are several approaches to understanding causation: regularity theories, counterfactual theories, transfer theories, and interventionist theories.
Regularity theories follow in the footsteps of David Hume’s treatment of causation. According to regularity theories, all that can be said about causation comes down to stating a regularity in the sequence of events. The motivation for regularity theories stems from the fact that instances of a regularity can be observed, unlike the production of one event by another or a necessary relation between events.
One of the most widely known regularity theories is John Mackie’s INUS account of causation (1965). According to Mackie, an event is a cause if it is an Insufficient but Necessary part of an Unnecessary yet Sufficient condition for the effect to occur. For example, a short circuit (C) alone is not sufficient for a house to burn down (E); there must also be inflammable materials nearby (A) and there must not be sprinklers which extinguish the fire (B). Call this a complex condition (ABC). As the absence of sprinklers and the presence of inflammable materials is not enough to cause a fire, the short circuit is necessary within this complex condition, which is then sufficient for the fire. But there may be other complex events (DFG, HIJ, and so on) which could also bring about the same effect. For example, a lit candle in a dried-up Christmas tree may also cause the house to burn down. As the short-circuit scenario (ABC) is only one of many potential causes of a fire, it is not necessary for the effect to occur, but if it occurs, it is sufficient to bring about the fire.
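Schematically, and following the lettering of the example above, Mackie's idea can be summarized like this:

The fire E occurs if and only if (A ∧ B ∧ C) ∨ (D ∧ F ∧ G) ∨ (H ∧ I ∧ J) ∨ … obtains.

Here the short circuit C is an INUS condition for E: C by itself is insufficient for E; C is a necessary (non-redundant) part of the conjunction A ∧ B ∧ C, since A ∧ B alone does not suffice; and that conjunction is itself sufficient for E but unnecessary, because other disjuncts (a lit candle in a dried-up Christmas tree, say) would also bring E about.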
Like other regularity theories, Mackie's INUS theory has the disadvantage of classifying as causal some regularly co-occurring coincidences that are, for all we know, not causally related. For illustration, consider a simpler type of regularity theory according to which causation is just regular succession. The problem is that if causation were nothing but regular succession, then we would be forced to say that the rise of consumer goods prices in the late 20th century caused the oceans' water levels to rise. Obviously, these events coincided but are not causally related.
To avoid this problem, philosophers devised counterfactual theories of causation. The initial idea presented by David K. Lewis (1973b) is to equate causal dependence with counterfactual dependence. The idea seems plausible: had the cause not occurred, there would (all else being equal) not have been the effect. More precisely, for event e to (causally) depend on event c, whether e occurs or not must depend (counterfactually) on whether c occurs or not (that is, on whether both c □→ e and ¬c □→ ¬e are true, where ¬ is the negation operator). For example, if the short circuit is the cause of the fire, then the house would have burned down if the short circuit had occurred, and it would not have burned down if the short circuit had not occurred.
Lewis saw that this initial account is flawed as it yields intuitively incorrect results in so-called pre-emption scenarios. Imagine two people, Suzy and Billy, throwing stones at a bottle. Now picture a situation where if Suzy does not throw her rock, Billy will. Suppose Suzy throws her rock, hits, and the bottle shatters. The effect, namely the shattering of the bottle, is evidently caused by Suzy's throwing the rock. However, the effect would have occurred even if Suzy had not thrown, because in that case Billy would have thrown his rock and shattered the bottle. In this scenario, we recognize Suzy's throw as the cause of the shattering, but the latter does not counterfactually depend on the former (because it is incorrect that had Suzy not thrown, the bottle would not have been shattered).
Although more sophisticated counterfactual theories are more successful in dealing with pre-emption and other problems, some philosophers choose to take a different approach. Proponents of transfer or conserved quantity theories like Wesley Salmon (1984, 1994), Phil Dowe (1992), and Max Kistler (2006) claim that causation is best understood as a transfer of a physical quantity from one event to another. For example, Suzy is causally responsible for shattering the bottle (and Billy is not) if it was her energy that set the stone in motion to physically interact with the bottle on impact and shatter it. Transferable quantities include energy, momentum, and charge, for example. These quantities are subject to conservation laws, which means that in any isolated system, the sum total of the remaining and the transferred amount of the quantity will always equal the initial amount.
Transfer theories face difficulties in accounting for negative causation. For instance, omitting to water plants may cause them to wither, but there is no transfer of a conserved quantity from anything to the withering. Other problems derive from examples where the supposed causal relationship is not obviously of a physical nature. For example, we may say that wild speculation on the stock market caused the economy to collapse or that Suzy’s throwing Billy a kiss caused him to blush.
Of the fourth group of theories of causation, interventionist theories, James Woodward’s approach (2003) is a prime example. Woodward suggests that causation is best characterized by appeal to intervention. Consider the following example: Testing a drug for efficacy consists in finding out whether a group of people who are administered the drug are cured while a group that does not receive the drug remains uncured. In other words, drug testers intervene by giving the drug to some patients and a placebo to others. If the drug intervention leads to recovery while the placebo intervention does not, the drug is said to be causally relevant for the recovery.
Woodward places further constraints on interventions, one of which is that the intervention (of administering the drug or the placebo, respectively) must be performed in such a way that other potential influences are absent. For example, if the drug were given to healthy and young patients while only the elderly and frail receive the placebo, the test might falsely attribute causal efficacy to the drug.
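Put roughly in the interventionist idiom (a simplified paraphrase for purposes of illustration, not Woodward’s own exact formulation): X is causally relevant to Y if and only if there is a possible intervention that changes the value of X and thereby changes the value of Y (or the probability distribution over Y’s values), while all other variables that could influence Y independently of X are held fixed. In the drug trial, administering the drug or the placebo is the intervention, and the controlled setup is what holds the other influences fixed.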
Even when these precautions are taken, Woodward’s theory is at risk of being circular: the analysis presupposes that we understand beforehand what it means to intervene on a system. Intervention, however, is itself a causal notion. Woodward has clarified that his theory is meant to explicate and illuminate our concept of causation, not to reduce causation to other phenomena.
It seems that all theories of causation face difficulties (either in the form of recalcitrant exemplary cases or in that they do not capture certain features of causation). One possible conclusion to draw from this is that causation is not one unified phenomenon but at least two and potentially many more. For example, Ned Hall (2004) argues that our intuitions characterize causation both as production and as counterfactual dependence, and that the problems of analyses of causation can be traced back to the attempt to squeeze both into a single unified concept.
The debates over the nature of dispositions, modality, laws of nature, and causation are still ongoing. Many promising approaches have been proposed in their course and will continue to be explored in the future. (For a detailed account of the relation between the debates surrounding dispositions, counterfactuals, laws of nature, and causation in Metaphysics of Science, see Schrenk (2017).)
e. Natural Kinds
In everyday contexts we habitually classify objects or group them together. Some of these groupings seem more natural to us than others. Philosophers who believe that nature comes with her very own classifications speak of “natural kinds.” For example, samples of gold closely resemble each other, differ clearly from other chemical elements, and share a common microstructure, whereas sea life comprises organisms of very different sorts (including crustaceans, fish, and mammals). Terms like “sea life” and “tile-cleaning fluid” are convenient for human purposes such as thinking and talking about groups of things, but we do not expect them to reflect the structure of the natural world (which does not mean that the classifications they introduce are entirely arbitrary). Natural kinds, on the other hand, supposedly “carve nature at the joints” (Plato’s Phaedrus 265d–266a). They are also highly projectible: we can inductively infer from the behavior of one object to that of all objects of the same natural kind.
If natural kinds exist and contribute to the structuring of the world, then ideally we want the sciences to discover what natural kinds there are. A natural kind enthusiast may claim that physics tells us that electrons and quarks exist, chemistry says that there are chemical elements like gold (Au) and compounds like water (H2O), and biology seems to suggest that organisms are ordered hierarchically along the lines of family, genus, and species. However, there are also conventionalists who believe that so-called natural kinds are not independent of the minds, theories, and ambitions of human beings, or that no way of dividing up the world is inherently better than any other. To illustrate their claims, they remind us that the concept of biological species used to be regarded as a prime example of a natural kind, but that, in the meantime, various competing species concepts (based on the morphology, interbreeding capacities, or shared ancestry of organisms) have been proposed, each leading to a different system of classification.
If natural kinds exist in nature, then what are they? What makes a natural kind the kind it is? Different ideas have been proposed and have given rise to a multitude of questions: Do objects which belong to natural kinds share at least some properties? Are these special, “natural” properties? Are natural kinds determined by the roles they play in inductive inferences or laws of nature? Is there a hierarchy of natural kinds, such that some kinds are more fundamental than others?
A position that has been particularly influential in the 20th century is the view that natural kinds have essences. It supposedly follows from Hilary Putnam’s Twin Earth thought experiment (1975). Suppose there is a planet just like Earth in every way, but there is a liquid that the inhabitants of Twin Earth call “water” and which resembles water in every respect except for its microstructure, which is not H2O, but XYZ. Intuitively, Putnam claims, XYZ is not water, which leads him to assume that, unlike the superficial properties of being wet, potable, and so on, being H2O is a necessary condition for being water. Similar conclusions can be drawn from Saul Kripke’s argument that if we were to find out that the color we have up to now associated with elemental gold is actually an illusion, we would all agree that gold remains gold so long as it has atomic number 79, no matter what color it is (1980). (Kripke and Putnam’s primary aim is to show that the meaning of the terms “water” and “gold” comes not from our concepts but is determined by the structure of the world. Hence, it must be acquired a posteriori.)
Linked to but distinct from the question of what natural kinds are is the question of whether natural kinds form an ontological category in their own right, or if they can be reduced to other existents like properties. Realists regarding natural kinds believe that talk of natural kinds and successful inductive inferences presuppose the existence of natural kinds in nature. Reductionists, on the other hand, may argue that membership in natural kinds is not only determined by a number of shared properties, but also that it consists in nothing over and above having these properties.
Unsurprisingly, metaphysicians of science are especially interested in finding out which, if any, natural kinds are postulated or discovered by the various branches of science and whether they really qualify as natural kinds by the standards of contemporary metaphysical theories, or whether the theories of natural kinds need to be revised.
f. Reduction, Emergence, Supervenience, and Grounding
The world consists of many different things. Philosophers have always dreamed of rendering it more orderly by systematizing it in just the right way. An important step towards doing so is to analyze the relationships and dependencies between things which belong to different strata or levels of reality. The world apparently comes structured in levels, with things on higher levels somehow depending on the things on lower levels. For example, a factory consists of machines, conveyor belts, and so forth; machines are made of various interacting cogs, levers, and wires (which, if left to themselves, cannot fulfill the functions they fulfill within the machine); the cogs are made out of molecules, the molecules are made of atoms, and the atoms are made of protons, neutrons, electrons, and so on. Dependencies like these are studied by the various special sciences. (Note that the idea that science suggests that the world comes structured in levels has been contested by some philosophers (Ladyman et al. 2007, 178).) It is clear, however, that a factory is not composed of machines, conveyor belts, and so on in the same way that an atom consists of particles. Surveying the whole of science, Metaphysics of Science strives to account for the various ways higher level objects depend on lower level entities. The aim is not just to establish what depends on what, but also to clarify and explicate the nature of the dependencies. The kinds of relations most fervently discussed in Metaphysics of Science include reduction, emergence, supervenience, and grounding.
Reduction is often conceived of as a two-place, asymmetrical relationship to the effect that one thing is somehow made of, accounted for by, or explained in terms of another thing. Typically, the reduced thing is conceived of as somehow less fundamental or less real, or even considered to be eliminated. Two types of reduction are relevant to Metaphysics of Science. First, there is reduction of one theory to another. For example, is it possible to express some theories of chemistry in terms of physical theories? If so, can all chemical theories be thus reduced? What about biological, psychological, and sociological theories? Second, reduction is sought between different sorts of entities or ontological categories such as phenomena, events, processes, and so on. Potential candidates include reduction of macro-level objects to molecules, atoms, and subatomic particles, reduction of properties to sets of objects which resemble one another, reduction of states of affairs to entities and properties (including relations), and reduction of the mental to the physical. The latter especially has been widely discussed in metaphysics. (Note that the first and second kind of reduction cohere: if reduction of one theory to another succeeds, then ontological reduction of the entities postulated by the former to the entities mentioned in the latter may thereby also be achieved.) For Metaphysics of Science, claims of reduction pertaining to entities postulated by the sciences are of great interest, as are claims regarding reductive relationships between theories and their key concepts.
In a way, then, an armchair is reducible to its constituent parts: the fabric, upholstery, wood, and metal springs. However, an armchair is obviously not the same as a random pile of these materials. Unsurprisingly, philosophers disagree over whether, for particular cases, complete reduction can be achieved or not. For example, how could Bach’s Brandenburg Concerto No. 6 be reduced to its physical properties? Sure, a particular performance depends on the physical movements of the musicians and on how the created soundwaves causally impact on the hearers’ eardrums, but the Concerto is not identical to these physical properties apparent in any given performance of it, as it exists independently of them.
Those who argue that such reductions do not succeed often speak of the irreducible as emergent from the underlying basis. They want to point out that although there is a dependence of the higher on the lower level, the higher level adds something novel and can thus not be completely reduced to the lower level. An emergent property or phenomenon cannot be accounted for by reduction, because it is believed not to be a property of any of the component parts, and it is not obviously caused solely by their interplay. For example, whether or not you find abortion morally reprehensible does not seem to depend on the physical facts. Given the same situation, somebody else might pass the opposite moral judgment. Whereas such moral considerations are of no great professional import to the metaphysician of science, emergent properties in the sciences are. For example, biology still struggles to explain why higher forms of life have certain properties like consciousness, aspirations, and phenomenal experiences, which are not obviously properties of the underlying matter.
Reduction and emergence are interlevel relations. The most innocent, weakest dependence relation that is compatible with both reduction and emergence is called supervenience. Some thing A (the so-called supervenient set) is said to supervene on some other thing B (the so-called supervenience base) if and only if there can be no difference in A without there also being a difference in B—or, for short, if there is no A-difference without a B-difference. For example, an oil painting’s macro-properties (A)—what it depicts and how it looks to us—supervene on its microphysical properties (B): unless the location, intensity, or color of the paint blotches is changed, the painting will always look the same to us. To better understand the world, metaphysicians of science investigate purported supervenience relations in the sciences.
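The slogan can be made somewhat more precise as follows (one schematic rendering among several; the literature distinguishes, for example, weak, strong, and global variants of supervenience, which differ in detail):

A supervenes on B if and only if, necessarily, any two things (or worlds) that are exactly alike with respect to B are also exactly alike with respect to A.

In other words, fixing all the B-facts thereby fixes all the A-facts, which is just another way of saying that there can be no A-difference without a B-difference.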
In the early 21st century, metaphysicians turned their attention to another sort of interlevel relation: grounding relations. Grounding relations are metaphysical relations which establish a special sort of (noncausal) priority of one relatum over the other. Of two propositions or facts which are related by a grounding relation, one is taken to ground, or account for, the other. Grounding is stronger than supervenience, as it amounts not just to the claim that some A-facts only vary when B-facts vary—which may occur coincidentally—but that A-facts vary because B-facts vary. Unlike some forms of reduction, grounding does not seek to eliminate the grounded fact; attributing full existence to both of them, it merely ascribes a more fundamental status to the grounding fact.
Debates over grounding revolve around a number of pivotal questions, such as whether instances of grounding are all of the same kind or whether they embody a number of different relations (which fall under the larger category of grounding relations), whether the grounding relation is primitive or can be analyzed in terms of other relations, and whether it is an irreflexive, asymmetric, and transitive relation or if other properties should be ascribed to it. The answers to these questions may also have an effect on how we should conceive of interlevel relations in the sciences, and the latter are of great interest to metaphysicians of science.
g. Space and Time
To most philosophers interested in the field, Metaphysics of Science is not confined to discussing concepts that pervade the whole of science (as, arguably, law of nature and causation do). It is also concerned with metaphysical questions that arise with respect to the particular sciences, like “What is life?” (biology) or “What is the ontological status of cultures, governments, and money?” (sociology). The philosophy of physics, too, gives rise to many interesting metaphysical questions. Among them are questions regarding the nature of space and time, which have been debated since the dawn of Western philosophy and which, in the light of modern-day physics, remain at issue in philosophical debates.
As humans, we perceive space and time as different phenomena with differing properties. Space, as we perceive it, extends in three dimensions, and we can (almost) freely move in any direction. Through the physical forces which act upon our bodies, we are capable of detecting some sorts of motion through space (like when we run or jump) but not others (like Earth’s rotation). Time, on the other hand, has a sort of directedness to it (commonly referred to as “the flow of time”). We cannot linger at a particular moment in time, and we cannot go back to previous times. Entities somehow change yet persist through time.
Metaphysicians of science are interested in these phenomena especially in the light of Albert Einstein’s theories of Special and General Relativity. These theories were proposed in order to make sense of the fact that the speed of light was measured to remain constant regardless of the motion of its source or of the observer, whereas the velocities of ordinary objects depend on the motion of the object relative to an observer. For example, the speed of a train measured by a stationary observer on the platform is greater than its relative speed with respect to another, slower train that moves in the same direction. The speed of light emitted by a lamp on the train, however, will be the same regardless of whether it is measured by a passenger or a bystander. In popular interpretations, Einstein’s theory of Special Relativity suggests that the problem can be solved by postulating that the three spatial dimensions and the one temporal dimension form a single four-dimensional continuum called space-time. An astonishing consequence is this: Different observers are in motion with respect to different objects. Their perception of the present is determined by which information is accessible to them, which in turn is a matter of which light signals reach them at a given moment. Therefore, their individual present, past, and future differ according to their state of motion with respect to other objects. Thus, an objective, observer-independent division of events into past, present, and future does not exist. This view is often referred to as the block universe view, because all events seem to simply exist conjointly, with no objectively distinguished past, present, or future. Some philosophers also suggest that, on this view, familiar material things are three-dimensional slices of four-dimensional objects (sometimes called “space-time worms”).
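One way to see how the constancy of the speed of light is compatible with the train example is the relativistic formula for the addition of velocities (a standard textbook result of Special Relativity, cited here purely for illustration). If the train moves at velocity v relative to the platform and something moves at velocity u relative to the train (in the same direction), then its velocity relative to the platform is

u′ = (u + v) / (1 + uv/c²)

where c is the speed of light. For everyday speeds the term uv/c² is negligible, so velocities add up almost exactly as we intuitively expect; but if u = c, as for the light emitted by the lamp on the train, then u′ = (c + v)/(1 + v/c) = c, no matter how fast the train moves.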
Some philosophers claim that the block universe view is incompatible with presentism (the philosophical position that holds that only what is present exists) and supports eternalism (the view that all events past, present, and future exist). Unfortunately, the latter does not seem to correspond to our subjective experience of time. This poses a genuine dilemma for metaphysicians: should we accept Einstein’s theories and dismiss our subjective experiences, or do we need to reinterpret the remarkably well-corroborated theories to accommodate our everyday conceptions of space and time?
More such fascinating questions remain. How are the (perceived) directedness of time and its irreversibility (which manifests itself in the increase of entropy) best explained? Are space and time finite or infinite? Do they exist fundamentally and independently of the objects in them, or does their existence hinge on the existence of those objects? Quite obviously, these are questions on which scientific theories have a bearing, and Metaphysics of Science works towards solutions that are both philosophically rewarding and scientifically tenable.
5. The Methodology of Metaphysics of Science
Although Metaphysics of Science is concerned with the key concepts that figure prominently in science, its methods are not predominantly those of the sciences. Apart from referencing scientific results and practices, Metaphysics of Science has a number of argumentative tools at its disposal that do not usually play an explicit role in scientific methodology but are not entirely unscientific either. In science these forms of argument are implicitly employed to establish hypotheses when the empirical evidence is insufficient (for example, because two theories are equally well supported by the available evidence). Unlike many scientific theories, metaphysical claims often cannot be tested experimentally at all—not because we lack the technological means to do so, but because the very nature of these claims defies empirical confirmation or falsification. Think, for example, of the claim that laws of nature hold across all possible worlds. This is why reference to theoretical virtues, Inferences to the Best Explanation, arguments from indispensability and serviceability, extensional adequacy, and the Canberra Plan method are of great argumentative importance in Metaphysics of Science.
Note that some philosophers—for example, proponents of naturalized metaphysics (as mentioned in section 2b)—may reject all or some of these methodological tools as transcendental or indefensibly a priori. However, the issue is not currently settled among philosophers, and the tools described below remain widely used in contemporary Metaphysics of Science.
a. Theoretical Virtues
In both science and metaphysics, we strive for internally consistent, comprehensive, unambiguous theories which cohere with our accepted beliefs, have an adequately large scope, and so on. Among the various desiderata, explanatory power and simplicity are often accorded a central role. To strive for an explanatorily powerful theory is to demand that a theory must explain a certain number of phenomena which stand in need of explanation, that it does so thoroughly and systematically, and that it is not ad hoc. The value of explanatory power is obvious: explanation (or at the very least, systematization) is the very purpose of any hypothesis. Not so with simplicity. There are many ways a theory can be simpler than its competitors; for example, it may contain fewer variables than another. Usually, the call for simplicity is understood in terms of parsimony. Occam’s Razor, a principle frequently appealed to in this context, says that entities must not be multiplied beyond necessity—that is, if faced with otherwise equally good theories (in terms of their explanatory power, for example), we are to prefer the one that postulates fewer (kinds of) entities. However, it is unclear whether simplicity and the other theoretical virtues are truth-conducive or whether they are primarily pragmatic or aesthetic virtues (which means, for example, that simplicity is preferable because it is easier to work with simple theories or because they are somehow more agreeable).
Although theory choice criteria are certainly at work in everyday reasoning, philosophy, and science—remember that nobody wants a complicated, inconsistent, unclear, shallow, or incomprehensive theory—the application of such criteria is not straightforward: they must be measured and traded off against each other. Unfortunately, there are no shared standards or guidelines on how this should be done. How do we find out which of two theories is simpler or more consistent with the body of already accepted beliefs? How do we know which criterion trumps another? What is more, whereas in science theory choice criteria are interim solutions until a theory can be empirically tested, there is usually no such post hoc test in Metaphysics of Science. For all these reasons, justifying our appeal to theoretical virtues is not a trivial or easy task.
b. Inference to the Best Explanation
Once it has been determined through careful assessment of the theoretical virtues which available theory is the best explanation for a given phenomenon, we tend to infer that it must also be the correct explanation. In most cases, we will then also say that the entities (objects, fields, structures) postulated in the explanatory theory really exist. That is, we apply a so-called Inference to the Best Explanation (often referred to as “IBE”). For example, astronomers found that the best explanation for observed deviations in the orbit of Uranus is the existence of another planet, Neptune, whose gravity perturbs Uranus’ trajectory. Thus, they inferred that Neptune must exist. This hypothesis was confirmed when Neptune was later discovered through telescopes. Similarly, many metaphysicians of science believe that IBE can be applied to metaphysical theories. For example, Nancy Cartwright believes that the best explanation for the fact that laboratory results produced in controlled, sterile settings can be applied to the messy circumstances of the outside world is the existence of underlying dispositions that are examined in the laboratory but also pervade the rest of the world, and she therefore accepts this view as true (Cartwright 1992, 47–8).
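Schematically, an IBE runs roughly as follows (a common textbook rendering, not a formalization attributable to any particular author discussed here):

(1) A phenomenon E is observed.
(2) Hypothesis H would, if true, explain E better than any available rival hypothesis.
(3) Therefore, H is (probably, or at least tentatively) true.

In the astronomical example, E is the observed deviation of Uranus from its predicted orbit and H is the hypothesis that a further planet exists.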
Quite obviously, IBEs are not deductively valid, and even the best explanations we have at our disposal can later turn out to be incorrect. For example, when astronomers sought to explain anomalies in the orbit of Mercury, they failed to find Vulcan, a planet postulated explicitly for this purpose, and the anomalies were later explained with the help of the General Theory of Relativity.
Note also that Occam’s Razor and IBEs sometimes pull in opposite directions: whereas IBEs often enrich, rather than reduce, our ontology, Occam’s Razor urges us to keep our ontology as sparse as possible. On the other hand, one of the marks of a good explanation is that it does not postulate more than is necessary; that is, it is parsimonious in the sense of Occam’s Razor. Either way, even if metaphysicians can agree on using theoretical virtues and IBEs as argumentative tools, there is still room for debate.
c. Indispensability and Serviceability Arguments
In addition to IBEs, metaphysicians appeal to further inferential arguments to the effect that we should accept certain hypotheses as true. More specifically, indispensability and serviceability arguments basically consist in claiming that if X plays a crucial role with respect to Y, and if Y is either uncontroversial or relates to some postulate that we are unwilling to let go, then the existence of X can (or must) be asserted—that is, we should believe that X exists for the sake of Y.
One reason for accepting the existence of an entity X may be that its existence is indispensable for the existence of Y; that is, Y cannot be the case unless X exists. For example, some metaphysicians argue that the existence of mathematical entities is indispensable for science, and as science is important and probably at least approximately true, we have every reason to believe in the existence of numbers (as Platonic objects, say). Very roughly put, indispensability arguments infer from the premise that X is indispensable for Y and the premise that Y is the case to the conclusion that X exists.
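Put schematically (a rough rendering of the inference just described):

(1) X is indispensable for Y; that is, Y cannot be the case (or cannot be adequately stated, explained, or practiced) unless X exists.
(2) Y is the case (or we are committed to Y).
(3) Therefore, X exists (or we ought to believe that X exists).

In the mathematical example, X stands for mathematical entities such as numbers and Y for the success or approximate truth of our best scientific theories.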
(An older variant of the argument from indispensability is the so-called transcendental argument, which usually runs like this: if X is a necessary condition for the possibility of Y, and if we believe that Y is the case, we should also hold that X exists.)
Serviceability arguments are weaker than indispensability arguments. They advise us to accept the existence of a (kind of) entity X if X is serviceable towards end Y. For example, David K. Lewis argues that the assumption that possible worlds are concrete objects (just as our actual world) is highly serviceable (1986, 3): among other things, it provides us with the means to spell out the semantics of counterfactual conditionals. However, there may be other ways of accounting for the truth conditions of counterfactuals (for example, by referring to complete descriptions of fictitious possible worlds instead). Whereas indispensability offers a strong argument for the existence of some sort of entity, serviceability allows for contenders. Different kinds of entities may serve equally well to implement a goal, and serviceability arguments alone may not suffice to determine which of these entities we should believe in.
The evaluation of indispensability and serviceability arguments depends on what you already believe and what goals you pursue (as represented by variable Y). At best, they yield conditional existence claims: if you believe that science is successful and that science would not be successful if it were not for the existence of mathematical entities, then you had better believe in the existence of mathematical entities. If you do not believe that science is successful, then the argument is moot. Awareness of the occurrences of these kinds of arguments within debates in Metaphysics of Science will certainly help you understand your opponent, but it will seldom suffice to settle the issue.
d. Extensional Adequacy and the Canberra Plan
One particularly useful tool in evaluating metaphysical hypotheses is the test for extensional adequacy. To test a theory for extensional adequacy means to examine cases that, according to pretheoretical, intuitive judgment, fall under a concept the theory aims to explicate and to check whether the theory indeed subsumes these cases as instances of the concept. In addition, the theory may be tested with regard to scenarios in which its concepts should intuitively not apply; if the theory (wrongly) applies, it may have to be corrected. For example, suppose someone proposes a metaphysical theory as to what a law of nature is by claiming that a law of nature is nothing but a general statement of the form “All things which have property F also have property G.” This theory will quickly be challenged: “All pigs can fly” is a general statement, but, intuitively, it is not a law of nature, because it is clearly false. Whereas the sentence matches the alleged criterion for lawhood, it is intuitively not a law and thus a counterexample to the proposed analysis of lawhood.
Tests of extensional adequacy presuppose judgments regarding the extension of the concept in question; that is, they presuppose having a strong intuition about which entities or phenomena fall under it or are denoted by it. Preconceptions and intuitions as to what a concept denotes can diverge, however. They may be products of the culture we live in or the way we speak, and professional philosophers’ intuitions may well differ from the preconceptions of the folk.
Understanding a concept is not merely a matter of knowing what it denotes. Usually, concepts also carry meanings, or intensions. The so-called Canberra Plan is a complex two-step method for clarifying both the correct extension and intension of concepts. In other words, the Canberra Plan first seeks to fix the meaning of a concept (intension) by describing the role that instances of the concept have to fulfill and then, second, strives to identify its actual fulfillers (extension). It was proposed by philosophers associated with the Research School of Social Sciences in Canberra (most notably Frank Jackson and David K. Lewis). First, a concept’s use in everyday, scientific, and philosophical contexts is analyzed by collecting all sorts of platitudes about it. A platitude can be anything we say or believe about the concept. For example, regarding causation, we might believe that causes always precede their effects, that nothing causes itself, and so on. By systematizing the platitudes, the Canberra Planners determine which roles the referents of the concept are usually expected to fulfill. In the second step, they then search for referents, that is, entities or phenomena in the world that match these roles. For our example of causation, the transfer of energy could be proposed as such a role player. Because scientific theories are elaborate attempts at describing the world and because Canberra Planners are generally inclined to believe that scientific theories are at least approximately true (that is, they are scientific realists), particular attention is given to the postulates of the sciences. Depending on whether the second step is successful, we may find out the real extension of the concept in question—or we may have to concede that it has no basis in reality and should be discarded. However, note that there are multiple ways of systematizing platitudes and evaluating scientific theories, and hence the outcome may vary.
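One common way of regimenting the two steps borrows from the Ramsey-Lewis method of defining theoretical terms, with which the Canberra Plan is closely associated (the following is only a rough schematic, not a full statement of the method):

Step 1 (intension): collect the platitudes involving the concept C into a single theory T(C); the C-role is then whatever the theory says its occupant must do, and C can be treated as shorthand for “the x such that T(x).”
Step 2 (extension): consult our best account of the world, especially the sciences, to find out what, if anything, actually satisfies T(x); if something does (the transfer of energy, say, in the case of causation), that is the concept’s referent; if nothing comes close enough, the concept may have to be discarded.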
Apparently, whichever method(s) we employ, there will always be ways to question our claims in Metaphysics of Science (and in philosophy generally). Apart from the proponents of a radical naturalization of metaphysics, philosophers tend to see this not as a fatal flaw but simply as a characteristic feature which is grounded in the very nature of the discipline. The fact that Metaphysics of Science knows no ultimately decisive method but draws on many different tools that may result in different outcomes is not necessarily a bad thing: these tools may just be the best we have to answer questions that we cannot avoid asking, and there may nonetheless be progress in the form of ever more precise, extensionally adequate theories. At the very least, they allow us to map the field of possible views within Metaphysics of Science.
6. References and Further Reading
Armstrong, D. M. 1983. What Is a Law of Nature? Cambridge: Cambridge University Press.
Argues that laws of nature are necessitation relations between universals.
Barcan Marcus, R. 1946. “A Functional Calculus of First Order Based on Strict Implication.” Journal of Symbolic Logic 11: 1–16.
Barcan Marcus, R. 1967. “Essentialism in Modal Logic.” Noûs 1: 91–96.
Both seminal texts by Barcan Marcus lay the groundwork for formal modal logic and afford later developments like Kripke’s and Putnam’s ideas on direct designation, rigid designation, and essence.
Bird, A. 2007. Nature’s Metaphysics. Oxford: Oxford University Press.
Develops a dispositional essentialist account of laws of nature according to which laws are grounded in dispositions and turn out to be metaphysically necessary.
Carnap, R. 1936. “Testability and Meaning.” Philosophy of Science 3: 419–471 and 4: 1–40.
Discusses the simple conditional analysis and proposes the reduction sentences analysis of dispositionality.
Carnap, R. 1947. Meaning and Necessity. Chicago: University of Chicago Press.
Historically relevant work on the semantics of natural and formal languages which lays the foundations for modal logic.
Carrier, M. 2007. “Wege der Wissenschaftsphilosophie im 20. Jahrhundert.” In Wissenschaftstheorie: Ein Studienbuch, edited by A. Bartels and M. Stöckler, 15–44. Paderborn: Mentis.
Brief historical introduction to 20th century philosophy of science (in German).
Cartwright, N. 1992. “Aristotelian Natures and the Modern Experimental Method.” In Inference, Explanation, and other Frustrations, edited by J. Earman, 44–70. Berkeley: University of California Press.
Argues that one cannot make sense of modern experimental method unless one assumes that laws are basically about capacities/dispositions.
Chisholm, R. 1946. “The Contrary-to-Fact Conditional.” Mind 55: 289–307.
An early attempt at analyzing counterfactual conditionals.
Cooper, J. M., ed. 1997. Plato: Complete Works. Indianapolis: Hackett.
Collection of English translations of works ascribed to Plato with helpful footnotes and introductory information.
Dowe, P. 1992. “Wesley Salmon’s Process Theory of Causality and the Conserved Quantity Theory.” Philosophy of Science 59: 195–216.
Criticizes Salmon’s process theory of causality and suggests that a causal theory based on conserved physical quantities should replace it.
Dretske, F. 1977. “Laws of Nature.” Philosophy of Science 44: 248–268.
Argues that laws of nature are relations between universals.
Ellis, B. 2001. Scientific Essentialism. Cambridge: Cambridge University Press.
Defends the view that the fundamental laws of nature depend on the essential properties of the things on which they are said to operate and that they are metaphysically necessary.
Feynman, R. 1967. The Character of Physical Law. Cambridge: MIT Press.
A series of lectures discussing several physical laws and analysing their common features, with a focus on mathematical features.
Fine, K. 1994. “Essence and Modality.” Philosophical Perspectives 8: 1–16.
Criticizes the idea that essence is a special case of metaphysical necessity (and argues that it actually is the other way around) and discusses the relationship between essence and definition.
Göhner, J. F., K. Engelhard, and M. Schrenk. 2018. Special Issue: Metaphysics: New Perspectives on Analytic and Naturalised Metaphysics of Science. Journal for General Philosophy of Science 49: 159–241.
Addresses various aspects regarding the relationship between metaphysics and science, with a focus on the questions which metaphysical lessons we should learn from linguistics and the social sciences and whether mainstream metaphysical research programmes can have any positive impact on science.
Goodman, N. 1947. “The Problem of Counterfactual Conditionals.” Journal of Philosophy 44: 113–128.
Examines the problems that face analyses of counterfactual conditionals and attempts a partial definition of counterfactual truth.
Goodman, N. 1955. Fact, Fiction, and Forecast. Cambridge: Harvard University Press.
Introduces the “new riddle of induction” (grue-problem) and explores the concepts of counterfactual truth and lawhood in order to develop a theory of projection which resolves it.
Hall, N. 2004. “Two Concepts of Causation.” In Causation and Counterfactuals, edited by J. Collins, N. Hall, and L. A. Paul, 225–276. Cambridge: MIT Press.
Argues that there are two distinct concepts of causation, one of which is best analyzed in terms of dependence, the other in terms of production.
Husserl, E. 1970. The Crisis of European Sciences and Transcendental Phenomenology. Evanston: Northwestern University Press.
Unfinished classical text in phenomenology originally published in German in 1936, which bemoans the fact that modern science is oblivious to the life-world of humans.
Kistler, M. 2006. Causation and Laws of Nature. Oxford: Routledge.
Develops and applies a transfer theory of causation.
Kripke, S. 1963. “Semantical Considerations on Modal Logic.” Acta Philosophica Fennica 16: 83–94.
Gives an exposition of some features of a semantical theory of modal logics.
Kripke, S. 1980. Naming and Necessity. Oxford: Blackwell.
Argues that the meaning of names is not determined by descriptions and that natural kind terms rigidly designate (that is, that they designate the same natural kind across all possible worlds), thus allowing for a posteriori necessities.
Ladyman, J., D. Ross, D. Spurrett, and J. Collier. 2007. Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.
Argues for a naturalization of metaphysics by criticizing contemporary analytic metaphysics and develops a scientifically informed structuralist realist metaphysics.
Lange, M. 2009. Laws and Lawmakers: Science, Metaphysics, and the Laws of Nature. Oxford: Oxford University Press.
Instead of saying that laws support counterfactuals, Lange proposes to reverse the order and say that laws are those generalities that are stable or invariant under counterfactual perturbations.
Lewis, D. K. 1973a. Counterfactuals. Oxford: Blackwell.
An account of counterfactual conditionals in terms of modal realism. Introduces the Best Systems Account of laws of nature.
Lewis, D. K. 1973b. “Causation.” Journal of Philosophy 70: 556–567.
Proposes and modifies the counterfactual account of causation in terms of counterfactual dependence.
Lewis, D. K. 1986. On the Plurality of Worlds. Oxford: Blackwell.
Defends modal realism, which is the view that the actual world is only one of many possible worlds all of which exist, on the basis that it is highly serviceable in solving longstanding philosophical problems.
Mackie, J. L. 1965. “Causes and Conditions.” American Philosophical Quarterly 2: 245–264.
Proposes the INUS account of causation.
Martin, C. B. 1994. “Dispositions and Conditionals.” The Philosophical Quarterly 44: 1–8.
Introduces finkish dispositions as a problem for counterfactual analyses of dispositions.
Maudlin, T. 2007. The Metaphysics within Physics. Oxford: Oxford University Press.
Argues that lawhood is irreducible but can account for causation, counterfactuals, and dispositionality.
Mumford, S. and R. L. Anjum. 2011. Getting Causes from Powers. Oxford: Oxford University Press.
The authors develop not only a theory of causation based on powers, but also offer a detailed analysis of causal powers themselves.
Mumford, S. and M. Tugby. 2013. “What is the Metaphysics of Science?” In Metaphysics and Science, edited by S. Mumford and M. Tugby, 3–26. Oxford: Oxford University Press.
Introduction to a collection of state-of-the-art papers on core issues in Metaphysics of Science.
Paul, L. A. 2012. “Metaphysics as Modeling: The Handmaiden’s Tale.” Philosophical Studies 160: 1–29.
Claims that science and metaphysics of science differ with respect to their respective subject matter, but that there is no categorical difference in method, as both construct theories by building models.
Putnam, H. 1975. “The Meaning of ‘Meaning.’” Minnesota Studies in the Philosophy of Science 7: 131–193.
Argues for semantic externalism (the claim that the meaning and extension of a term are not determined by the psychological state the speaker is in, but at least partly by external factors) using the Twin Earth thought experiment.
Quine, W. V. O. 1948. “On What There Is.” In From A Logical Point of View, 1953, 1–19. Cambridge: Harvard University Press.
Proposes that ontological commitments can be read off statements or scientific theories by formalizing them in predicate logic and identifying bound variables.
Quine, W. V. O. 1951. “Two Dogmas of Empiricism.” In From A Logical Point of View, 1953, 20–46. Cambridge: Harvard University Press.
The two dogmas Quine argues against are: (i) that there is a clear distinction between analytically true and synthetically true sentences, and, (ii), that each meaningful sentence faces the tribunal of sense experience on its own for its verification or falsification (rather than holistically in concert with other sentences).
Roberts, J. 2008. The Law-Governed Universe. Oxford: Oxford University Press.
Introduces the measurability account of laws of nature, which states that lawhood is a role that propositions play rather than a property of facts and that laws guarantee the reliability of methods of measuring natural quantities.
Salmon, W. 1984. Scientific Explanation and the Causal Structure of the World. Princeton: Princeton University Press.
Develops a causal/mechanical account of explanation which incorporates the idea that causation is best considered a process.
Salmon, W. 1994. “Causality without Counterfactuals.” Philosophy of Science 61: 297–312.
Agrees with Dowe’s improvement of Salmon’s 1984 theory and also proposes a transfer or conserved quantity theory of causation.
Scholz, O. R. 2018. “Induktive Metaphysik – Ein vergessenes Kapitel der Metaphysikgeschichte.” In Philosophische Sprache zwischen Tradition und Innovation, edited by D. Hommen and D. Sölch. Frankfurt am Main: Peter Lang.
Describes and analyses the historical programme of inductive metaphysics which developed simultaneously with Logical Empiricism (in German).
Schrenk, M. 2017. Metaphysics of Science: A Systematic and Historical Introduction. London: Routledge.
Comprehensive, easily accessible systematic and historical introduction to Metaphysics of Science including the topics of dispositions, counterfactuals, laws of nature, causation, and dispositional essentialism, as well as information on the origins and methodology of Metaphysics of Science.
Schurz, G. 2016. “Patterns of Abductive Inference.” In Springer Handbook of Model-Based Science, edited by L. Magnani and T. Bertolotti, 151–174. New York: Springer.
Analyses the structure of abductive inferences and recommends that metaphysics should make use of such inferences.
Stalnaker, R. 1968. “A Theory of Conditionals.” American Philosophical Quarterly 2: 98–112.
Uses possible worlds semantics to analyze counterfactual conditionals without a commitment to possible worlds realism.
Strawson, P.F. 1959. Individuals: An Essay in Descriptive Metaphysics. New York: Routledge.
Distinguishes between descriptive and revisionary metaphysics and examines the relationship between our language and our habit of conceiving of the world in terms of individuals (particulars and persons).
Tahko, T.E. 2015. An Introduction to Metametaphysics. Cambridge: Cambridge University Press.
Comprehensive and easily accessible introduction to 20th century and current debates about the methodology and epistemology of metaphysics.
Tooley, M. 1977. “The Nature of Laws.” Canadian Journal of Philosophy 7: 667–698.
Argues that the relations between universals are truth-makers for laws of nature.
Williamson, T. 2016. “Abductive Philosophy.” Philosophical Forum 47 (3–4): 263–280.
Recommends both ampliative inferences such as abductions (or, nearly synonymous, inferences to the best explanation) and model-building as valuable methodologies not only for the sciences but also for philosophy and metaphysics.
Woodward, J. 1992. “Realism about Laws.” Erkenntnis 36: 181–218.
Defends the view that the notion of lawfulness is linked to the notion of invariance rather than the notion of necessary connection.
Woodward, J. 2003. Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press.
Proposes an interventionist theory of causation that analyses causation by appealing to the notion of intervention or manipulation.
Author Information
Julia F. Göhner
Heinrich Heine University
Dusseldorf, Germany
and
Markus Schrenk
Email: [email protected]
Heinrich Heine University
Dusseldorf, Germany