Category Archives: Metaphysics

Events and processes

I’ve been reading up on Lewis on causation, and in particular on the account of events he uses. The big thing that his metaphysics of events delivers is a way of getting rid of spurious causal dependence. I say hello, in the course of saying hello abruptly and loudly. I go for a walk, in the course of myself and my girlfriend going for a walk. These patterns of events arise not because one event causes the other, but because one event is part of the other. Pairs of events like these can stand in relations of counterfactual dependence. Suppose I say hello abruptly and loudly. Had I not said hello, then I wouldn’t have said hello abruptly and loudly. So Lewis says: causal dependence between events is not just a matter of counterfactual dependence: it’s counterfactual dependence between distinct events (events that don’t share a part).

When you dig into his metaphysics of events, you see that two notions of parthood are in play. One is broadly logical: event E is a logical part of event F if E occurring in region R entails that F occurs in region R.

A second notion of parthood he uses is spatio-temporal: event E is an st-part of event F if, necessarily, if F occurs in region R, then E occurs in some subregion of R. Saying hello abruptly is an l-part of saying hello; my walking is an st-part of myself and my girlfriend walking.

But this still doesn’t cover all cases. Consider Trafalgar and the Napoleonic Wars. Intuitively, that battle is part of the wars (and not caused by the wars, though caused by their earlier parts). But it’s not an l-part, since the region in which the wars occur is more extensive than the region in which the battle occurs. And it’s not an st-part, since the wars could have been completed before Trafalgar happened. So Lewis defines up an “accidental” variant of spatio-temporal parthood between occurrent events: E is an a-part of F iff E and F are occurrent, and there’s an occurrent l-part x of E which is an st-part of some occurrent y which is an l-part of F. I take it the idea is that as well as the Napoleonic Wars, there’s another event, the Napoleonic-Wars-as-they-happened, that is an l-part of the Napoleonic Wars; and there’s also Trafalgar-as-it-happened, which is an l-part of Trafalgar. And the latter is an st-part of the former; hence, derivatively, Trafalgar is an a-part of the wars.

(Some notes on the interrelation of these notions: if E and F are occurrent, and E is an l-part of F, then it’s an a-part of F (take x=E, y=E). And if E is an st-part of F, then it’s an a-part of F (take x=E, y=F). Rather weirdly, note that when E is an l-part of F, then wherever E occurs, F occurs in an (improper) subregion. Hence F is an st-part of E. And so by the above, if they’re occurrent, we’ll have F an a-part of E. That is, when E is an l-part of F, and both are occurrent, then E and F are a-parts of each other (though of course they may still be distinct).)
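These definitions are compact enough to sanity-check on a toy model. Here’s a minimal sketch (my own illustrative encoding, not Lewis’s formalism): worlds are labels, regions are frozensets of spacetime points, and an event is a map from each world where it occurs to the region in which it there occurs. The hello/hello-abruptly events below are purely illustrative.

```python
def l_part(E, F):
    # E is an l-part of F: necessarily, if E occurs in region R, F occurs in R.
    return all(w in F and F[w] == R for w, R in E.items())

def st_part(E, F):
    # E is an st-part of F: necessarily, if F occurs in region R,
    # E occurs in some (possibly improper) subregion of R.
    return all(w in E and E[w] <= R for w, R in F.items())

def a_part(E, F, events, actual):
    # E is an a-part of F: both are occurrent, and some occurrent l-part x
    # of E is an st-part of some occurrent y which is an l-part of F.
    if actual not in E or actual not in F:
        return False
    occ = [x for x in events if actual in x]
    return any(l_part(x, E) and st_part(x, y) and l_part(y, F)
               for x in occ for y in occ)

# Saying hello abruptly occurs only in w1; saying hello occurs in w1 and w2,
# in the same region as the abrupt saying where both occur.
R = frozenset({1, 2})
hello_abruptly = {'w1': R}
hello = {'w1': R, 'w2': frozenset({3})}
events = [hello, hello_abruptly]

l_part(hello_abruptly, hello)                 # True
st_part(hello, hello_abruptly)                # True: the "weird" reversal
a_part(hello, hello_abruptly, events, 'w1')   # True: mutual a-parthood
```

Running the checks confirms the notes above: the l-part relation reverses into an st-part relation, and the two occurrent events come out as a-parts of each other.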

Lewis’s requirement that events be “distinct” in order to be candidates for causing one another is that they don’t share a common part in any of these senses.

Lewis notes several times that this would be way too strong a constraint if we allowed events with very rich essences—I’m interested in what this tells us about what sorts of events we can think are hanging around.

Ok: so here is my puzzle. Here’s a first shot—an objection which is plausible but mistaken. Right now, a ball drops, and hits the floor. Consider the conjunctive event or “process” of the ball dropping and hitting the floor. Now (here comes the fallacy) doesn’t this event imply that the ball drops? And so doesn’t that mean the process is an l-part of the ball dropping, and likewise of the ball hitting the floor? But if so, then these two events wouldn’t be distinct, and so couldn’t stand in causal relations. It would be impossible to have a conjunctive process, whose constituents were causally interrelated.

That worried me for a bit but I reckon it’s not a problem. Necessarily, the region *in* which the dropping-and-hitting-the-floor occurs is a region *within* which the dropping occurs; but it’s not a region *in* which the dropping occurs. “In” is like exact location; an event is then *within* any region that includes a region it is “in”. But it’s only when every region in which the first occurs is a region *in* which the second occurs that we have implication, or l-parthood. What we have here is just st-parthood, running in the direction you’d have imagined—from constituents to process rather than vice versa.

So that exact puzzle isn’t an objection to Lewis; but I suspect he’s escaped on a technicality, and the underlying trouble with processes will re-arise if we tweak the example. Lewis allows for colocated events—and allows that they may stand in causal relations. He contemplates a battle of invisible goblins having causal influence on the progress of the AAP conference with which it’s colocated. More seriously, he thinks the presence of an electron in an electric field might cause its acceleration. But the location of the electron, and its acceleration, are colocated events. In examples of this kind, we really are in trouble if we allow for the conjunctive “process”—the electron-being-so-located-and-accelerating. For necessarily, wherever we have that process in a given region, we have the acceleration *in that region*. So the process is an l-part of the acceleration. Likewise for the locatedness of the electron. But then the two events share a part, and are not distinct—so they couldn’t cause one another!

The trouble for Lewis will arise if we both allow (i) cause and effect to be located in the same region; and (ii) the existence of a “process” encompassing both cause and effect. Lewis says he wants to allow (i); and denying the existence of conjunctive events/processes in (ii) looks unprincipled if we allow them in parallel cases (where the ball drops to the floor). So I conclude there’s pressure on Lewis to rule out conjunctive events/processes across the board.

Nominalizing statistical mechanics

Frank Arntzenius gave the departmental seminar here at Leeds the other day. Given I’ve been spending quite a bit of time just recently thinking about the Fieldian nominalist project, it was really interesting to hear about his updating and extension of the technical side of the nominalist programme (he’s working on extending it to differential geometry, gauge theories and the like).

One thing I’ve been wondering about is how theories like statistical mechanics fit into the nominalist programme. These were raised as a problem for Field in one of the early reviews (by Malament). There are a couple of interesting papers recently out in Philosophia Mathematica on this topic, by Glen Meyer and by Mark Colyvan and Aidan Lyon. Now, one of the assumptions, as far as I can tell, is that even sticking with the classical, Newtonian framework, the Field programme is incomplete, because it fails to “nominalize” statistical mechanical reasoning (in particular, stuff naturally represented by measures over phase space).

Now one thing that I’ll mention just to set aside is that some of this discussion would look rather different if we increased our nominalistic ontology. Suppose that reality, Lewis-style, contains a plurality of concrete, nominalistic space-times—at least one for each point in phase space (that’ll work as an interpretation of phase space, right?). Then the project of postulating synthetic qualitative-probability structure over such worlds, from which a representation theorem for the quantitative probabilities of statistical mechanics could be derived, looks far easier. Maybe it’s still technically or philosophically problematic. Just a couple of thoughts on this. From the technical side, it’s probably not enough to show that the probabilities can be represented nominalistically—we want to show how to capture the relevant laws. And it’s not clear to me what a nominalistic formulation of something like the past hypothesis looks like (BTW, I’m working with something like the David Albert picture of stat mechanics here). Philosophically, what I’ve described looks like a nominalistic version of primitive propensities, and there are various worries about treating probability in this primitive way (e.g. why should information about such facts constrain credences in the distinctive way information about chance seems to?). I doubt Field would want to go in for this sort of ontological inflation in any case, but it’d be worth working through it as a case study.

Another idea I won’t pursue is the following: Field in the 80s was perfectly happy to take a (logical) modality as primitive. From this, and nominalistic formulations of Newtonian laws, presumably a nomic modality could be defined. Now, it’s one thing to have a modality, another thing to earn the right to talk of possible worlds (or physical relations between them). But given that phase space looks so much like the space of nomically possible worlds (or time-slices thereof), it would be odd not to look carefully at whether we can use nomic modalities to help us out.

But even setting these kind of resources aside, I wonder what the rules of the game are here. Field’s programme really has two aspects. The first is the idea that there’s some “core” nominalistic science, C. And the second claim is that mathematics, and standard mathematized science, is conservative over C. Now, if the core was null, the conservativeness claim would be trivial, but nobody would be impressed by the project! But Field emphasizes on a number of occasions that the conservativeness claim is not terribly hard to establish, for a powerful block of applied mathematics (things that can be modelled in ZFCU, essentially).

(Actually, things are more delicate than you’d think from Science without Numbers, as emerged in the JPhil exchange between Shapiro and Field. The upshot, I take it, is that if (a) we’re allowed second-order logic in the nominalistic core; or (b) we can argue that the best-justified mathematized theories aren’t quite the usual versions, but systematically weakened versions; then the conservativeness results go through.)

As far as I can tell, we can have the conservativeness result without a representation theorem. Indeed, for the case of arithmetic (as opposed to geometry and Newtonian gravitational theory) Field relies on conservativeness without giving anything like a representation theorem. I think, therefore, that there’s a heel-digging response to all this open to Field. He could say that phase-space theories are all very fine, but they’re just part of the mathematized superstructure—there’s nothing in the core which they “represent”, nor do we need there to be.

Now, maybe this is deeply misguided. But I’d like to figure out exactly why. I can think of two worries: one based on loss of explanatory power; the other on the constraint to explain applicability.

Explanations. One possibility is that nominalistic science without statistical mechanics is a worse theory than mathematized science including phase space formulations—in a sense relevant to the indispensability argument. But we have to treat this carefully. Clearly, there are all sorts of ways in which mathematized science is more tractable than nominalized science—that’s Field’s explanation for why we indulge in the former in the first place. One objective of the Colyvan and Lyon article cited earlier is to give examples of the explanatory power of stat mechanical explanations, so that’s one place to start looking.

Here’s one thought about that. It’s not clear that the sort of explanations we get from statistical mechanics, cool though they may be, are of a relevantly similar kind to the “explanations” given in classical mechanics. So one idea would be to try to pin down this difference (if there is one) and figure out how it relates to the “goodness” relevant to indispensability arguments.

Applicability. The second thought is that the “mere conservativeness” line is appropriate either where the applicability of the relevant area of mathematics is unproblematic (as perhaps in arithmetic) or where there aren’t any applications to explain (the higher reaches of pure set theory). In other cases—like geometry—there is a prima facie challenge to tell a story about how claims about abstracta can tell us stuff about the world we live in. And representation theorems scratch this itch, since they show in detail how particular this-worldly structures can exactly call for a representation in terms of abstracta (so in some sense the abstracta are “coding for” purely nominalistic processes—“intrinsic processes” in Field’s terminology). Lots of people unsympathetic to nominalism are sympathetic to representation theorems as an account of the application of mathematics—or so the folklore says.

But, on the one hand, statistical mechanics does appear to feature in explanations of macro-phenomena; and on the other, the reason that talking about measures over some abstract “space” can be relevant to explaining facts about ripples on a pond is at least as unobvious as the applications of geometry.

I don’t have a very incisive way to end this post. But here’s one thought I have if the real worry is one of accounting for applicability, rather than explanatory power. Why think that in these cases applicability should be explained via representation theorems? In the case of geometry, Newtonian mechanics etc., it’s intuitively appealing to think there are nominalistic relations that our mathematized theories are encoding. Even if one is a platonist, that seems like an attractive part of a story about the applicability of the relevant theories. But when one looks at statistical mechanics, is there any sense that its applicability would be explained if we found a way to “code” within Newtonian space-time all the various points of phase space (and then postulate relations between the codings)? It seems like this is the wrong sort of story to be giving here. That thought goes back, I guess, to the point raised earlier in the “modal realist” version: even if we had the resources, would primitive nominalistic structure over some reconstruction of phase space really give us an attractive story about the applicability of statistical mechanical probabilities?

But if representation theorems don’t look like the right kind of story, what is? Can the Lewis-style “best theory theory” of chance, applied to the stat mechanical case (as Barry Loewer has suggested), be wheeled in here? Can the Fieldian nominalist just appeal to (i) conservativeness and (ii) the Lewisian account of how the probability-invoking theory and laws get fixed by the patterns of nominalistic facts in a single classical space? Questions, questions…

Error theories and Revolutions

I’ve been thinking about Hartry Field’s nominalist programme recently. In connection with this (and a draft of a paper I’ve been preparing for the Nottingham metaphysics conference) I’ve been thinking about parallels between the error theories that threaten if ontology is sparse (e.g. nominalistic, or van Inwagenian); and scientific revolutions.

One (Moorean) thought is that we are better justified in our commonsense beliefs (e.g. “I have hands”) than we could be in any philosophical premises incompatible with them. So we should always regard “arguments against the existence of hands” as reductios of the premises that entail that one has no hands. This thought, I take it, extends to commonsense claims about the number of hands I possess. Something similar might be formulated in terms of the comparative strength of justification for (mathematicized) science as against the philosophical premises that motivate its replacement.

So presented, Field (for one) has a response: he argues in several places that we precisely lack good justification for believing in the existence of numbers. He simply rejects the premise of this argument.

A better presentation of the worry focuses, not on the relative justification for one’s beliefs, but on the conditions under which it is rational to change one’s beliefs. I presently have a vast array of beliefs that, according to Field, are simply false.

Forget issues of relative justification. It’s simply that the belief state I would have to be in to consistently accept Field’s view is very distant from my own—it’s not clear whether I’m even psychologically capable of genuinely disbelieving that if there are exactly two things in front of me, then the number of things in front of me is two. (If you don’t feel the pressure in this particular case, consider the suggestion that no macroscopic objects exist—then pretty much all of your existing substantive beliefs are false). Given my starting set of beliefs, it’s hard to see how speculative philosophical considerations could make it rational to change my views so much.

Here’s one way of trying to put some flesh on this general worry. In order to assess an empirical theory, we need to measure it against the relevant phenomena, to establish the theory’s predictive and explanatory power. But what do we take these phenomena to be? A very natural thought is that they include platitudinous statements about the positions of pointers on dials, statements about how experiments were conducted, and whatever is described by records of careful observation. But Field’s theory says that the content of numerical records of experimental data will be false; as will be claims such as “the data points approximate an exponential function”. On a van Inwagenian ontology, there are no pointers, and experimental reports will be pretty much universally false (at least on an error-theoretic reading of his position). Sure, each theorist has a view on how to reinterpret what’s going on. But why should we allow them to skew the evidence to suit their theory? Surely, given what we reasonably take the evidence to be, we should count their theories as disastrously unsuccessful?

But this criticism is based on certain epistemological presuppositions, and these can be disputed. Indeed, Field in the introduction to Realism, Mathematics and Modality (preemptively) argues that the specific worries just outlined are misguided. He points to cases he thinks analogous, where scientific evidence has forced a radical change in view. He argues that when a serious alternative to our existing system of beliefs (and rules for belief-formation) is suggested to us, it is rational to (a) bracket relevant existing beliefs and (b) consider the two rival theories on their individual merits, adopting whichever one regards as the better theory. The revolutionary theory is not necessarily measured against what we believe the data to be, but against what the revolutionary theory says the data is. Field thinks, for example, that in the grip of a geocentric model of the universe, we should treat “the sun moves in absolute upward motion in the morning” as data. However, even for those in the grip of that model, when the heliocentric model is proposed, it’s rational to measure its success against the heliocentric take on what the proper data is (which, of course, will not describe sunrises in terms of absolute upward motion). Notice that on this model, there is effectively no “conservative influence” constraining belief-change—since when evaluating new theories, one’s prior opinions on relevant matters are bracketed.

If this is the right account of (one form of) belief change, then the version of the Moorean challenge sketched above falls flat (maybe others would do better). Note that for this strategy to work, it doesn’t matter that philosophical evidence is more shaky than scientific evidence which induces revolutionary changes in view—Field can agree that the cases are disanalogous in terms of the weight of evidence supporting revolution. The case of scientific revolutions is meant to motivate the adoption of a certain epistemology of belief revision. This general epistemology, in application to the philosophy of mathematics, tells us we need not worry about the massive conflicts with existing beliefs that so concerned the Mooreans.

On the other hand, the epistemology that Field sketches is contentious. It’s certainly not obvious that the responsible thing to do is to measure revisionary theory T against T’s take on the data, rather than against one’s best judgement about what the data is. Why bracket what one takes to be true, when assessing new theories? Even if we do want to make room for such bracketing, it is questionable whether it is responsible to pitch us into such a contest whenever someone suggests some prima facie coherent revolutionary alternative. A moderated form of the proposal would require there to be extant reasons for dissatisfaction with current theory (a “crisis in normal science”) in order to make the kind of radical reappraisal appropriate. If that’s right, it’s certainly not clear whether distinctively philosophical worries of the kind Field raises should count as creating crisis conditions in the relevant sense. Scientific revolutions and philosophical error theories might reasonably be thought to be epistemically disanalogous in a way unhelpful to Field.

Two final notes. It is important to note what kind of objection a Moorean would put forward. It doesn’t engage in any way with the first-order case that Field constructs for his error-theoretic conclusion. If substantiated, the result will be that it would not be rational for me (and people like me) to come to believe the error-theoretic position.

The second note is that we might save the Fieldian ontology without having to say contentious stuff in epistemology, by pursuing reconciliation strategies. Hermeneutic fictionalism—for example in Steve Yablo’s figuralist version—is one such. If we never really believed that the number of peeps was twelve, but only pretended this to be so, then there’s no prima facie barrier from “belief revision” considerations that prevents us from explicitly adopting a nominalist ontology. Another reconciliation strategy is to do some work in the philosophy of language to make the case that “there are numbers” can be literally true, even if Field is right about the constituents of reality. (There are a number of ways of cashing out that thought, from traditional Quinean strategies, to the sort of stuff on varying ontological commitments I’ve been working on recently).

In any case, I’d be really interested in people’s take on the initial tension here—and particularly on how to think about rational belief change when confronted with radically revisionary theories—pointers to the literature/state of the art on this stuff would be gratefully received!

Counting delineations

I presented my paper on indeterminacy and conditionals in Konstanz a few days ago. The basic question that paper poses is: if we are highly confident that a conditional is indeterminate, what sorts of confidence in the conditional itself are open to us?

Now, one treatment I’ve been interested in for a while is “degree supervaluationism”. The idea, from the point of view of the semantics, is to replace appeal to a single intended interpretation (with truth=truth at that interpretation) or set of “intended interpretations” (with truth=truth at all of them) with a measure over the set of interpretations (with truth to degree d = being true at exactly measure d of the interpretations). A natural suggestion, given that setting, is that if you know (/are certain) S is true to measure d, then your confidence in S should be d.

I’d been thinking of degree-supervaluationism in this sense, and the more standard set-of-intended-interpretations supervaluationism, as distinct options. But (thanks to Tim Williamson) I realize now that there may be an intermediate option.

Suppose that S = “the number 6 is bleh”. And we know that linguistic conventions settle that numbers <5 are bleh, and numbers >7 are not bleh. The available delineations of “bleh”, among the integers, are ones where the first non-bleh number is 5, 6, 7 or 8. These will count as the “intended interpretations” for a standard supervaluational treatment, so “6 is bleh” will be indeterminate—in this context, neither true nor false.

I’ve discussed in the past several things we could say about rational confidence in this supervaluational setting. But one (descriptive) option I haven’t thought much about is to say that you should proportion your confidence to the number of delineations on which “6 is bleh” comes out true. In the present case, our confidence that 6 is bleh should be 0.5, our confidence that 5 is bleh should come out 0.75, and our confidence that 7 is bleh should come out 0.25.
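The counting recipe can be checked mechanically. Here’s a quick sketch, with the delineations keyed by the first non-bleh number, as in the example above:

```python
# The four admissible delineations, identified by the first non-bleh number.
delineations = [5, 6, 7, 8]

def credence(n):
    # "n is bleh" is true on a delineation iff n falls below that cutoff;
    # credence = proportion of delineations on which the sentence is true.
    true_on = [d for d in delineations if n < d]
    return len(true_on) / len(delineations)

credence(6)   # 0.5  (true on the 7- and 8-cutoff delineations)
credence(7)   # 0.25 (true only on the 8-cutoff delineation)
```

Note that “5 is bleh” comes out true on three of the four delineations, so on this recipe it gets credence 0.75.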

Notice that this *isn’t* the same as degree-supervaluationism. For that just required some measure or other over the space of interpretations. And even if that measure was non-zero only on interpretations which place the first non-bleh number in 5–8, there are many options available. E.g. we might have a measure that assigns 0.9 to the interpretation which makes 5 the first non-bleh number, and distributes the remaining 0.1 over the others. In other words, the degree-supervaluationist needn’t think that the measure is a measure *of the number of delineations*. I usually think of it (in the finite case), intuitively, as a measure of the “degree of intendedness” of each interpretation. In a sense, the degree-supervaluationists I was thinking of conceive of the measure as telling us to what extent usage and eligibility and other subvening facts favour one interpretation or another. But the kind of supervaluationists we’re now considering won’t buy into that at all.
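For contrast, here’s a sketch of a degree-supervaluationist measure of the skewed kind just mentioned. The equal split of the remaining 0.1 over the other three delineations is my own assumption, made just so the measure sums to 1:

```python
# A non-uniform measure over delineations (keyed by first non-bleh number):
# 0.9 on the 5-cutoff interpretation, the remaining 0.1 split equally.
measure = {5: 0.9, 6: 0.1 / 3, 7: 0.1 / 3, 8: 0.1 / 3}

def degree_true(n):
    # Degree of truth of "n is bleh" = total measure of the delineations
    # on which n falls below the cutoff.
    return sum(m for d, m in measure.items() if n < d)
```

On this measure, “6 is bleh” is true only to degree 0.1/3 + 0.1/3 ≈ 0.067, quite unlike the 0.5 the counting recipe delivers—which is the contrast in the text.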

I should mention that even if, descriptively, it’s clear what the proposal here is, it’s less clear how the count-the-delineations supervaluationists would go about justifying the rule for assigning credences that I’m suggesting for them. Maybe the idea is that we should seek some kind of compromise between the credences that would be rational if we took D to be the unique intended interpretation, for each D in our set of “intended interpretations” (see this really interesting discussion of compromise for a model of what we might say—the bits at the end on mushy credence are particularly relevant). And there’ll be some oddities that this kind of theorist will have to adopt—e.g. for a range of cases, they’ll be assigning significant credence to sentences of the form “S and S isn’t true”. I find that odd, but I don’t think it blows the proposal out of the water.

Where might this be useful? Well, suppose you believe in B-theoretic branching time, and are going to “supervaluate” over the various future branches (so “there will be a sea-battle” will have a truth-value gap, since it is true on some branches but not all). (This approach originates with Thomason, and is still present, with tweaks, in recent relativistic semantics for branching time.) “Branches” play the role of “interpretations” in this setting. I’ve argued in previous work that this kind of indeterminacy about branching futures leads to trouble on certain natural “rejectionist” readings of what our attitudes to known indeterminate p should be. But a count-the-branches proposal seems pretty promising here. The idea is that we should proportion our credence in p to the *number* of branches on which p is true.

Of course, there are complicated issues here. Maybe there are just two qualitative possibilities for the future, R and S. We know R has a 2/3 chance of obtaining, and S a 1/3 chance of obtaining. In the B-theoretic branching setting, an R-branch will exist, and an S-branch will exist. Now, one model of the metaphysics at this point is that we don’t allow qualitatively duplicate future branches: so there are just two future branches in existence, the R one and the S one. On a count-the-branches recipe, we’ll get the result that we should have 1/2 credence that R will obtain. But that conflicts with what the instruction to proportion our credences to the known chances would give us. Maybe R is primitively attached to a “weight” of 2/3—but our count-the-branches recipe didn’t say anything about that.

An alternative is that we multiply indiscernible futures. Maybe there are two indiscernible R-futures, and only one S-future. Then apportioning the credences in the way mentioned won’t get us into trouble. And in general, if we think that whenever the chance (at moment m) that p is k, the proportion of p-futures among all the futures is k, then we’ll have a recipe that coheres nicely with the principal principle.
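The multiplying move can be put as a one-line recipe: postulate p-futures in proportion to the known chance of p, then count branches. A minimal sketch, using the 2/3–1/3 numbers from the example above:

```python
from fractions import Fraction

# Two indiscernible R-futures and one S-future, matching chances 2/3 and 1/3.
branches = ['R', 'R', 'S']

def branch_credence(p):
    # Count-the-branches: credence in p = proportion of branches where p holds.
    return Fraction(sum(1 for b in branches if b == p), len(branches))
```

With duplicates allowed, `branch_credence('R')` is 2/3, so the counting recipe and the instruction to match known chances no longer conflict; with the duplicate-free list `['R', 'S']` they would.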

Let me be clear that I’m not suggesting that we identify chances with numbers-of-branches. Nor am I suggesting that we’ve got some easy route here for justifying the principal principle. The only thing I want to say is that *if* we’ve got a certain match between chances and numbers of future branches, then two recipes for assigning credences won’t conflict.

(I emphasized earlier that count-the-precisifications supervaluationism had less flexibility than degree-supervaluationism, where the relevant measure was unconstrained by counting considerations. In a sense, what the above little discussion highlights is that when we move from “interpretations” to “branches” as the locus of supervaluational indeterminacy, this difference in flexibility evaporates. For in the case where that role is played by actually existing futures, there’s at least the possibility of multiplying qualitatively indiscernible futures. That sort of maneuver has little place in the original, intended-interpretations setting, since presumably we’ve got an independent fix on what the interpretations are, and we can’t simply postulate that the world gives us intended interpretations in proportions that exactly match the credences we independently want to assign to the cases.)

Branching worlds

I’ve recently discovered some really interesting papers on how to think about belief in a future with branching time. Folks are interested in branching time as it (putatively) emerges out of “decoherence” in the Everett interpretation of standard Quantum mechanics.

The first paper linked to above is forthcoming in BJPS, by Simon Saunders and David Wallace. In it, they argue for a certain kind of parallel between the semantics for personal fission cases and the semantics most charitably applied to language users in branching time, and argue that this sheds light on the way that beliefs should behave.

Now, lots of clever people are obviously thinking about this, and I haven’t absorbed all the discussion yet. But since it’s really cool stuff, and since I’ve been thinking about related material recently (charity-based metasemantics, fission cases, semantics in branching time) I thought I’d sit down and figure out how things look from my point of view.

I’m sceptical, in fact, whether personal fission itself (and associated de se uncertainty about who one will be) will really help us out here in the way that Saunders and Wallace think. Set aside for now the question of whether, faced with a fission case, you should feel uncertain about which fission-product you will end up as (for discussion of that question, on the assumption that it’s indeterminate which of the Lewisian continuing persons is me, see the indeterminate survival paper I just posted up). But suppose that we do get some sense in which, when you’re about to fission, you have de se uncertainty about where you’ll be, even granted full knowledge of the de dicto facts.

The Saunders-Wallace idea is to try to generalize this de se ignorance as an explanation of the ignorance we’d have if we were placed in a branching universe, and knew what was to happen on every branch. We’d know all the de dicto truths about multiple futures—and we would literally be about to undergo fission, since I’d be causally related in the right kind of ways to multiple person stages in the different futures. So—they claim—ignorance of who I am maps onto ignorance of what I’m about to see next (whether I’m about to see the stuff in the left branch, or in the right). And that explains how we can get ignorance in a branching world, and so lays the groundwork for explaining how we can get a genuine notion of uncertainty/probability/degree of belief off the ground.

I’m a bit worried about the generality of the purported explanation. The basic thought is that to get a complete story about beliefs in branching universes, we’re going to need to justify degrees of belief in matters that happen, if at all, long after we would go out of existence. And so it just doesn’t seem likely that we’re going to get a complete story about uncertainty from consideration of uncertainty about which branch I myself am located within.

To dramatize, consider an instantaneous, omniscient agent. She knows all the de dicto truths about the world (in every future branch) and also exactly where she is located—so no de se ignorance either. But still, this agent might care about other things, and have a certain degree of belief as to whether, e.g., the sea-battle will happen in the future. The kind of degree of belief she has (and any associated “ignorance”) can’t, I think, be a matter of de se ignorance. And I think, for events that happen if at all in the far future, we’re relevantly like the instantaneous omniscient agent.

What else can we do? Well—very speculatively—I think there’s some prospect for using the sort of charity-based considerations David Wallace has pointed to in the literature for getting a direct, epistemic account of why we should adopt this or that degree of belief in borderline cases. The idea would be that we *minimize inaccuracy of our beliefs* by holding true sentences to exactly the right degrees.

A first caveat: this hangs on having the *right* kind of semantic theory in the background. A Thomason-style supervaluationist semantics for the branching future just won’t cut it, nor will MacFarlane-style relativistic tweaks. I think one way of generalizing the “multiple utterances” idea of Saunders and Wallace holds out some prospect of doing better—but best of all would be a degree-theoretic semantics.

A second caveat: what I’ve got (if anything) is epistemic reason for adopting certain kinds of graded attitude. It’s not clear to me that we have to think of these graded attitudes as a kind of uncertainty. And it’s not so clear why expected utility, as calculated from these attitudes, should be a guide to action. On the other hand, I don’t see clearly the argument that they *don’t* or *shouldn’t* have this pragmatic significance.

So I’ve written up a little note on some of these issues—the treatment of fission that Saunders-Wallace use, the worries about limitations to the de se defence, and some of the ideas about accuracy-based defences of graded beliefs in a branching world. It’s very drafty (far more so than anything I usually put up as work in progress). To some extent it seems like a big blog post, so I thought I’d link to it from here in that spirit. Comments very welcome!

Indeterminate survival: in draft

So, finally, I’ve got another draft prepared. This is a paper focussing on Bernard Williams’ concerns about how to think and feel about indeterminacy in questions of one’s own survival.

Suppose that you know there’s an individual in the future who’s going to get harmed. Should you invest a small amount of money to alleviate the harm? Should you feel anxious about the harm?

Well, obviously if you care about the guy (or just have a modicum of humanity) you probably should. But if it was *you* that was going to suffer the harm, there’d be a particularly distinctive frisson. From a prudential point of view, you’d be compelled to invest minor funds for great benefit. And you really should have that distinctive first-personal phenomenology associated with anxiety on your own behalf. Both of these de se attitudes seem important features of our mental life and evaluations.

The puzzle I take from Williams is: are the distinctively first-personal feelings and expectations appropriate in a case where you know that it’s indeterminate whether you survive as the individual who’s going to suffer?

Williams thought that by reflecting on such questions, we could get an argument against accounts of personal identity that land us with indeterminate cases of survival. I’d like to play the case in a different direction. It seems to me pretty unavoidable that we’ll end up favouring accounts of personal identity that allow for indeterminate cases. So if, when you combine such cases with this or that theory of indeterminacy, you end up saying silly things, I want to take that as a blow to that account of indeterminacy.

It’s not knock-down (what is in philosophy?) but I do think that we can get leverage in this way against rejectionist treatments of indeterminacy, at least as applied to these kinds of cases. Rejectionist treatments include those of folks who think that the characteristic attitude to borderline cases is primarily a rejection of the law of excluded middle; and (probably) those of folks who think that in such cases we should reject bivalence, even if LEM itself is retained.

In any case, this is definitely something I’m looking for feedback/comments on (particularly on the material on how to think about rational constraints on emotions, which is rather new territory for me). So thoughts very welcome!

Primitivism about indeterminacy: a worry

I’m quite tempted by the view that “it is indeterminate that” might be one of those fundamental, brute bits of machinery that go into constructing the world. Imagine, for example, you’re tempted by the thought that in a strong sense the future is “open”, or “unfixed”. Now, maybe one could parlay that into something epistemic (lack of knowledge of what the future is to be), or semantic (indecision over which of the existing branching futures is “the future”), or maybe mere non-existence of the future would capture some of this unfixity thought. But I doubt it. (For discussion of what the openness of the future looks like from this perspective, see Ross and Elizabeth’s forthcoming Phil Studies piece).

The open future is far from the only case you might consider—I go through a range of possible arenas in which one might be friendly to a distinctively metaphysical kind of indeterminacy in this paper—and I think treating “indeterminacy” as a perfectly natural bit of kit is an attractive way to develop that. And, if you’re interested in some further elaboration and defence of this primitivist conception see this piece by Elizabeth and myself—and see also Dave Barnett’s rather different take on a similar idea in a forthcoming piece in AJP (watch out for the terminological clashes–Barnett wants to contrast his view with that of “indeterminists”. I think this is just a different way of deploying the terminology.)

I think everyone should pay more attention to primitivism. It’s a kind of “null” response to the request for an account of indeterminacy—and it’s always interesting to see why the null response is unavailable. I think we’ll learn a lot about the compulsory questions that a theory of indeterminacy must answer from seeing what goes wrong when the theory of indeterminacy is as minimal as you can get.

But here I want to try to formulate a certain kind of objection to primitivism about indeterminacy. Something like this has been floating around in the literature—and in conversations!—for a while (Williamson and Field, in particular, are obvious sources for it). I also think the objection, if properly formulated, would get at something important that lies behind the reaction of people who claim *just not to understand* what a metaphysical conception of indeterminacy would be. (If people know of references where this kind of idea is dealt with explicitly, then I’d be really glad to know about them).

The starting assumption is: saying “it’s an indeterminate case” is a legitimate answer to the query “is that thing red?”. Contrast the following. If someone asks “is that thing red?” and I say “it’s contingent whether it’s red”, then I haven’t made a legitimate conversational move. The information I’ve given is simply irrelevant to its actual redness.

So it’s a datum that indeterminacy-answers are in some way relevant to redness (or whatever) questions. And it’s not just that “it is indeterminate whether it is red” has “it is red” buried within it—so does the contingency “answer”, and that is patently irrelevant.

So what sort of relevance does it have? Here’s a brief survey of some answers:

(1) Epistemicist. “It’s indeterminate whether p” has the sort of relevance that answering “I don’t know whether p” has. Obviously it’s not directly relevant to the question of whether p, but it at least expresses the inability to give a definitive answer.

(2) Rejectionist (like truth-value gap-ers, including certain supervaluationists, and LEM-deniers like Field and the intuitionists). Answering “it’s indeterminate” communicates information which, if accepted, should lead you to reject both p and not-p. So it’s clearly relevant, since it tells the inquirer what their attitudes to p itself should be.

(3) Degree theorist (whether degree-supervaluationist like Lewis, Edgington, or degree-functionalist like Smith, Machina, etc). Answering “it’s indeterminate” communicates something like the information that p is half-true. And, at least on suitable elaborations of degree theory, we’ll then know how to shape our credences in p itself: we should have credence 0.5 in p if we have credence 1 that p is half true.

(4) Clarification request. (maybe some contextualists?) “it’s indeterminate that p” conveys that somehow the question is ill-posed, or inappropriate. It’s a way of responding whereby we refuse to answer the question as posed, but invite a reformulation. So we’re asking the person who asked “is it red?” to refine their question to something like “is it scarlet?” or “is it reddish?” or “is it at least not blue?” or “does it have wavelength less than such-and-such?”.
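The degree-theoretic rule in (3) can be put as a little calculation: one’s credence in p should equal one’s expected degree of truth of p. Here’s a minimal sketch (my illustration, not anything from the papers cited; the `credence` function and the example distributions are hypothetical):

```python
def credence(dist):
    """Expected degree of truth of p.

    `dist` maps candidate truth degrees of p (numbers in [0, 1]) to
    one's credence that p has that degree of truth; the credence in
    p itself is the probability-weighted average of those degrees.
    """
    return sum(degree * prob for degree, prob in dist.items())

# The post's example: certain (credence 1) that p is half-true.
print(credence({0.5: 1.0}))  # 0.5

# A mixed case: credence 0.3 that p is fully true, 0.7 that p is half-true.
print(round(credence({1.0: 0.3, 0.5: 0.7}), 2))  # 0.65
```

On this way of putting it, the indeterminacy-answer is relevant because it pins down (or at least constrains) the appropriate credence in p itself, which is exactly the kind of descriptive story the primitivist is later challenged to supply.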

(For a while, I think, it was assumed that every serious account of indeterminacy would say that if p was indeterminate, one couldn’t know p (think of parallel discussion of “minimal” conceptions of vagueness—see Patrick Greenough’s Mind paper). If that were right, then (1) would be available to everybody. But I don’t think that that’s at all obvious—and in particular, I don’t think it’s obvious that the primitivist would endorse it, or, if they did, what grounds they would have for saying so).

There are two readings of the challenge we should pull apart. One is purely descriptive. What kind of relevance does indeterminacy have, on the primitivist view? The second is justificatory: why does it have that relevance? Both are relevant here, but the first is the most important. Consider the parallel case of chance. There we know what, descriptively, we want the relevance of “there’s a 20% chance that p” to be: someone learning this information should, ceteris paribus, fix their credence in p to 0.2. And there’s a real question about whether a metaphysical primitive account of chance can justify that story (that’s Lewis’s objection to a putative primitivist treatment of chance facts).

The justification challenge is important, and how exactly to formulate a reasonable challenge here will be a controversial matter. E.g. maybe route (4), above, might appeal to the primitivist. Fine—but why is that response the thing that indeterminacy-information should prompt? I can see the outlines of a story if e.g. we were contextualists. But I don’t see what the primitivist should say.

But the more pressing concern right now is that for the primitivist about indeterminacy, we don’t as yet have a helpful answer to the descriptive question. So we’re not even yet in a position to start engaging with the justificatory project. This is what I see as the source of some dissatisfaction with primitivism—the sense that as an account it somehow leaves something important unexplained. Until the theorist has told me something more, I’m at a loss about what to do with the information that p is indeterminate.

Furthermore, at least in certain applications, one’s options on the descriptive question are constrained. Suppose, for example, that you want to say that the future is indeterminate. But you want to allow that one can rationally have different credences for different future events. So I can be 50/50 on whether the sea battle is going to happen tomorrow, and almost certain I’m not about to quantum tunnel through the floor. Clearly, then, nothing like (2) or (3) is going on, where one can read off strong constraints on strength of belief in p from the information that p is indeterminate. (1) doesn’t look like a terribly good model either—especially if you think we can sometimes have knowledge of future facts.

So if you think that the future is primitively unfixed, indeterminate, etc—and friends of mine do—I think (a) you owe a response to the descriptive challenge; (b) then we can start asking about possible justifications for what you say; (c) your choices for (a) are very constrained.

I want to finish up by addressing one response to the kind of questions I’ve been pressing. I ask: what is the relevance of answering “it’s indeterminate” to first-order questions? How should I alter my beliefs in receipt of the information, what does it tell me about the world or the epistemic state of my informant?

You might be tempted to say that your informant communicates, minimally, that it’s at best indeterminate whether she knows that p. Or you might try claiming that in such circumstances it’s indeterminate whether you *should* believe p (i.e. there’s no fact of the matter as to how you should shape your credences on the question of whether p). Arguably, you can derive these from the determinate truth of certain principles (determinacy, truth as the norm of belief, etc) plus a bit of logic. Now, that sort of thing sounds like progress at first glance—even if it doesn’t lay down a recipe for shaping my beliefs, it does sound like it says something relevant to the question of what to do with the information. But I’m not sure that it really helps. After all, we could say exactly parallel things with the “contingency answer” to the redness question with which we began. Saying “it’s contingent that p” does entail that it’s at best contingent whether one knows that p, and at best contingent whether one should believe p. But that obviously doesn’t help vindicate contingency-answers to questions of whether p. So it seems that the kind of indeterminacy-involving elaborations just given, while they may be *true*, don’t really say all that much.