Category Archives: Science

Nominalizing statistical mechanics

Frank Arntzenius gave the departmental seminar here at Leeds the other day. Given that I’ve been spending quite a bit of time recently thinking about the Fieldian nominalist project, it was really interesting to hear about his updating and extension of the technical side of the nominalist programme (he’s working on extending it to differential geometry, gauge theories and the like).

One thing I’ve been wondering about is how theories like statistical mechanics fit into the nominalist programme. These were raised as a problem for Field in one of the early reviews (by Malament). There are a couple of interesting papers recently out in Philosophia Mathematica on this topic, by Glen Meyer and by Mark Colyvan and Aidan Lyon. Now, one of the assumptions, as far as I can tell, is that even sticking with the classical, Newtonian framework, the Field programme is incomplete, because it fails to “nominalize” statistical mechanical reasoning (in particular, the stuff naturally represented by measures over phase space).

Now, one thing that I’ll mention just to set aside is that some of this discussion would look rather different if we increased our nominalistic ontology. Suppose that reality, Lewis-style, contains a plurality of concrete, nominalistic space-times—at least one for each point in phase space (that’ll work as an interpretation of phase space, right?). Then the project of postulating synthetic, qualitative probability structure over such worlds, from which a representation theorem for the quantitative probabilities of statistical mechanics could be derived, looks far easier. Maybe it’s still technically or philosophically problematic. Just a couple of thoughts on this.

From the technical side, it’s probably not enough to show that the probabilities can be represented nominalistically—we want to show how to capture the relevant laws. And it’s not clear to me what a nominalistic formulation of something like the past hypothesis looks like (BTW, I’m working with something like the David Albert picture of statistical mechanics here). Philosophically, what I’ve described looks like a nominalistic version of primitive propensities, and there are various worries about treating probability in this primitive way (e.g. why should information about such facts constrain credences in the distinctive way information about chance seems to?). I doubt Field would want to go in for this sort of ontological inflation in any case, but it’d be worth working through it as a case study.

Another idea I won’t pursue is the following: Field in the ’80s was perfectly happy to take a (logical) modality as primitive. From this, and nominalistic formulations of Newtonian laws, presumably a nomic modality could be defined. Now, it’s one thing to have a modality, another thing to earn the right to talk of possible worlds (or physical relations between them). But given that phase space looks so much like the space of nomically possible worlds (or time-slices thereof), it would be odd not to look carefully at whether we can use nomic modalities to help us out.

But even setting these kinds of resources aside, I wonder what the rules of the game are here. Field’s programme really has two aspects. The first is the idea that there’s some “core” nominalistic science, C. The second is the claim that mathematics, and standard mathematized science, is conservative over C. Now, if the core were null, the conservativeness claim would be trivial, but nobody would be impressed by the project! But Field emphasizes on a number of occasions that the conservativeness claim is not terribly hard to establish for a powerful block of applied mathematics (things that can be modelled in ZFCU, essentially).

(Actually, things are more delicate than you’d think from Science without Numbers, as emerged in the JPhil exchange between Shapiro and Field. The upshot, I take it, is that if (a) we’re allowed second-order logic in the nominalistic core, or (b) we can argue that the best-justified mathematized theories aren’t quite the usual versions, but systematically weakened versions, then the conservativeness results go through.)

As far as I can tell, we can have the conservativeness result without a representation theorem. Indeed, for the case of arithmetic (as opposed to geometry and Newtonian gravitational theory) Field relies on conservativeness without giving anything like a representation theorem. I think, therefore, that there’s a heel-digging response to all this open to Field. He could say that phase-space theories are all very fine, but they’re just part of the mathematized superstructure—there’s nothing in the core which they “represent”, nor do we need there to be.

Now, maybe this is deeply misguided. But I’d like to figure out exactly why. I can think of two worries: one based on loss of explanatory power; the other on the constraint to explain applicability.

Explanations. One possibility is that nominalistic science without statistical mechanics is a worse theory than mathematized science including phase-space formulations—in a sense relevant to the indispensability argument. But we have to treat this carefully. Clearly, there are all sorts of ways in which mathematized science is more tractable than nominalized science—that’s Field’s explanation for why we indulge in the former in the first place. One objective of the Colyvan and Lyon article cited earlier is to give examples of the explanatory power of statistical-mechanical explanations, so that’s one place to start looking.

Here’s one thought about that. It’s not clear that the sort of explanations we get from statistical mechanics, cool though they may be, are of a relevantly similar kind to the “explanations” given in classical mechanics. So one idea would be to try to pin down this difference (if there is one) and figure out how it relates to the “goodness” relevant to indispensability arguments.

Applicability. The second thought is that the “mere conservativeness” line is appropriate either where the applicability of the relevant area of mathematics is unproblematic (as perhaps in arithmetic) or where there aren’t any applications to explain (the higher reaches of pure set theory). In other cases—like geometry—there is a prima facie challenge to tell a story about how claims about abstracta can tell us stuff about the world we live in. And representation theorems scratch this itch, since they show in detail how particular this-worldly structures can exactly call for a representation in terms of abstracta (so in some sense the abstracta are “coding for” purely nominalistic processes—“intrinsic processes” in Field’s terminology). Lots of people unsympathetic to nominalism are sympathetic to representation theorems as an account of the application of mathematics—or so the folklore says.

But, on the one hand, statistical mechanics does appear to feature in explanations of macro-phenomena; and on the other, the reason that talking about measures over some abstract “space” can be relevant to explaining facts about ripples on a pond is at least as unobvious as the applications of geometry.

I don’t have a very incisive way to end this post. But here’s one thought I have, if the real worry is one of accounting for applicability rather than explanatory power. Why think, in these cases, that applicability should be explained via representation theorems? In the case of geometry, Newtonian mechanics etc., it’s intuitively appealing to think there are nominalistic relations that our mathematized theories are encoding. Even if one is a platonist, that seems like an attractive part of a story about the applicability of the relevant theories. But when one looks at statistical mechanics, is there any sense that its applicability would be explained if we found a way to “code” within Newtonian space-time all the various points of phase space (and then postulated relations between the codings)? It seems like this is the wrong sort of story to be giving here. That thought goes back, I guess, to the point raised earlier in the “modal realist” version: even if we had the resources, would primitive nominalistic structure over some reconstruction of phase space really give us an attractive story about the applicability of statistical mechanical probabilities?

But if representation theorems don’t look like the right kind of story, what is? Can the Lewis-style “best theory theory” of chance, applied to the statistical mechanical case (as Barry Loewer has suggested), be wheeled in here? Can the Fieldian nominalist just appeal to (i) conservativeness and (ii) the Lewisian account of how the probability-invoking theory and laws get fixed by the patterns of nominalistic facts in a single classical space? Questions, questions…

Error theories and Revolutions

I’ve been thinking about Hartry Field’s nominalist programme recently. In connection with this (and a draft of a paper I’ve been preparing for the Nottingham metaphysics conference) I’ve been thinking about parallels between the error theories that threaten if ontology is sparse (e.g. nominalistic, or van Inwagenian); and scientific revolutions.

One (Moorean) thought is that we are better justified in our commonsense beliefs (e.g. “I have hands”) than we could be in any philosophical premises incompatible with them. So we should always regard “arguments against the existence of hands” as reductios of the premises that entail that one has no hands. This thought, I take it, extends to commonsense claims about the number of hands I possess. Something similar might be formulated in terms of the comparative strength of justification for (mathematicized) science as against the philosophical premises that motivate its replacement.

So presented, Field (for one) has a response: he argues in several places that we lack good justification for believing in the existence of numbers. He simply rejects the premise of this argument.

A better presentation of the worry focuses not on the relative justification for one’s beliefs, but on the conditions under which it is rational to change one’s beliefs. I presently have a vast array of beliefs that, according to Field, are simply false.

Forget issues of relative justification. It’s simply that the belief state I would have to be in to consistently accept Field’s view is very distant from my own—it’s not clear whether I’m even psychologically capable of genuinely disbelieving that if there are exactly two things in front of me, then the number of things in front of me is two. (If you don’t feel the pressure in this particular case, consider the suggestion that no macroscopic objects exist—then pretty much all of your existing substantive beliefs are false). Given my starting set of beliefs, it’s hard to see how speculative philosophical considerations could make it rational to change my views so much.

Here’s one way of trying to put some flesh on this general worry. In order to assess an empirical theory, we need to measure it against the relevant phenomena, to establish the theory’s predictive and explanatory power. But what do we take these phenomena to be? A very natural thought is that they include platitudinous statements about the positions of pointers on measuring instruments, statements about how experiments were conducted, and whatever is described by records of careful observation. But Field’s theory says that the content of numerical records of experimental data will be false, as will claims such as “the data points approximate an exponential function”. On a van Inwagenian ontology, there are no pointers, and experimental reports will be pretty much universally false (at least on an error-theoretic reading of his position). Sure, each theorist has a view on how to reinterpret what’s going on. But why should we allow them to skew the evidence to suit their theory? Surely, given what we reasonably take the evidence to be, we should count their theories as disastrously unsuccessful?

But this criticism is based on certain epistemological presuppositions, and these can be disputed. Indeed, Field in the introduction to Realism, Mathematics and Modality (preemptively) argues that the specific worries just outlined are misguided. He points to cases he thinks analogous, where scientific evidence has forced a radical change in view. He argues that when a serious alternative to our existing system of beliefs (and rules for belief-formation) is suggested to us, it is rational to (a) bracket relevant existing beliefs and (b) consider the two rival theories on their individual merits, adopting whichever one regards as the better theory. The revolutionary theory is not necessarily measured against what we believe the data to be, but against what the revolutionary theory says the data is. Field thinks, for example, that in the grip of a geocentric model of the universe, we should treat “the sun moves in absolute upward motion in the morning” as data. However, even for those in the grip of that model, when the heliocentric model is proposed, it’s rational to measure its success against the heliocentric take on what the proper data is (which, of course, will not describe sunrises in terms of absolute upward motion). Notice that on this model there is effectively no “conservative influence” constraining belief-change—since, when evaluating new theories, one’s prior opinions on relevant matters are bracketed.

If this is the right account of (one form of) belief change, then the version of the Moorean challenge sketched above falls flat (maybe others would do better). Note that for this strategy to work, it doesn’t matter that philosophical evidence is more shaky than scientific evidence which induces revolutionary changes in view—Field can agree that the cases are disanalogous in terms of the weight of evidence supporting revolution. The case of scientific revolutions is meant to motivate the adoption of a certain epistemology of belief revision. This general epistemology, in application to the philosophy of mathematics, tells us we need not worry about the massive conflicts with existing beliefs that so concerned the Mooreans.

On the other hand, the epistemology that Field sketches is contentious. It’s certainly not obvious that the responsible thing to do is to measure revisionary theory T against T’s take on the data, rather than against one’s best judgement about what the data is. Why bracket what one takes to be true, when assessing new theories? Even if we do want to make room for such bracketing, it is questionable whether it is responsible to pitch us into such a contest whenever someone suggests some prima facie coherent revolutionary alternative. A moderated form of the proposal would require there to be extant reasons for dissatisfaction with current theory (a “crisis in normal science”) in order to make the kind of radical reappraisal appropriate. If that’s right, it’s certainly not clear whether distinctively philosophical worries of the kind Field raises should count as creating crisis conditions in the relevant sense. Scientific revolutions and philosophical error theories might reasonably be thought to be epistemically disanalogous in a way unhelpful to Field.

Two final notes. First, it is important to note what kind of objection a Moorean would put forward. It doesn’t engage in any way with the first-order case that Field constructs for his error-theoretic conclusion. If substantiated, the result will be that it would not be rational for me (and people like me) to come to believe the error-theoretic position.

The second note is that we might save the Fieldian ontology without having to say contentious stuff in epistemology, by pursuing reconciliation strategies. Hermeneutic fictionalism—for example in Steve Yablo’s figuralist version—is one such. If we never really believed that the number of peeps was twelve, but only pretended this to be so, then there’s no prima facie barrier from “belief revision” considerations that prevents us from explicitly adopting a nominalist ontology. Another reconciliation strategy is to do some work in the philosophy of language to make the case that “there are numbers” can be literally true, even if Field is right about the constituents of reality. (There are a number of ways of cashing out that thought, from traditional Quinean strategies, to the sort of stuff on varying ontological commitments I’ve been working on recently).

In any case, I’d be really interested in people’s take on the initial tension here—and particularly on how to think about rational belief change when confronted with radically revisionary theories—pointers to the literature/state of the art on this stuff would be gratefully received!

Branching worlds

I’ve recently discovered some really interesting papers on how to think about belief in a future with branching time. Folks are interested in branching time as it (putatively) emerges out of “decoherence” in the Everett interpretation of standard quantum mechanics.

The first paper linked to above is forthcoming in BJPS, by Simon Saunders and David Wallace. In it, they argue for a certain kind of parallel between the semantics for personal fission cases and the semantics most charitably applied to language users in branching time, and argue that this sheds light on the way that beliefs should behave.

Now, lots of clever people are obviously thinking about this, and I haven’t absorbed all the discussion yet. But since it’s really cool stuff, and since I’ve been thinking about related material recently (charity-based metasemantics, fission cases, semantics in branching time) I thought I’d sit down and figure out how things look from my point of view.

I’m sceptical, in fact, about whether personal fission itself (and associated de se uncertainty about who one will be) will really help us out here in the way that Saunders and Wallace think. Set aside for now the question of whether, faced with a fission case, you should feel uncertain which fission-product you will end up as (for discussion of that question, on the assumption that it’s indeterminate which of the Lewisian continuing persons is me, see the indeterminate survival paper I just posted up). But suppose that we do get some sense in which, when you’re about to fission, you have de se uncertainty about where you’ll be, even granted full knowledge of the de dicto facts.

The Saunders-Wallace idea is to try to generalize this de se ignorance as an explanation of the ignorance we’d have if we were placed in a branching universe, and knew what was to happen on every branch. We’d know all the de dicto truths about multiple futures—and we would literally be about to undergo fission, since I’d be causally related in the right kind of ways to multiple person stages in the different futures. So—they claim—ignorance of who I am maps onto ignorance of what I’m about to see next (whether I’m about to see the stuff in the left branch, or in the right). And that explains how we can get ignorance in a branching world, and so lays the groundwork for explaining how we can get a genuine notion of uncertainty/probability/degree of belief off the ground.

I’m a bit worried about the generality of the purported explanation. The basic thought is that, to get a complete story about beliefs in branching universes, we’re going to need to justify degrees of belief in matters that happen, if at all, long after we would go out of existence. And so it just doesn’t seem likely that we’re going to get a complete story about uncertainty from consideration of uncertainty about which branch I myself am located within.

To dramatize, consider an instantaneous, omniscient agent. She knows all the de dicto truths about the world (in every future branch) and also exactly where she is located—so no de se ignorance either. But still, this agent might care about other things, and have a certain degree of belief as to whether, e.g., the sea-battle will happen in the future. The kind of degree of belief she has (and any associated “ignorance”) can’t, I think, be a matter of de se ignorance. And I think, for events that happen, if at all, in the far future, we’re relevantly like the instantaneous omniscient agent.

What else can we do? Well—very speculatively—I think there’s some prospect for using the sort of charity-based considerations David Wallace has pointed to in the literature to get a direct, epistemic account of why we should adopt this or that degree of belief in borderline cases. The idea would be that we *minimize the inaccuracy of our beliefs* by holding true sentences to exactly the right degrees.

A first caveat: this hangs on having the *right* kind of semantic theory in the background. A Thomason-style supervaluationist semantics for the branching future just won’t cut it, nor will MacFarlane-style relativistic tweaks. I think one way of generalizing the “multiple utterances” idea of Saunders and Wallace holds out some prospect of doing better—but best of all would be a degree-theoretic semantics.

A second caveat: what I’ve got (if anything) is epistemic reason for adopting certain kinds of graded attitude. It’s not clear to me that we have to think of these graded attitudes as a kind of uncertainty. And it’s not so clear why expected utility, as calculated from these attitudes, should be a guide to action. On the other hand, I don’t see clearly the argument that they *don’t* or *shouldn’t* have this pragmatic significance.

So I’ve written up a little note on some of these issues—the treatment of fission that Saunders and Wallace use, the worries about limitations to the de se defence, and some of the ideas about accuracy-based defences of graded beliefs in a branching world. It’s very drafty (far more so than anything I usually put up as work in progress). To some extent it seems like a big blog post, so I thought I’d link to it from here in that spirit. Comments very welcome!

Must, Might and Moore.

I’ve just been enjoying reading a paper by Thony Gillies. One thing that’s very striking is the dilemma he poses—quite generally—for “iffy” accounts of “if” (i.e. accounts that see English “if” as expressing a sentential connective, pace Kratzer’s restrictor account).

The dilemma is constructed around finding a story that handles the interaction between modals and conditionals. The prima facie data is that the following pairs are equivalent:

  • If p, it must be that q
  • If p, q


  • If p, it might be that q
  • It might be that (p&q)

The dilemma proceeds by first looking at whether you want to say that the modals scope over the conditional or vice versa, and then (on the view where the modal is wide-scoped) looking into the details of how the “if” is supposed to work and showing that one or other of the pairs comes out inequivalent. The suggestion in the paper is that if we have the right theory of context-shiftiness, and narrow-scope the modals, then we can be faithful to the data. I don’t want to take issue with that positive proposal. I’m just a bit worried about the alleged data itself.

It’s a really familiar tactic, when presented with a putative equivalence that causes trouble for your favourite theory, to say that the pairs aren’t equivalent at all, but can be “reasonably inferred” from each other (think of various ways of explaining away “or-to-if” inferences). But taken cold such pragmatic explanations can look a bit ad hoc.

So it’d be nice if we could find independent motivation for the inequivalence we need. In a related setting, Bob Stalnaker uses the acceptability of Moorean-patterns to do this job. To me, the Stalnaker point seems to bear directly on the Gillies dilemma above.

Before we even consider conditionals, notice that “p but it might be that not p” sounds terrible. Attractive story: this is because you shouldn’t assert something unless you know it to be true; and to say that p might not be the case is (inter alia) to deny you know it. One way of bringing out the pretty obviously pragmatic nature of the tension in uttering the conjunction here is to note that asserting the following sort of thing looks much much better:

  • it might be that not p; but I believe that p

(“I might miss the train; but I believe I’ll just make it”). The point is that whereas asserting “p” is appropriate only if you know that p, asserting “I believe that p” (arguably) is appropriate even if you know you don’t know it. So looking at these conjunctions and figuring out whether they sound “Moorean” seems like a nice way of filtering out some of the noise generated by knowledge-rules for assertion.

(I can sometimes still hear a little tension in the example: what are you doing believing that you’ll catch the train if you know you might not? But for me this goes away if we replace “I believe that” with “I’m confident that” (which still, in vanilla cases, gives you Moorean phenomena). I think in the examples to be given below, residual tension can be eliminated in the same way. The folks who work on norms of assertion I’m sure have explored this sort of territory lots.)

That’s the prototypical case. Let’s move on to examples where there are more moving parts. David Lewis famously alleged that the following pair are equivalent:

  • it’s not the case that: if it were the case that p, it would have been that q
  • if it were that p, it might have been that ~q
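In counterfactual-logic notation (using the standard box-arrow for the would-counterfactual and diamond-arrow for the might-counterfactual; my rendering, not a quotation from Lewis), the alleged duality is:

```latex
\neg\,(p \mathrel{\Box\!\!\rightarrow} q) \;\leftrightarrow\; (p \mathrel{\Diamond\!\!\rightarrow} \neg q)
```

So on the Lewisian picture, asserting the might-counterfactual while believing the corresponding would-counterfactual should amount to asserting the negation of something one claims to believe.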

Stalnaker thinks that this is wrong, since instances of the following sound ok:

  • if it were that p, it might have been that not q; but I believe if it were that p it would have been that q.

Consider, for example: “if I’d left only 5 mins to walk down the hill, (of course!) I might have missed the train; but I believe that, even if I’d only left 5 mins, I’d have caught it.” That sounds totally fine to me. There are a few decorations to that speech (“even”, “of course”, “only”). But I think the general pattern here is robust, once we fill in the background context. Stalnaker thinks this cuts against Lewis, since if mights and woulds were obvious contradictories, then the latter speech would be straightforwardly equivalent to something of the form “A and I don’t believe that A”. But things like that sound terrible, in a way that the speech above doesn’t.

We find pretty much the same cases for “must” and indicative “if”.

  • It’s not true that if p, then it must be that q; but I believe that if p, q.

(“it’s not true that if Gerry is at the party, Jill must be too—Jill sometimes gets called away unexpectedly by her work. But nevertheless I believe that if Gerry’s there, Jill’s there.”). Again, this sounds ok to me; but if the bare conditional and the must-conditional were straightforwardly equivalent, surely this should sound terrible.

These sorts of patterns make me very suspicious of claims that “if p, must q” and “if p, q” are equivalent, just as the analogous patterns make me suspicious of the Lewis idea that “if p, might ~q” and “if p, would q” are contradictories when the “if” is subjunctive. So I’m thinking the horns of Gillies’ dilemma aren’t equal: denying the must-conditional/bare-conditional equivalence is independently motivated.

None of this is meant to undermine the positive theory that Thony Gillies is presenting in the paper: his way of accounting for lots of the data looks super-interesting, and I’ve got no reason to suppose his positive story won’t have a story about everything I’ve said here. I’m just wondering whether the dilemma that frames the debate should suck us in.

The fuzzy link

Following up on one of my earlier posts on quantum stuff, I’ve been reading up on an interesting literature on relating ordinary talk to quantum mechanics. As before, caveats apply: please let me know if I’m making terrible technical errors, or if there’s relevant literature I should be reading/citing.

The topic here is GRW. This way of doing things, recall, involves random localizations of the wavefunction. Let’s think of the quantum wavefunction for a single-particle system, and suppose it’s initially pretty wide. So the amplitude of the wavefunction pertaining to the “position” of the particle is spread out over a wide span of space. But, if one of the random localizations occurs, the wavefunction collapses into a very narrow spike, within a tiny region of space.

But what does all this mean? What does it say about the position of the particle? (Here I’m following the Albert/Loewer presentation, and ignoring alternatives, e.g. Ghirardi’s mass-density approach).

Well, one traditional line was that talk of position is only well defined when the particle is in an eigenstate of the position observable. Since on GRW the particle’s wavefunction is pretty much always spread all over space, on this view talk of a particle’s location would never be well defined.

Albert and Loewer’s suggestion is that we alter the link. As previously, think of the wavefunction as giving a measure over different situations in which the particle has a definite location. Rather than saying that x is located within region R iff the set of situations in which the particle lies in R has measure 1, they suggest that x is located within region R iff the set of situations in which the particle lies in R has measure almost 1. The idea is that even if not all of a particle’s wavefunction places it right here, the vast majority of it is within a tiny subregion here. On the Albert/Loewer suggestion, we get to say, on this basis, that the particle is located in that tiny subregion. They argue also that there are sensible choices of what “almost 1” should be that’ll give the right results, though it’s probably a vague matter exactly what the figure is.

Peter Lewis points out oddities with this. One oddity is that conjunction-introduction will fail. It might be true that marble i is in a particular region, for each i between 1 and 100, and yet fail to be true that all these marbles are in the box.

Here’s another illustration of the oddities. Take a particle with a localized wavefunction. Choose some region R around the peak of the wavefunction which is minimal, such that enough of the wavefunction is inside for the particle to count as within R. Then subdivide R into two pieces (the left half and the right half) such that the wavefunction is nonzero in each. The particle is within R. But it’s not within the left half of R. Nor is it within the right half of R (in each case by modus tollens on the Albert/Loewer biconditional). But R is just the sum of its left half and its right half. So either we’re committed to some very odd combination of claims about location, or something is going wrong with modus tollens.
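To make the arithmetic behind this vivid, here’s a toy numerical sketch. It’s my own illustration, not anything from Albert or Loewer: the Gaussian wavepacket and the 0.95 threshold are assumptions chosen purely for the example.

```python
import math

def measure(a, b, sigma=1.0):
    """Wavefunction measure of the interval [a, b], modelled here as the
    probability a Gaussian centred at 0 assigns to that interval."""
    cdf = lambda x: 0.5 * (1 + math.erf(x / (sigma * math.sqrt(2))))
    return cdf(b) - cdf(a)

THRESHOLD = 0.95  # a hypothetical choice of "almost 1"

def located_in(a, b):
    """Threshold semantics: 'the particle is in [a, b]' is true iff the
    measure of the interval meets the threshold."""
    return measure(a, b) >= THRESHOLD

# R = [-2, 2] (in units of sigma) carries about 95.4% of the wavefunction,
# so it clears the threshold; each half of R carries only about 47.7%.
print(located_in(-2, 2))   # True: the particle counts as within R
print(located_in(-2, 0))   # False: not within the left half...
print(located_in(0, 2))    # False: ...nor within the right half
```

On the threshold semantics, then, the particle is within R but within neither half of R, even though R is nothing over and above the union of its two halves.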

So clearly this proposal is looking like it’s pretty revisionary of well-entrenched principles. While I don’t think it indefensible (after all, logical revisionism from science isn’t a new idea) I do think it’s a significant theoretical cost.

I want to suggest a slightly more general, and I think, much more satisfactory, way of linking up the semantics of ordinary talk with the GRW wavefunction. The rule will be this:

“Particle x is within region R” is true to degree equal to the wavefunction-measure of the set of situations where the particle is somewhere in region R.

On this view, then, ordinary claims about position don’t have a classical semantics. Rather, they have a degreed semantics (in fact, exactly the degreed-supervaluational semantics I talked about in a previous post). And ordinary claims about the location of a well-localized particle aren’t going to be perfectly true, but only almost-perfectly true.

Now, it’s easy but unwarranted to slide from “not perfectly true” to “not true”. The degree theorist in general shouldn’t concede that. It’s an open question for now how to relate ordinary talk of truth simpliciter to the degree-theorist’s setting.

One advantage of setting things up in this more general way is that we can take “off the peg” results about what sort of behaviour we can expect the language to exhibit. An example: it’s well known that if you have a classically valid argument in this sort of setting, then the degree of untruth of the conclusion cannot exceed the sum of the degrees of untruth of the premises. This amounts to a “safety constraint” on arguments: we can put a cap on how badly wrong things can go, though there’ll always be the phenomenon of slight degradations of truth value across arguments, unless we’re working with perfectly true premises. So there’s still some point in classifying arguments like conjunction-introduction as “valid” on this picture, for that captures a certain kind of important information.
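As a back-of-envelope check, here is the marbles case run through the safety constraint. The figures are invented for illustration, and the assumption that the marbles’ measures are independent (so that the conjunction’s degree of truth is the product of the conjuncts’ degrees) is mine, made only to get concrete numbers.

```python
# Toy check of the degree-theoretic "safety constraint": in a classically
# valid argument, the conclusion's degree of untruth cannot exceed the sum
# of the premises' degrees of untruth.

n = 100
premise_truth = 0.99                     # "marble i is in the box", each true to degree 0.99
premise_untruths = [1 - premise_truth] * n

# Assuming independence (purely for illustration), the conjunction
# "all 100 marbles are in the box" is true to the product of the degrees.
conclusion_truth = premise_truth ** n
conclusion_untruth = 1 - conclusion_truth

print(round(conclusion_untruth, 3))                  # ~0.634: far from perfect truth
print(conclusion_untruth <= sum(premise_untruths))   # True: still under the cap of 1.0
```

The tiny untruths of the hundred premises creep into the conclusion and accumulate dramatically, but the safety constraint still holds: the conclusion’s untruth stays below the sum of the premises’ untruths.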

Say that the figure Albert and Loewer identified as sufficient for particle-location was 1-p. Then the way to generate something like the Albert and Loewer picture on this view is to identify truth with truth-to-degree-1-p. In the marbles case, the degrees of falsity of each premise “marble i is in the box” collectively “add up” in the conclusion to give a degree of falsity beyond the permitted limit.

An alternative to the Albert-Loewer suggestion for making sense of ordinary talk is to go for a universal error-theory, supplemented with the specification of a norm for assertion. To do this, we identify truth simpliciter with truth-to-degree-1. Since ordinary assertions of particle location won’t be true to degree 1, they’ll be untrue. But we might say that such sentences are assertible provided they’re “true enough”: true to the Albert/Loewer figure of 1-p, for example. No counterexamples to classical logic would threaten (Peter Lewis’s cases would all be unsound, for example). Admittedly, a related phenomenon would arise: we’d be able to go by classical reasoning from a set of premises all of which are assertible to a conclusion that is unassertible. But there are plausible mundane examples of this phenomenon, for example, as exhibited in the preface “paradox”.

But I’d rather not go either for the error-theoretic approach or for the identification of a “threshold” for truth, as the Albert-Loewer inspired proposal suggests. I think there are more organic ways to handle utterance-truth within a degree-theoretic framework. It’s a bit involved to go into here, but the basic ideas are extracted from recent work by Agustin Rayo, and involve only allowing “local” specifications of truth simpliciter, relative to a particular conversational context. The key thing is that on the semantic side, once we have the degree theory, we can take “off the peg” an account of how such degree theories interact with a general account of communication. So combining the degree-based understanding of what validity amounts to (in terms of limiting the creep of falsity into the conclusion) and this degree-based account of assertion, I think we’ve got a pretty powerful, pretty well understood overview of how ordinary language position-talk works.

Bohm and Lewis

So I’ve been thinking and reading a bit about quantum theory recently (originally in connection with work on ontic vagueness). One thing that’s been intriguing me is the Bohmian interpretation of non-relativistic quantum theory. The usual caveats apply: I’m no expert in this area, on a steep learning curve, wouldn’t be terribly surprised if there’s some technical error in here somewhere.

What is Bohmianism? Well, to start with it’s quite a familiar picture. There are a bunch of particles, each supplied with non-dynamical properties (like charge and mass) and definite positions, which move around in a familiar three-dimensional space. The actual trajectories of those particles, though, are not what you’d expect from a classical point of view: they don’t trace straight lines through the space, but rather wobbly ones, as if they were bobbing around on some wave.

The other part of the Bohmian picture, I gather, is that one appeals to a wavefunction that lives in a space of far higher dimension: configuration space. As mentioned in a previous post I’m thinking of this as a set of (temporal slices of) possible worlds. The actual world is a point in configuration space, just as one would expect given this identification.

The first part of the Bohmian picture sounds all very safe from the metaphysician’s perspective: the sort of world at which, for example, Lewis’s project of Humean supervenience could get off the ground, the sort of thing to give us the old-school worries about determinism and freedom (the evolution of a Bohmian world is totally deterministic). And so on and so forth.

But the second part is all a bit unexpected. What is a wave in modal space? Is that a physical thing (after all, it’s invoked in fundamental physical theory)? How can a wave in modal space push around particles in physical space? Etc.

I’m sure there’s lots of interesting phil physics and metaphysics to be done that takes the wave function seriously (I’ve started reading some of it). But I want to sketch a metaphysical interpretation of the above that treats it unseriously, for those of us with weak bellies.

The inspiration is Lewis’s treatment of objective chance (as explained, for example, in his “Humean supervenience debugged”). The picture of chance he there sketches has some affinities to frequentism: when we describe what there is and how it is in fundamental terms, we never mention chances. Rather, we just describe patterns of instantiation: radioactive decay here, now, another radioactive decay there, then (for example). What one then has to work with is certain statistical regularities that emerge from the mosaic of non-chancy facts.

Now, it’s very informative to be told about these regularities, but it’s not obvious how to capture that information within a simple theory (we could just write down the actual frequencies, but that’d be pretty ugly, and wouldn’t allow us to capture underlying patterns among the frequencies). So Lewis suggests that, when we’re writing down the laws, we should avail ourselves of a new notion “P”, assigning numbers to proposition-time pairs, obeying the usual probability axioms. We’ll count a P-theory as “fitting” the facts (roughly) to the extent that the P-values it assigns to propositions match up, overall, to the statistical regularities we mentioned earlier. Thus, if we’re told that a certain P-theory is “best”, we’re given some (cp) information on what the statistical regularities are. At not much gain in complexity, therefore, our theory gains enormously in informativeness.
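Here’s a crude sketch of the “fit” idea (my own gloss, with a hypothetical mosaic; Lewis’s best-system standard also weighs simplicity and strength against fit, which I’m ignoring here). Candidate P-theories assign a constant chance q to an event-type, and fit is measured by the likelihood each theory gives to the actual pattern of outcomes:

```python
import math
import random

# Toy best-system chooser (my gloss: "fit" only, ignoring simplicity/strength).
random.seed(0)
mosaic = [random.random() < 0.3 for _ in range(1000)]   # hypothetical decay history

def fit(q):
    """Log-likelihood that the constant-chance-q theory assigns to the mosaic."""
    return sum(math.log(q if event else 1 - q) for event in mosaic)

candidates = [i / 10 for i in range(1, 10)]   # the simple P-theories on offer
best = max(candidates, key=fit)               # the best theory fixes the chances
print(best)
```

The chances are then whatever the best-fitting theory says they are: no chances appear in the mosaic itself, only in the theory that summarizes it.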

The proposal, then, is that the chance of p at t is n iff the overall best theory assigns n to (p,t).

That’s very rough, but I hope the overall idea is clear: we can be “selectively instrumentalist” about some of the vocabulary that appears in fundamental physical theory. Though many of the physical primitives will also be treated as metaphysically basic (as expressing “natural properties”), some bits that by the lights of independently motivated metaphysics are “too scary” can be regarded as just reflections of best theory, rather than part of the furniture of the world.

The question relevant here is: why stop at chance? If we’ve been able to get rid of one function over the space of possible worlds (the chance measure), why not do the same with another metaphysically troubling piece of theory: the wavefunction field?

Recall the first part of the Bohmian picture: particles moving through 3-space, in rather odd paths “as if guided by a wave”. Suppose this was all there (fundamentally) was. Well then, we’re going to be in a lot of trouble finding a decent way of encapsulating all this data about the trajectories of particles: the theory would be terribly unwieldy if we had to write out the exact trajectories in longhand. As before, there’s much to be gained in informativeness if we allow ourselves a new notion in the formulation of overall theory, L, say. L will assign scalar values (complex numbers) to proposition-time pairs, and we can then use L in writing down the wavefunction equations of quantum mechanics, which elegantly predict the future positions of particles on the basis of their present positions. The “best” L-theory, of course, will be the one whose predictions of the future positions of particles fit with the actual future-facts. The idea is that wavefunction talk is thereby allowed for: the wave function takes value z at region R of configuration space at time t iff the best L-theory assigns z to L(R,t).

So that’s the proposal: we’re selectively instrumentalist about the wavefunction, just as Lewis is selectively instrumentalist about objective chance (I’m using “instrumentalist” in a somewhat picturesque sense, by the way: I’m certainly not denying that chance or wavefunction talk has robust, objective truth-conditions.) There are, of course, ways of being unhappy with this sort of treatment of basic physical notions in general (e.g. one might complain that the explanatory force has been sucked from notions of chance, or the wavefunction). But I can’t see anything that Humeans such as Lewis should be unhappy with here.

(There’s a really nice paper by Barry Loewer on Lewisian treatments of objective chance which I think is the thing to read on this stuff. Interestingly, at the end of that paper he canvasses the possibility of extending the account to the “chances” one (allegedly) finds in Bohmianism. It might be that he has in mind something that is, in effect, exactly the position sketched above. But there are also reasons for thinking there might be differences between the two ideas. Loewer’s idea turns on the thought that one can have something that deserves the name objective chance, even in a world for which there are deterministic laws underpinning what happens (as is the case both for Bohmianism, and for the chancy laws of statistical mechanics in a chancy world). I’m inclined to agree with Loewer on this, but even if that were given up, and one thought that the measure induced by the wavefunction isn’t a chance-measure, the position I’ve sketched is still a runner: the fundamental idea is to use the Lewisian tactics to remove ideological commitment per se, not to remove ideological commitment to chance specifically. [Update: it turns out that Barry definitely wasn’t thinking of getting rid of the wavefunction in the way I canvass in this post: the suggestion in the cited paper is just to deal with the Bohmian (deterministic) chances in the Lewisian way])

[Update: I’ve just read through Jonathan Schaffer’s BJPS paper which (inter alia) attacks the Loewer treatment of chance in Stat Mechanics and Bohm Mechanics (though I think some of his arguments are more problematic in the Bohmian case than the stat case.) But anyway, if Jonathan is right, it still wouldn’t matter for the purposes of the theory presented here, which doesn’t need to make the claim that the measure determined by the wavefunction is anything to do with chance: it has a theoretical role, in formulating the deterministic dynamical laws, that’s quite independent of the issues Jonathan raises.]

Vagueness and quantum stuff

I’ve finally put online a tidied up version of my ontic vagueness paper, which’ll be coming out in Phil Quarterly some time soon. One idea in the paper is to give an account of truths in an ontically vague world, making use of the idea that more than one possible world is actual. The result is a supervaluation-like framework, with “precisifications” replaced with precise possible worlds. For some reason, truth-functional multi-valued settings seem to have a much firmer grip on the ontic vagueness debate than on the vagueness debate more generally. That seems a mistake to me.

(The idea of having supervaluation-style treatments of ontic vagueness isn’t unknown in the literature however: in a couple of papers, Ken Akiba argues for this kind of treatment of ontic vagueness, though his route to this framework is pretty different to the one I like. And Elizabeth Barnes has been thinking and writing about the kind of modal treatments of ontic vagueness for a while, and I owe huge amounts to conversations with her about all of these issues. Her take on these matters is very close to the one I like (non-coincidentally) and those interested should check out her papers for systematic discussion and defense of the coherence of ontic vagueness in this spirit.)

The project in my paper wasn’t to argue that there was ontic vagueness, or even tell you what ontic vagueness (constitutively) is. The project was just to set up a framework for talking about, and reasoning about, metaphysically vague matters, with a particular eye to evaluate the Evans argument against ontically vague identity. In particular, the framework I gave has no chance of giving any sort of reduction of metaphysical indeterminacy, since that very notion was used in defining up bits of the framework. (I’m actually pretty attracted to the view that the right way to think about these things would be to treat indeterminacy as a metaphysical primitive, in the way that some modalists might treat contingency. See this previous blog post. I was later pointed to this excellent paper by David Barnett where he works out this sort of idea in far more detail.)

One thing that I’ve been thinking about recently is how the sort of “indeterminacy” that people talk about in quantum mechanics might relate to this setting. So I want to write a bit about this here.

Some caveats. First, this stuff clearly isn’t going to be interpretation neutral. If you think Bohm gave the right account of quantum ontology, then you’re not going to think there’s much indeterminacy around. So I’ll be supposing something like the GRW interpretation. Second, I’m not going to be metaphysically neutral even given this interpretation: there’s going to be a bunch of other ways of thinking about the metaphysics of GRW that I don’t consider here (I do think, however, that independently motivated metaphysics can contribute to the interpretation of a physical theory). Third, I’m only thinking of non-relativistic quantum theory here: quantum field theory and the like is just beyond me at the moment. Finally, I’m on a steep learning curve with this stuff, so please excuse stupidities.

You can represent the GRW quantum ontology as a wave function over a certain space (configuration space). Mathematically speaking, that’s a scalar field over a set of points (which then determines a measure over those points) in a high-dimensional space. As time rolls forward, the equations of quantum theory tell you how this field changes its values. Picture it as a wave evolving through time over this space. GRW tells you that at random intervals, this wave undergoes a certain drastic change, and this drastic change is what plays the role of “collapse”.

That’s all highly abstract. So let me try parlaying that into something more familiar to metaphysicians.

Suppose you’re interested in a world with N particles in it, at time t. Without departing from classical modes of thinking yet, think of the possible arrangements of those particles at t: a scattering of particles equipped with mass and charge over a 3-dimensional space, say (think of the particles haecceitistically for now). Collect all these possible-world-slices together into a set. There’ll be a certain implicit ordering on this set: if the worlds contain nothing but those N massy and chargey particles located in space-time, then we can describe a world-slice w by giving, for each of the N particles, the coordinates of its location within w: that is, by giving a list of 3N coordinates. What this means is that each world can be regarded as a point in a 3N dimensional space (the first 3 dimensions giving the position of the first particle in w, the second 3 dimensions the position of the second, etc). And this is what I’m taking to be the “configuration space”. So what is the configuration space, on the way I’m thinking of it? It’s a certain set of time-slices of possible worlds.
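The identification is easy to make concrete (a sketch under my own toy conventions): flattening the N coordinate-triples gives the 3N-dimensional point, and nothing is lost in either direction.

```python
# Sketch of the identification: N particles in 3-space <-> a point in 3N-space.
def to_config_point(positions):
    """Flatten N particle positions [(x, y, z), ...] into one 3N-tuple."""
    return tuple(coord for pos in positions for coord in pos)

def from_config_point(point):
    """Recover the N particle positions from a 3N-dimensional point."""
    return [tuple(point[i:i + 3]) for i in range(0, len(point), 3)]

slice_w = [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]   # a 2-particle world-slice
pt = to_config_point(slice_w)                  # a point in 6-dimensional space
print(pt)                                      # (0.0, 1.0, 2.0, 3.0, 4.0, 5.0)
assert from_config_point(pt) == slice_w        # nothing lost either way
```

(This takes the haecceitistic reading for granted: permuting the particles gives a different point of configuration space, even if the qualitative arrangement is the same.)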

One Bohmian picture of quantum ontology fits very naturally into the way that we usually think of possible worlds at this point. For Bohm says that one point in configuration space is special: it gives the actual positions of particles. And this fits the normal way of thinking of possible worlds: the special point in configuration space is just the slice of the actual world at t. (Bohmian mechanics doesn’t dispense with the wave across configuration space, of course: just as some physical theories would appeal to objective chance in their natural laws, which we can represent as a measure across a space of possible worlds, Bohmianism appeals to a scalar field determining a measure across configuration space: the wavefunction).

But on the GRW interpretation, we don’t get anything like this trad picture. What we have is configuration space and the wave function over it. Sometimes, the amplitude of that wave function is highly concentrated on a set of world-slices that are in certain respects very similar: say, they all contain particles arranged in a rough pointer shape in a certain location. But nevertheless, no single world will be picked out, and some amplitude will be given to sets of worlds which have the particles in all sorts of odd positions.

But of course, the framework for ontic vagueness I like is up for monkeying around with the actuality of worlds. There needn’t be a single designated actual world, on the way I was thinking of things. But the picture I described doesn’t exactly fit the present situation. For I supposed (following the supervaluationist paradigm) that there’d be a set of worlds, all of which would be “co-actual”.

Yet there are other closely related models that’d help here. In particular, Lewis, Kamp and Edgington have described what I’ll call a “degree supervaluationist” picture that looks to be exactly what we need. Here’s the story, in the original setting. Your classical semantic theorist looks at the set of all possible interpretations of the language, and says that one among them is the designated (or “intended”) one. Truth is truth at the unique, designated, interpretation. Your supervaluationist looks at the same space, and says that there’s a set of interpretations with equal claim to be “intended”: so they should all be co-designated. Truth is truth at each of the co-designated interpretations. Your degree-supervaluationist looks at the set of all interpretations, and says that some are better than others: they are “intended” to different degrees. So the way to describe the semantic facts is to give a measure over the space of interpretations that (roughly) gives in each case the degree to which a given interpretation is designated. Degree supervaluationism will share some of the distinctive features of the classical and standard supervaluational setups: for example, since classical tautologies are true at all interpretations, the law of excluded middle and the like will be “true to degree 1” (i.e. true on a set of interpretations of designation-measure 1).
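Here’s a minimal model of the degree-supervaluationist setup (entirely my own toy example: the heights, cutoffs and designation-measure are made up). Interpretations are candidate cutoffs for “tall”, and a sentence’s truth-degree is the designation-measure of the interpretations on which it’s classically true. Excluded middle comes out true to degree 1, as advertised:

```python
# Toy degree-supervaluationism (made-up heights, cutoffs, designation measure).
cutoffs = {178: 0.25, 180: 0.5, 182: 0.25}    # hypothetical designation measure

def degree(sentence):
    """Truth-degree: total measure of interpretations where the sentence holds."""
    return sum(w for c, w in cutoffs.items() if sentence(c))

h = 181                                       # Harry's height in cm
tall = lambda c: h >= c
not_tall = lambda c: not tall(c)
lem = lambda c: tall(c) or not_tall(c)        # excluded middle, per interpretation

print(degree(tall))       # 0.75: "Harry is tall" is true to degree 0.75
print(degree(lem))        # 1.0: classical tautologies stay true to degree 1
```

Note that the degree of “Harry is tall” and the degree of “Harry is not tall” sum to 1 without either being truth-functionally determined by the other: that’s the supervaluational, rather than many-valued, character of the setup.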

I don’t see any reason why we can’t take this across to the worlds setting I favoured. Just as the traditional view is that there’s a unique actual world among the space of possible worlds, and I argued that we can make sense of there sometimes being a set of coactual worlds among that space (with something being true if it is true at all of them), I now suggest that we should be up for there being some measure across the space of possible worlds, expressing the degree to which those worlds are actual.

The suggestion this is building up to is that we regard the measure determined by the wavefunction in GRW as the “actuality measure”. Things are determinately the case to the extent that the set of worlds where they’re true is assigned a high measure.

So, for example, suppose that the amplitude of the wavefunction is concentrated on worlds where Sparky is located within region R (suppose the measure of that space of world-slices is 0.9). Then it’ll be determinately the case to degree 0.9 that Sparky is in location R. Of course, in a set of worlds of measure 0.1, Sparky will be outside R. So it’ll be determinately the case to degree 0.1 that Sparky is outside R. (Of course, it’ll be determinate to degree 1 that Sparky is either inside R or outside R: at all the worlds, Sparky is located somewhere!)

I don’t expect this to shed much light at all on what the wavefunction means. Ontic indeterminacy, many think, is a pretty obscure notion taken cold, and I’m not expecting metaphysicians or anyone else to find the notion of “degrees of actuality” something they recognize. So I’m not saying that there’s any illuminating metaphysics of GRW here. I think the illumination is likely to go in the other direction: if you can get a pre-philosophical grip on the “determinacy” and “no fact of the matter” talk in quantum physics, we’ve got a way of using that to explain talk of “degrees of actuality” and the like. Nevertheless, I think that, if this all works technically, then a bunch of substantive results follow. Here’s a few thoughts in that direction:

  1. We’ve got a candidate for vagueness in the world, linked to a general story about how to think about ontic vagueness. Given ontic vagueness isn’t in the best repute in the philosophical community, there’s an important “existence result” in the offing here.
  2. Recall the idea canvassed earlier that “determinacy” or an equivalent might just be a metaphysical primitive. Well, here we have the suggestion that what amounts to (degrees of) determinacy is taken as a *physical* primitive. And taking the primitives of fundamental physics as a prima facie guide to metaphysical primitives is a well-trodden route, so I think some support for that idea could be found here.
  3. If there is ontic vagueness in the quantum domain, then we should be able to extract information about the appropriate way to think and reason in the presence of indeterminacy, by looking at an appropriately regimented version of how this goes in physics. And notice that there’s no suggestion here that we go for a truth-functional degree theory with the consequent revisions of classical logic: rather, a variant of the supervaluational setup seems to me to be the best regimentation. If that’s right, then it lends support to the (currently rather heterodox) supervaluational-style framework for thinking about metaphysical vagueness.
  4. I think that there’s a bunch of alleged metaphysical implications of quantum theory that don’t *obviously* go through if we buy into the sort of metaphysics of GRW just suggested. I’m thinking in particular about the allegation that quantum theory teaches us that certain systems of particles have “emergent properties” (Jonathan Schaffer has been using this recently as part of his defence of Monism). Bohmianism already shows, I guess, that this sort of claim won’t be interpretation-neutral. But the above picture I think complicates the case for holism even within GRW.

(Thanks are owed to a bunch of people, particularly George Darby, for discussion of this stuff. They shouldn’t be blamed for any misunderstandings of the physics, or indeed, philosophy, that I’m making!)

The present time

One notorious issue for presentists (and other kinds of A-theorist) is the following: special relativity tells us (I gather) that among the slices of space-time that “look like time slices”, there’s no one that is uniquely privileged as “the present” (i.e. simultaneous with what’s going on here-now). But the presentist says that only the present exists. So it looks like her metaphysics entails that there is a metaphysically privileged time-slice: the only one that exists. (Of course, I suppose the science is just telling us that there’s no physically significant sense in which one is privileged, and it’s not obvious the presentist is saying anything that conflicts with that. But it does seem worrying…)

One option is to retreat into “here-now”ism: the only things that exist are those that exist right here right now. No problems with relativity there.

I was idly wondering about the following line: say that it’s (ontically) vague which time-slice is present, and so (for the presentist) say that it’s ontically vague what exists. As I’m thinking of it, there’ll be some kind of here-now-ish element to the metaphysics. From the point of view of a certain position p in space-time, all that exists are those “time-like” slices of space-time that contain that point. Since each such slice contains p, it will be determinately the case that p exists. But for every other space-time point q, there will (I take it) be a reference frame according to which p and q are non-simultaneous. So it won’t determinately be the case that q exists.

The details are going to get quite involved. I think some hard thinking about higher-order indeterminacy will be in order. But here’s a quick sketch: choose a point r such that there’s a choice of reference-frame that makes q and r simultaneous. Then it sort of seems to me that, from p’s perspective, the following should hold:

r doesn’t exist
determinately, r doesn’t exist
not determinately determinately r doesn’t exist

The idea is that while r isn’t “present” (and so fails to exist), relative to the perspective of some of the things that are present, it is present.

What I’d like to do is model this in a “supervaluation-style” framework like the one I talk about here. First, consider the set of all centred time-like slices. It’ll end up determinate that one and only one of these exists: but it’ll be a vague matter which one. Let centred time-like slice x access centred time-like slice y iff the centre of y is somewhere in the slice x.

Now take the set of time-slices P which are all and only those with common centre p. These are the ontic candidates for being the present time. Next, consider the set P*, containing all and only those time-slices accessed by some time-slice in P. And similarly construct P**, P***, and so on.

Now, among space-time points, only the “here-now” point p determinately exists. All and only points which are within some time-slice in P don’t determinately fail to exist. All and only points which are within some time-slice in P* don’t determinately determinately fail to exist. All and only points which are within some time-slice in P** don’t determinately determinately determinately fail to exist. And so on. (If you like, existence shades off into greater and greater indeterminacy as we look further away from the privileged here-now point).
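A crude discrete model of this construction (entirely my own toy: one spatial dimension, with “tilts” standing in for choices of reference frame) shows the sets P, P*, P**, … spreading outward from the here-now point:

```python
# Toy 1-D model (tilts stand in for reference frames; my own sketch only).
def slices(center, max_tilt=2):
    """Candidate 'present' slices through a centre: the intervals [c - k, c + k]."""
    return [set(range(center - k, center + k + 1)) for k in range(max_tilt + 1)]

def accessed(slice_set):
    """Build P* from P: every slice whose centre lies within some slice of P."""
    centers = set().union(*slice_set)
    return [s for c in centers for s in slices(c)]

here_now = 0
current = slices(here_now)                # the set P of candidate presents
for level in range(3):                    # the levels P, P*, P**
    pts = sorted(set().union(*current))   # points not determinately^(level+1) nonexistent
    print(level, pts)
    current = accessed(current)
```

Each iteration widens the band of points that escape another layer of “determinately fails to exist”, which is just the shading-off effect described above.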

Well, I’m no longer sure that this deserves the name “presentism”. Kit Fine distinguishes some versions of A-theory in a paper in “Modality and tense” which this view might fit better with (the Fine-esque way of setting this up would be to have the whole of space-time existing, but only some time-slices really or fundamentally existing. The above framework then models vagueness in what really or fundamentally exists). It is anyway up to its neck in ontic vagueness, which you might already dislike. But I’ve no problem with ontic vagueness, and insofar as I can simulate being a presentist, I quite like this option.

There should be other variants too for different forms of A-theory. Consider, for example, the growing block view of reality (the time-slices in the model can be thought of as the front edges of a growing block: as we go through time, more slices get added to the model). The differences may be interesting: for the growing block, future space-time points determinately don’t exist, but they don’t det …det fail to exist for some amount of iterations of “det”; while past space-time points determinately exist, but they don’t det …. det exist for some amount of iterations of “det”.

Any thoughts most welcome, and references to any related literature particularly invited!