Monthly Archives: June 2007

Bohm and Lewis

So I’ve been thinking and reading a bit about quantum theory recently (originally in connection with work on ontic vagueness). One thing that’s been intriguing me is the Bohmian interpretation of non-relativistic quantum theory. The usual caveats apply: I’m no expert in this area, on a steep learning curve, wouldn’t be terribly surprised if there’s some technical error in here somewhere.

What is Bohmianism? Well, to start with it’s quite a familiar picture. There are a bunch of particles, each supplied with non-dynamical properties (like charge and mass) and definite positions, which move around in a familiar three-dimensional space. The actual trajectories of those particles, though, are not what you’d expect from a classical point of view: they don’t trace straight lines through the space, but rather wobbly ones, as if they were bobbing around on some wave.

The other part of the Bohmian picture, I gather, is that one appeals to a wavefunction that lives in a space of far higher dimension: configuration space. As mentioned in a previous post I’m thinking of this as a set of (temporal slices of) possible worlds. The actual world is a point in configuration space, just as one would expect given this identification.

The first part of the Bohmian picture sounds all very safe from the metaphysician’s perspective: the sort of world at which, for example, Lewis’s project of Humean supervenience could get up and running, the sort of thing to give us the old-school worries about determinism and freedom (the evolution of a Bohmian world is totally deterministic). And so on and so forth.

But the second part is all a bit unexpected. What is a wave in modal space? Is that a physical thing (after all, it’s invoked in fundamental physical theory)? How can a wave in modal space push around particles in physical space? Etc.

I’m sure there’s lots of interesting phil physics and metaphysics to be done that takes the wave function seriously (I’ve started reading some of it). But I want to sketch a metaphysical interpretation of the above that treats it unseriously, for those of us with weak bellies.

The inspiration is Lewis’s treatment of objective chance (as explained, for example, in his “Humean supervenience debugged”). The picture of chance he there sketches has some affinities to frequentism: when we describe what there is and how it is in fundamental terms, we never mention chances. Rather, we just describe patterns of instantiation: radioactive decay here, now, another radioactive decay there, then (for example). What one then has to work with are certain statistical regularities that emerge from the mosaic of non-chancy facts.

Now, it’s very informative to be told about these regularities, but it’s not obvious how to capture that information within a simple theory (we could just write down the actual frequencies, but that’d be pretty ugly, and wouldn’t allow us to capture underlying patterns among the frequencies). So Lewis suggests, when we’re writing down the laws, we should avail ourselves of a new notion “P”, assigning numbers to proposition-time pairs, obeying the usual probability axioms. We’ll count a P-theory as “fitting” with the facts (roughly) to the extent that the P-values it assigns to propositions match up, overall, to the statistical regularities we mentioned earlier. Thus, if we’re told that a certain P-theory is “best”, we’re given some (ceteris paribus) information on what the statistical regularities are. At not much cost in complexity, therefore, our theory gains enormously in informativeness.

The proposal, then, is that the chance of p at t is n iff the overall best theory assigns n to (p,t).
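Here’s a toy sketch of the “fit” idea in Python. Nothing here is Lewis’s own formalism, and the numbers are entirely made up: candidate chance-theories (here just single decay probabilities) are scored by the likelihood they assign to the actual mosaic of outcomes, and the best-fitting one wins.

```python
# Toy illustration (not Lewis's formalism): score candidate chance
# assignments by the (log-)likelihood they give the actual outcomes.
import math

def fit(p_theory, history):
    """Log-likelihood of the actual outcome history under a candidate
    chance p_theory (the probability that any given trial decays)."""
    return sum(math.log(p_theory if outcome else 1 - p_theory)
               for outcome in history)

# The non-chancy "mosaic": 7 decays out of 10 trials, no chances mentioned.
history = [True, True, False, True, True, False, True, True, False, True]

candidates = [0.3, 0.5, 0.7, 0.9]
best = max(candidates, key=lambda p: fit(p, history))
print(best)  # the candidate matching the actual 0.7 frequency fits best
```

Of course, real best-system competition also weighs simplicity against fit; this only illustrates the fit half of the trade-off.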

That’s very rough, but I hope the overall idea is clear: we can be “selectively instrumentalist” about some of the vocabulary that appears in fundamental physical theory. Though many of the physical primitives will also be treated as metaphysically basic (as expressing “natural properties”), some bits that by the lights of independently motivated metaphysics are “too scary” can be regarded as just reflections of best theory, rather than part of the furniture of the world.

The question relevant here is: why stop at chance? If we’ve been able to get rid of one function over the space of possible worlds (the chance measure), why not do the same with another metaphysically troubling piece of theory: the wavefunction field?

Recall the first part of the Bohmian picture: particles moving through 3-space, in rather odd paths “as if guided by a wave”. Suppose this was all there (fundamentally) was. Well then, we’re going to be in a lot of trouble finding a decent way of encapsulating all this data about the trajectories of particles: the theory would be terribly unwieldy if we had to write out the exact trajectories in longhand. As before, there’s much to be gained in informativeness if we allow ourselves a new notion in the formulation of overall theory, L, say. L will assign scalar values (complex numbers) to proposition-time pairs, and we can then use L in writing down the wavefunction equations of quantum mechanics, which elegantly predict the future positions of particles on the basis of their present positions. The “best” L-theory, of course, will be the one whose predictions of the future positions of particles fit with the actual future-facts. The idea is that wavefunction talk is thereby allowed for: the wavefunction takes value z at region R of configuration space at time t iff the best L-theory assigns z to (R,t).

So that’s the proposal: we’re selectively instrumentalist about the wavefunction, just as Lewis is selectively instrumentalist about objective chance (I’m using “instrumentalist” in a somewhat picturesque sense, by the way: I’m certainly not denying that chance or wavefunction talk has robust, objective truth-conditions.) There are, of course, ways of being unhappy with this sort of treatment of basic physical notions in general (e.g. one might complain that the explanatory force has been sucked from notions of chance, or the wavefunction). But I can’t see anything that Humeans such as Lewis should be unhappy with here.

(There’s a really nice paper by Barry Loewer on Lewisian treatments of objective chance which I think is the thing to read on this stuff. Interestingly, at the end of that paper he canvasses the possibility of extending the account to the “chances” one (allegedly) finds in Bohmianism. It might be that he has in mind something that is, in effect, exactly the position sketched above. But there are also reasons for thinking there might be differences between the two ideas. Loewer’s proposal turns on the idea that one can have something that deserves the name objective chance, even in a world for which there are deterministic laws underpinning what happens (as is the case both for Bohmianism, and for the chancy laws of statistical mechanics in a chancy world). I’m inclined to agree with Loewer on this, but even if that were given up, and one thought that the measure induced by the wavefunction isn’t a chance-measure, the position I’ve sketched is still a runner: the fundamental idea is to use the Lewisian tactics to remove ideological commitment, not to use the Lewisian tactics to remove ideological commitment to chance specifically. [Update: it turns out that Barry definitely wasn’t thinking of getting rid of the wavefunction in the way I canvass in this post: the suggestion in the cited paper is just to deal with the Bohmian (deterministic) chances in the Lewisian way])

[Update: I’ve just read through Jonathan Schaffer’s BJPS paper which (inter alia) attacks the Loewer treatment of chance in Stat Mechanics and Bohm Mechanics (though I think some of his arguments are more problematic in the Bohmian case than the stat case.) But anyway, if Jonathan is right, it still wouldn’t matter for the purposes of the theory presented here, which doesn’t need to make the claim that the measure determined by the wavefunction is anything to do with chance: it has a theoretical role, in formulating the deterministic dynamical laws, that’s quite independent of the issues Jonathan raises.]

Academic careers

Others have already pointed this out, but it’s worth highlighting.

Terence Tao – recent winner of the Fields Medal (a sort of Nobel prize for mathematics) – has written some really interesting career advice. It’s aimed at mathematicians, but lots of it is more generally applicable, and certainly lots of it strikes a chord with academic philosophy. It’s also not just for graduates: e.g. I’m a recent graduate, and I’m sure there’s lots there that I’m not doing, which it’s good to be reminded of.

The advice to “use the wastebasket” is going to be more difficult now that the University of Leeds has decided to remove all wastebaskets from our offices…

HT: Shawn Standefer, Richard Zach

p.s. here’s one thing that struck me as particularly transferable:

“Don’t prematurely obsess on a single “big problem” or “big theory”. This is a particularly dangerous occupational hazard in this subject – that one becomes focused, to the exclusion of other mathematical activity, on a single really difficult problem in a field (or on some grand unifying theory) before one is really ready (both in terms of mathematical preparation, and also in terms of one’s career) to devote so much of one’s research time to such a project. When one begins to neglect other tasks (such as writing and publishing one’s “lesser” results), hoping to use the eventual “big payoff” of solving a major problem or establishing a revolutionary new theory to make up for lack of progress in all other areas of one’s career, then this is a strong warning sign that one should rebalance one’s priorities. While it is true that several major problems have been solved, and several important theories introduced, by precisely such an obsessive approach, this has only worked out well when the mathematician involved (a) has a proven track record of reliably producing significant papers in the area already, and (b) has a secure career (e.g. a tenured position). If you do not yet have both (a) and (b), and if your ideas on how to solve a big problem still have a significant speculative component (or if your grand theory does not yet have a definite and striking application), I would strongly advocate a more balanced approach instead: keep the big problems and theories in mind, and tinker with them occasionally, but spend most of your time on more feasible “low-hanging fruit”, which will build up your experience, mathematical power, and credibility for when you are ready to tackle the more ambitious projects.”

Pictures from St Andrews (with added commentary)

Courtesy of Brit over at Lemmings; you can find the originals from the link here.

We had a great time in St Andrews, by the way. Two good conferences, lots of fun time spent with interesting people. And conference-accommodation to die for…

AJP paper

My paper on a certain kind of argument for structural universals has just appeared in AJP. Very exciting from my perspective: I’ve had things “forthcoming” for so long, I think I thought they’d always have that status.

FWIW, the paper discusses a certain argument for the existence of structural universals (that is, universals “made out of” other universals, as “being water” might be thought to be made out of “being Hydrogen”, “being Oxygen”, etc.). The argument is based on the (alleged) possibility of worlds with no fundamental physical layer: where things “go down forever”. Quite a few people use this argument in print, and many more raise it in conversation when you’re pressing a microphysicalist metaphysics.

This is part of a wider project exploring an ontological microphysicalism, where the only things that really exist are the physical fundamentals. The recent stuff on ontological commitment is, in part, a continuation of that project.

On a more practical note, I can’t figure out how you access AJP articles these days: my institution is supposed to have a subscription, but the links that take you to the pdf don’t seem live. Any ideas of how to get into it would be gratefully received!

Vagueness and quantum stuff

I’ve finally put online a tidied up version of my ontic vagueness paper, which’ll be coming out in Phil Quarterly some time soon. One idea in the paper is to give an account of truths in an ontically vague world, making use of the idea that more than one possible world is actual. The result is a supervaluation-like framework, with “precisifications” replaced with precise possible worlds. For some reason, truth-functional multi-valued settings seem to have a much firmer grip on the ontic vagueness debate than on the vagueness debate more generally. That seems a mistake to me.

(The idea of having supervaluation-style treatments of ontic vagueness isn’t unknown in the literature, however: in a couple of papers, Ken Akiba argues for this kind of treatment of ontic vagueness, though his route to this framework is pretty different to the one I like. And Elizabeth Barnes has been thinking and writing about this kind of modal treatment of ontic vagueness for a while, and I owe huge amounts to conversations with her about all of these issues. Her take on these matters is very close to the one I like (non-coincidentally) and those interested should check out her papers for systematic discussion and defense of the coherence of ontic vagueness in this spirit.)

The project in my paper wasn’t to argue that there was ontic vagueness, or even tell you what ontic vagueness (constitutively) is. The project was just to set up a framework for talking about, and reasoning about, metaphysically vague matters, with a particular eye to evaluating the Evans argument against ontically vague identity. In particular, the framework I gave has no chance of giving any sort of reduction of metaphysical indeterminacy, since that very notion was used in defining up bits of the framework. (I’m actually pretty attracted to the view that the right way to think about these things would be to treat indeterminacy as a metaphysical primitive, in the way that some modalists might treat contingency. See this previous blog post. I was later pointed to this excellent paper by David Barnett where he works out this sort of idea in far more detail.)

One thing that I’ve been thinking about recently is how the sort of “indeterminacy” that people talk about in quantum mechanics might relate to this setting. So I want to write a bit about this here.

Some caveats. First, this stuff clearly isn’t going to be interpretation neutral. If you think Bohm gave the right account of quantum ontology, then you’re not going to think there’s much indeterminacy around. So I’ll be supposing something like the GRW interpretation. Second, I’m not going to be metaphysically neutral even given this interpretation: there’s going to be a bunch of other ways of thinking about the metaphysics of GRW that I don’t consider here (I do think, however, that independently motivated metaphysics can contribute to the interpretation of a physical theory). Third, I’m only thinking of non-relativistic quantum theory here: Quantum field theory and the like is just beyond me at the moment. Finally, I’m on a steep learning curve with this stuff, so please excuse stupidities.

You can represent the GRW quantum ontology as a wave function over a certain space (configuration space). Mathematically speaking, that’s a scalar field over a set of points (which then determines a measure over those points) in a high-dimensional space. As time rolls forward, the equations of quantum theory tell you how this field changes its values. Picture it as a wave evolving through time over this space. GRW tells you that at random intervals, this wave undergoes a certain drastic change, and this drastic change is what plays the role of “collapse”.
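The step from scalar field to measure can be made concrete. Here’s a minimal Python sketch, assuming the standard Born-rule recipe (squared modulus of the amplitude, normalized to total 1); the point labels are made up for illustration.

```python
# How a complex scalar field over a set of points determines a measure:
# weight each point by the squared modulus of its amplitude, then
# normalize. (Standard Born-rule recipe; point labels are illustrative.)

field = {'w1': 0.6 + 0.0j, 'w2': 0.0 + 0.8j, 'w3': 0.0 + 0.0j}

def induced_measure(field):
    weights = {w: abs(amplitude) ** 2 for w, amplitude in field.items()}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

mu = induced_measure(field)
print(mu)  # roughly {'w1': 0.36, 'w2': 0.64, 'w3': 0.0}
```

The GRW dynamics then evolves the field itself (with occasional collapses); the measure just comes along for the ride.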

That’s all highly abstract. So let me try parlaying that into something more familiar to metaphysicians.

Suppose you’re interested in a world with N particles in it, at time t. Without departing from classical modes of thinking yet, think of the possible arrangements of those particles at t: a scattering of particles equipped with mass and charge over a 3-dimensional space, say (think of the particles haecceitistically for now). Collect all these possible-world-slices together into a set. There’ll be a certain implicit ordering on this set: if the worlds contain nothing but those N massy and chargey particles located in space-time, then we can describe a world-slice w by giving, for each of the N particles, the coordinates of its location within w: that is, by giving a list of 3N coordinates. What this means is that each world can be regarded as a point in a 3N-dimensional space (the first 3 dimensions giving the position of the first particle in w, the second 3 dimensions the position of the second, etc). And this is what I’m taking to be the “configuration space”. So what is the configuration space, on the way I’m thinking of it? It’s a certain set of time-slices of possible worlds.
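The identification is mechanical enough to sketch in a few lines of Python (coordinates made up for illustration): a world-slice of N particles in 3-space just is a list of 3N numbers, i.e. a single point in a 3N-dimensional space.

```python
# A world-slice of N particles in 3-space, flattened into one point in
# 3N-dimensional configuration space. Coordinates are illustrative only.

def to_config_point(world_slice):
    """Flatten [(x, y, z), ...] particle positions into one 3N-tuple."""
    return tuple(coord for position in world_slice for coord in position)

# Two particles (N = 2): the slice becomes one point in 6 dimensions.
slice_w = [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
point = to_config_point(slice_w)
print(len(point))  # 3N = 6
```

Note that the haecceitistic assumption is doing work here: the flattening only makes sense if there’s a fact about which particle is “first”, “second”, and so on.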

One Bohmian picture of quantum ontology fits very naturally into the way that we usually think of possible worlds at this point. For Bohm says that one point in configuration space is special: it gives the actual positions of particles. And this fits the normal way of thinking of possible worlds: the special point in configuration space is just the slice of the actual world at t. (Bohmian mechanics doesn’t dispense with the wave across configuration space, of course: just as some physical theories would appeal to objective chance in their natural laws, which we can represent as a measure across a space of possible worlds, Bohmianism appeals to a scalar field determining a measure across configuration space: the wavefunction).

But on the GRW interpretation, we don’t get anything like this trad picture. What we have is configuration space and the wave function over it. Sometimes, the amplitude of that wave function is highly concentrated on a set of world-slices that are in certain respects very similar: say, they all contain particles arranged in a rough pointer-shape in a certain location. But nevertheless, no single world will be picked out, and some amplitude will be given to sets of worlds which have the particles in all sorts of odd positions.

But of course, the framework for ontic vagueness I like is up for monkeying around with the actuality of worlds. There needn’t be a single designated actual world, on the way I was thinking of things. But the picture I described doesn’t exactly fit the present situation. For I supposed (following the supervaluationist paradigm) that there’d be a set of worlds, all of which would be “co-actual”.

Yet there are other closely related models that’d help here. In particular, Lewis, Kamp and Edgington have described what I’ll call a “degree supervaluationist” picture that looks to be exactly what we need. Here’s the story, in the original setting. Your classical semantic theorist looks at the set of all possible interpretations of the language, and says that one among them is the designated (or “intended”) one. Truth is truth at the unique, designated, interpretation. Your supervaluationist looks at the same space, and says that there’s a set of interpretations with equal claim to be “intended”: so they should all be co-designated. Truth is truth at each of the co-designated interpretations. Your degree-supervaluationist looks at the set of all interpretations, and says that some are better than others: they are “intended” to different degrees. So the way to describe the semantic facts is to give a measure over the space of interpretations that (roughly) gives in each case the degree to which a given interpretation is designated. Degree supervaluationism will share some of the distinctive features of the classical and standard supervaluational setups: for example, since classical tautologies are true at all interpretations, the law of excluded middle and the like will be “true to degree 1” (i.e. true on a set of interpretations of designation-measure 1).

I don’t see any reason why we can’t take this across to the worlds setting I favoured. Just as the traditional view is that there’s a unique actual world among the space of possible worlds, and I argued that we can make sense of there sometimes being a set of coactual worlds among that space (with something being true if it is true at all of them), I now suggest that we should be up for there being some measure across the space of possible worlds, expressing the degree to which those worlds are actual.

The suggestion this is building up to is that we regard the measure determined by the wavefunction in GRW as the “actuality measure”. Things are determinately the case to the extent that the set of worlds where they’re true is assigned a high measure.

So, for example, suppose that the amplitude of the wavefunction is concentrated on worlds where Sparky is located within region R (suppose the measure of that space of world-slices is 0.9). Then it’ll be determinately the case to degree 0.9 that Sparky is in region R. Of course, in a set of worlds of measure 0.1, Sparky will be outside R. So it’ll be determinately the case to degree 0.1 that Sparky is outside R. (Of course, it’ll be determinate to degree 1 that Sparky is either inside R or outside R: at all the worlds, Sparky is located somewhere!)
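The Sparky example can be put in a few lines of Python (world labels and measures entirely made up): the degree to which p is determinate is just the total actuality-measure of the worlds where p holds, and classical tautologies automatically come out determinate to degree 1.

```python
# Degree-supervaluationist determinacy: the degree of p is the total
# actuality-measure of the worlds at which p holds. Labels/values made up.

actuality_measure = {'w_in_1': 0.5, 'w_in_2': 0.4, 'w_out': 0.1}
sparky_in_R = {'w_in_1': True, 'w_in_2': True, 'w_out': False}

def degree(proposition, measure):
    return sum(m for w, m in measure.items() if proposition[w])

print(degree(sparky_in_R, actuality_measure))      # 0.9
# A classical tautology holds at every world, so gets degree 1:
in_R_or_not = {w: True for w in actuality_measure}
print(degree(in_R_or_not, actuality_measure))      # 1.0
```

This is why the setup stays supervaluation-like rather than truth-functional: the degree of a disjunction isn’t computed from the degrees of its disjuncts, but from the measure of the set of worlds where the disjunction holds.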

I don’t expect this to shed much light at all on what the wavefunction means. Ontic indeterminacy, many think, is a pretty obscure notion taken cold, and I’m not expecting metaphysicians or anyone else to find the notion of “degrees of actuality” something they recognize. So I’m not saying that there’s any illuminating metaphysics of GRW here. I think the illumination is likely to go in the other direction: if you can get a pre-philosophical grip on the “determinacy” and “no fact of the matter” talk in quantum physics, we’ve got a way of using that to explain talk of “degrees of actuality” and the like. Nevertheless, I think that, if this all works technically, then a bunch of substantive results follow. Here’s a few thoughts in that direction:

  1. We’ve got a candidate for vagueness in the world, linked to a general story about how to think about ontic vagueness. Given ontic vagueness isn’t in the best repute in the philosophical community, there’s an important “existence result” in the offing here.
  2. Recall the idea canvassed earlier that “determinacy” or an equivalent might just be a metaphysical primitive. Well, here we have the suggestion that what amounts to (degrees of) determinacy is taken as a *physical* primitive. And taking the primitives of fundamental physics as a prima facie guide to metaphysical primitives is a well-trodden route, so I think some support for that idea could be found here.
  3. If there is ontic vagueness in the quantum domain, then we should be able to extract information about the appropriate way to think and reason in the presence of indeterminacy, by looking at an appropriately regimented version of how this goes in physics. And notice that there’s no suggestion here that we go for a truth-functional degree theory with the consequent revisions of classical logic: rather, a variant of the supervaluational setup seems to me to be the best regimentation. If that’s right, then it lends support to the (currently rather heterodox) supervaluational-style framework for thinking about metaphysical vagueness.
  4. I think that there’s a bunch of alleged metaphysical implications of quantum theory that don’t *obviously* go through if we buy into the sort of metaphysics of GRW just suggested. I’m thinking in particular about the allegation that quantum theory teaches us that certain systems of particles have “emergent properties” (Jonathan Schaffer has been using this recently as part of his defence of Monism). Bohmianism already shows, I guess, that this sort of claim won’t be interpretation-neutral. But the above picture I think complicates the case for holism even within GRW.

(Thanks are owed to a bunch of people, particularly George Darby, for discussion of this stuff. They shouldn’t be blamed for any misunderstandings of the physics, or indeed, philosophy, that I’m making!)