Category Archives: Metaphysics

From vague parts to vague identity

(Update: as Dan notes in the comment below, I should have clarified that the initial assumption is supposed to be that it’s metaphysically vague what the parts of Kilimanjaro (Kili) are. Whether we should describe the conclusion as deriving a metaphysically vague identity is a moot point.)

I’ve been reading an interesting argument that Brian Weatherson gives against “vague objects” (in this case, meaning objects with vague parts) in his paper “Many many problems”.

He gives two versions. The easiest one is the following. Suppose it’s indeterminate whether Sparky is part of Kili, and let K+ and K- be the usual minimal variations of Kili (K+ differs from Kili only in determinately containing Sparky, K- only by determinately failing to contain Sparky).

Further, endorse the following principle (scp): if A and B coincide mereologically at all times, then they’re identical. (Weatherson’s other arguments weaken this assumption, but let’s assume we have it, for the sake of argument).

The argument then runs as follows:
1. Either Sparky is part of Kili, or she isn’t. (LEM)
2. If Sparky is part of Kili, Kili coincides at all times with K+. (by definition of K+)
3. If Sparky is part of Kili, Kili=K+. (by 2, scp)
4. If Sparky is not part of Kili, Kili coincides at all times with K-. (by definition of K-)
5. If Sparky is not part of Kili, Kili=K-. (by 4, scp)
6. Either Kili=K+ or Kili=K-. (1, 3, 5)

At this point, you might think that things are fine. As my colleague Elizabeth Barnes puts it in this discussion of Weatherson’s argument, you might simply think that only the following has been established: that it is determinate that either Kili=K+ or Kili=K-, but that it is indeterminate which.

I think we might be able to get an argument for this. First of all, presumably all the premises of the above argument hold determinately. So the conclusion holds determinately. We’ll use this in what follows.

Suppose that D(Kili=K+). Then it would follow that Sparky was determinately a part of Kili, contrary to our initial assumption. So ~D(Kili=K+). Likewise ~D(Kili=K-).

Can it be that they are determinately distinct? If D(~Kili=K+), then assuming that (6) holds determinately, D(Kili=K+ or Kili=K-), we can derive D(Kili=K-), which contradicts what we’ve already proven. So ~D(~Kili=K+) and likewise ~D(~Kili=K-).
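The reasoning of the last two paragraphs can be set out as a short derivation (the regimentation is mine, not Weatherson’s: D for “determinately”, ∇ for “it is indeterminate whether”, K for Kili, s for Sparky):

```latex
\begin{align*}
&(\mathrm{i}) \quad D(K{=}K^{+} \lor K{=}K^{-}) && \text{(6), premises holding determinately}\\
&(\mathrm{ii}) \quad D(K{=}K^{+}) \rightarrow D(\mathrm{Part}(s,K)) && \text{definition of } K^{+}\\
&(\mathrm{iii}) \quad \lnot D(\mathrm{Part}(s,K)) && \text{initial assumption}\\
&(\mathrm{iv}) \quad \lnot D(K{=}K^{+}),\ \lnot D(K{=}K^{-}) && \text{(ii), (iii), and the analogue for } K^{-}\\
&(\mathrm{v}) \quad D(K{\neq}K^{+}) \rightarrow D(K{=}K^{-}) && \text{(i), closure of } D \text{ under consequence}\\
&(\mathrm{vi}) \quad \lnot D(K{\neq}K^{+}),\ \lnot D(K{\neq}K^{-}) && \text{(iv), (v)}\\
&(\mathrm{vii}) \quad \nabla(K{=}K^{+}),\ \nabla(K{=}K^{-}) && \text{(iv), (vi); } \nabla p \text{ defined as } \lnot Dp \land \lnot D\lnot p
\end{align*}
```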

So the upshot of the Weatherson argument, I think, is this: it is indeterminate whether Kili=K+, and indeterminate whether Kili=K-. The moral: vagueness in composition gives rise to vague identity.

Of course, there are well known arguments against vague identity. Weatherson doesn’t invoke them, but once he reaches (6) he seems to think the game is up, for what look to be Evans-like reasons.

My working hypothesis at the moment, however, is that whenever we get vague identity in the sort of way just illustrated (inherited from other kinds of ontic vagueness), we can wriggle out of the Evans reasoning without significant cost. (I go through some examples of this in this forthcoming paper). The over-arching idea is that the vagueness in parthood, or whatever, can be plausibly viewed as inducing some referential indeterminacy, which would then block the abstraction steps in the Evans proof.
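For reference, the Evans proof runs roughly as follows (with a and b the relevant singular terms; (2) and (4) are the abstraction steps that referential indeterminacy in the terms would block):

```latex
\begin{align*}
&(1) \quad \nabla(a=b) && \text{assumption}\\
&(2) \quad \lambda x[\nabla(x=a)]b && \text{from (1), abstraction}\\
&(3) \quad \lnot\nabla(a=a) && \text{determinacy of self-identity}\\
&(4) \quad \lnot\lambda x[\nabla(x=a)]a && \text{from (3), abstraction}\\
&(5) \quad \lnot(a=b) && \text{(2), (4), Leibniz's Law}
\end{align*}
```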

Since Weatherson’s argument is supposed to be a general one against vague parthood, I’m at liberty to fix the case in any way I like. Here’s how I choose to do so. Let’s suppose that the world contains two objects, Kili and Kili*. Kili* is just like Kili, except that determinately, Kili and Kili* differ over whether they contain Sparky.

Now, think of reality as indeterminate between two ways: one in which Kili contains Sparky, the other where it doesn’t. What of our terms “K+” and “K-“? Well, if Kili contains Sparky, then “K+” denotes Kili. But if it doesn’t, then “K+” denotes Kili*. Mutatis mutandis for “K-“. Since it is indeterminate which option obtains, “K+” and “K-” are referentially indeterminate, and one of the abstraction steps in the Evans proof fails.

Now, maybe it’s built into Weatherson’s assumptions that the “precise” objects like K+ and K- exist, and perhaps we could still cause trouble. But I’m not seeing cleanly how to get it. (Notice that you’d need more than just the axioms of mereology to secure the existence of [objects determinately denoted by] K+ and K-: Kili and Kili* alone would secure the truth that there are fusions including Sparky and fusions not including Sparky). But at this point I think I’ll leave it for others to work out exactly what needs to be added…

Why preserve the letter of Humean supervenience?

Today in the phil physics reading group here at Leeds we were discussing Tim Maudlin’s paper “Why be Humean?”.

The question arose about why we should adhere to the letter of the Humean supervenience principle. What that requires is that everything there is should supervene on the distribution of fundamental (local, monadic) properties and spatio-temporal relations. Why not e.g. allow further perfectly natural relations holding between point particles, so long as they are physically motivated and don’t enter into necessary connections with other fundamental properties or relations?

Brian Weatherson’s Lewis blog addressed something like this question at one point. His suggestion (I take it) was that the interest of tightly-constrained Humean supervenience was methodological: roughly, if we can fit all important aspects of the manifest image (causality, intentionality, consciousness, laws, modality, whatever) into an HS world, then we should be confident that we could do the same in non-HS worlds, worlds which are more generous with the range of fundamentals they commit us to. If Brian’s right about this, the motivation for going for the strongest formulation of HS is that allowing any more would make our stories about how to fit the manifest image into the world as described by science more dependent on exactly what science delivers.

If that’s the motivation for HS, then it’s not so interesting whether physics contradicts HS: what’s interesting is whether the stories about causality, intentionality and the rest that Lewis describes with the HS equipment in mind, go through in the non-HS worlds with minimal alteration.

Bohm and Lewis

So I’ve been thinking and reading a bit about quantum theory recently (originally in connection with work on ontic vagueness). One thing that’s been intriguing me is the Bohmian interpretation of non-relativistic quantum theory. The usual caveats apply: I’m no expert in this area, on a steep learning curve, wouldn’t be terribly surprised if there’s some technical error in here somewhere.

What is Bohmianism? Well, to start with it’s quite a familiar picture. There are a bunch of particles, each supplied with non-dynamical properties (like charge and mass) and definite positions, which move around in a familiar three-dimensional space. The actual trajectories of those particles, though, are not what you’d expect from a classical point of view: they don’t trace straight lines through the space, but rather wobbly ones, as if they were bobbing around on some wave.

The other part of the Bohmian picture, I gather, is that one appeals to a wavefunction that lives in a space of far higher dimension: configuration space. As mentioned in a previous post I’m thinking of this as a set of (temporal slices of) possible worlds. The actual world is a point in configuration space, just as one would expect given this identification.

The first part of the Bohmian picture sounds all very safe from the metaphysician’s perspective: the sort of world at which, for example, Lewis’s project of Humean supervenience could get off and running, the sort of thing to give us the old-school worries about determinism and freedom (the evolution of a Bohmian world is totally deterministic). And so on and so forth.

But the second part is all a bit unexpected. What is a wave in modal space? Is that a physical thing (after all, it’s invoked in fundamental physical theory)? How can a wave in modal space push around particles in physical space? Etc.

I’m sure there’s lots of interesting phil physics and metaphysics to be done that takes the wave function seriously (I’ve started reading some of it). But I want to sketch a metaphysical interpretation of the above that treats it unseriously, for those of us with weak bellies.

The inspiration is Lewis’s treatment of objective chance (as explained, for example, in his “Humean supervenience debugged”). The picture of chance he there sketches has some affinities to frequentism: when we describe what there is and how it is in fundamental terms, we never mention chances. Rather, we just describe patterns of instantiation: a radioactive decay here, now, another radioactive decay there, then (for example). What one then has to work with are certain statistical regularities that emerge from the mosaic of non-chancy facts.

Now, it’s very informative to be told about these regularities, but it’s not obvious how to capture that information within a simple theory (we could just write down the actual frequencies, but that’d be pretty ugly, and wouldn’t allow us to capture underlying patterns among the frequencies). So, Lewis suggests, when we’re writing down the laws, we should avail ourselves of a new notion “P”, assigning numbers to proposition-time pairs, obeying the usual probability axioms. We’ll count a P-theory as “fitting” with the facts (roughly) to the extent that the P-values it assigns to propositions match up, overall, to the statistical regularities we mentioned earlier. Thus, if we’re told that a certain P-theory is “best”, we’re given some (ceteris paribus) information on what the statistical regularities are. At little cost in complexity, therefore, our theory gains enormously in informativeness.

The proposal, then, is that the chance of p at t is n iff the overall best theory assigns n to (p, t).

That’s very rough, but I hope the overall idea is clear: we can be “selectively instrumentalist” about some of the vocabulary that appears in fundamental physical theory. Though many of the physical primitives will also be treated as metaphysically basic (as expressing “natural properties”), some bits that by the lights of independently motivated metaphysics are “too scary” can be regarded as just reflections of best theory, rather than part of the furniture of the world.
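To make the “fit” and “bestness” talk vivid, here is a toy model of the best-system proposal about chance; the simple likelihood measure of fit, and all the names, are my own illustrative choices, not anything from Lewis.

```python
# A toy best-system account of chance. The "Humean mosaic" is just a sequence
# of decay/no-decay outcomes; candidate P-theories are single numbers p; "fit"
# is the likelihood of the actual history given p. Illustrative only.

def fit(p, history):
    """Likelihood of the actual outcome-pattern, given chance-value p."""
    likelihood = 1.0
    for outcome in history:
        likelihood *= p if outcome else (1 - p)
    return likelihood

def best_theory(candidates, history):
    """The candidate P-theory that best fits the mosaic of non-chancy facts."""
    return max(candidates, key=lambda p: fit(p, history))

# 7 decays out of 10 trials: among these candidates, p = 0.7 fits best,
# so on the proposal the chance of decay is 0.7.
history = [True] * 7 + [False] * 3
candidates = [i / 10 for i in range(1, 10)]
print(best_theory(candidates, history))  # 0.7
```

The same shape of story carries over to the wavefunction case below: swap the single number p for a scalar-valued L-function, and “fit” for predictive accuracy about particle positions.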

The question relevant here is: why stop at chance? If we’ve been able to get rid of one function over the space of possible worlds (the chance measure), why not do the same with another metaphysically troubling piece of theory: the wavefunction field?

Recall the first part of the Bohmian picture: particles moving through 3-space, in rather odd paths “as if guided by a wave”. Suppose this was all there (fundamentally) was. Well then, we’re going to be in a lot of trouble finding a decent way of encapsulating all this data about the trajectories of particles: the theory would be terribly unwieldy if we had to write out the exact trajectories in longhand. As before, there’s much to be gained in informativeness if we allow ourselves a new notion in the formulation of overall theory: L, say. L will assign scalar values (complex numbers) to proposition-time pairs, and we can then use L in writing down the wavefunction equations of quantum mechanics, which elegantly predict the future positions of particles on the basis of their present positions. The “best” L-theory, of course, will be the one whose predictions of the future positions of particles fit the actual future-facts. The idea is that wavefunction talk is thereby allowed for: the wavefunction takes value z at region R of configuration space at time t iff the best L-theory assigns z to L(R, t).

So that’s the proposal: we’re selectively instrumentalist about the wavefunction, just as Lewis is selectively instrumentalist about objective chance (I’m using “instrumentalist” in a somewhat picturesque sense, by the way: I’m certainly not denying that chance or wavefunction talk has robust, objective truth-conditions.) There are, of course, ways of being unhappy with this sort of treatment of basic physical notions in general (e.g. one might complain that the explanatory force has been sucked from notions of chance, or the wavefunction). But I can’t see anything that Humeans such as Lewis should be unhappy with here.

(There’s a really nice paper by Barry Loewer on Lewisian treatments of objective chance which I think is the thing to read on this stuff. Interestingly, at the end of that paper he canvasses the possibility of extending the account to the “chances” one (allegedly) finds in Bohmianism. It might be that he has in mind something that is, in effect, exactly the position sketched above. But there are also reasons for thinking there might be differences between the two ideas. Loewer’s idea turns on the thought that one can have something that deserves the name objective chance even in a world for which there are deterministic laws underpinning what happens (as is the case both for Bohmianism, and for the chancy laws of statistical mechanics in a chancy world). I’m inclined to agree with Loewer on this, but even if that were given up, and one thought that the measure induced by the wavefunction isn’t a chance-measure, the position I’ve sketched is still a runner: the fundamental idea is to use the Lewisian tactics to remove ideological commitment quite generally, not just commitment to chance specifically. [Update: it turns out that Barry definitely wasn’t thinking of getting rid of the wavefunction in the way I canvass in this post: the suggestion in the cited paper is just to deal with the Bohmian (deterministic) chances in the Lewisian way])

[Update: I’ve just read through Jonathan Schaffer’s BJPS paper which (inter alia) attacks the Loewer treatment of chance in Stat Mechanics and Bohm Mechanics (though I think some of his arguments are more problematic in the Bohmian case than the stat case.) But anyway, if Jonathan is right, it still wouldn’t matter for the purposes of the theory presented here, which doesn’t need to make the claim that the measure determined by the wavefunction is anything to do with chance: it has a theoretical role, in formulating the deterministic dynamical laws, that’s quite independent of the issues Jonathan raises.]

AJP paper

My paper on a certain kind of argument for structural universals has just appeared in AJP. Very exciting from my perspective: I’ve had things “forthcoming” for so long, I think I thought they’d always have that status.

FWIW, the paper discusses a certain argument for the existence of structural universals (that is, universals “made out of” other universals, as “being water” might be thought to be made out of “being Hydrogen” “being Oxygen” etc.) The argument is based on the (alleged) possibility of worlds with no fundamental physical layer: where things “go down forever”. Quite a few people use this argument in print, and many more raise it in conversation when you’re pressing a microphysicalist metaphysics.

This is part of a wider project exploring an ontological microphysicalism, where the only things that really exist are the physical fundamentals. The recent stuff on ontological commitment is, in part, a continuation of that project.

On a more practical note, I can’t figure out how you access AJP articles these days: my institution is supposed to have a subscription, but the links that take you to the pdf don’t seem live. Any ideas of how to get into it would be gratefully received!

Vagueness and quantum stuff

I’ve finally put online a tidied up version of my ontic vagueness paper, which’ll be coming out in Phil Quarterly some time soon. One idea in the paper is to give an account of truths in an ontically vague world, making use of the idea that more than one possible world is actual. The result is a supervaluation-like framework, with “precisifications” replaced by precise possible worlds. For some reason, truth-functional multi-valued settings seem to have a much firmer grip on the ontic vagueness debate than on the vagueness debate more generally. That seems a mistake to me.

(The idea of having supervaluation-style treatments of ontic vagueness isn’t unknown in the literature however: in a couple of papers, Ken Akiba argues for this kind of treatment of ontic vagueness, though his route to this framework is pretty different to the one I like. And Elizabeth Barnes has been thinking and writing about this kind of modal treatment of ontic vagueness for a while, and I owe huge amounts to conversations with her about all of these issues. Her take on these matters is very close to the one I like (non-coincidentally) and those interested should check out her papers for systematic discussion and defense of the coherence of ontic vagueness in this spirit.)

The project in my paper wasn’t to argue that there was ontic vagueness, or even tell you what ontic vagueness (constitutively) is. The project was just to set up a framework for talking about, and reasoning about, metaphysically vague matters, with a particular eye to evaluate the Evans argument against ontically vague identity. In particular, the framework I gave has no chance of giving any sort of reduction of metaphysical indeterminacy, since that very notion was used in defining up bits of the framework. (I’m actually pretty attracted to the view that the right way to think about these things would be to treat indeterminacy as a metaphysical primitive, in the way that some modalists might treat contingency. See this previous blog post. I was later pointed to this excellent paper by David Barnett where he works out this sort of idea in far more detail.)

One thing that I’ve been thinking about recently is how the sort of “indeterminacy” that people talk about in quantum mechanics might relate to this setting. So I want to write a bit about this here.

Some caveats. First, this stuff clearly isn’t going to be interpretation neutral. If you think Bohm gave the right account of quantum ontology, then you’re not going to think there’s much indeterminacy around. So I’ll be supposing something like the GRW interpretation. Second, I’m not going to be metaphysically neutral even given this interpretation: there’s going to be a bunch of other ways of thinking about the metaphysics of GRW that I don’t consider here (I do think, however, that independently motivated metaphysics can contribute to the interpretation of a physical theory). Third, I’m only thinking of non-relativistic quantum theory here: Quantum field theory and the like is just beyond me at the moment. Finally, I’m on a steep learning curve with this stuff, so please excuse stupidities.

You can represent the GRW quantum ontology as a wave function over a certain space (configuration space). Mathematically speaking, that’s a scalar field over a set of points (which then determines a measure over those points) in a high-dimensional space. As time rolls forward, the equations of quantum theory tell you how this field changes its values. Picture it as a wave evolving through time over this space. GRW tells you that at random intervals, this wave undergoes a certain drastic change, and this drastic change is what plays the role of “collapse”.

That’s all highly abstract. So let me try parlaying that into something more familiar to metaphysicians.

Suppose you’re interested in a world with N particles in it, at time t. Without departing from classical modes of thinking yet, think of the possible arrangements of those particles at t: a scattering of particles equipped with mass and charge over a 3-dimensional space, say (think of the particles haecceitistically for now). Collect all these possible-world-slices together into a set. There’ll be a certain implicit ordering on this set: if the worlds contain nothing but those N massy and chargey particles located in space-time, then we can describe a world-slice w by giving, for each of the N particles, the coordinates of its location within w: that is, by giving a list of 3N coordinates. What this means is that each world can be regarded as a point in a 3N-dimensional space (the first 3 dimensions giving the position of the first particle in w, the second 3 the position of the second, etc.). And this is what I’m taking to be the “configuration space”. So what is the configuration space, on the way I’m thinking of it? It’s a certain set of time-slices of possible worlds.
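The identification just described is easy to picture concretely; here is a minimal sketch (illustrative names only) of how a world-slice of N particles becomes a single point in a 3N-dimensional space:

```python
# A world-slice: N particle positions in 3-space. The corresponding point in
# configuration space is just the 3N coordinates laid end to end.

def to_config_point(world_slice):
    """Flatten a list of N particle positions (x, y, z) into one 3N-tuple."""
    return tuple(coord for position in world_slice for coord in position)

# Two particles: a world-slice becomes a point in R^6.
w = [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
print(to_config_point(w))  # (0.0, 1.0, 2.0, 3.0, 4.0, 5.0)
```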

One Bohmian picture of quantum ontology fits very naturally into the way that we usually think of possible worlds at this point. For Bohm says that one point in configuration space is special: it gives the actual positions of particles. And this fits the normal way of thinking of possible worlds: the special point in configuration space is just the slice of the actual world at t. (Bohmian mechanics doesn’t dispense with the wave across configuration space, of course: just as some physical theories would appeal to objective chance in their natural laws, which we can represent as a measure across a space of possible worlds, Bohmianism appeals to a scalar field determining a measure across configuration space: the wavefunction).

But on the GRW interpretation, we don’t get anything like this trad picture. What we have is configuration space and the wave function over it. Sometimes, the amplitude of that wave function is highly concentrated on a set of world-slices that are in certain respects very similar: say, they all contain particles arranged in a rough pointer shape in a certain location. But nevertheless, no single world will be picked out, and some amplitude will be given to sets of worlds which have the particles in all sorts of odd positions.

But of course, the framework for ontic vagueness I like is up for monkeying around with the actuality of worlds. There needn’t be a single designated actual world, on the way I was thinking of things. But the picture I described doesn’t exactly fit the present situation. For I supposed (following the supervaluationist paradigm) that there’d be a set of worlds, all of which would be “co-actual”.

Yet there are other closely related models that’d help here. In particular, Lewis, Kamp and Edgington have described what I’ll call a “degree supervaluationist” picture that looks to be exactly what we need. Here’s the story, in the original setting. Your classical semantic theorist looks at the set of all possible interpretations of the language, and says that one among them is the designated (or “intended”) one. Truth is truth at the unique, designated, interpretation. Your supervaluationist looks at the same space, and says that there’s a set of interpretations with equal claim to be “intended”: so they should all be co-designated. Truth is truth at each of the co-designated interpretations. Your degree-supervaluationist looks at the set of all interpretations, and says that some are better than others: they are “intended” to different degrees. So the way to describe the semantic facts is to give a measure over the space of interpretations that (roughly) gives in each case the degree to which a given interpretation is designated. Degree supervaluationism will share some of the distinctive features of the classical and standard supervaluational setups: for example, since classical tautologies are true at all interpretations, the law of excluded middle and the like will be “true to degree 1” (i.e. true on a set of interpretations of designation-measure 1).
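In outline (my notation, not Lewis, Kamp or Edgington’s): where m is the designation measure over the space I of classical interpretations, the degree-supervaluationist proposal, and the degree-1 status of excluded middle, come to this:

```latex
\begin{gather*}
\mathrm{Deg}(\varphi) \;=\; m\big(\{\, i \in I : i \vDash \varphi \,\}\big)\\
\mathrm{Deg}(\varphi \lor \lnot\varphi) \;=\; m(I) \;=\; 1
\quad\text{(every classical interpretation verifies } \varphi \lor \lnot\varphi\text{)}
\end{gather*}
```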

I don’t see any reason why we can’t take this across to the worlds setting I favoured. Just as the traditional view is that there’s a unique actual world among the space of possible worlds, and I argued that we can make sense of there sometimes being a set of coactual worlds among that space (with something being true if it is true at all of them), I now suggest that we should be up for there being some measure across the space of possible worlds, expressing the degree to which those worlds are actual.

The suggestion this is building up to is that we regard the measure determined by the wavefunction in GRW as the “actuality measure”. Things are determinately the case to the extent that the set of worlds where they’re true is assigned a high measure.

So, for example, suppose that the amplitude of the wavefunction is concentrated on worlds where Sparky is located within region R (suppose the measure of that space of world-slices is 0.9). Then it’ll be determinately the case to degree 0.9 that Sparky is in location R. Of course, in a set of worlds of measure 0.1, Sparky will be outside R. So it’ll be determinately the case to degree 0.1 that Sparky is outside R. (Of course, it’ll be determinate to degree 1 that Sparky is either inside R or outside R: at all the worlds, Sparky is located somewhere!)
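In toy form (the worlds, weights, and names are invented purely for illustration), the degree calculations in the Sparky example work like this:

```python
# A toy "actuality measure": worlds carry weights summing to 1 (the measure
# the wavefunction determines), and a claim is determinate to the degree
# given by the measure of the set of worlds where it holds.

def degree(measure, proposition):
    """Measure of the set of worlds at which the proposition is true."""
    return sum(weight for world, weight in measure.items() if proposition(world))

# Worlds specified by Sparky's location; 0.9 of the measure sits on region R.
measure = {"in_R_a": 0.5, "in_R_b": 0.4, "outside_R": 0.1}
in_R = lambda w: w.startswith("in_R")

print(degree(measure, in_R))                              # 0.9
print(degree(measure, lambda w: not in_R(w)))             # 0.1
print(degree(measure, lambda w: in_R(w) or not in_R(w)))  # 1.0 (excluded middle)
```

Note that the supervaluation-style behaviour falls out automatically: the tautology gets degree 1 even though neither disjunct does.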

I don’t expect this to shed much light at all on what the wavefunction means. Ontic indeterminacy, many think, is a pretty obscure notion taken cold, and I’m not expecting metaphysicians or anyone else to find the notion of “degrees of actuality” something they recognize. So I’m not saying that there’s any illuminating metaphysics of GRW here. I think the illumination is likely to go in the other direction: if you can get a pre-philosophical grip on the “determinacy” and “no fact of the matter” talk in quantum physics, we’ve got a way of using that to explain talk of “degrees of actuality” and the like. Nevertheless, I think that, if this all works technically, then a bunch of substantive results follow. Here’s a few thoughts in that direction:

  1. We’ve got a candidate for vagueness in the world, linked to a general story about how to think about ontic vagueness. Given ontic vagueness isn’t in the best repute in the philosophical community, there’s an important “existence result” in the offing here.
  2. Recall the idea canvassed earlier that “determinacy” or an equivalent might just be a metaphysical primitive. Well, here we have the suggestion that what amounts to (degrees of) determinacy be taken as a *physical* primitive. And taking the primitives of fundamental physics as a prima facie guide to metaphysical primitives is a well-trodden route, so I think some support for that idea could be found here.
  3. If there is ontic vagueness in the quantum domain, then we should be able to extract information about the appropriate way to think and reason in the presence of indeterminacy, by looking at an appropriately regimented version of how this goes in physics. And notice that there’s no suggestion here that we go for a truth-functional degree theory with the consequent revisions of classical logic: rather, a variant of the supervaluational setup seems to me to be the best regimentation. If that’s right, then it lends support to the (currently rather heterodox) supervaluational-style framework for thinking about metaphysical vagueness.
  4. I think that there’s a bunch of alleged metaphysical implications of quantum theory that don’t *obviously* go through if we buy into the sort of metaphysics of GRW just suggested. I’m thinking in particular of the allegation that quantum theory teaches us that certain systems of particles have “emergent properties” (Jonathan Schaffer has been using this recently as part of his defence of Monism). Bohmianism already shows, I guess, that this sort of claim won’t be interpretation-neutral. But the above picture, I think, complicates the case for holism even within GRW.

(Thanks are owed to a bunch of people, particularly George Darby, for discussion of this stuff. They shouldn’t be blamed for any misunderstandings of the physics, or indeed, philosophy, that I’m making!)

Parsimony and the fundamental (x-posted from metaphysical values)

A bit of cross-posting, this one…

In his APA comments on Jonathan Schaffer, Ross asks about some of Jonathan’s ideas about the applicability of Ockham’s razor. The question arises if you buy into some robust distinction between “fundamental” and “derivative” existents. Candidate fundamental existents: quarks, electrons, maybe organisms (or maybe just THE WORLD). Candidate derivative existents: weirdo fusions, impure sets, maybe tables and chairs (or maybe everything except THE WORLD).

Let’s call the idea that “derivative” as well as “fundamental” entities are (thump table) existing things the expansivist interpretation of the fundamental/derivative distinction. Call the idea that only the fundamental (thump table) exists the restrictivist interpretation of that distinction.

Jonathan’s position is that Ockham’s razor, rightly understood, tells us to minimize the number of fundamental entities. Ross’s idea (I think?) is that this is right iff one has a restrictivist understanding of the fundamental/derivative distinction. But Jonathan, pretty clearly, has an expansivist understanding of that distinction: he doesn’t want to say that the only thing that (thump table) exists is the world, just that the world is ontologically prior to everything else. So if Ross is right, Jonathan’s application of parsimony is in trouble.

I can see what the idea is here: after all, understanding parsimony as the instruction to minimize (thump table) existents, or to minimize the (thump table) kinds of existents, is surely close to the traditional understanding. Whereas the idea that we need only minimize (kinds of) existents of such-and-such a type seems to come a bit out of the blue, and at minimum we need some more explanation before we could accept that revision to our theoretical maxims.

However… One thing that seems important is to consider what sort of principles of parsimony might be present in more ordinary theorizing (e.g. in the special sciences). The appeal of invoking parsimony in metaphysics is in large part that it’s a general theoretical virtue, applicable in all sorts of areas that are paradigms of good, productive fields of inquiry. Now, theoretical virtues in the sciences are not a topic that I’m in a position to speak on with authority. But one thing seems to me important in this connection: if you think that the entities of the special sciences aren’t fundamental entities, then principles of parsimony restricted to the fundamentals aren’t going to have much bite there. (NB: I think that this was raised by someone in comments on Jonathan’s paper in Boise, but I can’t remember who it was…).

If that’s right, then whether you’re an expansivist or a restrictivist about the fundamental/derivative distinction seems beside the point. Any theorist who gives a story about what the fundamentals are that’s unconstrained by what the special sciences say is going to be in trouble with the idea that principles of parsimony should be restricted to constraints on fundamental existents: for such principles of parsimony won’t then be able to get much bite on theorizing in the special sciences. I’d like to think that quarks, leptons etc. are going to populate the fundamental, rather than Jonathan’s WORLD. This point bites me as much as it does Jonathan.

There’s plenty of room for further discussion here, particularly the interaction of the above with what you take to be evidence for some entities being fundamental. E.g. if you thought that various types of emergentism in special science would be evidence for “higher level” fundamental entities, then maybe the above parsimony principle would still have application to special sciences: it’d tell you to reduce the number of emergent entities you postulate (i.e. it’d be a methodological imperative towards reductionism).

Also, it seems to me that there is something to the thought that some entities are simply “don’t cares” when applying parsimony principles. If I’m concerned with theorizing about the behaviour of various beetles in front of me, I care about how many kinds of beetles my theory is giving me, but not about how many kinds of mathematical entities I need to invoke in formulating that theory. Now, maybe that differential attitude can be explained away by pointing to the generality of the mathematics involved (e.g. that total science is “already committed to them”). But one natural take would be to look for restrictions on principles of parsimony/Ockham’s razor, making them sensitive to the subject-matter under investigation.

To speculate wildly: If principles of parsimony do need to be sensitized in this way, and if the study of what fundamentally exists is a genuine investigation, maybe the principle of parsimony, in application to that study, really would tell us to minimize the number of, and kinds of, fundamental entities we posit.

Fundamental and derivative truths

After a bit of to-ing and fro-ing, I’ve decided to post a first draft of “Fundamental and derivative truths” on my work in progress page.

I've been thinking about this material a lot lately, but I've found it surprisingly difficult to formulate and explain. I can see how everything fits together: I'm just not sure how best to go about explaining it to people. Different people react to it in such different ways!

The paper does a bunch of things:

  • offering an interpretation of Kit Fine‘s distinction between things that are really true, and things that are merely true. (So, e.g. tables might exist, but not really exist).
  • using Agustin Rayo‘s recent proposal for formulating a theory of requirements/ontological commitments in explication.
  • putting forward a general strategy for formulating nihilist-friendly theories of requirements (set theoretic nihilism and mereological nihilisms being the illustrative cases used in the paper).
  • using this to give an account of “postulating” things into existence (e.g. sets, weirdo fusions).
  • sketching a general answer to the question: in virtue of what do our sentences have the ontological commitments they do (i.e. what makes a theory of requirements *the correct one* for this or that language?)

This is exploratory stuff: there's lots more to be said about each of these, and plenty more issues (e.g. how does this relate to fictionalist proposals?). But I'm at a stage where feedback and discussion are perhaps the most important things, so making it public seems a natural strategy…

I’m going to be talking in more detail about the case of mereological nihilism at the CMM structure in metaphysics workshop.

Perspectives and magnets

As Brian Weatherson notes, the new Philosophical Perspectives is now out. This includes a paper of mine called "Illusions of gunk". The paper defends mereological nihilism (the view that no complex objects exist) against a certain type of worry: (1) that mereological nihilism is necessary, if true; and (2) that "gunk-worlds" (worlds apparently containing no non-complex objects) are possible. (See this paper of Ted Sider's for the worry.) I advise the mereological nihilist to reject (2). There are various possibilities that the nihilist can admit, that plausibly explain the illusion that gunk is possible.

The volume looks to be full of interesting papers, but there’s one in particular I’ve read before, so I’ll write a little about that right now.

The paper is Brian Weatherson’s “Asymmetric Magnets Problem”. The puzzle he sets out is based on a well-entrenched link between intrinsicality and duplication: a property is intrinsic iff necessarily, it is shared among duplicate objects. Weatherson examines an application of this principle to a case where some of the features of the objects we consider are vectorial.

In particular, consider an asymmetric magnet M: one which has a pointy bit at one end, and is such that the north pole of the magnet "points out" of the pointy end. Intuitively, M is a duplicate of another magnet M*: one with the same shape, but simply rotated by 180 degrees so that both the north pole and the pointy end are orientated in the opposite direction to M. (Weatherson has some nice pictures, if you want to be clear about the situation.)

Though M and M* seem to be duplicates, their vectorial features differ: M has its north pole pointing in one direction, M* has its north pole pointing in the opposite direction. Moral: given the link, we can’t take vectorial properties “as a whole” (i.e. building in their directions) as intrinsic, for they differ between duplicates.

What if we think that only the magnitude of a vectorial feature is intrinsic? Then we get a different problem: for there are pointy magnets whose north pole is directed out of the non-pointy end. Call one of these M**. But in shape properties, and so on, it matches M and M*. And ex hypothesi, in all intrinsic respects, their vectorial features are the same. So M, M* and M** all count as duplicates. But that's intuitively wrong (it's claimed).

Such is the asymmetric magnets problem. The challenge is to say something precise about how to think about the duplication of things with vectorial features, that’d preserve both intuitions and the duplication-intrinsicality link.

Weatherson's response is to take a certain relationship between parts of the pointy magnet and its vectorial feature as intrinsic to the magnet. In effect, he takes the relative orientation of the north-pole vector, and a line connecting certain points within the magnet, as intrinsic.

Ok, that’s Weatherson’s line in super-quick summary, as I read him. Here are some thoughts.

First thing to note: the asymmetric magnets problem looks like a special case of a more general issue. Suppose point particles a, b, c each have two fundamental vectorial features F and G, with the same magnitude in each case. Suppose in a's case they point in different directions, whereas in b and c's cases they point in the same direction (in b's case they both point north, in c's case they both point south). The intuitive verdict is that a and b are not duplicates, but b and c are. But, if you just demand that duplicates preserve the magnitudes of the quantities, you'll get a, b, and c as duplicates of one another; and if you demand that duplicates preserve the direction of vectorial quantities, you'll get none of them as duplicates. That sounds just like the asymmetric magnets problem all over again. Let me call it the vector-pair problem.

What’s the natural Weathersonian thought about the vector-pair problem? The natural line is to take the relative orientation (“angle”) between the instances of F and G as a perfectly natural relation. (I think that Weatherson might go for this line now: see his comment here).

It seemed to me that a natural response to the problem just posed might be this: require that the magnitudes of any quantities be invariant under duplication, and also that the *relative orientation* of vectorial properties be invariant under duplication. Thus we build into the definition of duplication the requirement that any angles between vectors are preserved. There's thus no easy answer to the question of whether vectorial features of objects are intrinsic: we can only say that their magnitudes and relative orientations are, but their absolute orientation is not.
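To make the proposal vivid, here is a minimal sketch in Python of the duplication test just described, applied to the particles a, b and c from the vector-pair problem above. The particular vectors chosen are illustrative stand-ins (the example only fixes directions, not magnitudes): duplication demands sameness of magnitudes and of the angle between the F- and G-instances, while ignoring absolute direction.

```python
import math

# Each particle carries two vectorial quantities F and G, given as 2-D vectors.
# a: F and G point in different directions; b and c: F and G aligned, with
# b's pair pointing "north" and c's pointing "south" (as in the example above).
a = {"F": (0.0, 1.0), "G": (1.0, 0.0)}
b = {"F": (0.0, 1.0), "G": (0.0, 1.0)}
c = {"F": (0.0, -1.0), "G": (0.0, -1.0)}

def magnitude(v):
    return math.hypot(*v)

def angle_between(u, v):
    """Angle between two vectors, in radians."""
    dot = u[0] * v[0] + u[1] * v[1]
    return math.acos(dot / (magnitude(u) * magnitude(v)))

def duplicates(x, y, tol=1e-9):
    """Duplication requires sameness of magnitude for each quantity, and
    sameness of relative orientation (the F-G angle). Absolute direction
    is deliberately left out of the test."""
    same_magnitudes = all(
        abs(magnitude(x[q]) - magnitude(y[q])) < tol for q in ("F", "G"))
    same_relative_angle = abs(
        angle_between(x["F"], x["G"]) - angle_between(y["F"], y["G"])) < tol
    return same_magnitudes and same_relative_angle

print(duplicates(b, c))  # True: the intuitive verdict
print(duplicates(a, b))  # False: a's F-G angle differs from b's
```

The test delivers the intuitive verdicts: b and c come out duplicates, while a is a duplicate of neither.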

This leads to a couple of natural questions:

(A) Why do we demand absolute sameness of magnitude, and only relative sameness of direction, when defining what it takes for something to be a duplicate of something else?

I'm tempted to think that there's no deep answer to this question. In particular, consider a possible world with an "objective centre", and where various natural laws are formulated in terms of whether objects have properties "pointing towards" the centre or away from it. E.g. suppose two objects both with instantaneous velocity towards the centre will repel each other with a force proportional to the inverse of their separation; while two objects both with instantaneous velocity away from the centre will attract each other with a similar force (or something like that: I'm sure we can cook something up that'll make the case work). Anyway, since the behaviour of objects depends on the "direction in which they're pointing", I no longer have strong intuitions that particles like b and c should count as duplicates (with that world considered counterfactually).

I find it harder to imagine worlds where only relative magnitudes matter to physical laws, but I suspect that with ingenuity one could describe such a case: and maybe (considering such a scenario counterfactually again) we'd be happier to demand only relative sameness of magnitudes, in addition to relative sameness of orientation of vectorial properties, among duplicates.

(B) The above proposal demands invariance of relative orientation of vectorial properties among duplicate entities. But that doesn't straightaway deal with the original asymmetric magnet case. For there we had the orientation of the shape-properties of the object to consider, not just the orientation of the vectorial quantities that (the parts of) the object has.

I’m tempted by the following way of subsuming the original problem under the more general treatment just given: say that some perfectly natural spatial properties are actually vectoral in character. E.g. the spatial property that holds between my hand and my foot is not simply “being separated by 1m” but rather “being separated by 1m downwards” (with, of course, the converse relation holding in the other direction). After all, if in giving the spatial properties that I currently have, we just list the spatial separations of my parts, we leave something out: my orientation. And that is a spatial property that I have (and is coded into the usual representations of location, e.g. Cartesian or polar coordinates. Of course, such representations are all relative to a choice of axes, just as the representation of spatial separation is relative to a choice of unit.)

Now, there might be ways of getting this result without saying that spatial-temporal relations among particulars are fundamentally vectorial. But I’m not seeing exactly how this would work.

(Incidentally, if we do allow fundamentally vectorial spatio-temporal relations, then it's not clear that we need to appeal to spatio-temporal relations among parts of an object to solve the asymmetric magnets problem: appealing to the angle between the "north pole" and the (vectorial) spatio-temporal properties of the pointy magnet may be enough to get the intuitive duplication verdicts. If so, then the Weathersonian solution can be extended to the case where the magnets are extended simples, which is (a) a case he claims not to be able to handle, and (b) a case he claims to be impossible. But I disagree with (b), so from my perspective (a) looks like a serious worry!)

(x-posted on metaphysical values)

Eliminating cross-level universals

I’ve just come back from a CMM discussion of Lewis on Quantities (built around John Hawthorne’s paper of that title).

One thing that came up was the issue of what you might call potentially “cross level” fundamental properties. These are properties that you might expect to find instantiated at the “bottommost” microphysical level, but also instantiated “further up”. For example, electrons have negative charge; but so do ions. But ions are composite entities, which (from what I remember of A-level chemistry) are charged in virtue of the charges of their parts.
Clearly in some sense, electrons and ions can have the same determinate property: e.g. “charge -1”. But, when giving e.g. a theory of universals, I’m wondering whether we have to say that they share the same Universal.

On Armstrong’s theory of quantities, it looks to me that we won’t say that the ion and the electron both instantiate the same Universal. The “charge -1” we find instantiated by the ion will be a structural universal, composed of the various charge Universals instantiated by the basic parts of the ion. The “charge -1” we find instantiated by an electron, on the other hand, looks like it’ll be a basic, non-structural universal. So, it seems to me, it’ll then be a challenge to Armstrong’s account to say why these two universals resemble each other in a tight enough way that we apply to them the same predicate. (To avoid confusion, let’s call the former “ur-charge -1” and leave “charge -1” as a predicate that applies to both ions and electrons, though not, on this view, in virtue of them instantiating the same Universal).

Let’s suppose we’re looking at a theory of universals (such as the one Lewis seems to contemplate at various points) which is just like Armstrong’s except for ditching all the structural universals. Electrons get to instantiate the Universal “ur-charge -1”. But ions, as actual-worldly complex objects, instantiate no Universals at all. Of course, again there’s the challenge to spell out exactly what the conditions are under which we’ll apply the predicate “charge -1” to things (roughly: when the various ur-charges instantiated by their parts “balance out”—though the details get tricksy).
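To illustrate the structure of this view, here is a toy Python sketch (all names and numbers invented for illustration) of how the predicate "charge -1" might apply both to basics that instantiate the universal ur-charge -1 and to composites whose parts' ur-charges "balance out":

```python
# Illustrative toy model: electrons instantiate the basic universal
# "ur-charge -1"; composite objects like ions instantiate no charge
# universal at all, but the *predicate* "charge n" applies to them when
# the ur-charges of their basic parts sum to n.

electron = {"ur_charge": -1}
proton = {"ur_charge": +1}

# A chloride-like anion: 17 protons and 18 electrons (a composite).
ion = {"parts": [proton] * 17 + [electron] * 18}

def has_charge(thing, n):
    """'charge n' applies to basics via their ur-charge universal, and to
    composites via the balancing-out of their parts' ur-charges."""
    if "ur_charge" in thing:
        return thing["ur_charge"] == n
    return sum(p["ur_charge"] for p in thing["parts"]) == n

print(has_charge(electron, -1))  # True: basic, instantiates ur-charge -1
print(has_charge(ion, -1))       # True: 17 - 18 = -1, but via no universal
```

The electron and the ion satisfy the same predicate, but for quite different underlying reasons, which is exactly the shape of the view being contemplated (the real "tricksy details" about balancing out are of course being waved at here, not solved).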

What goes for charge can go for various other types of property. So we may find it useful to distinguish ur-mass 1kg (which will be a genuine basic universal) from the set of things “having mass 1kg”.

A last thought. What is the relation between mass properties and ur-masses? In particular, is it the case that things can only ever have masses when their basic parts have ur-masses? I don't see any immediate reason to think so. Perhaps the actual world is one where things have mass in virtue of their parts having ur-mass. But why shouldn't we think that "having parts that have ur-masses" is but one *realization* of mass: and that at other worlds quite different ur-properties may underlie mass (say, ur-mass-densities, rather than ur-masses). That's potentially significant for discussions of modality and quantities: for two worlds that initially seem to share the same stock of fundamental properties (spin, charge, mass, etc.) may turn out actually to contain alien properties from each other's point of view: if one contains ur-masses underlying the (non-fundamental) mass properties, while the other contains ur-mass-densities underlying those same properties.

(Thanks to all those at CMM for the discussion that led to this. This is x-posted at Metaphysical Values. And thanks to an anonymous commentator, who pointed out in an early version of this post that by “free radicals” I meant “ions”!)

Primitivism about vagueness

One role this blog is playing is allowing me to put down thoughts before I lose them.

So here’s another idea I’ve been playing with. If you think about the literature on vagueness, it’s remarkable that each of the main players seems to be broadly reductionist about vagueness. The key term here is “definitely”. The Williamsonian epistemicist reduces “definitely” to a concept constructed out of knowability. The supervaluationist typically appeals to semantic indecision, on one reading, that reduces vagueness to semantic facts; on another reading, that reduces vagueness to metasemantic facts concerning the link between semantic facts and their subvening base. Things are a little less clear with the degree theorist, but if “definite truth” is identified with “truth to degree 1”, then what they’re doing is reducing vagueness to semantic facts again.

If you think of the structure of the debate like this, then it makes sense of some of the dialectic on higher-order vagueness. For example, if vagueness is nothing but semantics, then the question immediately arises: what about those cases where semantic facts themselves appear to be vague? The parallel question for the epistemicist is: what about cases where it's vague whether such-and-such is knowable? The epistemicists look like they've got a more stable position at this point, though exactly why this is so is hard to spell out.

Consider other debates, e.g. in the philosophy of modality. Sure, there are reductionist views: Lewis wanting to reduce modality to what goes on in other concrete space-times; people who want to reduce it to a priori consistency; and so on. But a big player in that debate is the modalist, who just takes “possibility” and “necessity” as primitive, and refuses to offer a reductive story.

It seems to me pretty clear that a position analogous to modalism should be a central part of the vagueness literature; but I’m not aware of any self-conscious proponents of this position. Let me call it “primitivism” about vagueness. I think that perhaps some self-described semantic theorists would be better classified as primitivists.

At the end of ch 5 of the "Vagueness" book, Tim Williamson has just finished beating up on traditional supervaluationism, which equates truth with supertruth. He then briefly considers people who drop that identification. Here's my take on this position. Proponents say that semantically, there's a single precisification of our language which is the intended one, but it is (semantically) vague which one it is. Truth is truth on the intended precisification; but definite truth is defined to be truth on all the precisifications which aren't determinately unintended. Definite truth (supertruth) and truth come apart. This position, from a logical point of view, is entirely classical; satisfies bivalence; and looks like it thereby avoids many of Williamson's objections to supervaluationism.
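The position just sketched can be rendered as a toy model. Here is a minimal Python sketch (all sentences and assignments invented for illustration, and "admissible" crudely modelled as membership in a fixed list) showing how truth and definite truth come apart while bivalence holds:

```python
# Toy model: precisifications are total classical assignments of truth
# values to atomic sentences. One of them is the intended one; which one
# that is would, on this view, be a (semantically) vague matter.

precisifications = [
    {"Harry is bald": True, "Kili contains Sparky": True},
    {"Harry is bald": False, "Kili contains Sparky": True},
    {"Harry is bald": True, "Kili contains Sparky": False},
]
intended = precisifications[0]  # stipulated here; vague in the real story

def true_(sentence):
    """Truth simpliciter: truth on the intended precisification.
    Bivalence holds: every sentence gets True or False."""
    return intended[sentence]

def definitely(sentence):
    """Definite truth (supertruth): truth on every precisification that
    isn't determinately unintended (here: every one in the list)."""
    return all(p[sentence] for p in precisifications)

print(true_("Harry is bald"))       # True: true on the intended precisification
print(definitely("Harry is bald"))  # False: false on some precisification
```

"Harry is bald" comes out true but not definitely true, which is exactly the gap between truth and supertruth that the position trades on.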

I think Williamson puts exactly the right challenge to this line. In what sense is this a semantic theory of vagueness? After all, "Definitely" hasn't been characterized in semantic terms: rather, it has been characterized using that very notion again in the metalanguage. One might resist this, claiming that "Definitely" should be defined using the term "admissible precisification" or some such. But then one wonders what account could be made of "admissible": it plays no role in defining semantic notions such as "true" or "consequence" for this theorist. What sense can be made of it?

I think the challenge can be met by metasemantic versions of supervaluationism, which give a substantive theory of what makes a precisification admissible. I take that to be something like the McGee/McLaughlin line, and I spent a chapter of my thesis trying to lay out precisely what was involved. But that's another story.

What I want to suggest now is that Primitivism about vagueness gives us a genuinely distinct option. This accepts Williamson’s contention that when we drop supertruth=truth, “nothing articulate” remains of the semantic theory of vagueness. But it questions the idea that this should lead us towards epistemicism. Let’s just take determinacy (or lack of it) as a fundamental part of reality, and then use it in constructing theories that make sense of the phenomenon of vagueness. Of course, there’s nothing positive this theorist has to say that distinguishes her from reductive rivals such as the epistemicist; but she has plenty of negative things to say disclaiming various reductive theses.