Category Archives: Indeterminacy

“Supervaluationism”: the word

I’ve got progressively more confused over the years about the word “supervaluations”. It seems lots of people use it in slightly different ways. I’m going to set out my understanding of some of the issues, but I’m very happy to be contradicted—I’m really in search of information here.

The first occurrence I know of is van Fraassen’s treatment of empty names in a 1960s JP article. IIRC, the view there is that language comes with a partial intended interpretation function, specifying the references of the non-empty names. When figuring out what is true in the language, we look at what is true on all the full interpretations that extend the intended partial interpretation. And the result is that “Zeus is blue” will come out neither true nor false, because on some completions of the intended interpretation the empty name “Zeus” will designate a blue object, and on others it won’t.

So that gives us one meaning of a “supervaluation”: a certain technique for defining truth simpliciter out of the model-theoretic notions of truth-relative-to-an-index. It also, so far as I can see, closes off the question of how truth and “supertruth” (=truth on all completions) relate. Supervaluationism, in this original sense, just is the thesis that truth simpliciter should be defined as truth-on-all-interpretations. (Of course, one could argue against supervaluationism in this sense by arguing against the identification; and one could also consistently with this position argue for the ambiguity view that “truth” is ambiguous between supertruth and some other notion—but what’s not open is to be a supervaluationist and deny that supertruth is truth in any sense.)
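To make the technique vivid, here is a toy sketch of supertruth as truth on every completion of a partial interpretation. This is entirely my own illustration, not van Fraassen’s formalism; the domain and the extension of “blue” are invented:

    # Toy sketch: supertruth = truth on every completion of a partial interpretation.
    # The domain and the extension of "is blue" below are invented for illustration.

    domain = ["a", "b"]
    blue_things = {"a"}  # hypothetical extension of "is blue"

    # The intended interpretation leaves "Zeus" referent-less; a completion just
    # picks a referent for "Zeus" from the domain.
    completions = [{"Zeus": d} for d in domain]

    def true_on(completion):
        # Truth of the one toy sentence, "Zeus is blue", on a given completion.
        return completion["Zeus"] in blue_things

    supertrue = all(true_on(c) for c in completions)       # true on every completion?
    superfalse = not any(true_on(c) for c in completions)  # false on every completion?

    print(supertrue, superfalse)  # False False: "Zeus is blue" is neither true nor false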

Notice that there’s nothing in the use of supervaluations in this sense that enforces any connection to “semantic theories of vagueness”. But the technique is obviously suggestive of applications to indeterminacy. So, for example, Thomason in 1970 uses the technique within an “open future” semantics. The idea there is that the future is open between a number of currently-possible histories. And what is true about the future is what happens on all these histories.

In 1975, Kit Fine published a big and technically sophisticated article mapping out a view of vagueness arising from partially assigned meanings, that used among other things supervaluational techniques. Roughly, the basic move was to assign each predicate an extension (the set of things to which it definitely applies) and an anti-extension (the set of things to which it definitely doesn’t apply). An interpretation is “admissible” only if it assigns to each predicate a set of objects that is a superset of the extension and doesn’t overlap the anti-extension. There are other constraints on admissibility too: so-called “penumbral connections” have to be respected.
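In rough schematic form (my paraphrase of the setup just described), where S is the set an interpretation assigns to a predicate F:

admissible only if: extension(F) ⊆ S, and S ∩ anti-extension(F) = ∅ (plus whatever the penumbral connections require).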

Now, Fine does lots of clever stuff with this basic setup, and explores many options (particularly in dealing with “higher-order” vagueness). But one thing that’s been very influential in the folklore is the idea that, based on the sort of factors just given, we can get our hands on a set of “admissible” fully precise classical interpretations of the language.

Now the supervaluationist way of working with this would tell you that truth=truth on all admissible interpretations, and falsity=falsity on all such interpretations. But one needn’t be a supervaluationist in this sense to be interested in all the interesting technologies that Fine introduces, or in his distinctive way of thinking about semantic indecision. The supervaluational bit of all this refers only to one stage of the whole process—the step from identifying a set of admissible interpretations to the definition of truth simpliciter.

However, “supervaluationism” has often, I think, been identified with the whole Finean programme. In the context of theories of vagueness, for example, it is often used to refer to the idea that vagueness or indeterminacy arises as a matter of some kind of unsettledness as to what precise extensions our expressions pick out (“semantic indecision”). But even if the topic is indeterminacy, the association with *semantic indecision* wasn’t part of the original conception of supervaluations—Thomason’s use of them in his account of indeterminacy about future contingents illustrates that.

If one understands “supervaluationism” as tied up with the idea of semantic indecision theories of vagueness, then it does become a live issue whether one should identify truth with truth on all admissible interpretations (Fine himself raises this issue). One might think that the philosophically motivated semantic machinery of partial interpretations, penumbral connections and admissible interpretations is best supplemented by a definition of truth in the way that the original VF-supervaluationists favoured. Or one might think that truth-talk should be handled differently, and that the status of “being true on all admissible assignments” shouldn’t be identified with truth simpliciter (say because the disquotational schemes fail).

If you think that the latter is the way to go, you can be a “supervaluationist” in the sense of favouring a semantic indecision theory of vagueness elaborated along Kit Fine’s lines, without being a supervaluationist in the sense of using Van Fraassen’s techniques.

So we’ve got at least these two disambiguations of “supervaluationism”, potentially cross-cutting:

(A) Formal supervaluationism: the view that truth=truth on each of a range of relevant interpretations (e.g. truth on all admissible interpretations (Fine); on all completions (Van Fraassen); or on all histories (Thomason)).
(B) Semantic indeterminacy supervaluationism: the view that (semantic) indeterminacy is a matter of semantic indecision: there being a range of classical interpretations of the language, which, all-in, have equal claim to be the right one.

A couple of comments on each. (A), of course, needs to be tightened up in each case by saying what the relevant range of classical interpretations quantified over is. Notice that a standard way of defining truth in logic books is actually supervaluationist in this sense. For if you define what it is for a formula “p” to be true as its being true relative to all variable assignments, then open formulae which vary in truth value from variable assignment to variable assignment end up exactly analogous to formulae like “Zeus is blue” in van Fraassen’s setting: they will be neither true nor false.
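To take a toy instance (my example, not from any of the texts mentioned): let the domain be the natural numbers and let F be interpreted as “is even”. The open formula “Fx” is true on assignments sending x to 2 and false on those sending x to 3; so, on the truth-on-all-assignments definition, “Fx” comes out neither true nor false—exactly the status “Zeus is blue” has in van Fraassen’s setting.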

Even when it’s clear we’re talking about supervaluationism in the sense of (B), there’s continuing ambiguity. Kit Fine’s article is incredibly rich, and as mentioned above, both philosophically and technically he goes far beyond the minimal idea that semantic vagueness has something to do with the meaning-fixing facts not settling on a single classical interpretation.

So there’s room for an understanding of “supervaluationism” in the semantic-indecision sense that is also minimal, and which does not commit itself to Fine’s ideas about partial interpretations, conceptual truths as “penumbral constraints” etc. David Lewis in “Many, but almost one”, as I read him, has this more minimal understanding of the semantic indecision view—I guess it goes back to Hartry Field’s material on inscrutability and indeterminacy and “partial reference” in the early 1970s, and Lewis’s own brief comments on related ideas in his (1969).

So even if your understanding of “supervaluationism” is the (B)-sense, and we’re thinking only in terms of semantic indeterminacy, then you still owe elaboration of whether you’re thinking of a minimal “semantic indecision” notion a la Lewis, or the far richer elaboration of that view inspired by Fine. Once you’ve settled this, you can go on to say whether or not you’re a supervaluationist in the formal, (A)-sense—and that’s the debate in the vagueness literature over whether truth should be identified with supertruth.

Finally, there’s the question of whether the “semantic indecision” view (B) should be spelled out in semantic or metasemantic terms. One possible view has the meaning-fixing facts picking out not a single interpretation, but a great range of them, which collectively play the role of “semantic value” of the term. That’s a semantic or “first-level” (in Matti Eklund’s terminology) view of semantic indeterminacy. Another possible view has the meaning-fixing facts trying to fix on a single interpretation which will give the unique semantic value of each term in the language, but it being unsettled which one they favour. That’s a metasemantic or “second-level” view of the case.

If you want to complain that the second view is spelled out quite metaphorically, I’ve some sympathy (I think at least in some settings it can be spelled out a bit more tightly). One might also want to press the case that the distinction between semantic and metasemantic here is somewhat terminological—a matter of what we choose to label the facts “semantic”. Again, I think there might be something to this. There are also questions about how this relates to the earlier distinctions—it’s quite natural to think of Fine’s elaboration as a paradigmatically semantic (rather than metasemantic) conception of semantic supervaluationism. It’s also quite natural to take the metasemantic idea to go with a conception that is non-supervaluational in the (A) sense. (Perhaps the Lewis-style “semantic indecision” rhetoric might be taken to suggest a metasemantic reading all along, in which case it is not a good way to cash out what the common ground among (B)-theorists is.) But there’s room for a lot of debate and negotiation on these and similar points.

Now all this is very confusing to me, and I’m sure I’ve used the terminology confusingly in the past. It kind of seems to me that ideally, we’d go back to using “supervaluationism” in the (A) sense (on which truth=supertruth is analytic of the notion); and that we’d then talk of “semantic indecision” views of vagueness of various forms, with their formal representation stretching from the minimal Lewis version to the rich Fine elaboration, and their semantic/metasemantic status specified. In any case, by depriving ourselves of commonly used terminology, we’d force ourselves to spell out exactly what the subject matter we’re discussing is.

As I say, I’m not sure I’ve got the history straight, so I’d welcome comments and corrections.

Aristotelian indeterminacy and partial beliefs

I’ve just finished a first draft of the second paper of my research leave—title the same as this post. There are a few different ways to think about this material, but since I hadn’t posted for a while I thought I’d write up something about how it connects with/arises from some earlier concerns of mine.

The paper I’m working on ends up with arguments against standard “Aristotelian” accounts of the open future, and standard supervaluational accounts of vague survival. But one starting point was an abstract question in the philosophy of logic: in what sense is standard supervaluationism supposed to be revisionary? So let’s start there.

The basic result—allegedly—is that while all classical tautologies are supervaluational tautologies, certain classical rules of inference (such as reductio, proof by cases, conditional proof, etc) fail in the supervaluational setting.

Now I’ve argued previously that one might plausibly evade even this basic form of revisionism (while sticking to the “global” consequence relation, which preserves traditional connections between logical consequence and truth-preservation). But I don’t think it’s crazy to think that global supervaluational consequence is in this sense revisionary. I just think that it requires an often-unacknowledged premise about what should count as a logical constant (in particular, whether “Definitely” counts as one). So for now let’s suppose that there are genuine counterexamples to conditional proof and the rest.

The standard move at this point is to declare this revisionism a problem for supervaluationists. Conditional proof, argument by cases: all these are theoretical descriptions of widespread, sensible and entrenched modes of reasoning. It is objectionably revisionary to give them up.

Of course some philosophers quite like logical revisionism, and would want to face down the accusation that there’s anything wrong with such revisionism directly. But there’s a more subtle response available. One can admit that the letter of conditional proof etc. is given up, while maintaining that the pieces of reasoning we normally call “instances of conditional proof” are all covered by supervaluationally valid inference principles. So there’s no piece of inferential practice that’s thrown into doubt by the revisionism of supervaluational consequence: it seems that all that happens is that the theoretical representation of that practice has to take a slightly more subtle form than one might expect (but still quite a neat and elegant one).

One thing I mention in that earlier paper but don’t go into is a different way of drawing out consequences of logical revisionism. Forget inferential practice and the like. Another way in which logic connects with the rest of philosophy is in connection to probability (in the sense of rational credences, or Williamson’s epistemic probabilities, or whatever). As I sketched in a previous post, so long as you accept a basic probability-logic constraint, which says that the probability of a tautology should be 1, and the probability of a contradiction should be 0, then the revisionary supervaluational setting quickly forces you to a non-classical theory of probability: one that allows disjunctions to have probability 1 where each disjunct has probability 0. (Maybe we shouldn’t call such a thing “probability”: I take it that’s terminological).
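Schematically (my illustration of the pattern in question), where p is a borderline claim:

Pr(p or ~p) = 1 (since p or ~p is a supervaluational tautology), while Pr(p) = 0 and Pr(~p) = 0.

So Pr(p or ~p) exceeds Pr(p) + Pr(~p), and additivity for incompatible disjuncts fails—which is why no classical probability function can deliver this combination.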

Folk like Hartry Field have argued, completely independently of this connection to supervaluationism, that this is the right and necessary way to handle probabilities in the context of indeterminacy. I’ve heard others say, and argue, that we want something closer to classicism (maybe tweaked to allow sets of probability functions, etc). And there are Dutch Book arguments to consider in favour of the classical setting (though I think the responses to these from the perspective of non-classical probabilities are quite convincing).

I’ve got the feeling the debate is at a stand-off, at least at this level of generality. I’m particularly unmoved by people swapping intuitions about the degrees of belief it is appropriate to have in borderline cases of vague predicates, and the like (NB: I don’t think that Field ever argues from intuition like this, but others do). Sometimes introspection suggests intriguing things (for example, Schiffer makes the interesting suggestion that one’s degree of belief in a conjunction of two vague propositions typically matches one’s degree of belief in the propositions themselves). But I can’t see any real dialectical force here. In my own case, I don’t have robust intuitions about these cases. And if I’m to go on testimonial evidence about others’ intuitions, it’s just too unclear what people are reporting on for me to feel comfortable taking their word for it. I’m worried, for example, that they might just be reporting the phenomenological level of confidence they have in the proposition in question: surely that needn’t coincide with one’s degree of belief in the proposition (think of an exam you are highly nervous about, but are fairly certain you will pass… your behaviour may well manifest a high degree of belief, even in the absence of the phenomenological trappings of confidence). In paradigm cases of indeterminacy, it’s hard to see how to do better than this.

However, I think in application to particular debates we might be able to make much more progress. Let us suppose that the topic for the day is the open future, construed, minimally, as the claim that while there are definite facts about the past and present, the future is indefinite.

Might we model this indefiniteness supervaluationally? Something like this idea (with possible futures playing the role of precisifications) is pretty widespread, perhaps orthodoxy (among friends of the open future). It’s a feature of MacFarlane’s relativistic take on the open future, for example. Even though he’s not a straightforward supervaluationist, he still has truth-value gaps, and he still treats them in a recognizably supervaluational-style way.

The link between supervaluational consequence and the revisionary behaviour of partial beliefs should now kick in. For if you know with certainty that some P is neither true nor false, we can argue that you should invest no credence at all in P (or in its negation). Likewise, in a framework of evidential probabilities, P gets no evidential probability at all (nor does its negation).

But think what this says in the context of the open future. It’s open which way this fair coin lands: it could be heads, it could be tails. On the “Aristotelian” truth-value conception of this openness, we can know that “the coin will land heads” is gappy. So we should have credence 0 in it, and none of our evidence supports it.
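Put in the simplest terms (my rendering of the clash): the gappy conception plus the logic-probability link delivers Cr(the coin will land heads) = 0 and Cr(the coin will land tails) = 0, even while Cr(the coin will land heads or it will land tails) = 1; whereas the obvious verdict is Cr(heads) = Cr(tails) = 1/2.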

But that’s just silly. This is pretty much a paradigmatic case where we know what partial belief we have and should have in the coin landing heads: one half. And our evidence gives exactly that too. No amount of fancy footwork and messing around with the technicalities of Dempster-Shafer theory leads to a sensible story here, as far as I can see. It’s just plainly the wrong result. (One doesn’t improve matters very much by relaxing the assumptions, e.g. taking the degree of belief in a failure of bivalence in such cases to fall short of one: you can still argue for a clearly incorrect degree of belief in the heads-proposition).

Where does that leave us? Well, you might reject the logic-probability link (I think that’d be a bad idea). Or you might try to argue that supervaluational consequence isn’t revisionary in any sense (I sketched one line of thought in support of this in the paper cited). You might give up on it being indeterminate which way the coin will land—i.e. deny the open future, a reasonably popular option. My own favoured reaction, in moods when I’m feeling sympathetic to the open future, is to go for a treatment of metaphysical indeterminacy where bivalence can continue to hold—my colleague Elizabeth Barnes has been advocating such a framework for a while, and it’s taken a long time for me to come round.

All of these reactions will concede the broader point—that at least in this case, we’ve got an independent grip on what the probabilities should be, and that gives us traction against the Supervaluationist.

I think there are other cases where we can find similar grounds for rejecting the structure of partial beliefs/evidential probabilities that supervaluational logic forces upon us. One is simply a case where empirical data on folk judgements has been collected—in connection with indicative conditionals. I talk about this in some other work in progress here. Another, which I talk about in the current paper, and which I’m particularly interested in, concerns cases of indeterminate survival. The considerations here are much more involved than in the indeterminacy we find in connection with the open future or conditionals. But I think the case against the sort of partial beliefs supervaluationism induces can be made out.

All these results turn on very local issues. None, so far as I can see, generalizes to the case of paradigmatic borderline cases of baldness and the rest. I think that makes the arguments even more interesting: potentially, they can serve as a kind of diagnostic: this style of theory of indeterminacy is suitable over here; that style over there. That’s a useful thing to have in one’s toolkit.

Emergence, Supervenience, and Indeterminacy

While Ross Cameron, Elizabeth Barnes and I were up in St Andrews a while back, Jonathan Schaffer presented one of his papers arguing for Monism: the view that the whole is prior to the parts, and the world is the one “fundamental” object.

One interesting argument along the way was that contemporary physics supports the priority of the whole, at least to the extent that properties of some systems can’t be reduced to properties of their parts. People certainly speak that way sometimes. Here, for example, is Tim Maudlin (quoted by Schaffer):

The physical state of a complex whole cannot always be reduced to those of its parts, or to those of its parts together with their spatiotemporal relations… The result of the most intensive scientific investigations in history is a theory that contains an ineliminable holism. (1998: 56)

The sort of case that supports this is when, for example, a quantum system featuring two particles determinately has zero total spin. The issue is that there also exist systems that duplicate the intrinsic properties of the parts of this system, but which do not have the zero-total-spin property. So the zero-total-spin property doesn’t appear to be fixed by the properties of the system’s parts.

Thinking this through, it seemed to me that one can systematically construct such cases for “emergent” properties if one is a believer in ontic indeterminacy of whatever form (and thinks of it in the way that Elizabeth and I would urge you to). For example, suppose you have two balls, both indeterminate between red and green. Compatibly with this, it could be determinate that the fusion of the two is uniformly coloured; and it could equally be determinate that the fusion of the two is variegated. The distributional colour of the whole doesn’t appear to be fixed by the colour-properties of the parts.

I also wasn’t sure I believed in the argument, so posed. It seems to me that one can easily reductively define “uniform colour” in terms of properties of the parts: to have uniform colour, there must be some colour such that each of the parts has that colour. (Notice that here, no irreducible colour-predications of the whole are involved.) And surely properties you can reductively define in terms of F, G, H are paradigmatically not emergent with respect to F, G and H.

What seems to be going on, is not a failure for properties of the whole to supervene on the total distribution of properties among its parts; but rather a failure of the total distribution of properties among the parts to supervene on the simple atomic facts concerning its parts.

That’s really interesting, but I don’t think it supports emergence, since I don’t see why someone who wants to believe that only simples instantiate fundamental properties should be debarred from appealing to distributions of those properties: for example, that they are not both red, and not both green (this fact will suffice to rule out the whole being uniformly coloured). Minimally, if there’s a case for emergence here, I’d like to see it spelled out.

If that’s right though, then the application of supervenience tests for emergence has to be handled with great care when we’ve got things like metaphysical indeterminacy flying around. And it’s just not clear anymore whether the appeal in the quantum case with which we started is legitimate or not.

Anyway, I’ve written up some of the thoughts on this in a little paper.

Williamson on vague states of affairs

In connection with the survey article mentioned below, I was reading through Tim Williamson’s “Vagueness in reality”. It’s an interesting paper, though I find its conclusions very odd.

As I’ve mentioned previously, I like a way of formulating claims of metaphysical indeterminacy that’s semantically similar to supervaluationism (basically, we have ontic precisifications of reality, rather than semantic sharpenings of our meanings. It’s similar to ideas put forward by Ken Akiba and Elizabeth Barnes).

Williamson formulates the question of whether there is vagueness in reality, as the question of whether the following can ever be true:

(EX)(Ex)Vague[Xx]

Here X is a property-quantifier, and x an object quantifier. His answer is that the semantics force this to be false. The key observation is that, as he sets things up, the value assigned to a variable at a precisification and a variable assignment depends only on the variable assignment, and not at all on the precisification. So at all precisifications, the same value is assigned to the variable. That goes for both X and x; with the net result that if “Xx” is true relative to some precisification (at the given variable assignment) it’s true at all of them. That means there cannot be a variable assignment that makes Vague[Xx] true.

You might think this is cheating. Why shouldn’t variables receive different values at different precisifications (formally, it’s very easy to do)? Williamson says that, if we allow this to happen, we’d end up making things like the following come out true:

(Ex)Def[Fx&~Fx’]

It’s crucial to the supervaluationist’s explanatory programme that this come out false (it’s supposed to explain why we find the sorites premise compelling). But consider a variable assignment which, at each precisification, maps x to the object that marks the F/non-F cutoff relative to that precisification. It’s easy to see that on this “variable assignment”, Def[Fx&~Fx’] comes out true, underpinning the truth of the existential.

Again, suppose that we were taking the variable assignment to X to be a precisification-relative matter. Take some object o that intuitively is perfectly precise. Now consider the assignment to X that maps X at precisification 1 to the whole domain, and X at precisification 2 to the null set. Consider “Vague[Xx]”, where o is assigned to x at every precisification, and the assignment to X is as above. The sentence will be true relative to these variable assignments, and so we have “(EX)Vague[Xx]” relative to an assignment of o to x which is supposed to “say” that o has some vague property.

Although Williamson’s discussion is about the supervaluationist, the semantic point equally applies to the (pretty much isomorphic) setting that I like, and which is supposed to capture vagueness in reality. If one makes the variable assignments non-precisification relative, then trivially the quantified indeterminacy claims go false. If one makes the variable assignments precisification-relative, then it threatens to make them trivially true.

The thought I have is that the problem here is essentially one of mixing up abundant and natural properties. At least for property-quantification, we should go for the precisification-relative notion. It will indeed turn out that “(EX)Vague[Xx]” will be trivially true for every assignment to x. But that’s no more surprising than the analogous result in the modal case: quantifying over abundant properties, it turns out that every object (even things like numbers) has a great range of contingent properties: being such that grass is green, for example. Likewise, in the vagueness case, everything has a great many vague properties: being such that the cat is alive, for example (or whatever else is your favourite example of ontic indeterminacy).

What we need, to get a substantive notion, is to restrict these quantifiers to interesting properties. So, for example, the way to ask whether o has some vague sparse property is to ask whether the following is true: “(EX:Natural(X))Vague[Xx]”. The extrinsically specified properties invoked above won’t count.

If the question is formulated in this way, then we can’t read off from the semantics whether there will be an object and a property such that it is vague whether the former has the latter. For this will turn, not on the semantics for quantifiers alone, but upon which among the variable assignments correspond to natural properties.

Something similar goes for the case of quantification over states of affairs. “(ES)Vague[S]” would be either vacuously true or vacuously false depending on what semantics we assign to the variable “S”. But if our interest is in whether there are sparse states of affairs such that it is vague whether they obtain, what we should do is e.g. let the assignment of values to S be functions from precisifications to truth values, and then ask the question:

(ES:Natural(S))Vague[S].

Where a function from precisifications to truth values is “natural” if it corresponds to some relatively sparse state of affairs (e.g. there being a live cat on the mat). So long as there’s a principled story about which states of affairs these are (and it’s the job of metaphysics to give us that) everything works fine.
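For a concrete toy case (mine, not Williamson’s): let S* be the function taking each precisification w to True just if, according to w, there is a live cat on the mat. If some precisifications deliver True and others False, then Vague[S*] holds; and S* is a candidate for being “natural” in the above sense, since it corresponds to a relatively sparse state of affairs. A gerrymandered function that flips values across precisifications for no such reason won’t count.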

A final note. It’s illuminating to think about the exactly analogous point that could be made in the modal case. If values are assigned to variables independently of the world, we’ll be able to prove that the following is never true on any variable assignment:

Contingently[Xx].

Again, the extensions assigned to X and x are non-world dependent, so if “Xx” is true relative to one world, it’s true at them all. Is this really an argument that there is no contingent instantiation of properties? Surely not. To capture the intended sense of the question, we have to adopt something like the tactic just suggested: first allow world-relative variable assignment, and then restrict the quantifiers to the particular instances of this that are metaphysically interesting.

Ontic vagueness

I’ve been frantically working this week on a survey article on metaphysical indeterminacy and ontic vagueness. Mind bending stuff: there really is so much going on in the literature, and people are working with *very* different conceptions of the thing. Just sorting out what might be meant by the various terms “vagueness de re”, “metaphysical vagueness”, “ontic vagueness”, “metaphysical indeterminacy” was a task (I don’t think there are any stable conventions in the literature). And that’s not to mention “vague objects” and the like.

I decided in the end to push a particular methodology, if only as a stalking horse to bring out the various presuppositions that other approaches will want to deny. My view is that we should think of “indefinitely” in a way roughly parallel to the way we think of “possibly”. There are various disambiguations one can make: “possibly” might mean metaphysical possibility, epistemic possibility, or whatever; “indefinitely” might mean linguistic indeterminacy, epistemic unclarity, or something metaphysical. To figure out whether you should buy into metaphysical indeterminacy, you should (a) get yourself in a position to at least formulate coherently theories involving that operator (i.e. specify what its logic is); and (b) run the usual Quinean cost/benefit analysis on a case-by-case basis.

The view of metaphysical indeterminacy most opposed to this is one that would identify it strongly with vagueness de re, paradigmatically there being some object and some property such that it is indeterminate whether the former instantiates the latter (this is how Williamson seems to conceive of matters in a 2003 article). If we had some such syntactic criterion for metaphysical indeterminacy, perhaps we could formulate everything without postulating a plurality of disambiguations of “definitely”. However, it seems that this de re formulation would miss out some of the most paradigmatic examples of putative metaphysical vagueness, such as the de dicto formulation: It is indeterminate whether there are exactly 29 things. (The quantifiers here to be construed unrestrictedly).

I also like to press the case against assuming that all theories of metaphysical indeterminacy must be logically revisionary (endorsing some kind of multi-valued logic). I don’t think the implication works in either direction: multi-valued logics can be part of a semantic theory of indeterminacy; and some settings for thinking about metaphysical indeterminacy are fully classical.

I finish off with a brief review of the basics of Evans’ argument, and of the sort of arguments (like the one from Weatherson in the previous post) that might convert metaphysical vagueness of apparently unrelated forms into metaphysically vague identity, arguably susceptible to the Evans argument.

From vague parts to vague identity

(Update: as Dan notes in the comment below, I should have clarified that the initial assumption is supposed to be that it’s metaphysically vague what the parts of Kilimanjaro (Kili) are. Whether we should describe the conclusion as deriving a metaphysically vague identity is a moot point.)

I’ve been reading an interesting argument that Brian Weatherson gives against “vague objects” (in this case, meaning objects with vague parts) in his paper “Many many problems”.

He gives two versions. The easiest one is the following. Suppose it’s indeterminate whether Sparky is part of Kili, and let K+ and K- be the usual minimal variations of Kili (K+ differs from Kili only in determinately containing Sparky, K- only by determinately failing to contain Sparky).

Further, endorse the following principle (scp): if A and B coincide mereologically at all times, then they’re identical. (Weatherson’s other arguments weaken this assumption, but let’s assume we have it, for the sake of argument).

The argument then runs as follows:
1. either Sparky is part of Kili, or she isn’t. (LEM)
2. If Sparky is part of Kili, Kili coincides at all times with K+ (by definition of K+)
3. If Sparky is part of Kili, Kili=K+ (by 2, scp)
4. If Sparky is not part of Kili, Kili coincides at all times with K- (by definition of K-)
5. If Sparky is not part of Kili, Kili=K- (by 4, scp).
6. Either Kili=K+ or Kili=K- (1, 3, 5).

At this point, you might think that things are fine. As my colleague Elizabeth Barnes puts it in this discussion of Weatherson’s argument, you might simply think that at this point only the following has been established: that it is determinate that either Kili=K+ or Kili=K-, but that it is indeterminate which.

I think we might be able to get an argument for this. First of all, presumably all the premises of the above argument hold determinately. So the conclusion holds determinately. We’ll use this in what follows.

Suppose that D(Kili=K+). Then it would follow that Sparky was determinately a part of Kili, contrary to our initial assumption. So ~D(Kili=K+). Likewise ~D(Kili=K-).

Can it be that they are determinately distinct? If D(~Kili=K+), then assuming that (6) holds determinately, D(Kili=K+ or Kili=K-), we can derive D(Kili=K-), which contradicts what we’ve already proven. So ~D(~Kili=K+) and likewise ~D(~Kili=K-).
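In compact form (my summary of the reasoning just given, with D for “determinately”):

D(Kili=K+ or Kili=K-) [since (6) holds determinately]
~D(Kili=K+), ~D(Kili=K-) [else Sparky would be determinately part, or determinately not part, of Kili]
~D(~Kili=K+), ~D(~Kili=K-) [else, given the first line, we could derive D of the other identity, contradicting the second line]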

So the upshot of the Weatherson argument, I think, is this: it is indeterminate whether Kili=K+, and indeterminate whether Kili=K-. The moral: vagueness in composition gives rise to vague identity.

Of course, there are well known arguments against vague identity. Weatherson doesn’t invoke them, but once he reaches (6) he seems to think the game is up, for what look to be Evans-like reasons.

My working hypothesis at the moment, however, is that whenever we get vague identity in the sort of way just illustrated (inherited from other kinds of ontic vagueness), we can wriggle out of the Evans reasoning without significant cost. (I go through some examples of this in this forthcoming paper). The over-arching idea is that the vagueness in parthood, or whatever, can be plausibly viewed as inducing some referential indeterminacy, which would then block the abstraction steps in the Evans proof.

Since Weatherson’s argument is supposed to be a general one against vague parthood, I’m at liberty to fix the case in any way I like. Here’s how I choose to do so. Let’s suppose that the world contains two objects, Kili and Kili*. Kili* is just like Kili, except that determinately, Kili and Kili* differ over whether they contain Sparky.

Now, think of reality as indeterminate between two ways: one in which Kili contains Sparky, the other where it doesn’t. What of our terms “K+” and “K-”? Well, if Kili contains Sparky, then “K+” denotes Kili. But if it doesn’t, then “K+” denotes Kili*. Mutatis mutandis for “K-”. Since it is indeterminate which option obtains, “K+” and “K-” are referentially indeterminate, and one of the abstraction steps in the Evans proof fails.
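Spelling out the assignment of referents on this regimentation:

If Kili contains Sparky: “K+” denotes Kili, and “K-” denotes Kili*.
If Kili doesn’t contain Sparky: “K+” denotes Kili*, and “K-” denotes Kili.

It is indeterminate which row obtains, and hence indeterminate what “K+” and “K-” denote.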

Now, maybe it’s built into Weatherson’s assumptions that the “precise” objects like K+ and K- exist, and perhaps we could still cause trouble. But I’m not seeing cleanly how to get it. (Notice that you’d need more than just the axioms of mereology to secure the existence of [objects determinately denoted by] K+ and K-: Kili and Kili* alone would secure the truth that there are fusions including Sparky and fusions not including Sparky). But at this point I think I’ll leave it for others to work out exactly what needs to be added…

The fuzzy link

Following up on one of my earlier posts on quantum stuff, I’ve been reading up on an interesting literature on relating ordinary talk to quantum mechanics. As before, caveats apply: please let me know if I’m making terrible technical errors, or if there’s relevant literature I should be reading/citing.

The topic here is GRW. This way of doing things, recall, involved random localizations of the wavefunction. Let’s think of the quantum wavefunction for a single particle system, and suppose it’s initially pretty wide. So the amplitude of the wavefunction pertaining to the “position” of the particle is spread out over a wide span of space. But, if one of the random localizations occurs, the wavefunction collapses into a very narrow spike, within a tiny region of space.

But what does all this mean? What does it say about the position of the particle? (Here I’m following the Albert/Loewer presentation, and ignoring alternatives, e.g. Ghirardi’s mass-density approach).

Well, one traditional line was that talk of position was only well defined when the particle was in an eigenstate of the position observable. Since on GRW the particle’s wavefunction is pretty much spread all over space, on this view talking of a particle’s location would never be well defined.

Albert and Loewer’s suggestion is that we alter the link. As previously, think of the wavefunction as giving a measure over different situations in which the particle has a definite location. Rather than saying that x is located within region R iff the set of situations in which the particle lies in R has measure 1, they suggest that x is located within region R iff the set of situations in which the particle lies in R has measure almost 1. The idea is that even if not all of a particle’s wavefunction places it right here, the vast majority of it is within a tiny subregion of here. On the Albert/Loewer suggestion, we get to say, on this basis, that the particle is located in that tiny subregion. They also argue that there are sensible choices of what “almost 1” should be that will give the right results, though it’s probably a vague matter exactly what the figure is.
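Schematically (my notation, writing μ for the wavefunction-measure and 1-p for the “almost 1” threshold):

“x is located within R” is true iff μ(situations in which x lies within R) ≥ 1-p.

The traditional line mentioned above is the special case where the threshold is exactly 1.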

Peter Lewis points out oddities with this. One oddity is that conjunction-introduction will fail. It might be true that marble i is in a particular region, for each i between 1 and 100; and yet it fail to be true that all these marbles are in the box.

Here’s another illustration of the oddities. Take a particle with a localized wavefunction. Choose some region R around the peak of the wavefunction which is minimal, such that enough of the wavefunction is inside for the particle to be within R. Then subdivide R into two pieces (the left half and the right half) such that the wavefunction is nonzero in each. The particle is within R. But it’s not within the left half of R. Nor is it within the right half of R (in each case by modus tollens on the Albert/Loewer biconditional). But R is just the sum of the left half and the right half of R. So either we’re committed to some very odd combination of claims about location, or something is going wrong with modus tollens.
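To put entirely hypothetical numbers on this: suppose the threshold is 0.95, the measure of the set of situations with the particle in R is 0.96, and this splits as 0.48 for the left half and 0.48 for the right half. Then, by the biconditional, the particle is within R (0.96 ≥ 0.95), but not within the left half (0.48 < 0.95), and not within the right half (0.48 < 0.95)—even though R is just the sum of the two halves.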

So clearly this proposal is looking like it’s pretty revisionary of well-entrenched principles. While I don’t think it indefensible (after all, logical revisionism from science isn’t a new idea) I do think it’s a significant theoretical cost.

I want to suggest a slightly more general, and I think, much more satisfactory, way of linking up the semantics of ordinary talk with the GRW wavefunction. The rule will be this:

“Particle x is within region R” is true to degree equal to the wavefunction-measure of the set of situations where the particle is somewhere in region R.

On this view, then, ordinary claims about position don’t have a classical semantics. Rather, they have a degreed semantics (in fact, exactly the degreed-supervaluational semantics I talked about in a previous post). And ordinary claims about the location of a well-localized particle aren’t going to be perfectly true, but only almost-perfectly true.

Now, it’s easy but unwarranted to slide from “not perfectly true” to “not true”. The degree theorist in general shouldn’t concede that. It’s an open question for now how to relate ordinary talk of truth simpliciter to the degree-theorist’s setting.

One advantage of setting things up in this more general way is that we can take “off the peg” results about what sort of behaviour we can expect the language to exhibit. An example: it’s well known that if you have a classically valid argument in this sort of setting, then the degree of untruth of the conclusion cannot exceed the sum of the degrees of untruth of the premises. This amounts to a “safety constraint” on arguments: we can put a cap on how badly wrong things can go, though there’ll always be the phenomenon of slight degradations of truth value across arguments, unless we’re working with perfectly true premises. So there’s still some point in classifying arguments like conjunction introduction as “valid” on this picture, for that captures a certain kind of important information.

Say that the figure that Albert and Loewer identified as sufficient for particle-location was 1-p. Then the way to generate something like the Albert and Loewer picture on this view is to identify truth with truth-to-degree-1-p. In the marbles case, the degrees of falsity of each premise “marble i is in the box” collectively “add up” in the conclusion to give a degree of falsity beyond the permitted limit. In the case of the subdivided region, something similar goes: the claims about the two halves each carry a substantial degree of falsity to begin with, so classical reasoning with them never takes us from premises within the permitted limit to a conclusion beyond it.
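Here is a toy calculation of how the safety constraint plays out for the marbles. The specific numbers—a threshold of 0.99 and per-marble truth degrees of 0.999—are invented purely for illustration:

    # Toy illustration of the "safety constraint": in a classically valid argument,
    # the untruth of the conclusion is at most the sum of the untruths of the premises.
    # All numbers below are hypothetical.

    threshold = 0.99                      # "true enough" level, playing the role of 1 - p
    premise_truths = [0.999] * 100        # "marble i is in the box", for i = 1..100
    premise_untruths = [1 - t for t in premise_truths]

    # Conjunction introduction is classically valid, so the conclusion
    # "all 100 marbles are in the box" is untrue to degree at most:
    conclusion_untruth_cap = sum(premise_untruths)        # roughly 0.1
    conclusion_truth_floor = 1 - conclusion_untruth_cap   # roughly 0.9

    print(all(t >= threshold for t in premise_truths))  # True: each premise passes the threshold
    print(conclusion_truth_floor >= threshold)           # False: the conclusion may fall below it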

An alternative to the Albert-Loewer suggestion for making sense of ordinary talk is to go for a universal error-theory, supplemented with the specification of a norm for assertion. To do this, we allow the identification of truth simpliciter with true-to-degree 1. Since ordinary assertions of particle location won’t be true to degree 1, they’ll be untrue. But we might say that such sentences are assertible provided they’re “true enough”: true to the Albert/Loewer figure of 1-p, for example. No counterexamples to classical logic would threaten (Peter Lewis’s cases would all be unsound, for example). Admittedly, a related phenomenon would arise: we’d be able to go by classical reasoning from a set of premises all of which are assertible, to a conclusion that is unassertible. But there are plausible mundane examples of this phenomenon, for example, as exhibited in the preface “paradox”.

But I’d rather not go either for the error-theoretic approach or for the identification of a “threshold” for truth, as the Albert-Loewer-inspired proposal suggests. I think there are more organic ways to handle utterance-truth within a degree-theoretic framework. It’s a bit involved to go into here, but the basic ideas are extracted from recent work by Agustin Rayo, and involve only allowing “local” specifications of truth simpliciter, relative to a particular conversational context. The key thing is that on the semantic side, once we have the degree theory, we can take “off the peg” an account of how such degree theories interact with a general account of communication. So combining the degree-based understanding of what validity amounts to (in terms of limiting the creep of falsity into the conclusion) with this degree-based account of assertion, I think we’ve got a pretty powerful, pretty well understood overview of how ordinary-language position-talk works.