Category Archives: Indeterminacy

Williamson on vague states of affairs

In connection with the survey article mentioned below, I was reading through Tim Williamson’s “Vagueness in reality”. It’s an interesting paper, though I find its conclusions very odd.

As I’ve mentioned previously, I like a way of formulating claims of metaphysical indeterminacy that’s semantically similar to supervaluationism (basically, we have ontic precisifications of reality, rather than semantic sharpenings of our meanings. It’s similar to ideas put forward by Ken Akiba and Elizabeth Barnes).

Williamson formulates the question of whether there is vagueness in reality as the question of whether the following can ever be true:

(EX)(Ex)Vague[Xx]

Here X is a property-quantifier, and x an object-quantifier. His answer is that the semantics forces this to be false. The key observation is that, as he sets things up, the value assigned to a variable at a precisification and a variable assignment depends only on the variable assignment, and not at all on the precisification. So at all precisifications, the same value is assigned to the variable. That goes for both X and x, with the net result that if “Xx” is true relative to some precisification (at the given variable assignment), it’s true at all of them. That means there cannot be a variable assignment that makes Vague[Xx] true.
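Here’s the observation as a toy Python sketch (mine, not Williamson’s formalism; the particular values are invented). Since the atomic clause never consults the precisification, “Vague[Xx]” comes out false on any assignment whatsoever:

```python
precisifications = ['p1', 'p2', 'p3']

def Xx(assignment, precisification):
    # The clause for atomic 'Xx' only ever consults the assignment;
    # the precisification parameter is idle.
    return assignment['x'] in assignment['X']

def vague_Xx(assignment):
    # Vague[Xx]: 'Xx' takes different truth values at different precisifications.
    return len({Xx(assignment, p) for p in precisifications}) > 1

print(vague_Xx({'X': {1, 2}, 'x': 1}))   # False
print(vague_Xx({'X': set(), 'x': 1}))    # False -- and so on, for any assignment
```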

You might think this is cheating. Why shouldn’t variables receive different values at different precisifications (formally, it’s very easy to do)? Williamson says that, if we allow this to happen, we’d end up making things like the following come out true:

(Ex)Def[Fx&~Fx’]

It’s crucial to the supervaluationist’s explanatory programme that this come out false (it’s supposed to explain why we find the sorites premise compelling). But consider a variable assignment to x which at each precisification maps x to that object which marks the F/non-F cutoff relative to that precisification. It’s easy to see that on this “variable assignment”, Def[Fx&~Fx’] comes out true, underpinning the truth of the existential.

Again, suppose that we were taking the variable assignment to X to be a precisification-relative matter. Take some object o that intuitively is perfectly precise. Now consider the assignment to X that maps X at precisification 1 to the whole domain, and X at precisification 2 to the null set. Consider “Vague[Xx]”, where o is assigned to x at every precisification, and the assignment to X is as above. The sentence will be true relative to these variable assignments, and so we have “(EX)Vague[Xx]” relative to an assignment of o to x which is supposed to “say” that o has some vague property.

Although Williamson’s discussion is about the supervaluationist, the semantic point equally applies to the (pretty much isomorphic) setting that I like, and which is supposed to capture vagueness in reality. If one makes the variable assignments non-precisification relative, then trivially the quantified indeterminacy claims go false. If one makes the variable assignments precisification-relative, then it threatens to make them trivially true.

The thought I have is that the problem here is essentially one of mixing up abundant and natural properties. At least for property-quantification, we should go for the precisification-relative notion. It will indeed turn out that “(EX)Vague[Xx]” will be trivially true for every value of x. But that’s no more surprising than the analogous result in the modal case: quantifying over abundant properties, it turns out that every object (even things like numbers) has a great range of contingent properties: being such that grass is green, for example. Likewise, in the vagueness case, everything has a great many vague properties: being such that the cat is alive, for example (or whatever else is your favourite example of ontic indeterminacy).

What we need to get a substantive notion is to restrict these quantifiers to interesting properties. So, for example, the way to ask whether o has some vague sparse property is to ask whether the following is true: “(EX:Natural(X))Vague[Xx]”. The extrinsically specified properties invoked above won’t count.

If the question is formulated in this way, then we can’t read off from the semantics whether there will be an object and a property such that it is vague whether the former has the latter. For this will turn, not on the semantics for quantifiers alone, but upon which among the variable assignments correspond to natural properties.
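Here’s a toy sketch of the two moves together, with everything stipulated for illustration. Quantifying over all precisification-relative assignments trivializes the vagueness claim; only a naturalness restriction, supplied by metaphysics rather than by the semantics, makes it substantive:

```python
# Precisification-relative assignments to 'X': functions from
# precisifications to extensions. All values here are invented.
precisifications = ['p1', 'p2']
domain = {'o', 'n'}   # 'o' plays the role of a perfectly precise object

gerrymandered = {'p1': domain, 'p2': set()}    # whole domain, then null set
constant      = {'p1': {'o'}, 'p2': {'o'}}     # a well-behaved assignment

def vague_Xx(X, x):
    """Vague[Xx]: 'Xx' gets different truth values at different precisifications."""
    return len({x in X[p] for p in precisifications}) > 1

# Unrestricted (EX)Vague[Xx] is trivially witnessed, even for precise o:
print(vague_Xx(gerrymandered, 'o'))   # True

# The substantive question restricts the quantifier to natural assignments.
# Which assignments count as natural is a metaphysical input, not something
# the semantics alone settles; here we simply stipulate it:
natural_assignments = [constant]
print(any(vague_Xx(X, 'o') for X in natural_assignments))   # False
```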

Something similar goes for the case of quantification over states of affairs. (ES)Vague[S] would be either vacuously true or vacuously false depending on what semantics we assign to the variable “S”. But if our interest is in whether there are sparse states of affairs which are such that it is vague whether they obtain, what we should do is e.g. let the assignment of values to S be functions from precisifications to truth values, and then ask the question:

(ES:Natural(S))Vague[S].

Where a function from precisifications to truth values is “natural” if it corresponds to some relatively sparse state of affairs (e.g. there being a live cat on the mat). So long as there’s a principled story about which states of affairs these are (and it’s the job of metaphysics to give us that) everything works fine.

A final note. It’s illuminating to think about the exactly analogous point that could be made in the modal case. If values are assigned to variables independently of the world, we’ll be able to prove that the following is never true on any variable assignment:

Contingently[Xx].

Again, the extensions assigned to X and x are non-world dependent, so if “Xx” is true relative to one world, it’s true at them all. Is this really an argument that there is no contingent instantiation of properties? Surely not. To capture the intended sense of the question, we have to adopt something like the tactic just suggested: first allow world-relative variable assignment, and then restrict the quantifiers to the particular instances of this that are metaphysically interesting.

Ontic vagueness

I’ve been frantically working this week on a survey article on metaphysical indeterminacy and ontic vagueness. Mind-bending stuff: there really is so much going on in the literature, and people are working with *very* different conceptions of the thing. Just sorting out what might be meant by the various terms “vagueness de re”, “metaphysical vagueness”, “ontic vagueness” and “metaphysical indeterminacy” was a task (I don’t think there are any stable conventions in the literature). And that’s not to mention “vague objects” and the like.

I decided in the end to push a particular methodology, if only as a stalking horse to bring out the various presuppositions that other approaches will want to deny. My view is that we should think of “indefinitely” as roughly parallel to the way we think of “possibly”. There are various disambiguations one can make: “possibly” might mean metaphysical possibility, epistemic possibility, or whatever; “indefinitely” might mean linguistic indeterminacy, epistemic unclarity, or something metaphysical. To figure out whether you should buy into metaphysical indeterminacy, you should (a) get yourself in a position to at least coherently formulate theories involving that operator (i.e. specify what its logic is); and (b) run the usual Quinean cost/benefit analysis on a case-by-case basis.

The view of metaphysical indeterminacy most opposed to this is one that would identify it strongly with vagueness de re: paradigmatically, there being some object and some property such that it is indeterminate whether the former instantiates the latter (this is how Williamson seems to conceive of matters in a 2003 article). If we had some such syntactic criterion for metaphysical indeterminacy, perhaps we could formulate everything without postulating a plurality of disambiguations of “definitely”. However, it seems that this de re formulation would miss out some of the most paradigmatic examples of putative metaphysical vagueness, such as the de dicto formulation: it is indeterminate whether there are exactly 29 things. (The quantifiers here are to be construed unrestrictedly.)

I also like to press the case against assuming that all theories of metaphysical indeterminacy must be logically revisionary (endorsing some kind of multi-valued logic). I don’t think the implication works in either direction: multi-valued logics can be part of a semantic theory of indeterminacy; and some settings for thinking about metaphysical indeterminacy are fully classical.

I finish off with a brief review of the basics of Evans’ argument, and the sort of arguments (like the one from Weatherson in the previous post) that might convert metaphysical vagueness of apparently unrelated forms into metaphysically vague identity, arguably susceptible to Evans’ argument.

From vague parts to vague identity

(Update: as Dan notes in the comment below, I should have clarified that the initial assumption is supposed to be that it’s metaphysically vague what the parts of Kilimanjaro (Kili) are. Whether we should describe the conclusion as deriving a metaphysically vague identity is a moot point.)

I’ve been reading an interesting argument that Brian Weatherson gives against “vague objects” (in this case, meaning objects with vague parts) in his paper “Many many problems”.

He gives two versions. The easiest one is the following. Suppose it’s indeterminate whether Sparky is part of Kili, and let K+ and K- be the usual minimal variations of Kili (K+ differs from Kili only in determinately containing Sparky, K- only by determinately failing to contain Sparky).

Further, endorse the following principle (scp): if A and B coincide mereologically at all times, then they’re identical. (Weatherson’s other arguments weaken this assumption, but let’s assume we have it, for the sake of argument).

The argument then runs as follows:
1. either Sparky is part of Kili, or she isn’t. (LEM)
2. If Sparky is part of Kili, Kili coincides at all times with K+ (by definition of K+)
3. If Sparky is part of Kili, Kili=K+ (by 2, scp)
4. If Sparky is not part of Kili, Kili coincides at all times with K- (by definition of K-)
5. If Sparky is not part of Kili, Kili=K- (by 4, scp).
6. Either Kili=K+ or Kili=K- (1, 3,5).

At this point, you might think that things are fine. As my colleague Elizabeth Barnes puts it in this discussion of Weatherson’s argument, you might simply think at this point that only the following has been established: that it is determinate that either Kili=K+ or Kili=K-, but that it is indeterminate which.

I think we might be able to get an argument for this. First of all, presumably all the premises of the above argument hold determinately. So the conclusion holds determinately. We’ll use this in what follows.

Suppose that D(Kili=K+). Then it would follow that Sparky was determinately a part of Kili, contrary to our initial assumption. So ~D(Kili=K+). Likewise ~D(Kili=K-).

Can it be that they are determinately distinct? If D(~Kili=K+), then assuming that (6) holds determinately, D(Kili=K+ or Kili=K-), we can derive D(Kili=K-), which contradicts what we’ve already proven. So ~D(~Kili=K+) and likewise ~D(~Kili=K-).

So the upshot of the Weatherson argument, I think, is this: it is indeterminate whether Kili=K+, and indeterminate whether Kili=K-. The moral: vagueness in composition gives rise to vague identity.

Of course, there are well known arguments against vague identity. Weatherson doesn’t invoke them, but once he reaches (6) he seems to think the game is up, for what look to be Evans-like reasons.

My working hypothesis at the moment, however, is that whenever we get vague identity in the sort of way just illustrated (inherited from other kinds of ontic vagueness), we can wriggle out of the Evans reasoning without significant cost. (I go through some examples of this in this forthcoming paper). The over-arching idea is that the vagueness in parthood, or whatever, can be plausibly viewed as inducing some referential indeterminacy, which would then block the abstraction steps in the Evans proof.

Since Weatherson’s argument is supposed to be a general one against vague parthood, I’m at liberty to fix the case in any way I like. Here’s how I choose to do so. Let’s suppose that the world contains two objects, Kili and Kili*. Kili* is just like Kili, except that determinately, Kili and Kili* differ over whether they contain Sparky.

Now, think of reality as indeterminate between two ways: one in which Kili contains Sparky, the other where it doesn’t. What of our terms “K+” and “K-“? Well, if Kili contains Sparky, then “K+” denotes Kili. But if it doesn’t, then “K+” denotes Kili*. Mutatis mutandis for “K-“. Since it is indeterminate which option obtains, “K+” and “K-” are referentially indeterminate, and one of the abstraction steps in the Evans proof fails.
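Here’s a toy version of the model just described (the two-precisification structure is my stipulation, not anything Weatherson offers), checking both that the identities come out indeterminate and that “K+” shifts denotation:

```python
# Two ways reality could be precisified; object names are hypothetical labels.
precisifications = {
    'w1': {'K+': 'Kili', 'K-': 'KiliStar'},   # Sparky is part of Kili
    'w2': {'K+': 'KiliStar', 'K-': 'Kili'},   # Sparky is not part of Kili
}

def kili_is(term, w):
    """Truth of 'Kili = term' at precisification w."""
    return precisifications[w][term] == 'Kili'

def det(sentence):
    """Determinately: true at every precisification."""
    return all(sentence(w) for w in precisifications)

for t in ('K+', 'K-'):
    print(t, det(lambda w: kili_is(t, w)), det(lambda w: not kili_is(t, w)))
# Both lines print 'False False': neither determinate identity nor
# determinate distinctness, so both identities are indeterminate. And
# 'K+' denotes Kili at w1 but KiliStar at w2: the referential
# indeterminacy that blocks the abstraction step in the Evans proof.
```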

Now, maybe it’s built into Weatherson’s assumptions that the “precise” objects like K+ and K- exist, and perhaps we could still cause trouble. But I’m not seeing cleanly how to get it. (Notice that you’d need more than just the axioms of mereology to secure the existence of [objects determinately denoted by] K+ and K-: Kili and Kili* alone would secure the truth that there are fusions including Sparky and fusions not including Sparky). But at this point I think I’ll leave it for others to work out exactly what needs to be added…

The fuzzy link

Following up on one of my earlier posts on quantum stuff, I’ve been reading up on an interesting literature on relating ordinary talk to quantum mechanics. As before, caveats apply: please let me know if I’m making terrible technical errors, or if there’s relevant literature I should be reading/citing.

The topic here is GRW. This way of doing things, recall, involved random localizations of the wavefunction. Let’s think of the quantum wavefunction for a single particle system, and suppose it’s initially pretty wide. So the amplitude of the wavefunction pertaining to the “position” of the particle is spread out over a wide span of space. But, if one of the random localizations occurs, the wavefunction collapses into a very narrow spike, within a tiny region of space.

But what does all this mean? What does it say about the position of the particle? (Here I’m following the Albert/Loewer presentation, and ignoring alternatives, e.g. Ghirardi’s mass-density approach).

Well, one traditional line was that talk of position was only well-defined when the particle was in an eigenstate of the position observable. Since on GRW the particle’s wavefunction is pretty much spread all over space, on this view talking of a particle’s location would never be well-defined.

Albert and Loewer’s suggestion is that we alter the link. As previously, think of the wavefunction as giving a measure over different situations in which the particle has a definite location. Rather than saying x is located within region R iff the set of situations in which the particle lies in R has measure 1, they suggest that x is located within region R iff the set of situations in which the particle lies in R has almost measure 1. The idea is that even if not all of a particle’s wavefunction places it right here, the vast majority of it is within a tiny subregion here. On the Albert/Loewer suggestion, we get to say on this basis, that the particle is located in that tiny subregion. They argue also that there are sensible choices of what “almost 1” should be that’ll give the right results, though it’s probably a vague matter exactly what the figure is.
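Here’s a minimal sketch of the rule as I understand it; the 0.95 threshold is just an arbitrary stand-in for whatever “almost 1” comes to, and the wavefunction is modelled crudely as a measure over candidate positions:

```python
THRESHOLD = 0.95   # an invented stand-in for the Albert/Loewer figure 1-p

def measure(psi, region):
    """Wavefunction-measure of the situations placing the particle in region."""
    return sum(weight for pos, weight in psi.items() if pos in region)

def located_in(psi, region):
    """'x is within R' is true iff almost all of the measure lies inside R."""
    return measure(psi, region) >= THRESHOLD
```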

Peter Lewis points out oddities with this. One oddity is that conjunction-introduction will fail. It might be true that marble i is in a particular region, for each i between 1 and 100, and yet fail to be true that all these marbles are in the box.

Here’s another illustration of the oddities. Take a particle with a localized wavefunction. Choose some region R around the peak of the wavefunction which is minimal, such that enough of the wavefunction is inside for the particle to be within R. Then subdivide R into two pieces (the left half and the right half) such that the wavefunction is nonzero in each. The particle is within R. But it’s not within the left half of R. Nor is it within the right half of R (in each case by modus tollens on the Albert/Loewer biconditional). But R is just the sum of the left half and the right half of R. So either we’re committed to some very odd combination of claims about location, or something is going wrong with modus tollens.
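To put numbers on the oddity (all figures invented, reusing the measure/located_in sketch above):

```python
# A localized wavefunction: nearly all its measure sits on two adjacent points.
psi = {'left': 0.48, 'right': 0.48, 'elsewhere': 0.04}

print(located_in(psi, {'left', 'right'}))   # True:  0.96 >= 0.95
print(located_in(psi, {'left'}))            # False: 0.48 <  0.95
print(located_in(psi, {'right'}))           # False: 0.48 <  0.95
# The particle is within R, but within neither half of R, even though R
# is just the sum of its two halves.
```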

So clearly this proposal is looking like it’s pretty revisionary of well-entrenched principles. While I don’t think it indefensible (after all, logical revisionism from science isn’t a new idea) I do think it’s a significant theoretical cost.

I want to suggest a slightly more general, and I think, much more satisfactory, way of linking up the semantics of ordinary talk with the GRW wavefunction. The rule will be this:

“Particle x is within region R” is true to degree equal to the wavefunction-measure of the set of situations where the particle is somewhere in region R.

On this view, then, ordinary claims about position don’t have a classical semantics. Rather, they have a degreed semantics (in fact, exactly the degreed-supervaluational semantics I talked about in a previous post). And ordinary claims about the location of a well-localized particle aren’t going to be perfectly true, but only almost-perfectly true.

Now, it’s easy but unwarranted to slide from “not perfectly true” to “not true”. The degree theorist in general shouldn’t concede that. It’s an open question for now how to relate ordinary talk of truth simpliciter to the degree-theorist’s setting.

One advantage of this more general setting is that we can take “off the peg” results about what sort of behaviour we can expect the language to exhibit. An example: it’s well known that if you have a classically valid argument in this sort of setting, then the degree of untruth of the conclusion cannot exceed the sum of the degrees of untruth of the premises. This amounts to a “safety constraint” on arguments: we can put a cap on how badly wrong things can go, though there’ll always be the phenomenon of slight degradations of truth value across arguments, unless we’re working with perfectly true premises. So there’s still some point in classifying arguments like conjunction-introduction as “valid” on this picture, for that captures a certain kind of important information.

Say that the figure that Albert and Loewer identified as sufficient for particle-location was 1-p. Then the way to generate something like the Albert and Loewer picture on this view is to identify truth with truth-to-degree-1-p. In the marbles case, the degrees of falsity of each premise “marble i is in the box” collectively “add up” in the conclusion to give a degree of falsity beyond the permitted limit. In the region case, similarly, the premises “the particle is not within the left half of R” and “the particle is not within the right half of R” are each already a long way from perfect truth (around degree 0.5), and their degrees of untruth sum to enough to permit an almost perfectly false conclusion.
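For concreteness, here’s the marbles arithmetic (figures invented):

```python
# 100 premises 'marble i is in the box', each true to degree 0.995, so
# untrue to degree 0.005. Conjunction-introduction is classically valid,
# so the conclusion's untruth is capped by the sum of premise untruths:
n, untruth_per_premise = 100, 0.005
cap_on_conclusion_untruth = n * untruth_per_premise
print(cap_on_conclusion_untruth)   # 0.5: 'all 100 marbles are in the box'
# may be true only to degree 0.5, well below a threshold like 1-p = 0.95.
```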

An alternative to the Albert-Loewer suggestion for making sense of ordinary talk is to go for a universal error-theory, supplemented with the specification of a norm for assertion. To do this, we identify truth simpliciter with truth-to-degree-1. Since ordinary assertions of particle location won’t be true to degree 1, they’ll be untrue. But we might say that such sentences are assertible provided they’re “true enough”: true to the Albert/Loewer figure of 1-p, for example. No counterexamples to classical logic would threaten (Peter Lewis’s cases would all be unsound, for example). Admittedly, a related phenomenon would arise: we’d be able to go by classical reasoning from a set of premises all of which are assertible to a conclusion that is unassertible. But there are plausible mundane examples of this phenomenon, for example, as exhibited in the preface “paradox”.

But I’d rather not go either for the error-theoretic approach, nor for the identification of a “threshold” for truth, as the Albert-Loewer inspired proposal suggests. I think there are more organic ways to handle utterance-truth within a degree-theoretic framework. It’s a bit involved to go into here, but the basic ideas are extracted from recent work by Agustin Rayo, and involve only allowing “local” specifications of truth simpliciter, relative to a particular conversational context. The key thing is that on the semantic side, once we have the degree theory, we can take “off the peg” an account of how such degree theories interact with a general account of communication. So combining the degree-based understanding of what validity amounts to (in terms of limiting the creep of falsity into the conclusion) and this degree-based account of assertion, I think we’ve got a pretty powerful, pretty well understood overview of how ordinary language position-talk works.

Supervaluations and revisionism once more

I’ve just spent the afternoon thinking about an error I found in my paper “supervaluational consequence” (see this previous post). I’ve figured out how to patch it now, so thought I’d blog about it.

The background is the orthodox view that supervaluational consequence will lead to revisions of classical logic. The strongest case I know for this (due to Williamson) is the following. Consider the claim “p&~Determinately(p)”. This (it is claimed) cannot be true on any serious supervaluational model of our language. Equivalently, you can’t have p and ~Determinately(p) both true in a single model. If classical reductio were an OK rule of inference, therefore, you’d be able to argue from ~Determinately(p) to ~p. But nobody thinks that’s supervaluationally valid: any indeterminate sentence will be a counterexample to it. So classical reductio should be given up.

This is stronger than the more commonly cited argument: that supervaluational semantics vindicates the move from p to Determinately(p), but not the material conditional “if p then Determinately(p)” (a counterexample to conditional proof). The reason is that, if “Determinately” itself is vague, arguably the supervaluationist won’t be committed to the former move. The key here is the thought that, as well as things that are determinately sharpenings of our language, there may be interpretations which are borderline-sharpenings. Perhaps interpretation X is an “admissible interpretation of our language” on some sharpenings, but not on others. If p is true at all the definite sharpenings, but false at X, then that may lead to a situation where p is supertrue, but Determinately(p) isn’t.

But orthodoxy says that this sort of situation (non-transitivity in the accessibility relation among interpretations of our language) does nothing to undermine the case for revisionism I mentioned in the first paragraph.

One thing I do in the paper is construct what seems to me a reasonable-looking toy semantics for a language, on which one can have both p and ~Determinately p. Here it is.

Suppose you have five colour patches, ranging from red to orange (non-red). Call them A,B,C,D,E.

Suppose that our thought and talk makes it the case that only interpretations which put the cut-off between B and D are determinately “sharpenings” of the language we use. Suppose, however, that there’s some fuzziness in what it is to be an “admissible interpretation”. For example, an interpretation that places the cut-off between B and C thinks that interpretations placing the cut-off between C and D, and interpretations placing the cut-off between A and B, are both admissible. And likewise, an interpretation that places the cut-off between C and D thinks that interpretations placing the cut-off between B and C are admissible, but also that interpretations placing the cut-off between D and E are admissible.

Modelling the situation with four interpretations, labelled AB, BC, CD, DE, for where they place the red/non-red cut-off, we can express the thought like this: each interpretation accesses (thinks admissible) itself and its immediate neighbours, but nothing else. But BC and CD are the sharpenings.

My first claim is that all this is a perfectly coherent toy model for the supervaluationist: nothing dodgy or “unintended” is going on.

Now let’s think about the truths values assigned to particular claims. Notice, to start with, that the claim “B is red” will be true at each sharpening. The claim “Determinately, B is red” will be true at the sharpening CD, but it won’t be true at the sharpening BC, for that accesses an interpretation on which B counts as non-red (viz. AB).

Likewise, the claim “D is not red” will be true at each sharpening; but while “Determinately, D is not red” is true at the sharpening BC, it fails at CD, due to the latter seeing the (non-sharpening) interpretation DE, at which D counts as red.

In neither of these atomic cases do we find “p and ~Det(p)” coming out true (that’s where I made a mistake previously). But by considering the following, we can find such a case:

Consider “B is red and D is not red”. It’s easy to see that this is true at each of the sharpenings, from what’s been said above. But also “Determinately(B is red and D is not red)” is false at each of the sharpenings. It’s false at BC because of the accessible interpretation AB at which B counts as non-red. It’s false at CD because of the accessible interpretation DE at which D counts as red.

So we’ve got “B is red and D is not red, & ~Determinately(B is red and D is not red)”. And we’ve got that in a perfectly reasonable toy model for a language of colour predicates.
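For anyone who wants to check the claims, here’s the toy model coded up (the coding choices are mine; the model is exactly as described above):

```python
red_under = {'AB': {'A'}, 'BC': {'A', 'B'},
             'CD': {'A', 'B', 'C'}, 'DE': {'A', 'B', 'C', 'D'}}
order = ['AB', 'BC', 'CD', 'DE']
# Each interpretation accesses itself and its immediate neighbours:
access = {i: {j for j in order if abs(order.index(i) - order.index(j)) <= 1}
          for i in order}
sharpenings = ['BC', 'CD']

def det(sentence, i):
    """'Determinately(S)' is true at i iff S is true at all i accesses."""
    return all(sentence(j) for j in access[i])

p = lambda i: 'B' in red_under[i] and 'D' not in red_under[i]

print(all(p(i) for i in sharpenings))        # True: p is true at each sharpening
print(any(det(p, i) for i in sharpenings))   # False: Det(p) fails at both
# So 'p & ~Determinately(p)' is true at every sharpening of this model.
```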

(Why do people think otherwise? Well, the standard way of modelling the consequence relation in settings where the accessibility relation is non-transitive is to think of the sharpenings as *all the interpretations accessible from some designated interpretation*. And that imposes additional structure which, for example, the model just sketched doesn’t satisfy. But the additional structure seems to me totally unmotivated, and I provide an alternative framework in the paper for freeing oneself from those assumptions. The key thing is not to try and define “sharpening” in terms of the accessibility relation.)

The conclusion: the best extant case for (global) supervaluational consequence being revisionary fails.

Gavagai again again

A new version of my discussion of Quine’s “argument from below” is now up online (shorter! punchier! better!). Turns out it was all to do with counterpart theory all along.

Here’s the blurb: Gavagai gets discussed all the time. But (unless I’m missing something in the literature) I’ve never seen an advocate of gavagai-style indeterminacy spell out in detail what exactly the deviant interpretations or translations are that incorporate the different ways of dividing reference (over rabbits, rabbit-stages or undetached rabbit-parts). And without this it is, to say the least, a bit hard to evaluate the supposed counterexamples to such interpretations! So the main job of the paper is to spell out, for a significant fragment of language, what the rival accounts of reference-division amount to.

One audience for the paper (who might not realize they are an audience for it initially) are folks interested in the stage theory/worm theory debate in the philosophy of persistence. The nuevo-Gavagai guy, according to me, is claiming that there’s no fact of the matter whether our semantics is stage-theoretic or worm-theoretic. I think there’s a reasonable chance that he’s right.

Stronger than this: so long as there are both 4D worms and instantaneous temporal parts thereof around (even if they’re “dependent entities” or “rabbit histories” or “mere sums” as opposed to Real Objects), the Gavagai guy asks you to explain why our words don’t refer to those worms or stages rather than whatever entities you think *really are* rabbits (say, enduring objects wholly present at each time).

By the way, even if these semantic indeterminacy results were right, I don’t think that this forecloses the metaphysical debate about which of endurance, perdurance or exdurance is the right account of *persistence*. But I do think that it forces us to think hard about what the difference is between semantic and metaphysical claims, and what sort of reasons we might offer for either.

Supervaluational consequence again

I’ve just finished a new version of my paper on supervaluational consequence. A pdf version is available here. I thought I’d post something explaining what’s going on therein.

Let’s start at the beginning. Classical semantics requires, inter alia, the following. For every expression, there has to be a unique intended interpretation. This single interpretation will assign to each name a single referent. To each predicate, it will assign a set of individuals. Similarly for other grammatical categories.

But sometimes, the idea that there are such unique referents, extensions and so on, looks absurd. What supervaluationism (in the liberal sense I’m interested in) gives you is the flexibility to accommodate this. Supervaluationism requires, not a single intended interpretation, but a set of interpretations.

So if you’re interested in the problem of the many, and think that there’s more than one optimal candidate referent for “Kilimanjaro”; if you’re interested in theory change, and think that relativistic mass and rest mass are equi-optimal candidate properties to be what “mass” picks out; if you are interested in inscrutability of reference, and think that rabbit-slices and undetached rabbit parts, as well as rabbits themselves, are in the running to be in the extension of “rabbit”; if you’re interested in counterfactuals, and think that it’s indeterminate which world is the closest one where Bizet and Verdi were compatriots; if you think vagueness can be analyzed as a kind of multiple-candidate indeterminacy of reference; if you find any of these ideas plausible, then you should care about supervaluationism.

It would be interesting, therefore, if supervaluationism undermined the tenets of the kind of logic that we rely on. For either, in the light of the compelling applications of supervaluationism, we will have to revise our logic to accommodate these phenomena; or else supervaluationism as a theory of these phenomena is itself misconceived. Either way, there’s lots at stake.

Orthodoxy is that supervaluationism is logically revisionary, in that it involves admitting counterexamples to some of the most familiar classical inferential moves: conditional proof, reductio, argument by cases, contraposition. There’s a substantial heterodox movement which recommends an alternative way of defining supervaluational consequence (so-called “local consequence”) that is entirely non-revisionary.

My paper aims to do a number of things:

  1. to give persuasive arguments against the local consequence heterodoxy
  2. to establish, contra orthodoxy, that standard supervaluational consequence is not revisionary (this, granted a certain assumption)
  3. to show that, even if the assumption is refused, the usual case for revisionism is flawed
  4. to give a final fallback option: even if supervaluational consequence is revisionary, it is not damagingly so, for it in no way involves revision of inferential practice.

It convinces me that supervaluationists shouldn’t feel bad: they probably don’t revise logic, and if they do, it’s in a not-terribly-significant way.

Primitivism about vagueness

One role this blog is playing is allowing me to put down thoughts before I lose them.

So here’s another idea I’ve been playing with. If you think about the literature on vagueness, it’s remarkable that each of the main players seems to be broadly reductionist about vagueness. The key term here is “definitely”. The Williamsonian epistemicist reduces “definitely” to a concept constructed out of knowability. The supervaluationist typically appeals to semantic indecision: on one reading, that reduces vagueness to semantic facts; on another reading, to metasemantic facts concerning the link between semantic facts and their subvening base. Things are a little less clear with the degree theorist, but if “definite truth” is identified with “truth to degree 1”, then what they’re doing is reducing vagueness to semantic facts again.

If you think of the structure of the debate like this, then it makes sense of some of the dialectic on higher-order vagueness. For example, if vagueness is nothing but semantics, then the question immediately arises: what about those cases where semantic facts themselves appear to be vague? The parallel question for the epistemicist is: what about cases where it’s vague whether such-and-such is knowable? The epistemicists look like they’ve got a more stable position at this point, though exactly why this is so is hard to spell out.

Consider other debates, e.g. in the philosophy of modality. Sure, there are reductionist views: Lewis wanting to reduce modality to what goes on in other concrete space-times; people who want to reduce it to a priori consistency; and so on. But a big player in that debate is the modalist, who just takes “possibility” and “necessity” as primitive, and refuses to offer a reductive story.

It seems to me pretty clear that a position analogous to modalism should be a central part of the vagueness literature; but I’m not aware of any self-conscious proponents of this position. Let me call it “primitivism” about vagueness. I think that perhaps some self-described semantic theorists would be better classified as primitivists.

At the end of ch 5 of the “Vagueness” book, Tim Williamson has just finished beating up on traditional supervaluationism, which equates truth with supertruth. He then briefly considers people who drop that identification. Here’s my take on this position. Proponents say that semantically, there’s a single precisification of our language which is the intended one, but which one it is is (semantically) vague. Truth is truth on the intended precisification; but definite truth is defined to be truth on all the precisifications which aren’t determinately unintended. Definite truth (supertruth) and truth come apart. This position, from a logical point of view, is entirely classical; satisfies bivalence; and looks like it thereby avoids many of Williamson’s objections to supervaluationism.

I think Williamson puts exactly the right challenge to this line. In what sense is this a semantic theory of vagueness? After all, we haven’t characterized “Definitely” in semantic terms: rather, what we’ve done is characterize “Definitely” using that very notion again in the metalanguage. One might resist this, claiming that “Definitely” should be defined using the term “admissible precisification” or some such. But then one wonders what account could be made of “admissible”: it plays no role in defining semantic notions such as “true” or “consequence” for this theorist. What sense can be made of it?

I think the challenge can be met by metasemantic versions of supervaluationism, who give a substantive theory of what makes a precisification admissible. I take that to be something like the McGee/McLaughlin line, and I spent a chapter of my thesis trying to lay out precisely what was involved. But that’s another story.

What I want to suggest now is that Primitivism about vagueness gives us a genuinely distinct option. This accepts Williamson’s contention that when we drop supertruth=truth, “nothing articulate” remains of the semantic theory of vagueness. But it questions the idea that this should lead us towards epistemicism. Let’s just take determinacy (or lack of it) as a fundamental part of reality, and then use it in constructing theories that make sense of the phenomenon of vagueness. Of course, there’s nothing positive this theorist has to say that distinguishes her from reductive rivals such as the epistemicist; but she has plenty of negative things to say disclaiming various reductive theses.

The present time

One notorious issue for presentists (and other kinds of A-theorist) is the following: special relativity tells us (I gather) that among the slices of space-time that “look like time slices”, there’s none that is uniquely privileged as “the present” (i.e. simultaneous with what’s going on here-now). But the presentist says that only the present exists. So it looks like her metaphysics entails that there is a metaphysically privileged time-slice: the only one that exists. (Of course, I suppose the science is just telling us that there’s no physically significant sense in which one is privileged, and it’s not obvious the presentist is saying anything that conflicts with that. But it does seem worrying…)

One option is to retreat into “here-now”ism: the only things that exist are those that exist right here right now. No problems with relativity there.

I was idly wondering about the following line: say that it’s (ontically) vague which time-slice is present, and so (for the presentist) say that it’s ontically vague what exists. As I’m thinking of it, there’ll be some kind of here-now-ish element to the metaphysics. From the point of view of a certain position p in space-time, all that exists are those “time-like” slices of space-time that contain the point; so it will be determinately the case that p exists. But for every other space-time point q, there will (I take it) be a reference frame according to which p and q are non-simultaneous. So it won’t determinately be the case that q exists.

The details are going to get quite involved. I think some hard thinking about higher-order indeterminacy will be in order. But here’s a quick sketch: choose a point r such that there’s a choice of reference-frame that makes q and r simultaneous. Then it sort of seems to me that, from p’s perspective, the following should hold:

r doesn’t exist
determinately, r doesn’t exist
not determinately determinately r doesn’t exist

The idea is that while r isn’t “present” (and so fails to exist), relative to the perspective of some of the things that are present, it is present.

What I’d like to do is model this in a “supervaluation-style” framework like the one I talk about here. First, consider the set of all centred time-like slices. It’ll end up determinate that one and only one of these exists: but it’ll be a vague matter which one. Let centred time-like slice x access centred time-slice y iff the centre of y is somewhere in the time-slice x.

Now take the set of time-slices P which are all and only those with common centre p. These are the ontic candidates for being the present time. Next, consider the set P*, containing all and only those time-slices accessed by some time-slice in P. And similarly construct P**, P***, etc.

Now, among space-time points, only the “here-now” point p determinately exists. All and only points which are within some time-slice in P don’t determinately fail to exist. All and only points which are within some time-slice in P* don’t determinately determinately fail to exist. All and only points which are within some time-slice in P** don’t determinately determinately determinately fail to exist. And so on. (If you like, existence shades off into greater and greater indeterminacy as we look further away from the privileged here-now point.)
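Here’s the abstract shape of the construction, with a stipulated accessibility structure standing in for the real space-time story:

```python
# Four centred slices with an invented accessibility relation.
access = {'s1': {'s1', 's2'}, 's2': {'s1', 's2', 's3'},
          's3': {'s2', 's3', 's4'}, 's4': {'s3', 's4'}}

def star(slices):
    """All slices accessed by some slice in the given set (P -> P*)."""
    return set().union(*(access[s] for s in slices))

P = {'s1', 's2'}      # candidates for being the present, centred on p
P1 = star(P)          # P*
P2 = star(P1)         # P**
print(sorted(P), sorted(P1), sorted(P2))
# Points within some slice in P don't determinately fail to exist; points
# within some slice in P* don't det-det fail to exist; and so on outward.
```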

Well, I’m no longer sure that this deserves the name “presentism”. Kit Fine distinguishes some versions of the A-theory in a paper in “Modality and tense” which this view might fit better with (the Fine-esque way of setting this up would be to have the whole of space-time existing, but only some time-slices really or fundamentally existing. The above framework then models vagueness in what really or fundamentally exists). It is anyway up to its neck in ontic vagueness, which you might already dislike. But I’ve no problem with ontic vagueness, and insofar as I can simulate being a presentist, I quite like this option.

There should be other variants too for different forms of A-theory. Consider, for example, the growing block view of reality (the time-slices in the model can be thought of as the front edges of a growing block: as we go through time, more slices get added to the model). The differences may be interesting: for the growing block, future space-time points determinately don’t exist, but they don’t det…det fail to exist for some number of iterations of “det”; while past space-time points determinately exist, but they don’t det…det exist for some number of iterations of “det”.

Any thoughts most welcome, and references to any related literature particularly invited!

Existence and just more theory

I’ve been spending much time recently in coffee shops with colleagues talking about the stuff that’s coming up in the fantastically named RIP Being conference (happening in Leeds this weekend). Hopefully I won’t be treading on toes if I draw out one strand of those conversations that I’ve been finding particularly interesting.


The story for me begins with an old paper by Hartry Field. His series of papers in the 70s is one of the all-time great runs: from “Tarski’s theory of truth” through “Quine and the correspondence theory”, “Theory Change”, “Logic, meaning and conceptual role”, “Conventionalism and Instrumentalism in semantics”, and finishing off with “Mental representation”. (All references can be found here.) Not all of them are reprinted in his collection Truth and the absence of fact, which seems a pity. The papers I mentioned above really seemed to me to lay out the early Fieldian programme in most of its details. Specifically, in missing out the papers “Logic, meaning…” and “Conventionalism and instrumentalism…”, you miss out on the early Field’s take on how the cognitive significance of language relates to semantic theory; and the most interesting discussion I know of concerning what Putnam’s notorious “just more theory” argument might amount to.

The “just more theory” move is supposed to be the following. It’s familiar that you can preserve sensible truth conditions by assigning wildly permuted reference-schemes to language (see my other recent posts for more details and links). But, prima facie, these permuted reference schemes are going to violate some plausible constraints on what it takes for a term to refer to something (e.g. that the object be causally connected to the term). Now, some theorists of meaning don’t build causal constraints into their metasemantic account. Davidson, early Lewis, and the view Putnam describes as “standard” in his early paper are among these (I call these “interpretationisms” elsewhere). But the received view, I guess, is to assume that some such causal constraint will be in play.
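For concreteness, here’s the standard permutation construction in miniature (domain, names and extensions all invented):

```python
from itertools import product

perm = {'a': 'b', 'b': 'c', 'c': 'a'}   # a nontrivial permutation of the domain

ref = {'Billy': 'a'}                     # toy intended reference scheme
ext = {'runs': {'a', 'b'}}               # toy intended extensions

# The permuted scheme: push referents and extensions through perm.
ref_star = {name: perm[o] for name, o in ref.items()}
ext_star = {pred: {perm[o] for o in objs} for pred, objs in ext.items()}

def true_atomic(r, e, name, pred):
    return r[name] in e[pred]

for name, pred in product(ref, ext):
    assert true_atomic(ref, ext, name, pred) == \
           true_atomic(ref_star, ext_star, name, pred)
print("the permuted scheme assigns every atomic sentence the same truth value")
```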

Inscrutability argument dead in the water? No, says Putnam. For look! The permuted interpretation has the resources to render true sentences like “reference is a relation which is causally constrained”. For just as, on the permuted interpretation, “reference” will be assigned as semantic value some weirdo twisted relation Reference*, so on the same interpretation “causation” will be assigned some weirdo twisted relation Causation*. And it’ll turn out to be true that Reference* and Causation* match up in the right way. So (you might think), how can metasemantic theories rule in favour of the sensible interpretation over this twisted one? For no matter which of these we imagine to be the real interpretation of our language, everything we say will come out true.

Well, most people I speak to think this is a terrible argument. (For a particularly effective critique of Putnam—showing how badly things go if you allow him the “just more theory” move—see this paper by Tim Bays.) I’ll take it the reasons are pretty familiar (if not, Lewis’s “Putnam’s paradox” has a nice presentation of a now-standard response). Anyway, what’s interesting about Field’s paper is that it gives an alternative reading of Putnam’s challenge, which makes it much more interesting.

Let’s start by granting ourselves that we’ve got a theory which really has tied down reference pretty well. Suppose, for example, that we say “Billy” refers to Billy in virtue of appropriate causal connections between tokenings of that word and the person, Billy. The “Wild” inscrutability results threatened by permutation arguments simply don’t hold.

But now we can ask the following question: what’s special about that metasemantic theory you’re endorsing? Why should we be interested in Reference (=Causal relation C)? What if we tried to do all the explanatory work that we want semantics for in terms of a different relation, Reference*? We could then have a metasemantic* theory of reference*, which would explain that it is constrained to match a weirdo relation causation*. But notice that the relations “S expresses* proposition* p” (definable via reference*) and “S expresses proposition p” (definable via reference) are coextensional. Now, if all the explanatory work we want semantics to do (e.g. explaining why people make those sounds when they believe the world is that way) only ever makes appeal to what propositions sentences express, then there just isn’t any reason (other than convenience) to talk about semantic properties rather than semantic* ones.

The conclusion of these considerations isn’t the kind of inscrutability I’m familiar with. It’s not that there’s some agreed-upon semantic relation, which is somehow indeterminate. It’s rather that (the consideration urges) it’ll be an entirely thin and uninteresting matter that we choose to pursue science via appeal to the determinate semantic properties rather than the determinate semantic* properties. You might think of this as a kind of metasemantic inscrutability, in contrast to the more usual semantic inscrutability: setting aside mere convenience, there’s no reason why we ought to give this metasemantic theory rather than that one.

Now, let’s turn to a different kind of inscrutability challenge. For one reason or another, lots of people are very worried over whether we can really secure determinate quantification over an absolutely unrestricted domain. Just suppose you’re convinced that there are no abstracta. Suppose you’re very careful never to say anything that commits you to their existence. However, suppose you’re wrong: abstracta exist. Intuitively, when you say “There are no abstracta, and I’m quantifying over absolutely everything!” you’re speaking falsely. But this is only so if your quantifiers range over the abstracta out there as well as the concreta: and why should that be? In virtue of what can your word “everything” range over the unrestricted domain? After all, what you say would be true if I interpreted the word as ranging over only concreta. I’d just take you to be saying that no abstracta exist (within your domain), and that you were quantifying over absolutely everything (in your domain). Both of these are true, given that your domain happens to contain only concreta!

Bringing in causality doesn’t look like it helps here; nor would the form of reference-magnetism that Lewis endorsed, which demands that our predicates latch onto relatively natural empirical kinds. Ted Sider, in a paper he’s presenting at the RIP conference, advocates extending the Lewisian point at this juncture, to make appeal to logical “natural kinds” (such as existence). However, let me sketch instead a variant of the Sider thought that seems more congenial to me (I’ll sketch at the end how to transfer it back).

My take on Lewis’s theory is the following. First, identify a “meaning-building language”. This will contain only predicates for empirical natural kinds, plus some other stuff (quantifiers, connectives, perhaps terms for metaphysically basic things such as mereological notions). Now, what it is for a semantic theory for a natural language to be the correct one is for there to be a semantic theory phrased in the meaning-building language which (a) assigns to sentences of the natural language truth-conditions which fit with actual patterns of assent and dissent; and (b) is as syntactically simple as possible. (I defend this take on what Lewis is doing here.)

Now, clearly we need to use some logical resources in constructing the semantic theory. Which should we allow? Sider’s answer: the logically natural ones. But for the moment let’s suppose we don’t want to commit ourselves to logically natural kinds. Well, why don’t we just stipulate that the meaning-building language is going to contain this, that, and the next logical operator/connective? In the case of predicates, there’s the worry that our meaning-building language should contain all the empirical kinds there are or could be: since we don’t know what these are, we need to give a general definition such as “the meaning-building language will contain predicates for all and only natural kinds”. But there seems no comparable reason not simply to lay it down that “the meaning-building language will contain negation, conjunction and the existential quantifier”.

Indeed, we could go one further, and simply stipulate that the existential quantifier it contains is the absolutely unrestricted one. The effect will be just like the one Sider proposes: this metasemantic proposal has a built-in bias towards ascribing truly unrestricted generality to the quantifiers of natural language, because it is syntactically simpler to lay down clauses for such quantifiers in the meaning-building language than for the restricted alternatives. You quantify over everything, not just concreta, because the semantic theory that ascribes you this is more eligible than one that doesn’t, where eligibility is a matter of how simple the theory is when formulated in the meaning-building language just described.

Ok. So finally I get to the point. It seems to me that Field’s form of Putnam’s worries can be put to work here too. Let’s grant that the metasemantic theory just described delivers the right results about the semantic properties of my language, and shows my unrestricted quantification to be determinate. But why choose just that metasemantic theory? Why not, for example, describe a metasemantic theory where semantic properties are determined by syntactic simplicity of a semantic theory in a meaning-building language where the sole existential quantifier is restricted to concreta? Maybe we should grant that our way picks out the semantic properties: but we’ve yet to be told why we should be interested in the semantic properties, rather than the semantic* properties delivered by the rival metasemantic theory just sketched. Metasemantic inscrutability threatens once more.

(I think the same challenge can be put to the Sider-style proposal: e.g., consider the Lewis* metasemantic theory whereby the meaning-building language contains expressions for all those entities (of whatever category) which are natural*: i.e. are the intersection of genuinely natural properties (empirical or logical) with restricted domain D.)

I have suspicions that metasemantic inscrutability will turn out to be a worrying thing. That’s a substantive claim: but it’s got to be a matter for another posting!

(Major thanks here go to Andy and Joseph for discussions that shaped my thoughts on this stuff; though they are clearly not to be blamed…)