Vagueness survey IV: supervaluations

Ok, part IV of the survey article. This includes the second of three sample theories, and I’ve chosen to talk about what’s often called “the most popular account of vagueness”: supervaluationism.

I’m personally pretty convinced there are (at least) two different *types* of theses/theories that get the label “supervaluationism”: roughly, the formal logico-semantic apparatus, and the semantic indecision story about the source of indeterminacy. And even once you have both on board, I reckon there are at least three *very different* ways of understanding what the theory is saying (that’s separate from, but related to, the various subtle issues that Varzi picks out in his recent Mind paper).

But what I want to present is one way of working out this stuff, so I’m keeping fairly close to the Fine-Keefe (with a bit of Lewis) axis that I think is most people’s reference point around here. I try to strip out the more specialist bits of Fine—which comes at a cost, since I don’t mention “penumbral connection” here. But again, I wanted to keep the focus on the basics of the theory, and the application to the central puzzles, so stuff had to come out.

One thing I’m fairly unapologetic about is presenting supervaluationism as a theory on which truth = supertruth. It seems terminologically bizarre to do otherwise, given that “supervaluations” have their origins in a logico-semantic technique for defining truth when you have multiple truth-on-an-interpretation relations to play with. I’m happy to think that “semantic indecision” views can be paired with a classical semantics, with the semantics for definiteness given in terms of delineations/sharpenings (as, indeed, can the epistemicist D-operator). But, as a matter of terminology, I don’t see these as “supervaluational”. “Ambiguity” views like McGee and McLaughlin’s are a mixed case, but I don’t have room to fit them in. In any case: bivalence failure is such a common thought that it seemed worth giving it a run for its money, straightforwardly construed, and I do think it leads to interesting and distinctive positions on what’s going on in the sorites.

Actually, the stuff about confusing rejection with accepting a negation is again a bit of a choice-point. I could have talked about the “confusion hypothesis” way of explaining the badness of the sorites (work in particular by Fine, Keefe and Greenough on this—and a fabulous paper by Brian Weatherson that he’s never published, “Vagueness and pragmatics”). But when I tried this, it proved a bit tricky to explain and took quite a bit of setting up—and I felt the rejection stuff (also a kind of “confusion hypothesis”) was more organically related to the way I was presenting things. I need to figure out some references for this. I’m sure there must be lots of people making the “since it’s untrue, we’re tempted to say ~p” move, which is essentially what’s involved. I’m wondering whether Greg Restall has some explicit discussion of the temptation of the sorites in his paper on denial, multiple conclusions and supervaluations…

One place below where a big chunk of material had to come out to get me near the word limit was the discussion of the inconsistency of the T-scheme with denials of bivalence. It fitted in very naturally, and I’d always planned for that stuff to go in (just because it’s such a neat and simple little argument). But it just didn’t fit, and of all the things I was looking at, it seemed the most like a digression. So, sadly, all that remains is “cite cite cite” where I’m going to give the references.

One thing that does put in a brief appearance at the end of the section is the old bugbear: higher-order vagueness. I don’t discuss this very much in the piece, which again is a bit odd, but then it’s very hard to state simply what the issue there is (especially as there seem to be at least three different things going by that name in the literature, with the relations between them not at all obvious).

Another issue that occurred to me here is whether I should be strict in distinguishing semantics from model theory. I do think the distinction (between set-theoretic interpretations, and axiomatic specifications of those interpretations) is important, and in the first instance what we get from supervaluations is a model theory, not a semantics. But in the end it seemed not to earn its place. Actually: does anyone know of somewhere where a supervaluational *axiomatic semantics* is given (as opposed to supervaluational models)? I’m guessing it’ll look something like: [“p”] = T on d iff, at d, p—i.e. we’ll carry the relativization through the axioms just as we do in the modal case.
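For concreteness, here’s one guess at the shape such relativized clauses might take, paralleling the modal case. This is purely my own reconstruction, not something I’ve seen written down anywhere:

```latex
% Purely illustrative sketch: a relativized T-theory, with d and d'
% ranging over sharpenings and D the definiteness operator.
\begin{align*}
  &[\![\,\mathrm{Bald}(a)\,]\!] = T \text{ on } d
     &&\text{iff at } d,\ a \text{ is bald}\\
  &[\![\,\neg A\,]\!] = T \text{ on } d
     &&\text{iff } [\![\,A\,]\!] \neq T \text{ on } d\\
  &[\![\,\mathrm{D}A\,]\!] = T \text{ on } d
     &&\text{iff for every } d',\ [\![\,A\,]\!] = T \text{ on } d'\\
  &[\![\,A\,]\!] = T \text{ simpliciter}
     &&\text{iff for every } d,\ [\![\,A\,]\!] = T \text{ on } d
\end{align*}
```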

The section itself:

Survey paper on vagueness: part IV

REVISIONARY SEMANTICS: SUPERVALUATIONS

A very common thought is that borderline claims are neither true nor false. Given that one can only know what is true, this would explain our inevitable lack of knowledge in borderline cases. And it’s often thought to be a rather plausible suggestion in itself.

Classical semantics builds in the principle that each meaningful claim is either true or false (bivalence). So if we’re to pursue the thought that borderline claims are truth-value gaps, we must revise our semantic framework to some extent. Indeed, we can know in advance that any semantic theory with truth-value gaps will diverge from classical semantics even on some of its most intuitively plausible (platitudinous-seeming) consequences: for it can be shown under very weak assumptions that truth-value gaps are incompatible with accepting disquotational principles such as: “Harry is bald” is true if and only if Harry is bald (see cite cite cite).
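(In rough outline, the argument here, a version of the one usually associated with Williamson, runs as follows, with p standing in for “Harry is bald”:

```latex
% Sketch: truth-value gaps clash with the disquotational schemes.
\begin{align*}
  &\text{(T)}\ \mathrm{True}(\text{``}p\text{''}) \leftrightarrow p
   \qquad\qquad
   \text{(F)}\ \mathrm{False}(\text{``}p\text{''}) \leftrightarrow \neg p\\
  &\neg\mathrm{True}(\text{``}p\text{''}) \Rightarrow \neg p
     &&\text{contraposing (T)}\\
  &\neg\mathrm{False}(\text{``}p\text{''}) \Rightarrow \neg\neg p
     &&\text{contraposing (F)}\\
  &\text{A gap (neither true nor false) thus yields }
     \neg p \wedge \neg\neg p\text{: contradiction.}
\end{align*}
```

The moral being that the gap theorist must restrict or reinterpret the disquotational schemes.)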

How will the alteration of the classical framework go? One suggestion goes under the heading “supervaluationism” (though, as we’ll see, the term is somewhat ambiguous).

As an account of the nature of vagueness, supervaluationism is a view on which borderlineness arises from what we might call “semantic indecision”. Think of the sort of things that might fix the meanings of words: conventions to apply the word “bald” to clear cases; conventions to apply “not bald” to clear non-cases; various conventions of a more complex sort—for example, that anyone with less hair than a bald person should count as bald. The idea is that when we list these and other principles constraining the correct interpretation of language, we’ll be able to narrow the space of acceptable (and entirely classical) interpretations of English down a lot—but not down to the single intended interpretation hypothesized by classical semantics. At best, what we’ll get is a cluster of candidates. Let’s call these the sharpenings for English. Each will assign to each vague predicate a sharp boundary. But very plausibly the location of such a boundary is something the different sharpenings will disagree about. A sentence is indeterminate (and, if it involves a vague predicate, is a borderline case) just in case there’s a sharpening on which it comes out true and another on which it comes out false.

As an account of the semantics of vague language, the core of the supervaluationist proposal is a generalization of the idea, found in classical semantics, that for something to be true is for it to be true at the intended interpretation. Supervaluationism offers a replacement: it works with a set of “co-intended interpretations”, and says that for a sentence to be true, it must be true at all the co-intended interpretations (this is sometimes called “supertruth”). This dovetails nicely with the semantic indecision picture, since we can take the “co-intended interpretations” to be what we called above the sharpenings—and hence, when a sentence is indeterminate (true on one sharpening and false on another), neither it nor its negation will be true: we have a truth-value gap. (The core proposal for defining truth finds application in settings where the “semantic indecision” idea seems inappropriate: see, for example, Thomason’s treatment of the semantics of branching time in his (cite).)
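To make the definitions vivid, here is a toy model. Everything in it (the names, the finite stock of sharpenings, the hair-count cutoffs) is invented purely for illustration; real sharpenings interpret an entire language, not a single predicate.

```python
# Toy model of supervaluational truth. Each "sharpening" is a
# classical interpretation; here it is just a sharp cutoff (in
# number of hairs) for "bald". The cutoffs are invented.
SHARPENINGS = [3000, 5000, 7000]

def bald_on(hairs, cutoff):
    """Classical truth of 'x is bald' on a single sharpening."""
    return hairs < cutoff

def supertrue(sentence):
    """True on every sharpening."""
    return all(sentence(d) for d in SHARPENINGS)

def superfalse(sentence):
    """False on every sharpening."""
    return not any(sentence(d) for d in SHARPENINGS)

def indeterminate(sentence):
    """True on some sharpenings, false on others: a truth-value gap."""
    return not supertrue(sentence) and not superfalse(sentence)

harry = 4000  # a borderline case: below some cutoffs, above others
harry_is_bald = lambda d: bald_on(harry, d)

print(supertrue(harry_is_bald))      # False
print(superfalse(harry_is_bald))     # False
print(indeterminate(harry_is_bald))  # True: neither true nor false
```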

This slight tweak to the classical picture leaves a lot unchanged. Consider the tautologies of classical logic, for example. Every classical interpretation makes them true, and so each sharpening is guaranteed to make them true. Any classical tautology will therefore be supertrue. So, at least at this level, classical logic is retained. (It’s a matter of dispute whether more subtle departures from classical logic are involved, and whether this matters: see (cite cite cite).)
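In the toy model above, for instance, the law of excluded middle applied to borderline Harry comes out supertrue, even though neither disjunct does:

```python
# Continuing the toy model: "Harry is bald or Harry is not bald"
# holds on every sharpening (each draws *some* line), so it is
# supertrue -- yet neither disjunct is.
lem = lambda d: bald_on(harry, d) or not bald_on(harry, d)

print(supertrue(lem))                              # True
print(supertrue(harry_is_bald))                    # False
print(supertrue(lambda d: not bald_on(harry, d)))  # False
```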

So long as (super)truth is a constraint on knowledge, supervaluationists can explain why we can’t know whether borderline bald Harry is bald. On some developments of the position, they can go interestingly beyond this explanation of ignorance. One might argue that, insofar as one should only invest credence in a claim to the extent one believes it true, obvious truth-value gaps are cases where we should utterly reject (invest no credence in) both the claim and its negation. This goes beyond mere lack of knowledge, for it means the information that such-and-such is borderline gives us a direct fix on what our degree of belief in such-and-such should be. (By contrast, on the epistemicist picture, though we can’t gain knowledge, we’ve as yet no reason to think that inquiry couldn’t raise or lower the probability we assign to Harry’s being bald, making further investigation of the point perfectly sensible, despite initial appearances.)

What about the sorites? Every sharpening draws a line between bald and non-bald, so “there is a single hair that makes the difference between baldness and non-baldness” will be supertrue. However, no individual conjunction of form (N) will be true—many of them will instead be truth-value gaps, true on some sharpenings and false on others. (This highlights one of the distinctive (disturbing?) features of supervaluationism: the ability of disjunctions and existential generalizations to be true even when no disjunct or instance is.) As truth-value gaps, instances of (N*) will also fail to be true, so some of the premises needed for the sorites paradox are not granted.
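The toy model makes the combination easy to see. Reading (N) here as the conjunction “the man with n hairs is bald and the man with n+1 hairs is not”:

```python
# The sorites in the toy model. On every sharpening *some*
# (N)-style conjunction holds, so the existential generalization
# ("a single hair makes the difference") is supertrue; but no
# particular conjunction is true on all sharpenings.
HAIR_COUNTS = range(10_001)

def N(n):
    """(N): the man with n hairs is bald, the man with n+1 is not."""
    return lambda d: bald_on(n, d) and not bald_on(n + 1, d)

some_cutoff = lambda d: any(N(n)(d) for n in HAIR_COUNTS)

print(supertrue(some_cutoff))                     # True
print(any(supertrue(N(n)) for n in HAIR_COUNTS))  # False
print(indeterminate(N(2999)))                     # True: a gap
```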

(There is one thing that some supervaluationists can point to in an attempt to explain the appeal of the paradoxical premises. Suppose that—as I think is plausible—we take as the primary data in the sorites the horribleness of the conjunctions (N). These are untrue, and so (for one kind of supervaluationist) should be utterly rejected. It’s tempting, though mistaken, to try to express that rejection by accepting a negated form of the same claim—and that is the move that takes us from the rejection of each of (N) to the acceptance of each of (N*). This temptation is one possible source of the “seductiveness” of sorites reasoning.)

Two points to bear in mind about supervaluationism. First, the supervaluationist endorses the claim that “there is a cut-off” — a pair of men differing by only one hair, with the first bald and the second not. Insofar as one considered that first-order claim to be what was most incredible about (say) epistemicism, one won’t feel that much of an advance has been made. The supervaluationist must try to persuade you that once you understand the sense in which “there’s no fact of the matter” where that cut-off lies, the incredulity will dissipate. Second, many want to press the charge that the supervaluationist makes no progress over the classicist, for reasons of “higher-order vagueness”. The thought is that the task of explaining how a set of sharpenings gets selected by the meaning-fixing facts is no easier than the task of explaining how a single classical interpretation gets picked out. However, (a) the supervaluationist can reasonably argue that if she spells out the notion of “sharpening” in vague terms, she will regard the boundary between the sharpenings and the non-sharpenings as itself vague (see Keefe (cite)); (b) even if epistemicist and supervaluationist were both in some sense “committed to sharp boundaries”, the accounts they give of the nature of vagueness are vastly different, and we can evaluate each positive claim on its own merits.


One response to “Vagueness survey IV: supervaluations”

  1. This is my favorite one yet (perhaps because of my lingering sympathy for supervaluationism, but who knows). I especially like that you clearly indicated that we’re talking about ‘sharpenings’ of an *entire* language — I see the error on this point way too much. In any case, just a couple of recommendations.

I do like the term ‘sharpening’ a lot (especially since it works nicely with ‘sharp’ in the section on epistemicism). However, I do think it’d be good to throw in the tongue-twister that is ‘precisification’ somewhere in there — if only because of how common it is in the literature. If there’s one philosophical term of art I associate with vagueness, it’s that one — so if you’re going to include any of the old-school terms, I’d recommend that one.

Another recommendation — this one less important — is that (if you have the space) it might be worth mentioning, very briefly, that people’s willingness to accept non-classical logic for vagueness may be motivated at least in part by factors *other* than vagueness — say, the truth paradoxes, other kinds of indeterminacy, etc. Needless to say, I’d bet good money that a lot of supervaluationists feel extra-comfortable giving up bivalence (or LEM & even LNC, for that matter) for reasons entirely independent of vagueness. The fact that vagueness might also give us reason to throw out bivalence is then icing on the cake, so to speak.
