Monthly Archives: October 2009

Vagueness survey paper: V (rejecting excluded middle)

So this was the biggest selection-problem I faced: there are so many many-valued systems out there, and so many ways to think about them. Which to choose?

I would have liked to talk a bunch about the interpretation of “third truth values”. This often seems to me to be glossed over badly. In the vagueness literature, it’s often assumed that once we’ve got a third truth value, we might as well be degree theorists. But it seems to me that “gap” interpretations of the third truth value are super-different from the “half-true” interpretation. To make the case that this is more than a verbal dispute, though, I think we have to say a whole lot more about the cognitive role of indeterminacy, the role of logic, etc. All good stuff (in fact, very close to what I’m working on right now). But I chose not to go that way.

Another thing I could have done is talk directly about degree theories. Nick Smith has a new book-length treatment of them, which makes some really nice moves both in motivating and defending the account. And of course they’re historically popular—and “fuzzy logic” is what you always hear talked about in non-philosophical treatments. In Williamson’s big vagueness book, degree theories are really the focus of the chapter corresponding to this section.

On the other hand, I felt it was really important to get a representative of the “logic first” view into the picture—someone who really treated semantics kind of instrumentally, and who saw the point of talking about vagueness in a rather different way from the way it’s often presented in the extant survey books. And the two that sprang to mind here were Hartry Field and Crispin Wright. Of these, Crispin’s intuitionism is harder to set up, and has fewer connections to other many-valued theories. And his theory of a quandary cognitive role, while really interesting, just takes longer to explain than Hartry’s rejectionist suggestion. Wright’s agnosticism is a bit hard to explain too—I take it the view is supposed to be that we’re poised between the Williamsonian-style picture and an option where you assert the negation of bivalence—and the first seems unbelievable and the second incoherent, so we remain agnostic. But if something is incoherent, how can we remain agnostic? (So, actually, I think the better way to present the view is as agnosticism between bivalence-endorsing views and Field-style rejectionist views, albeit carried out in an intuitionist rather than a Kleene-based system. But if that’s the way to work things out, rejectionism is conceptually prior to agnosticism.)

So in the end I started with a minimal intro to the many-valued truth tables, a brief pointer in the direction of extensions and interpretations, and then concentrated on the elements of the Field view—the Quinean translate-and-deflate theory of language, the logical revisionism (and instrumentalism about model theory), and the cognitive role that flows from it.

Just as with the epistemicism section, there are famous objections I just didn’t have room for. The whole issue over the methodological ok-ness of revising logic, and the burdens that entails… nothing will remain of that but a few references.

Vagueness survey paper: section V

REVISIONARY LOGIC: MANY-VALUED SETTINGS

A distinctive feature of supervaluationism was that while it threw out bivalence (“Harry is bald” is either true or false) it preserved the corresponding instance of excluded middle (“Harry is either bald or not bald”). Revising the logic in a more thorough-going way would allow for a coherent picture where we can finally reject the claim “there is a single hair that makes the difference between bald and non-bald” without falling into paradox.

“Many valued” logics can be characterized by increasing the number of truth-values we work with—perhaps to three, perhaps infinitely many—and offering generalizations of the familiar stories of how logical constants behave to accommodate this tweak. There are all sorts of ways in which this can be developed, and even more choice points in extracting notions of “consequence” out of the mass of relations that then become available.

Here is a sample many-valued logic, for a propositional language with conjunctions, disjunctions and negations. To characterize the logic, we postulate three values (let’s call them, neutrally, “1”, “½” and “0”). For the propositional case, the idea will be that each atomic sentence will be assigned some one of the truth values; and then the truth values get assigned to complex sentences recursively. Thus, a conjunction will get that truth value which is the minimum of the truth values of its conjuncts; a disjunction will get that truth value which is the maximum of the truth values of its disjuncts; and a negation will get assigned 1 minus the truth value of the claim negated (you can easily check that, ignoring the value ½ altogether, we get back exactly classical truth-tables).

A many-valued logic (the strong Kleene logic) is defined by looking at the class of those arguments that are “1-preserving”, i.e. such that when all the premises are value 1, the conclusion is value 1 too. It has some distinctive features; e.g. excluded middle “Av~A” is no longer a tautology, since it can be value ½ when A is value ½. “A&~A” is still treated as a logical contradiction, on the other hand (every sentence whatsoever follows from it), since it will never attain value 1, no matter what value A is assigned.
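(An aside that isn’t part of the survey draft: here’s a minimal sketch in Python of the three-valued scheme just described, using 1, 0.5 and 0 for the three values; the function names are my own. It just checks the two features noted above: Av~A can drop to value ½, while A&~A never attains value 1.)

def conj(a, b): return min(a, b)   # conjunction: minimum of the two values
def disj(a, b): return max(a, b)   # disjunction: maximum of the two values
def neg(a): return 1 - a           # negation: 1 minus the value

values = (1, 0.5, 0)

# Excluded middle can take value 0.5, so it isn't a tautology by the
# "1-preservation" standard:
print([disj(a, neg(a)) for a in values])   # [1, 0.5, 1]

# A contradiction never attains value 1, whatever value A is assigned:
print([conj(a, neg(a)) for a in values])   # [0, 0.5, 0]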

One option at this point is to take this model theory seriously—much as the classicist and supervaluationist do, and hypothesise that natural language has (or is modelled by?) some many-valued interpretation (or set of interpretations?). This view is a major player in the literature (cite cite cite).

For the remainder of this section, I focus on a different framework in which to view the proposal to revise logic. This begins with the rejection of the very idea of an “intended interpretation” of a language, or a semantic treatment of truth. Rather, one treats truth as a “device of disquotation”—perhaps introduced by means of the kind of disquotational principles mentioned earlier (such a device proves useful, argue its fans, in increasing our expressive power—allowing us to endorse or deny such claims as “everything the pope says is true”). The disquotation principles capture all that there is to be said about truth, and the notion doesn’t need any “model-theoretic foundation” in an “intended interpretation” to be in good standing.

In the first instance, such a truth-predicate is “local”—only carving the right distinctions in the very language for which it was introduced (via disquotation). To allow us to speak sensibly of true French sentences (for example), Field (cite) following Quine (cite) analyzes

“la neige est blanche” is true

as:

there’s a good translation of “la neige est blanche” into English, such that the translated version is disquotationally true.

Alongside this disquotationism is a distinctive attitude to logic. On Field’s view, logical consequence does not need to be “analyzed” in model-theoretic terms. Consequence is taken as primitive, and model theory seen as a useful instrument for characterizing the extension of this relation.

Such disquotationism elegantly avoids the sort of worries plaguing the classical, supervaluational, and traditional many-valued approaches – in particular, since there’s no explanatory role for an “intended interpretation”, we simply avoid worries about how such an intended interpretation might be settled on. Moreover, if model theory is a mere instrument, there’s no pressure to say anything about the nature of the “truth values” it uses.

So far, no appeal to revisionary logic has been made. The utility of hypothesizing a non-classical (Kleene-style) logic rather than a classical one comes in explaining the puzzles of vagueness. For Field, when indeterminacy surfaces, we should reject the relevant instance of excluded middle. Thus we should reject “either Harry is bald or he isn’t”—and consequently, also reject the claim that Harry is bald (from which the former follows, in any sensible logic). We then must reject “Harry is bald and the man with one hair less isn’t” (again because something we reject follows from it). So, from our rejection of excluded middle, we derive the core data behind the “little by little” worry—rejection of the horrible conjunctions (N).

So, like one form of supervaluationism, Field sees borderline cases of vague predicates as characterized by forced rejection. No wonder further inquiry into whether Harry is bald seems pointless. Again in parallel, Field accommodates rejection of (N) without accepting (N*)—and this is at least a start on explaining where our perplexity over the sorites comes from. Unlike the supervaluationist, he isn’t committed to the generalization “there is some bald guy next to a non-bald guy”—the Kleene logic (extended to handle quantifiers) enforces rejection of this claim.

One central concern about this account of vagueness (setting aside very general worries about the disquotational setting) is whether in weakening our logic we have thrown the baby out with the bathwater. Some argue that it is methodologically objectionable to revise logic without overwhelming reason to do so, given the way that classical assumptions are built into successful, progressive science even when vagueness is clearly in play (think of applications of physics, or the classical assumptions in probability and decision theory, for example). This is an important issue: but let’s set it aside for now.

More locally, the logic we’ve looked at so far seems excessively weak in expressive power. It’s not clear, for example, how one should capture platitudes like “if someone is balder than a bald person, that person too is bald” (translating the “if” here as a kind of disjunction or negated conjunction, as is standard in the classical case, we get something entailing instances of excluded middle we want to reject – we do not seem to yet have in the system a suitable material conditional). For another thing, we haven’t yet said anything about how the central notion “it is determinate whether” fits in. It seems to have interesting logical behaviour—for example, the key connection between excluded middle and indeterminacy would be nicely captured if from Av~A one could infer “it is determinate whether A”. Much of Field’s positive project involves extending the basic Kleene logic to accommodate a suitable conditional and determinacy operators, in particular to capture thoroughly “higher-order” kinds of vagueness (borderline cases of borderline cases, and so on).

Vagueness survey IV: supervaluations

Ok, part IV of the survey article. This includes the second of three sample theories, and I’ve chosen to talk about what’s often called “the most popular account of vagueness”: supervaluationism.

I’m personally pretty convinced there are (at least) two different *types* of theses/theories that get the label “supervaluationism” (roughly, the formal logic-semantics stuff, and the semantic indecision story about the source of indeterminacy). And even once you have both on board, I reckon there are at least three *very different* ways of understanding what the theory is saying (that’s separate from, but related to, the various subtle issues that Varzi picks out in the recent Mind paper).

But what I want to present is one way of working out this stuff, so I’m keeping fairly close to the Fine-Keefe (with a bit of Lewis) axis that I think is most people’s reference point around here. I try to strip out the more specialist bits of Fine—which comes at a cost, since I don’t mention “penumbral connection” here. But again, I wanted to keep the focus on the basics of the theory, and the application to the central puzzles, so stuff had to come out.

One thing I’m fairly unapologetic about is presenting supervaluationism as a theory where truth=supertruth. It seems terminologically bizarre to do otherwise—given that “supervaluations” have their origins in a logico-semantic technique for defining truth, when you have multiple truth-on-an-interpretation relations to play with. I’m happy to think that “semantic indecision” views can be paired with a classical semantics, with the semantics for definiteness given in terms of delineations/sharpenings (as, indeed, can the epistemicist D-operator). But, as a matter of terminology, I don’t see these as “supervaluational”. “Ambiguity” views like McGee and McLaughlin’s are a mixed case, but I don’t have room to fit them in. In any case: bivalence failure is such a common thought that I thought it was worth giving it a run for its money, straightforwardly construed, and I do think that it leads to interesting and distinctive positions on what’s going on in the sorites.

Actually, the stuff about confusing rejection with accepting-negation is again a bit of a choice-point. I could have talked some about the “confusion hypothesis” way of explaining the badness of the sorites (and work in particular by Fine, Keefe and Greenough on this—and a fabulous paper by Brian Weatherson that he’s never published, “Vagueness and pragmatics”). But when I tried this, it turned out to be a bit tricky to explain and to take quite a bit of thinking about—and I felt the rejection stuff (also a kind of “confusion hypothesis”) was more organically related to the way I was presenting things. I need to figure out some references for this. I’m sure there must be lots of people making the “since it’s untrue, we’re tempted to say ~p” move, which is essentially what’s involved. I’m wondering whether Greg Restall has some explicit discussion of the temptation of the sorites in his paper on denial and multiple conclusions and supervaluations…

One place below where a big chunk of stuff had to come out to get me near the word limit was the material about the inconsistency of the T-scheme and denials of bivalence. It fitted in very naturally, and I’d always planned for that stuff to go in (just because it’s such a neat and simple little argument). But there just wasn’t room, and of all the things I was looking at, it seemed the most like a digression. So, sadly, all that remains is “cite cite cite” where I’m going to give the references.

One thing that does put in a brief appearance at the end of the section is the old bugbear: higher order vagueness. I don’t discuss this very much in the piece, which again is a bit weird, but then it’s very hard to state simply what the issue is there (especially as there seem to be at least three different things going by that name in the literature, the relations between them not being very obvious).

Another issue that occurred to me here is whether I should be strict in separating/distinguishing semantics and model theory. I do think the distinction (between set-theoretic interpretations, and axiomatic specifications of those interpretations) is important, and in the first instance what we get from supervaluations is a model theory, not a semantics. But in the end it seemed not to earn its place. Actually: does anyone know of somewhere where a supervaluational *axiomatic semantics* is given (as opposed to supervaluational models)? I’m guessing it’ll look something like: [“p”]=T on d iff at d, p—i.e. we’ll carry the relativization through the axioms just as we do in the modal case.

The section itself:

Survey paper on vagueness: part IV

REVISIONARY SEMANTICS: SUPERVALUATIONS

A very common thought about borderline cases is that they’re neither true nor false. Given that one can only know what is true, this would explain our inevitable lack of knowledge in borderline cases. It’s often thought to be a rather plausible suggestion in itself.

Classical semantics builds in the principle that each meaningful claim is either true or false (bivalence). So if we’re to pursue the thought that borderline claims are truth value gaps, we must revise our semantic framework to some extent. Indeed, we can know in advance that any semantic theory with truth-value gaps will diverge from classical semantics even on some of the most intuitively plausible (platitudinous seeming) consequences: for it can be shown under very weak assumptions that truth value gaps are incompatible with accepting disquotational principles such as: “Harry is bald” is true if and only if Harry is bald (see cite cite cite).

How will the alteration of the classical framework go? One suggestion goes under the heading “supervaluationism” (though as we’ll see, the term is somewhat ambiguous).

As an account of the nature of vagueness, supervaluationism is a view on which borderlineness arises from what we might call “semantic indecision”. Think of the sort of things that might fix the meanings of words: conventions to apply the word “bald” to clear cases; conventions to apply “not bald” to clear non-cases; various conventions of a more complex sort—for example, that anyone with less hair than a bald person should count as bald. The idea is that when we list these and other principles constraining the correct interpretation of language, we’ll be able to narrow the space of acceptable (and entirely classical) interpretations of English down a lot—but not to the single intended interpretation hypothesized by classical semantics. At best, what we’ll get is a cluster of candidates. Let’s call these the sharpenings for English. Each will assign to each vague predicate a sharp boundary. But very plausibly the location of such a boundary is something the different sharpenings will disagree about. A sentence is indeterminate (and if it involves a vague predicate, is a borderline case) just in case there’s a sharpening on which it comes out true, and another on which it comes out false.

As an account of the semantics of vague language, the core of the supervaluationist proposal is a generalization of the idea, found in classical semantics, that for something to be true is for it to be true at the intended interpretation. Supervaluationism offers a replacement: it works with a set of “co-intended interpretations”, and says that for a sentence to be true, it must be true at all the co-intended interpretations (this is sometimes called “supertruth”). This dovetails nicely with the semantic indecision picture, since we can take the “co-intended interpretations” to be what we called above the sharpenings—and hence when a sentence is indeterminate (true on one sharpening and false on another) neither it nor its negation will be true: and hence we have a truth value gap. (The core proposal for defining truth finds application in settings where the “semantic indecision” idea seems inappropriate: see for example Thomason’s treatment of the semantics of branching time in his (cite).)

This slight tweak to the classical picture leaves a lot unchanged. Consider the tautologies of classical logic, for example. Every classical interpretation will make them true; and so each sharpening is guaranteed to make them true. Any classical tautology will be supertrue, therefore. So – at least at this level – classical logic is retained. (It’s a matter of dispute whether more subtle departures from classical logic are involved, and whether this matters: see (cite cite cite).)

So long as (super)truth is a constraint on knowledge, supervaluationists can explain why we can’t know whether borderline bald Harry is bald. On some developments of the position, they can go interestingly beyond this explanation of ignorance. One might argue that insofar as one should only invest credence in a claim to the extent one believes it true, obvious truth-value gaps are cases where we should utterly reject (invest no credence in) both the claim and its negation. This goes beyond mere lack of knowledge, for it means the information that such-and-such is borderline gives us a direct fix on what our degree of belief should be in such-and-such (by contrast, on the epistemicist picture, though we can’t gain knowledge, we’ve as yet no reason to think that inquiry couldn’t raise or lower the probability we assign to Harry being bald, making investigation of the point seem perfectly sensible, despite initial appearances).

What about the sorites? Every sharpening draws a line between bald and non-bald, so “there is a single hair that makes the difference between baldness and non-baldness” will be supertrue. However, no individual conjunction of the form (N) will be true—many of them will instead be truth-value gaps, true on some sharpenings and false on others (this highlights one of the distinctive (disturbing?) features of supervaluationism—the ability of disjunctions and existential generalizations to be true, even if no disjunct or instance is). As truth-value gaps, instances of (N*) will also fail to be true, so some of the needed premises for the sorites paradox are not granted.
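(Another aside, not part of the survey draft: a toy Python sketch of supertruth over sharpenings, with made-up names and an arbitrarily chosen range of admissible cut-offs. It illustrates the pattern just described: the existential “there is a cut-off” comes out supertrue, while no particular conjunction of the form (N) does.)

sharpenings = range(4000, 6001)     # hypothetical admissible cut-offs for "bald"

def bald_on(n, k):                  # on sharpening k, man n counts as bald iff n < k
    return n < k

def supertrue(sentence):            # supertrue = true on every sharpening
    return all(sentence(k) for k in sharpenings)

def cutoff_at(n):                   # the conjunction (N): man n is bald, man n+1 isn't
    return lambda k: bald_on(n, k) and not bald_on(n + 1, k)

def exists_cutoff(k):               # "some man is bald and the next man isn't"
    return any(cutoff_at(n)(k) for n in range(10000))

print(supertrue(exists_cutoff))                             # True
print(any(supertrue(cutoff_at(n)) for n in range(10000)))   # False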

(There is one thing that some supervaluationists can point to in an attempt to explain the appeal of the paradoxical premises. Suppose that—as I think is plausible—we take as the primary data in the sorites the horribleness of the conjunctions (N). These are untrue, and so (for one kind of supervaluationist) should be utterly rejected. It’s tempting, though mistaken, to try to express that rejection by accepting a negated form of the same claim—that is the move that takes us from the rejection of each of (N) to the acceptance of each of (N*). This temptation is one possible source of the “seductiveness” of sorites reasoning.)

Two points to bear in mind about supervaluationism. First, the supervaluationist endorses the claim that “there is a cut-off” — a pair of men differing by only one hair, with the first bald and the second not. Insofar as one considered that first-order claim to be what was most incredible about (say) epistemicism, one won’t feel much advance has been made. The supervaluationist must try to persuade you that once one understands the sense in which “there’s no fact of the matter” where that cut-off is, the incredulity will dissipate. Second, many want to press the charge that the supervaluationist makes no progress over the classicist, for reasons of “higher order vagueness”. The thought is that the task of explaining how a set of sharpenings gets selected by the meaning-fixing facts is no easier or harder than explaining how a single classical interpretation gets picked out. However, (a) the supervaluationist can reasonably argue that if she spells out the notion of “sharpening” in vague terms, she will regard the boundary between the sharpenings and non-sharpenings as vague (see Keefe (cite)); (b) even if epistemicist and supervaluationist were both in some sense “committed to sharp boundaries”, the accounts they give of the nature of vagueness are vastly different, and we can evaluate their positive claims on their own merits.

Vagueness survey paper III (epistemicism)

This is the third section of my first draft of a survey paper on vagueness, which I’m distributing in the hope of getting feedback, from the picky to the substantive!

In the first two sections, I introduced some puzzles, and said some general methodology things about accounting for vague language—in effect, some of the issues that come up in giving a theory in a vague metalanguage. The next three sections are the three sample accounts I look at. The first, flowing naturally on from “textbook semantic theories”, is the least revisionary: semantics in a classical setting.

Now, if I had lots of time, I’d talk about Delia Graff Fara’s contextualism, the “sneaky classicism” of people like McGee and McLaughlin (and Dave Barnett, and Cian Dorr, and Elizabeth Barnes [joint with me in one place!]). But there’s only so much I can fit in, and Williamson’s epistemicism seems the natural representative theory here.

Then there’s the issue of how to present it. I’m always uncomfortable when people use “epistemicism” as a synonym for just the straightforward reading of classical semantics—sharp cut-offs and so on (somebody suggested the name “sharpism” for that view–I can’t remember who though). The point about epistemicism—why people find it interesting—is surely not that bit of it. It’s that it seems to predict where others retrodict; and it seems principled where others seem ad hoc. Williamson takes formulations of parts of the basic toolkit of epistemology (safety principles), and uses these to explain borderlineness (and ultimately the higher-order structure of vagueness). That’s what’s so super-cool about it.

I’m a bit worried that in the current version I’ve downplayed the sharpist element so much. After all, that’s where a lot of the discussion has gone on. In part, that betrays my frustration with the debate—there are some fun little arguments around the details, but on the big issue I don’t see that much progress has been made. It feels to me like we’ve got a bit of a standoff. At minimum I’m going to have to add a bunch of references to this stuff, but I wonder what people think about the balance as it is here.

I have indulged a little bit in raising one of the features that always puzzles me about epistemicism: I see that Williamson has an elegant explanation about why we can’t ever identify a cut-off. But I just don’t see what the story is about why we find the existential itself so awful. The analogy to lottery cases seems helpful here. Anyway, on with the section:

Vagueness Survey Paper, Part III.

VAGUENESS IN A CLASSICAL SETTING: EPISTEMICISM

One way to try to explain the puzzles of vagueness is to look to resources outwith the philosophy of language. This is the direction pursued by epistemicists such as Timothy Williamson.

One distinctive feature of the epistemicist package is retaining classical logic and semantics. It’s a big advantage of this view that we can keep the textbook semantic clauses described earlier, as well as seemingly obvious truths such as that “Harry is bald” is true iff Harry is bald (revisionary semantic theorists have great trouble saving this apparently platitudinous claim). Another part of the package is a robust face-value reading of what’s involved in doing this. There really is a specific set that is the extension of “bald”—a particular cut-off in the sorites series for bald, and so on (some one of the horrible conjunctions given earlier is just true). Some other theorists say these things but try to sweeten the pill—to say that admitting all this is compatible with saying that in a strong sense there’s no fact of the matter where this cut-off is (see McGee and McLaughlin; Barnett; Dorr; Barnes). Williamson takes the medicine straight: incredible as it might sound, our words really do carve the world in a sharp, non-fuzzy way.

The hard-nosed endorsement of classical logic and semantics at a face-value reading is just scene-setting: the real task is to explain the puzzles that vagueness poses. If the attempt to make sense of “no fact of the matter” rhetoric is given up, what else can we appeal to?

As the name suggests, Williamson and his ilk appeal to epistemology to defuse the puzzle. Let us consider borderlineness first. Start again from the idea that we are ignorant of whether Harry is bald, when he is a borderline case. The puzzle was to explain why this was so, and why the unknowability was of such a strong and ineliminable sort.

Williamson’s proposal makes use of a general constraint on knowledge: the idea that in order to know that p, it cannot be a matter of luck that one’s belief that p is true. Williamson articulates this as the following “safety principle”:

For “S knows that p” to be true (in a situation s), “p” must be true in any marginally different situation s* (where one forms the same beliefs using the same methods) in which “S believes p” is true.

The idea is that the situations s* represent “easy possibilities”: falsity at an easy possibility makes a true belief too lucky to count as knowledge.

This first element of Williamson’s view is independently motivated epistemology. The second element is that the extensions of vague predicates, though sharp, are unstable. They depend on exact details of the patterns of use of vague predicates, and small shifts in the latter can induce small shifts in their (sharp) boundaries.

Given these two elements, we can explain our ignorance in borderline cases. A borderline case of “bald” is just one where the boundary of “bald” is close enough that a marginally different pattern of usage could induce a switch from (say) Harry being a member of the extension of “bald” to not being in that extension. If that’s the case, then even if one truly believes that Harry is bald, there will be an easy possibility where one forms the same beliefs for the same reasons, but “Harry is bald” is false. Applying the safety principle, the belief can’t count as knowledge.

Given that the source of ignorance resides in the sharp but unstable boundaries of vague predicates, one can see why gathering information about hair-distributions won’t overcome the relevant obstacle to knowledge. This is why the ignorance in borderline cases seems ineliminable.

What about the sorites? Williamson, of course, will say that one of the premises is false—there is a sharp boundary, we simply can’t know where it is. It’s unclear whether this is enough to “solve” the sorites paradox, however. As well as knowing which premise to reject, we’d like to know why we found the case paradoxical in the first place. Why do we find the idea of a sharp cut-off so incredible (especially since there’s a very simple, valid argument from obvious premises to this effect available)? Williamson can give an account of why we’d never feel able to accept any one of the individual conjunctions (Man n is bald and man n+1 is not). But that doesn’t explain why we’re uneasy (to say the least) with the thought that some such conjunction is true—i.e. that there is a sharp cut-off. I’ll never know in advance which ticket will win a lottery; but I’m entirely comfortable with the thought that one will win. Why don’t we feel the same about the sorites?

Vagueness survey paper: II (vague metalanguages)

Ok, so here’s part II of the paper. One thing that struck me having taught this stuff over the last few years is how much the “vague metalanguages” thing strikes people as ad hoc if brought in at a late stage—it feels that we were promised something—effectively, something that’d relate vague terms to a precise world—and we’ve failed. And so what I wanted to do is put that debate up front, so we can be aware right from the start that there’ll be an issue here.

When writing this, it seemed to me that effectively the Evans argument arises naturally when discussing this stuff. So it seemed quite nice to put in those references.

Sorry for the missing references, by the way—that’s going to be fixed once I look up what the style guide says about them. (And I’ll add some extras).

USING VAGUENESS TO HANDLE VAGUENESS

One textbook style of semantic theory assigns extensions (sets of objects) as the semantic values of vague terms (Heim & Kratzer xxxx). This might seem dubious. Sets of objects as traditionally conceived are definite totalities—each object is either definitely a member, or definitely not a member. Wouldn’t associating one such totality with a vague predicate force us, unjustifiably, to “draw sharp boundaries”?

On the other hand, it seems that we can easily say which set should be the extension of “is red”:

[[“red”]]={x: x is red}

There’s no need for this to be directly disquotational:

[[“rouge”]]={x: x is red}

The trick here is to use vague language (in the theorist’s metalanguage) to say what the extension of the object-language predicate should be. If this is legitimate, there’s no obstacle to taking textbook semantics over wholesale.

Perhaps the above seems unsatisfactory: one is looking for illumination from one’s semantic clauses. So, for example, one might want to hear something in the semantics about dispositions to judge things red, to reflect the response-dependent character of redness. It’s highly controversial whether this is a reasonable demand. One might suspect that it mixes up the jobs of the semanticist (to characterize the representational properties of words) and the metaphysician (to say what redness is, in more fundamental terms). But even if one wants illumination wherever possible in one’s semantic theory, there’s not even a prima facie problem here, so long as one is still able to work within a vague metalanguage. Thus Lewis, in discussing his semantic treatment of counterfactuals in terms of the (admittedly vague) notion of similarity, says “I … seek to rest an unfixed distinction upon a swaying foundation, claiming that the two sway together rather than independently.” (Lewis, 1973, p.92). While Lewis recognizes that “similarity” is vague, he thinks this is exactly what we need to faithfully capture the vagueness of counterfactuals in an illuminating way. One might see trouble for the task of constructing a semantics, if one imposed the requirement that the metalanguage should (at least ideally?) be perfectly “precise”. But one would have to be very clear about why such a strong requirement was being imposed.

Let us leave this worry to those bold enough to impose such constraints on the theorist of language. Are there further problems with textbook semantics?

One worry might be that the resources it appeals to are ill-understood. Let us go back to the thought that sets (as traditionally conceived) are definite totalities. Then borderline-bald Harry (say) is either definitely in or definitely out of any given set. But Harry had better be a borderline member of {x: x is bald}. Don’t we now have to go and provide a theory of these new and peculiar entities (“vague sets”) — who knows where that will lead us?

The implicit argument that we’re dealing with “new” entities here can be formulated as follows (where S is any set, traditionally conceived, of which Harry is definitely a member):

(1) Harry is definitely a member of S

(2) It is not the case that Harry is definitely a member of {x: x is bald}.

(3) So: S is not identical to {x: x is bald}

(Parallel considerations can be given for the non-identity of {x: x is bald} with sets Harry is definitely not a member of.) The argument seems compelling, appearing to rely only on the indiscernibility of identicals. If we suppose S and {x: x is bald} to be identical, then we should be able to swap one for the other in the following context without change of truth value:

Harry is definitely a member of ….

But (1) and (2) show us this doesn’t happen. (3) follows by reductio.

The issues this raises are discussed extensively in the literature on the “Evans-Salmon” argument (and in parallel debates on the indiscernibility of identicals in connection with modality and tense). One moral from that discussion is that the argument given above is probably not valid as it stands. Very roughly, “{x: x is bald}” can denote some precisely bounded set of entities, consistent with everything we’ve said, so long as it is indefinite which such totality it denotes. Interested readers are directed to (cite cite cite) for further discussion.

Vague metalanguages seem legitimate; and there’s no reason as yet to think that appeal to vaguely specified sets commits one to a novel “vague set theory”. But we still face the issue of how the distinctive puzzles of vagueness are to be explained.

Vagueness survey paper I (puzzles)

I’ve been asked to write a survey paper on vagueness. It can’t be longer than 6000 words. And that’s a pretty big topic.

I’ve been wondering how best to get some feedback on whether I’m covering the right issues, and, of course, whether what I’m doing is fair and accurate. So what I thought is that I’d post up chunks of my first draft of the paper here—maybe 1000 words at a time, and see what happens. So comments *very much* welcomed, whether substantive or picky.

Basically, the plan for the paper is that it be divided into five sections (it’s mostly organized around the model theory/semantics, for reasons to do with the venue). So first I have a general introduction to the puzzles of vagueness, of which I identify two: effectively, soriticality, and borderline cases. Then I go on to say something about giving a semantics for vagueness in a vague metalanguage. The next three sections give three representative approaches to “semantics”, and their take on the original puzzles. First, classical approaches (taking Williamsonian epistemicism as representative). Second, something based on Fine’s supervaluations. And third, many valued theories. I chose to focus on Field’s recent stuff, even though it’s perhaps not the most prominent, since it allows me to have some discussion of how things look when we have a translate-and-deflate philosophy of language rather than the sort of interpretational approach you often find. Finally, I have a wrap up that mentions some additional issues (e.g. contextualism), and alternative methodologies/approaches.

So that’s how I’ve chosen to cut the cake. Without more ado, here’s the first section.

Vagueness Survey Paper. Part I.

The puzzles of vagueness

Take away grains, one by one, from a heap of rice. At what point is there no longer a heap in front of you? It seems hard to believe that there’s a sharp boundary – a single grain of rice whose removal turns a heap into a non-heap. But if removing one grain can’t achieve this, how can removing a hundred do so? It seems small changes can’t make a difference to whether or not something is a heap; but big changes obviously do. How can this be, since big changes are nothing but small changes chained together?

Call this the “little by little” puzzle.

Pausing midway through removing grains from the original heap, ask yourself: “is what I have at this moment a heap?” At the initial stages, the answer will clearly be “yes”. At the late stages, the answer will clearly be “no”. But at intermediate stages, the question will generate perplexity: it’s not clearly right to say “yes”, nor is it clearly right to say “no”. A hedged response seems better: “it sorta is and sorta isn’t”, or “it’s a borderline case of a heap”. Those are fine things to say, but they’re not a direct answer to the original question: is this a heap? So what is the answer to that question when confronted with (what we can all agree to be) a borderline case of a heap?

Call this the “borderlineness” puzzle.

Versions of the “little by little” and “borderline case” puzzles are ubiquitous. As hairs fall from the head of a man as he ages, at what point is he bald? How could losing a single hair turn a non-bald man bald? What should one say of intermediate “borderline” cases? Likewise for colour terms: each colour patch in a series may be indiscriminable from the next (put two adjacent patches side by side and you couldn’t tell them apart). Yet if the patches vary slightly in the wavelength of light they reflect, chaining enough such cases might take one from pure red to pure yellow.

As Peter Unger (cite) emphasized, we can extend the idea further. Imagine an angel annihilating the molecules that make up a table, one by one. Certainly at the start of the process annihilating one molecule would still leave us with a table; at the end of the process we have a single molecule—no table that! But how could annihilating a single molecule destroy a table? It’s hard to see what terms (outside mathematics) do not give rise to these phenomena.

The little by little puzzle leads to the sorites paradox (from the Greek “soros”, meaning “heap”). Take a line of 10001 adjacent men, the first with no hairs, the last with 10000 hairs, with each successive man differing from the previous by the addition of a single hair (we call this a sorites series for “bald”). Let “Man N” name the man with N hairs.

Obviously, Man 0 is bald. Man 10000 is not bald. Furthermore, the thought that “a single hair can’t make a difference to baldness” seems entirely reasonable.

Now consider the following collection of horrible-sounding claims:

(1):       Man 0 is bald, and man 1 is not bald.

(2):       Man 1 is bald, and man 2 is not bald.

…

(10000): Man 9999 is bald, and man 10000 is not bald.

It seems that each of these must be rejected, if anything in the vicinity of the “little differences can’t make a difference” principle is right. But if we reject the above, surely we must accept their negations:

(1*):     it is not the case that: Man 0 is bald, and man 1 is not bald.

(2*):     it is not the case that: Man 1 is bald, and man 2 is not bald.

…

(10000*): it is not the case that: Man 9999 is bald, and man 10000 is not bald.

But given the various (N*), and the two obvious truths about the extreme cases, a contradiction follows.

One way you can see this is by noting that each (N*) is (classically) equivalent to the material conditional reading of:

(N**) if Man N-1 is bald, then Man N is bald

Since Man 0 is bald, a series of Modus Ponens inferences allows us to derive that Man 10000 is bald, contrary to our assumptions.
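In symbols (writing B(n) for “Man n is bald”; notation introduced here just for this display, not the paper’s own), each (N*) and the resulting chain can be set out as:

\neg\big(B(n-1) \wedge \neg B(n)\big) \;\dashv\vdash\; B(n-1) \rightarrow B(n)

B(0),\quad B(0) \rightarrow B(1),\quad B(1) \rightarrow B(2),\quad \ldots,\quad B(9999) \rightarrow B(10000) \;\vdash\; B(10000)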

Alternatively, one can reason with the (N*) directly. Suppose that Man 9999 is bald. As we already know that Man 10000 is not bald, this contradicts (10000*). So, by reductio, Man 9999 is not bald. Repeat, and one eventually derives that Man 0 is not bald, contrary to our assumptions.

Whichever way we go, a contradiction follows from our premises, so we must either find some way of rejecting seemingly compelling premises, or find a flaw in the seemingly obviously valid reasoning.

We turn next to the puzzle of borderlineness: given that Harry is intermediate between clear cases and clear non-cases of baldness, “Is Harry bald?” seems to have no good, direct answer.

There are familiar cases where we cannot answer such questions: if I’ve never seen Jimmy I might be in no position to say whether he’s bald, simply because I don’t know one way or the other. And indeed, ignorance is one model one could appeal to in the case of borderlineness. If we simply don’t know whether or not Harry is bald, that’d explain why we can’t answer the question directly!

This simply moves the challenge one stage back, however. Why do we lack knowledge? After all, it seems we can know all the relevant underlying facts (the number and distribution of hairs on a man’s head). Nor does there seem to be any way of mounting a sensible inquiry into the question, to resolve the ignorance, unlike in the case of Jimmy. What kind of status is this, where the question of baldness is not only something we’re in no position to answer, but where we can’t even conceive of how to go about getting into a position to answer it? What explains this seemingly inevitable absence of knowledge?

A final note on borderline cases. It’s often said that if Harry is a borderline case of baldness, then it’s indefinite or there’s no fact of the matter or it is indeterminate whether Harry is bald, and I’ll talk this way myself. Now, indeterminacy may be a more general phenomenon than vagueness (it’s appealed to in cases that seem to have little to do with sorites series: future contingents, conditionals, quantum physics, set theory); but needless to say, labeling borderline cases as “indeterminate” doesn’t explain what’s going on with them unless one has a general account of indeterminacy to appeal to.

Nominalizing statistical mechanics

Frank Arntzenius gave the departmental seminar here at Leeds the other day. Given I’ve been spending quite a bit of time just recently thinking about the Fieldian nominalist project, it was really interesting to hear about his updating and extension of the technical side of the nominalist programme (he’s working on extending it to differential geometry, gauge theories and the like).

One thing I’ve been wondering about is how theories like statistical mechanics fit into the nominalist programme. These were raised as a problem for Field in one of the early reviews (by Malament). There’s a couple of interesting papers recently out in Philosophia Mathematica on this topic, by Glen Meyer and Mark Colyvan/Aidan Lyon. Now, one of the assumptions, as far as I can tell, is that even sticking with the classical, Newtonian framework, the Field programme is incomplete, because it fails to “nominalize” statistical mechanical reasoning (in particular, stuff naturally represented by measures over phase space).

Now one thing that I’ll mention just to set aside is that some of this discussion would look rather different if we increased our nominalistic ontology. Suppose that reality, Lewis-style, contains a plurality of concrete, nominalistic, space-times—at least one for each point in phase space (that’ll work as an interpretation of phase space, right?). Then the project of postulating qualitative synthetic probability structure over such worlds, from which a representation theorem for the quantitative probabilities of statistical mechanics could be derived, looks far easier. Maybe it’s still technically or philosophically problematic. Just a couple of thoughts on this. From the technical side, it’s probably not enough to show that the probabilities can be represented nominalistically—we want to show how to capture the relevant laws. And it’s not clear to me what a nominalistic formulation of something like the past hypothesis looks like (BTW, I’m working with something like the David Albert picture of stat mechanics here). Philosophically, what I’ve described looks like a nominalistic version of primitive propensities, and there are various worries about treating probability in this primitive way (e.g. why should information about such facts constrain credences in the distinctive way information about chance seems to?). I doubt Field would want to go in for this sort of ontological inflation in any case, but it’d be worth working through it as a case study.

Another idea I won’t pursue is the following: Field in the 80’s was perfectly happy to take a (logical) modality as primitive. From this, and nominalistic formulations of Newtonian laws, presumably a nomic modality could be defined. Now, it’s one thing to have a modality, another thing to earn the right to talk of possible worlds (or physical relations between them). But given that phase space looks so much like we’re talking about the space of nomically possible worlds (or time-slices thereof), it would be odd not to look carefully at whether we can use nomic modalities to help us out.

But even setting these kind of resources aside, I wonder what the rules of the game are here. Field’s programme really has two aspects. The first is the idea that there’s some “core” nominalistic science, C. And the second claim is that mathematics, and standard mathematized science, is conservative over C. Now, if the core was null, the conservativeness claim would be trivial, but nobody would be impressed by the project! But Field emphasizes on a number of occasions that the conservativeness claim is not terribly hard to establish, for a powerful block of applied mathematics (things that can be modelled in ZFCU, essentially).

(Actually, things are more delicate than you’d think from Science without Numbers, as emerged in the JPhil exchange between Shapiro and Field. The upshot, I take it, is that if (a) we’re allowed second-order logic in the nominalistic core, or (b) we can argue that the best-justified mathematized theories aren’t quite the usual versions, but systematically weakened versions, then the conservativeness results go through.)

As far as I can tell, we can have the conservativeness result without a representation theorem. Indeed, for the case of arithmetic (as opposed to geometry and Newtonian gravitational theory) Field relies on conservativeness without giving anything like a representation theorem. I think, therefore, that there’s a heel-digging response to all this open to Field. He could say that phase-space theories are all very fine, but they’re just part of the mathematized superstructure—there’s nothing in the core which they “represent”, nor do we need there to be.

Now, maybe this is deeply misguided. But I’d like to figure out exactly why. I can think of two worries: one based on loss of explanatory power; the other on the constraint to explain applicability.

Explanations. One possibility is that nominalistic science without statistical mechanics is a worse theory than mathematized science including phase space formulations—in a sense relevant to the indispensability argument. But we have to treat this carefully. Clearly, there’s all sorts of ways in which mathematized science is more tractable than nominalized science—that’s Field’s explanation for why we indulge in the former in the first place. One objective of the Colyvan and Lyon article cited earlier is to give examples of the explanatory power of stat mechanical explanations, so that’s one place to start looking.

Here’s one thought about that. It’s not clear that the sort of explanations we get from statistical mechanics, cool though they may be, are of a relevantly similar kind to the “explanations” given in classical mechanics. So one idea would be to try to pin down this difference (if there is one) and figure out how they relate to the “goodness” relevant to indispensability arguments.

Applicability. The second thought is that the “mere conservativeness” line is appropriate either where the applicability of the relevant area of mathematics is unproblematic (as perhaps in arithmetic) or where there aren’t any applications to explain (the higher reaches of pure set theory). In other cases, like geometry, there is a prima facie challenge to tell a story about how claims about abstracta can tell us stuff about the world we live in. And representation theorems scratch this itch, since they show in detail how particular this-worldly structures can exactly call for a representation in terms of abstracta (so in some sense the abstracta are “coding for” purely nominalistic processes—“intrinsic processes” in Field’s terminology). Lots of people unsympathetic to nominalism are sympathetic to representation theorems as an account of the application of mathematics—or so the folklore says.

But, on the one hand, statistical mechanics does appear to feature in explanations of macro-phenomena; and on the other, the reason that talking about measures over some abstract “space” can be relevant to explaining facts about ripples on a pond is at least as unobvious as in the applications of geometry.

I don’t have a very incisive way to end this post. But here’s one thought I have, if the real worry is one of accounting for applicability rather than explanatory power. Why think in these cases that applicability should be explained via representation theorems? In the case of geometry, Newtonian mechanics etc, it’s intuitively appealing to think there are nominalistic relations that our mathematized theories are encoding. Even if one is a platonist, that seems like an attractive part of a story about the applicability of the relevant theories. But when one looks at statistical mechanics, is there any sense that its applicability would be explained if we found a way to “code” within Newtonian space-time all the various points of phase space (and then postulate relations between the codings)? It seems like this is the wrong sort of story to be giving here. That thought goes back, I guess, to the point raised earlier in the “modal realist” version: even if we had the resources, would primitive nominalistic structure over some reconstruction of configuration of phase space really give us an attractive story about the applicability of statistical mechanical probabilities?

But if representation theorems don’t look like the right kind of story, what is? Can the Lewis-style “best theory theory” of chance, applied to the stat mechanical case (as Barry Loewer has suggested), be wheeled in here? Can the Fieldian nominalist just appeal to (i) conservativeness and (ii) the Lewisian account of how the probability-invoking theory and laws get fixed by the patterns of nominalistic facts in a single classical space? Questions, questions…

Error theories and Revolutions

I’ve been thinking about Hartry Field’s nominalist programme recently. In connection with this (and a draft of a paper I’ve been preparing for the Nottingham metaphysics conference) I’ve been thinking about parallels between the error theories that threaten if ontology is sparse (e.g. nominalistic, or van Inwagenian), and scientific revolutions.

One (Moorean) thought is that we are better justified in our commonsense beliefs (e.g. “I have hands”) than we could be in any philosophical premises incompatible with them. So we should always regard “arguments against the existence of hands” as reductios of the premises that entail that one has no hands. This thought, I take it, extends to commonsense claims about the number of hands I possess. Something similar might be formulated in terms of the comparative strength of justification for (mathematicized) science as against the philosophical premises that motivate its replacement.

So presented, Field (for one) has a response: he argues in several places that we precisely lack good justification for believing in the existence of numbers. He simply rejects the premise of this argument.

A better presentation of the worry focuses, not on the relative justification for one’s beliefs, but on the conditions under which it is rational to change one’s beliefs. I presently have a vast array of beliefs that, according to Field, are simply false.

Forget issues of relative justification. It’s simply that the belief state I would have to be in to consistently accept Field’s view is very distant from my own—it’s not clear whether I’m even psychologically capable of genuinely disbelieving that if there are exactly two things in front of me, then the number of things in front of me is two. (If you don’t feel the pressure in this particular case, consider the suggestion that no macroscopic objects exist—then pretty much all of your existing substantive beliefs are false). Given my starting set of beliefs, it’s hard to see how speculative philosophical considerations could make it rational to change my views so much.

Here’s one way of trying to put some flesh on this general worry. In order to assess an empirical theory, we need to measure it against relevant phenomena to establish the theory’s predictive and explanatory power. But what do we take these phenomena to be? A very natural thought is that they include platitudinous statements about the positions of pointers on dials, statements about how experiments were conducted, and whatever is described by records of careful observation. But Field’s theory says that the content of numerical records of experimental data will be false; as will be claims such as “the data points approximate an exponential function”. On a van Inwagenian ontology, there are no pointers, and experimental reports will be pretty much universally false (at least on an error-theoretic reading of his position). Sure, each theorist has a view on how to reinterpret what’s going on. But why should we allow them to skew the evidence to suit their theory? Surely, given what we reasonably take the evidence to be, we should count their theories as disastrously unsuccessful?

But this criticism is based on certain epistemological presuppositions, and these can be disputed. Indeed Field, in the introduction to Realism, Mathematics and Modality, (preemptively) argues that the specific worries just outlined are misguided. He points to cases he thinks analogous, where scientific evidence has forced a radical change in view. He argues that when a serious alternative to our existing system of beliefs (and rules for belief-formation) is suggested to us, it is rational to (a) bracket relevant existing beliefs and (b) consider the two rival theories on their individual merits, adopting whichever one regards as the better theory. The revolutionary theory is not necessarily measured against what we believe the data to be, but against what the revolutionary theory says the data is. Field thinks, for example, that in the grip of a geocentric model of the universe, we should treat “the sun moves in absolute upward motion in the morning” as data. However, even for those within the grip of that model, when the heliocentric model is proposed, it’s rational to measure its success against the heliocentric take on what the proper data is (which, of course, will not describe sunrises in terms of absolute upward motion). Notice that on this model, there is effectively no “conservative influence” constraining belief-change—since when evaluating new theories, one’s prior opinions on relevant matters are bracketed.

If this is the right account of (one form of) belief change, then the version of the Moorean challenge sketched above falls flat (maybe others would do better). Note that for this strategy to work, it doesn’t matter that philosophical evidence is more shaky than scientific evidence which induces revolutionary changes in view—Field can agree that the cases are disanalogous in terms of the weight of evidence supporting revolution. The case of scientific revolutions is meant to motivate the adoption of a certain epistemology of belief revision. This general epistemology, in application to the philosophy of mathematics, tells us we need not worry about the massive conflicts with existing beliefs that so concerned the Mooreans.

On the other hand, the epistemology that Field sketches is contentious. It’s certainly not obvious that the responsible thing to do is to measure revisionary theory T against T’s take on the data, rather than against one’s best judgement about what the data is. Why bracket what one takes to be true, when assessing new theories? Even if we do want to make room for such bracketing, it is questionable whether it is responsible to pitch us into such a contest whenever someone suggests some prima facie coherent revolutionary alternative. A moderated form of the proposal would require there to be extant reasons for dissatisfaction with current theory (a “crisis in normal science”) in order to make the kind of radical reappraisal appropriate. If that’s right, it’s certainly not clear whether distinctively philosophical worries of the kind Field raises should count as creating crisis conditions in the relevant sense. Scientific revolutions and philosophical error theories might reasonably be thought to be epistemically disanalogous in a way unhelpful to Field.

Two final notes. It is important to note what kind of objection a Moorean would put forward. It doesn’t engage in any way with the first-order case that Field constructs for his error-theoretic conclusion. If substantiated, the result will be that it would not be rational for me (and people like me) to come to believe the error-theoretic position.

The second note is that we might save the Fieldian ontology without having to say contentious stuff in epistemology, by pursuing reconciliation strategies. Hermeneutic fictionalism—for example in Steve Yablo’s figuralist version—is one such. If we never really believed that the number of peeps was twelve, but only pretended this to be so, then there’s no prima facie barrier from “belief revision” considerations that prevents us from explicitly adopting a nominalist ontology. Another reconciliation strategy is to do some work in the philosophy of language to make the case that “there are numbers” can be literally true, even if Field is right about the constituents of reality. (There are a number of ways of cashing out that thought, from traditional Quinean strategies, to the sort of stuff on varying ontological commitments I’ve been working on recently).

In any case, I’d be really interested in people’s take on the initial tension here—and particularly on how to think about rational belief change when confronted with radically revisionary theories—pointers to the literature/state of the art on this stuff would be gratefully received!