Category Archives: Vagueness

Vagueness survey paper: V (rejecting excluded middle)

So this was the biggest selection-problem I faced: there are so many many-valued systems out there, and so many ways to think about them—which to choose?

I would have liked to talk a bunch about the interpretation of “third truth values”. This often strikes me as badly glossed over. In the vagueness literature, it’s often assumed that once we’ve got a third truth value, we might as well be degree theorists. But it seems to me that “gap” interpretations of the third truth value are super-different from the “half-true” interpretation. But to make the case that this is more than a verbal dispute, I think we have to say a whole lot more about the cognitive role of indeterminacy, the role of logic, etc etc. All good stuff (in fact, very close to what I’m working on right now). But I chose not to go that way.

Another thing I could have done is talk directly about degree theories. Nick Smith has a new book-length treatment of them, which makes some really nice moves both in motivating and defending the account. And of course they’re historically popular—and “fuzzy logic” is what you always hear talked about in non-philosophical treatments. In Williamson’s big vagueness book, degree theories are really the focus of the chapter corresponding to this section.

On the other hand, I felt it was really important to get a representative of the “logic first” view into the picture—someone who really treated semantics kind of instrumentally, and who saw the point of talking about vagueness in a rather different way to the way it’s often presented in the extant survey books. And the two that sprang to mind here were Hartry Field and Crispin Wright. Of these, Crispin’s intuitionism is harder to set up, and has fewer connections to other many-valued theories. And his theory of a quandary cognitive role, while really interesting, just takes longer to explain than Hartry’s rejectionist suggestion. Wright’s agnosticism is a bit hard to explain too—I take it the view is supposed to be that we’re poised between the Williamsonian style picture, and an option where you assert the negation of bivalence—and the first seems unbelievable and the second incoherent, so we remain agnostic. But if something is incoherent, how can we remain agnostic? (So, actually, I think the better way to present the view is as agnosticism between bivalence-endorsing views and Field-style rejectionist views, albeit carried out in an intuitionist rather than Kleene-based system. But if that’s the way to work things out, rejectionism is conceptually prior to agnosticism.)

So in the end I started with a minimal intro to the many-valued truth tables, a brief pointer in the direction of extensions and interpretations, and then concentrated on the elements of the Field view—the Quinean translate-and-deflate theory of language, the logical revisionism (and instrumentalism about model theory) and the cognitive role that flows from it.

Just as with the epistemicism section, there are famous objections I just didn’t have room for. The whole issue over the methodological ok-ness of revising logic, and the burdens that entails… nothing will remain of that but a few references.

Vagueness survey paper: section V

REVISIONARY LOGIC: MANY-VALUED SETTINGS

A distinctive feature of supervaluationism was that while it threw out bivalence (“Harry is bald” is either true or false) it preserved the corresponding instance of excluded middle (“Harry is either bald or not bald”). Revising the logic in a more thorough-going way would allow for a coherent picture where we can finally reject the claim “there is a single hair that makes the difference between bald and non-bald” without falling into paradox.

“Many valued” logics can be characterized by increasing the number of truth-values we work with—perhaps to three, perhaps infinitely many—and offering generalizations of the familiar stories of how logical constants behave to accommodate this tweak. There are all sorts of ways in which this can be developed, and even more choice points in extracting notions of “consequence” out of the mass of relations that then become available.

Here is a sample many-valued logic, for a propositional language with conjunctions, disjunctions and negations. To characterize the logic, we postulate three values (let’s call them, neutrally, “1”, “1/2” and “0”). For the propositional case, the idea will be that each atomic sentence will be assigned some one of the truth values; and then the truth values get assigned to complex sentences recursively. Thus, a conjunction will get that truth value which is the minimum of the truth values of its conjuncts; a disjunction will get that truth value which is the maximum of the truth values of its disjuncts; and a negation will get assigned 1 minus the truth value of the claim negated. (You can easily check that, ignoring the value 1/2 altogether, we get back exactly the classical truth-tables.)

A many-valued logic (the strong Kleene logic) is defined by looking at the class of those arguments that are “1-preserving”, i.e. such that when all the premises are value 1, the conclusion is value 1 too. It has some distinctive features; e.g. excluded middle “Av~A” is no longer a tautology, since it can be value 1/2 when A is value 1/2. “A&~A” is still treated as a logical contradiction, on the other hand (every sentence whatsoever follows from it), since it will never attain value 1, no matter what value A is assigned.
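To make these clauses concrete, here is a minimal computational sketch (an illustration of my own, representing the value 1/2 as 0.5), which checks both of the claims just made:

```python
from itertools import product

# Strong Kleene clauses: conjunction = minimum, disjunction = maximum,
# negation = 1 minus the value negated.
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)
def neg(a): return 1 - a

VALUES = (0, 0.5, 1)

# Excluded middle Av~A is not a tautology: it takes value 1/2 when A does.
print([disj(a, neg(a)) for a in VALUES])   # [1, 0.5, 1]

# A&~A never attains value 1, whatever A's value; so, on the
# 1-preservation definition of consequence just given, everything follows from it.
print([conj(a, neg(a)) for a in VALUES])   # [0, 0.5, 0]

# Validity as 1-preservation: e.g. the argument from A, B to A&B is valid.
assert all(conj(a, b) == 1
           for a, b in product(VALUES, repeat=2)
           if a == 1 and b == 1)
```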

One option at this point is to take this model theory seriously—much as the classicist and supervaluationist do, and hypothesise that natural language has (or is modelled by?) some many-valued interpretation (or set of interpretations?). This view is a major player in the literature (cite cite cite).

For the remainder of this section, I focus on a different framework in which to view the proposal to revise logic. This begins with the rejection of the very idea of an “intended interpretation” of a language, or a semantic treatment of truth. Rather, one treats truth as a “device of disquotation”—perhaps introduced by means of the kind of disquotational principles mentioned earlier (such a device proves useful, argue its fans, in increasing our expressive power—allowing us to endorse or deny such claims as “everything the pope says is true”). The disquotation principles capture all that there is to be said about truth, and the notion doesn’t need any “model theoretic foundation” in an “intended interpretation” to be in good standing.

In the first instance, such a truth-predicate is “local”—only carving the right distinctions in the very language for which it was introduced (via disquotation). To allow us to speak sensibly of true French sentences (for example), Field (cite) following Quine (cite) analyzes

“la neige est blanche” is true

as:

there’s a good translation of “la neige est blanche” into English, such that the translated version is disquotationally true.

Alongside this disquotationism is a distinctive attitude to logic. On Field’s view, logical consequence does not need to be “analyzed” in model-theoretic terms. Consequence is taken as primitive, and model theory seen as a useful instrument for characterizing the extension of this relation.

Such disquotationism elegantly avoids the sort of worries plaguing the classical, supervaluational, and traditional many-valued approaches—in particular, since there’s no explanatory role for an “intended interpretation”, we simply avoid worries about how such an intended interpretation might be settled on. Moreover, if model theory is a mere instrument, there’s no pressure to say anything about the nature of the “truth values” it uses.

So far, no appeal to revisionary logic has been made. The utility of hypothesizing a non-classical (Kleene-style) logic rather than a classical one comes in explaining the puzzles of vagueness. For Field, when indeterminacy surfaces, we should reject the relevant instance of excluded middle. Thus we should reject “either Harry is bald or he isn’t”—and consequently, also reject the claim that Harry is bald (from which the former follows, in any sensible logic). We then must reject “Harry is bald and the man with one hair less isn’t” (again because something we reject follows from it). So, from our rejection of excluded middle, we derive the core data behind the “little by little” worry—rejection of the horrible conjunctions (N).
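To see the pattern concretely, here is a toy computation (my own illustration, with an invented borderline region; on Field’s view such models are mere instruments, so nothing here is load-bearing):

```python
# Toy Kleene model of a sorites series: men with fewer than 100 hairs get
# value 1 for "bald", men with 100-199 hairs get 1/2, the rest get 0.
# (The numbers are invented purely for illustration.)
def bald(n):
    if n < 100: return 1
    if n < 200: return 0.5
    return 0

def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)
def neg(a): return 1 - a

h = 150                                   # borderline Harry
print(disj(bald(h), neg(bald(h))))        # excluded middle: 0.5
print(bald(h))                            # "Harry is bald": 0.5
print(conj(bald(h), neg(bald(h + 1))))    # a conjunction (N): 0.5

# None of these attains value 1: each is to be rejected, which is just
# the pattern of rejections described above.
```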

So, like one form of supervaluationism, Field sees borderline cases of vague predicates as characterized by forced rejection. No wonder further inquiry into whether Harry is bald seems pointless. Again in parallel, Field accommodates rejection of (N) without accepting (N*)—and this is at least a start on explaining where our perplexity over the sorites comes from. Unlike the supervaluationist, he isn’t committed to the generalizations “there is some bald guy next to a non-bald guy”—the Kleene logic (extended to handle quantifiers) enforces rejection of this claim.

One central concern about this account of vagueness (setting aside very general worries about the disquotational setting) is whether in weakening our logic we have thrown the baby out with the bathwater. Some argue that it is methodologically objectionable to revise logic without overwhelming reason to do so, given the way that classical assumptions are built into successful, progressive science even when vagueness is clearly in play (think of applications of physics, or the classical assumptions in probability and decision theory, for example). This is an important issue: but let’s set it aside for now.

More locally, the logic we’ve looked at so far seems excessively weak in expressive power. It’s not clear, for example, how one should capture platitudes like “if someone is balder than a bald person, that person too is bald” (translating the “if” here as a kind of disjunction or negated conjunction, as is standard in the classical case, we get something entailing instances of excluded middle we want to reject – we do not seem to yet have in the system a suitable material conditional). For another thing, we haven’t yet said anything about how the central notion “it is determinate whether” fits in. It seems to have interesting logical behaviour—for example, the key connection between excluded middle and indeterminacy would be nicely captured if from Av~A one could infer “it is determinate whether A”. Much of Field’s positive project involves extending the basic Kleene logic to accommodate a suitable conditional and determinacy operators, in particular to capture thoroughly “higher-order” kinds of vagueness (borderline cases of borderline cases, and so on).

Vagueness survey IV: supervaluations

Ok, part IV of the survey article. This includes the second of three sample theories: and I’ve chosen to talk about what’s often called “the most popular account of vagueness”: supervaluationism.

I’m personally pretty convinced there are (at least) two different *types* of theses/theories that get the label “supervaluationism” (roughly, the formal logic-semantics stuff, and the semantic indecision story about the source of indeterminacy). And even once you have both on board, I reckon there are at least three *very different* ways of understanding what the theory is saying (that’s separate from, but related to, the various subtle issues that Varzi picks out in the recent Mind paper).

But what I want to present is one way of working out this stuff, so I’m keeping fairly close to the Fine-Keefe (with a bit of Lewis) axis that I think is most people’s reference point around here. I try to strip out the more specialist bits of Fine—which comes at a cost, since I don’t mention “penumbral connection” here. But again, I wanted to keep the focus on the basics of the theory, and the application to the central puzzles, so stuff had to come out.

One thing I’m fairly unapologetic about is presenting supervaluationism as a theory where truth=supertruth. It seems terminologically bizarre to do otherwise—given that “supervaluations” have their origins in a logico-semantic technique for defining truth, when you have multiple truth-on-i to play with. I’m happy to think that “semantic indecision” views can be paired with a classical semantics, and the semantics for definiteness given in terms of delineations/sharpenings (as, indeed, can the epistemicist D-operator). But, as a matter of terminology, I don’t see these as “supervaluational”. “Ambiguity” views like McGee and McLaughlin’s are a mixed case, but I don’t have room to fit them in. In any case: bivalence failure is such a common thought that I thought it was worth giving it a run for its money, straightforwardly construed, and I do think that it does lead to interesting and distinctive positions on what’s going on in the sorites.

Actually, the stuff about confusing rejection with accepting-negation is something that’s again a bit of a choice-point. I could have talked some about the “confusion hypothesis” way of explaining the badness of the sorites (and work in particular by Fine, Keefe and Greenough on this—and a fabulous paper by Brian Weatherson that he’s never published, “Vagueness and pragmatics”). But when I tried this, it turned out to be a bit tricky to explain—it takes quite a bit of thinking about—and I felt the rejection stuff (also a kind of “confusion hypothesis”) was more organically related to the way I was presenting things. I need to figure out some references for this. I’m sure there must be lots of people making the “since it’s untrue, we’re tempted to say ~p” move, which is essentially what’s involved. I’m wondering whether Greg Restall has some explicit discussion of the temptation of the sorites in his paper on denial and multiple conclusions and supervaluations…

One place below where a big chunk of stuff had to come out, to get me near the word limit, was material about the inconsistency of the T-scheme and denials of bivalence. It fitted in very naturally, and I’d always planned for that stuff to go in (just because it’s such a neat and simple little argument). But it just didn’t fit, and of all the things I was looking at, it seemed the most like a digression. So, sadly, all that remains is “cite cite cite” where I’m going to give the references.

One thing that does put in a brief appearance at the end of the section is the old bugbear: higher-order vagueness. I don’t discuss this very much in the piece, which again is a bit weird, but then it’s very hard to state simply what the issue is there (especially as there seem to be at least three different things going by that name in the literature, the relations between them not being very obvious).

Another issue that occurred to me here is whether I should be strict in separating semantics and model theory. I do think the distinction (between set-theoretic interpretations, and axiomatic specifications of those interpretations) is important, and in the first instance what we get from supervaluations is a model theory, not a semantics. But in the end it seemed not to earn its place. Actually: does anyone know of somewhere where a supervaluational *axiomatic semantics* is given (as opposed to supervaluational models)? I’m guessing it’ll look something like: [“p”]=T on d iff, at d, p—i.e. we’ll carry the relativization through the axioms just as we do in the modal case.

The section itself:

Survey paper on vagueness: part IV

REVISIONARY SEMANTICS: SUPERVALUATIONS

A very common thought about borderline cases is that they’re neither true nor false. Given that one can only know what is true, this would explain our inevitable lack of knowledge in borderline cases. It’s often thought to be a rather plausible suggestion in itself.

Classical semantics builds in the principle that each meaningful claim is either true or false (bivalence). So if we’re to pursue the thought that borderline claims are truth value gaps, we must revise our semantic framework to some extent. Indeed, we can know in advance that any semantic theory with truth-value gaps will diverge from classical semantics even on some of the most intuitively plausible (platitudinous seeming) consequences: for it can be shown under very weak assumptions that truth value gaps are incompatible with accepting disquotational principles such as: “Harry is bald” is true if and only if Harry is bald (see cite cite cite).
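For a flavour of how such an argument can run (this is one standard reconstruction; the cited discussions differ in their details): suppose “Harry is bald” is a truth value gap, so that neither it nor its negation is true. Disquotation gives us both:

“Harry is bald” is true if and only if Harry is bald

“Harry is not bald” is true if and only if Harry is not bald

Since “Harry is bald” is not true, the first biconditional yields that Harry is not bald; since “Harry is not bald” is not true, the second yields that Harry is not not bald. Contradiction.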

How will the alteration of the classical framework go? One suggestion goes under the heading “supervaluationism” (though as we’ll see, the term is somewhat ambiguous).

As an account of the nature of vagueness, supervaluationism is a view on which borderlineness arises from what we might call “semantic indecision”. Think of the sort of things that might fix the meanings of words: conventions to apply the word “bald” to clear cases; conventions to apply “not bald” to clear non-cases; various conventions of a more complex sort—for example, that anyone with less hair than a bald person should count as bald. The idea is that when we list these and other principles constraining the correct interpretation of language, we’ll be able to narrow the space of acceptable (and entirely classical) interpretations of English down a lot—but not down to the single intended interpretation hypothesized by classical semantics. At best, what we’ll get is a cluster of candidates. Let’s call these the sharpenings for English. Each will assign to each vague predicate a sharp boundary. But very plausibly the location of such a boundary is something the different sharpenings will disagree about. A sentence is indeterminate (and if it involves a vague predicate, is a borderline case) just in case there’s a sharpening on which it comes out true, and another on which it comes out false.

As an account of the semantics of vague language, the core of the supervaluationist proposal is a generalization of the idea, found in classical semantics, that for something to be true is for it to be true at the intended interpretation. Supervaluationism offers a replacement: it works with a set of “co-intended interpretations”, and says that for a sentence to be true, it must be true at all the co-intended interpretations (this is sometimes called “supertruth”). This dovetails nicely with the semantic indecision picture, since we can take the “co-intended interpretations” to be what we called above the sharpenings—and hence when a sentence is indeterminate (true on one sharpening and false on another) neither it nor its negation will be true: and hence we have a truth value gap. (The core proposal for defining truth finds application in settings where the “semantic indecision” idea seems inappropriate: see for example Thomason’s treatment of the semantics of branching time in his (cite).)

The slight tweak to the classical picture leaves a lot unchanged. Consider the tautologies of classical logic, for example. Every classical interpretation will make them true; and so each sharpening is guaranteed to make them true. Any classical tautology will be supertrue, therefore. So—at least at this level—classical logic is retained. (It’s a matter of dispute whether more subtle departures from classical logic are involved, and whether this matters: see (cite cite cite).)

So long as (super)truth is a constraint on knowledge, supervaluationists can explain why we can’t know whether borderline bald Harry is bald. On some developments of the position, they can go interestingly beyond this explanation of ignorance. One might argue that insofar as one should only invest credence in a claim to the extent one believes it true, obvious truth-value-gaps are cases where we should utterly reject (invest no credence in) both the claim and its negation. This goes beyond mere lack of knowledge, for it means the information that such-and-such is borderline gives us a direct fix on what our degree of belief should be in such-and-such (by contrast, on the epistemicist picture, though we can’t gain knowledge, we’ve as yet no reason to think that inquiry could raise or lower the probability we assign to Harry being bald, making investigation of the point perfectly sensible, despite initial appearances).

What about the sorites? Every sharpening draws a line between bald and non-bald, so “there is a single hair that makes the difference between baldness and non-baldness” will be supertrue. However, no individual conjunction of form (N) will be true—many of them will instead be truth value gaps, true on some sharpenings and false on others (this highlights one of the distinctive (disturbing?) features of supervaluationism—the ability of disjunctions and existential generalizations to be true, even if no disjunct or instance is). As truth-value gaps, instances of (N*) will also fail to be true, so some of the needed premises for the sorites paradox are not granted.
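A toy model makes this vivid (an illustrative sketch with invented numbers, not part of the theory itself): treat sharpenings as candidate cut-offs, and supertruth as truth on all of them.

```python
# Toy supervaluational model: men 0..10; each sharpening s draws the
# bald/non-bald line at a different point in the borderline region
# (here: candidate cut-offs 4 through 7, numbers invented).
SHARPENINGS = range(4, 8)

def bald(n, s):            # entirely classical, relative to a sharpening
    return n < s

def supertrue(sentence):   # true at all co-intended interpretations
    return all(sentence(s) for s in SHARPENINGS)

# A classical tautology is true on every sharpening, hence supertrue:
assert supertrue(lambda s: bald(5, s) or not bald(5, s))

# "Some single hair makes the difference" is supertrue...
assert supertrue(lambda s: any(bald(n, s) and not bald(n + 1, s)
                               for n in range(10)))

# ...but no individual conjunction (N) is supertrue: each is true on at
# most one sharpening and false on the rest.
assert not any(supertrue(lambda s, n=n: bald(n, s) and not bald(n + 1, s))
               for n in range(10))

# Borderline "man 5 is bald": neither it nor its negation is supertrue --
# a truth value gap.
assert not supertrue(lambda s: bald(5, s))
assert not supertrue(lambda s: not bald(5, s))
```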

(There is one thing that some supervaluationists can point to in an attempt to explain the appeal of the paradoxical premises. Suppose that—as I think is plausible—we take as the primary data in the sorites the horribleness of the conjunctions (N). These are untrue, and so (for one kind of supervaluationist) should be utterly rejected. It’s tempting, though mistaken, to try to express that rejection by accepting a negated form of the same claim—that is the move that takes us from the rejection of each of (N) to the acceptance of each of (N*). This temptation is one possible source of the “seductiveness” of sorites reasoning.)

Two points to bear in mind about supervaluationism. First, the supervaluationist endorses the claim that “there is a cut-off”—a pair of men differing by only one hair, with the first bald and the second not. Insofar as one considered that first-order claim to be what was most incredible about (say) epistemicism, one won’t feel much advance has been made. The supervaluationist must try to persuade you that once one understands the sense in which “there’s no fact of the matter” where that cut-off is, the incredulity will dissipate. Second, many want to press the charge that the supervaluationist makes no progress over the classicist, for reasons of “higher order vagueness”. The thought is that the task of explaining how a set of sharpenings gets selected by the meaning-fixing facts is no easier or harder than explaining how a single classical interpretation gets picked out. However, (a) the supervaluationist can reasonably argue that if she spells out the notion of “sharpening” in vague terms, she will regard the boundary between the sharpenings and non-sharpenings as vague (see Keefe (cite)); (b) even if epistemicist and supervaluationist were both in some sense “committed to sharp boundaries”, the accounts they give of the nature of vagueness are vastly different, and we can evaluate their positive claims on their own merits.

Vagueness survey paper III (epistemicism)

This is the third section of my first draft of a survey paper on vagueness, which I’m distributing in the hope of getting feedback, from the picky to the substantive!

In the first two sections, I introduced some puzzles, and said some general methodology things about accounting for vague language—in effect, some of the issues that come up in giving a theory in a vague metalanguage. The next three sections are the three sample accounts I look at. The first, flowing naturally on from “textbook semantic theories”, is the least revisionary: semantics in a classical setting.

Now, if I had lots of time, I’d talk about Delia Graff Fara’s contextualism, the “sneaky classicism” of people like McGee and McLaughlin (and Dave Barnett, and Cian Dorr, and Elizabeth Barnes [joint with me in one place!]). But there’s only so much I can fit in, and Williamson’s epistemicism seems the natural representative theory here.

Then there’s the issue of how to present it. I’m always uncomfortable when people use “epistemicism” as a synonym for just the straightforward reading of classical semantics—sharp cut-offs and so on (somebody suggested the name “sharpism” for that view—I can’t remember who though). The point about epistemicism—why people find it interesting—is surely not that bit of it. It’s that it seems to predict where others retrodict; and it seems principled where others seem ad hoc. Williamson takes formulations of parts of the basic toolkit of epistemology—safety principles—and uses these to explain borderlineness (and ultimately the higher-order structure of vagueness). That’s what’s so super-cool about it.

I’m a bit worried that in the current version I’ve downplayed the sharpist element so much. After all, that’s where a lot of the discussion has gone on. In part, that betrays my frustration with the debate—there are some fun little arguments around the details, but on the big issue I don’t see that much progress has been made. It feels to me like we’ve got a bit of a standoff. At minimum I’m going to have to add a bunch of references to this stuff, but I wonder what people think about the balance as it is here.

I have indulged a little bit in raising one of the features that always puzzles me about epistemicism: I see that Williamson has an elegant explanation about why we can’t ever identify a cut-off. But I just don’t see what the story is about why we find the existential itself so awful. The analogy to lottery cases seems helpful here. Anyway, on with the section:

Vagueness Survey Paper, Part III.

VAGUENESS IN A CLASSICAL SETTING: EPISTEMICISM

One way to try to explain the puzzles of vagueness is to look to resources outwith the philosophy of language. This is the direction pursued by epistemicists such as Timothy Williamson.

One distinctive feature of the epistemicist package is retaining classical logic and semantics. It’s a big advantage of this view that we can keep the textbook semantic clauses described earlier, as well as seemingly obvious truths such as that “Harry is bald” is true iff Harry is bald (revisionary semantic theorists have great trouble saving this apparently platitudinous claim). Another part of the package is a robust face-value reading of what’s involved in doing this. There really is a specific set that is the extension of “bald”—a particular cut-off in the sorites series for bald, and so on (some one of the horrible conjunctions given earlier is just true). Some other theorists say these things but try to sweeten the pill—to say that admitting all this is compatible with saying that in a strong sense there’s no fact of the matter where this cut-off is (see McGee McLaughlin; Barnett; Dorr; Barnes). Williamson takes the medicine straight: incredible as it might sound, our words really do carve the world in a sharp, non-fuzzy way.

The hard-nosed endorsement of classical logic and semantics at a face-value reading is just scene-setting: the real task is to explain the puzzles that vagueness poses. If the attempt to make sense of “no fact of the matter” rhetoric is given up, what else can we appeal to?

As the name suggests, Williamson and his ilk appeal to epistemology to defuse the puzzle. Let us consider borderlineness first. Start again from the idea that we are ignorant of whether Harry is bald, when he is a borderline case. The puzzle was to explain why this was so, and why the unknowability was of such a strong and ineliminable sort.

Williamson’s proposal makes use of a general constraint on knowledge: the idea that in order to know that p, it cannot be a matter of luck that one’s belief that p is true. Williamson articulates this as the following “safety principle”:

For “S knows that p” to be true (in a situation s), “p” must be true in any marginally different situation s* (where one forms the same beliefs using the same methods) in which “S believes p” is true.

The idea is that the situations s* represent “easy possibilities”: falsity at an easy possibility makes a true belief too lucky to count as knowledge.

This first element of Williamson’s view is independently motivated epistemology. The second element is that the extensions of vague predicates, though sharp, are unstable. They depend on exact details of the patterns of use of vague predicates, and small shifts in the latter can induce small shifts in the (sharp) boundaries of vague predicates.

Given these two, we can explain our ignorance in borderline cases. A borderline case of “bald” is just one where the boundary of “bald” is close enough that a marginally different pattern of usage could induce a switch from (say) Harry being a member of the extension of “bald” to not being in that extension. If that’s the case, then even if one truly believed that Harry was bald, there would be an easy possibility where one forms the same beliefs for the same reasons, but that sentence is false. Applying the safety principle, the belief can’t count as knowledge.
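The shape of the explanation can be put in a toy model (the numbers are invented for illustration; this is a sketch of my own, not Williamson’s formalism):

```python
# Toy margin-for-error model. In the actual situation the sharp (but
# unknowable) cut-off for "bald" is 3000 hairs; marginally different
# patterns of use would shift it by up to 50 hairs either way.
ACTUAL_CUTOFF = 3000
MARGIN = 50

def bald_in(cutoff, hairs):
    return hairs < cutoff

def safely_believe_bald(hairs):
    # Safe only if "he is bald" is true in every easy possibility,
    # i.e. under every nearby location of the cut-off.
    return all(bald_in(c, hairs)
               for c in range(ACTUAL_CUTOFF - MARGIN,
                              ACTUAL_CUTOFF + MARGIN + 1))

print(safely_believe_bald(1000))   # clear case: safe, so knowable
print(safely_believe_bald(2980))   # borderline: true belief, but unsafe
```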

Given that the source of ignorance resides in the sharp but unstable boundaries of vague predicates, one can see why gathering information about hair-distributions won’t overcome the relevant obstacle to knowledge. This is why the ignorance in borderline cases seems ineliminable.

What about the sorites? Williamson, of course, will say that one of the premises is false—there is a sharp boundary; we simply can’t know where it is. It’s unclear whether this is enough to “solve” the sorites paradox, however. As well as knowing which premise to reject, we’d like to know why we found the case paradoxical in the first place. Why do we find the idea of a sharp cut-off so incredible (especially since there’s a very simple, valid argument from obvious premises to this effect available)? Williamson can give an account of why we’d never feel able to accept any one of the individual conjunctions (Man n is bald and man n+1 is not). But that doesn’t explain why we’re uneasy (to say the least) with the thought that some such conjunction is true—i.e. that there is a sharp cut-off. I’ll never know in advance which ticket will win a lottery; but I’m entirely comfortable with the thought that one will win. Why don’t we feel the same about the sorites?

Vagueness survey paper: II (vague metalanguages)

Ok, so here’s part II of the paper. One thing that struck me having taught this stuff over the last few years is how much the “vague metalanguages” thing strikes people as ad hoc if brought in at a late stage—it feels that we were promised something—effectively, something that’d relate vague terms to a precise world—and we’ve failed. And so what I wanted to do is put that debate up front, so we can be aware right from the start that there’ll be an issue here.

When writing this, it seemed to me that effectively the Evans argument arises naturally when discussing this stuff. So it seemed quite nice to put in those references.

Sorry for the missing references, by the way—that’s going to be fixed once I look up what the style guide says about them. (And I’ll add some extras).

USING VAGUENESS TO HANDLE VAGUENESS

One textbook style of semantic theory assigns extensions (sets of objects) as the semantic values of vague terms (Heim & Kratzer xxxx). This might seem dubious. Sets of objects as traditionally conceived are definite totalities—each object is either definitely a member, or definitely not a member. Wouldn’t associating one such totality with a vague predicate force us, unjustifiably, to “draw sharp boundaries”?

On the other hand, it seems that we can easily say which set should be the extension of “is red”:

[[“red”]]={x: x is red}

There’s no need for this to be directly disquotational:

[[“rouge”]]={x: x is red}

The trick here is to use vague language (“in the theorist’s metalanguage”) to say what the extension should be (“of the object-language predicate”). If this is legitimate, there’s no obstacle to taking textbook semantics over wholesale.

Perhaps the above seems unsatisfactory: one is looking for illumination from one’s semantic clauses. So, for example, one might want to hear something in the semantics about dispositions to judge things red, to reflect the response-dependent character of redness. It’s highly controversial whether this is a reasonable demand. One might suspect that it mixes up the jobs of the semanticist (to characterize the representational properties of words) and the metaphysician (to say what redness is, in more fundamental terms). But even if one wants illumination wherever possible in one’s semantic theory, there’s not even a prima facie problem here, so long as one is still able to work within a vague metalanguage. Thus Lewis, in discussing his semantic treatment of counterfactuals in terms of the (admittedly vague) notion of similarity, says “I … seek to rest an unfixed distinction upon a swaying foundation, claiming that the two sway together rather than independently.” (Lewis, 1973, p.92). While Lewis recognizes that “similarity” is vague, he thinks this is exactly what we need to faithfully capture the vagueness of counterfactuals in an illuminating way. One might see trouble for the task of constructing a semantics, if one imposed the requirement that the metalanguage should (at least ideally?) be perfectly “precise”. But one would have to be very clear about why such a strong requirement was being imposed.

Let us leave this worry to those bold enough to impose such constraints on the theorist of language. Are there further problems with textbook semantics?

One worry might be that the resources it appeals to are ill-understood. Let us go back to the thought that sets (as traditionally conceived) are definite totalities. Then borderline-bald Harry (say) is either definitely in or definitely out of any given set. But Harry better be a borderline member of {x: x is bald}. Don’t we now have to go and provide a theory of these new and peculiar entities (“vague sets”) — who knows where that will lead us?

The implicit argument that we’re dealing with “new” entities here can be formulated as follows (where S is any set, traditionally conceived, of which Harry is definitely a member):

(1) Harry is definitely a member of S

(2) It is not the case that Harry is definitely a member of {x: x is bald}.

(3) So: S is not identical to {x: x is bald}

(Parallel considerations can be given for the non-identity of {x: x is bald} with sets Harry is definitely not a member of). The argument seems compelling, appealing only to the indiscernibility of identicals. If we suppose S and {x: x is bald} to be identical, then we should be able to swap one for the other in the following context without change of truth value:

Harry is definitely a member of ….

But (1) and (2) show us this doesn’t happen. (3) follows by reductio.

The issues this raises are discussed extensively in the literature on the “Evans-Salmon” argument (and in parallel debates on the indiscernibility of identicals in connection to modality and tense). One moral from that discussion is that the argument given above is probably not valid as it stands. Very roughly, “{x: x is bald}” can denote some precisely bounded set of entities, consistent with everything we’ve said, so long as it is indefinite which such totality it denotes. Interested readers are directed to (cite cite cite) for further discussion.

Vague metalanguages seem legitimate; and there’s no reason as yet to think that appeal to vaguely specified sets commits one to a novel “vague set theory”. But we still face the issue of how the distinctive puzzles of vagueness are to be explained.

Vagueness survey paper I (puzzles)

I’ve been asked to write a survey paper on vagueness. It can’t be longer than 6000 words. And that’s a pretty big topic.

I’ve been wondering how best to get some feedback on whether I’m covering the right issues, and, of course, whether what I’m doing is fair and accurate. So what I thought is that I’d post up chunks of my first draft of the paper here—maybe 1000 words at a time, and see what happens. So comments *very much* welcomed, whether substantive or picky.

Basically, the plan for the paper is that it be divided into five sections (it’s mostly organized around the model theory/semantics, for reasons to do with the venue). So first I have a general introduction to the puzzles of vagueness, of which I identify two: effectively, soriticality, and borderline cases. Then I go on to say something about giving a semantics for vagueness in a vague metalanguage. The next three sections give three representative approaches to “semantics”, and their take on the original puzzles. First, classical approaches (taking Williamsonian epistemicism as representative). Second, something based on Fine’s supervaluations. And third, many valued theories. I chose to focus on Field’s recent stuff, even though it’s perhaps not the most prominent, since it allows me to have some discussion of how things look when we have a translate-and-deflate philosophy of language rather than the sort of interpretational approach you often find. Finally, I have a wrap up that mentions some additional issues (e.g. contextualism), and alternative methodologies/approaches.

So that’s how I’ve chosen to cut the cake. Without more ado, here’s the first section.

Vagueness Survey Paper. Part I.

The puzzles of vagueness

Take away grains, one by one, from a heap of rice. At what point is there no longer a heap in front of you? It seems hard to believe that there’s a sharp boundary—a single grain of rice the removal of which turns a heap into a non-heap. But if removing one grain can’t achieve this, how can removing a hundred do so? It seems small changes can’t make a difference to whether or not something is a heap; but big changes obviously do. How can this be, since big changes are nothing but small changes chained together?

Call this the “little by little” puzzle.

Pausing midway through removing grains from the original heap, ask yourself: “is what I have at this moment a heap?” At the initial stages, the answer will clearly be “yes”. At the late stages, the answer will clearly be “no”. But at intermediate stages, the question will generate perplexity: it’s not clearly right to say “yes”, nor is it clearly right to say “no”. A hedged response seems better: “it sorta is and sorta isn’t”, or “it’s a borderline case of a heap”. Those are fine things to say, but they’re not a direct answer to the original question: is this a heap? So what is the answer to that question when confronted with (what we can all agree to be) a borderline case of a heap?

Call this the “borderlineness” puzzle.

Versions of the “little by little” and “borderline case” puzzles are ubiquitous. As hairs fall from the head of a man as he ages, at what point is he bald? How could losing a single hair turn a non-bald man bald? What should one say of intermediate “borderline” cases? Likewise for colour terms: a series of colour patches may be indiscriminable from one another (put them side by side and you couldn’t tell them apart). Yet if they vary in the wavelength of light they reflect, chaining enough such cases might take one from pure red to pure yellow.

As Peter Unger (cite) emphasized, we can extend the idea further. Imagine an angel annihilating the molecules that make up a table, one by one. Certainly at the start of the process annihilating one molecule would still leave us with a table; at the end of the process we have a single molecule—no table that! But how could annihilating a single molecule destroy a table? It’s hard to see what terms (outside mathematics) do not give rise to these phenomena.

The little by little puzzle leads to the sorites paradox (from the Greek “soros”—heap). Take a line of 10001 adjacent men, the first with no hairs, the last with 10000 hairs, with each successive man differing from the previous by the addition of a single hair (we call this a sorites series for “bald”). Let “Man N” name the man with N hairs.

Obviously, Man 0 is bald. Man 10000 is not bald.

Now consider the following collection of horrible-sounding claims:

(1):       Man 0 is bald, and man 1 is not bald.

(2):       Man 1 is bald, and man 2 is not bald.

…

(10000): Man 9999 is bald, and man 10000 is not bald.

It seems that each of these must be rejected, if anything in the vicinity of the “little differences can’t make a difference” principle is right. But if we reject the above, surely we must accept their negations—claims which seem entirely reasonable in themselves, and which capture the thought that “a single hair can’t make a difference to baldness”:

(1*):     it is not the case that: Man 0 is bald, and man 1 is not bald.

(2*):     it is not the case that: Man 1 is bald, and man 2 is not bald.

…

(10000*): it is not the case that: Man 9999 is bald, and man 10000 is not bald.

But given the various (N*), and the two obvious truths about the extreme cases, a contradiction follows.

One way you can see this is by noting that each (N*) is (classically) equivalent to the material conditional reading of:

(N**) if Man N-1 is bald, then Man N is bald

Since Man 0 is bald, a series of modus ponens inferences allows us to derive that Man 10000 is bald, contrary to our assumptions.

Alternatively, one can reason with the (N*) directly. Suppose that Man 9999 is bald. As we already know that Man 10000 is not bald, this contradicts (10000*). So, by reductio, Man 9999 is not bald. Repeat, and one eventually derives that Man 0 is not bald, contrary to our assumptions.
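Both derivations are entirely mechanical, as the following sketch brings out (an illustrative rendering of the two chains of reasoning):

```python
# The two sorites derivations, run mechanically. Man n has n hairs.
N = 10000
bald = {0: True, N: False}        # the two obvious truths

# Modus ponens chain: each (N**) conditional passes baldness up the series.
for n in range(1, N + 1):
    bald[n] = bald[n - 1]         # if Man n-1 is bald, then Man n is bald
print(bald[N])                    # True: Man 10000 is bald. Contradiction.

# Reductio chain: each (N*) passes non-baldness down the series.
bald = {0: True, N: False}
for n in range(N, 0, -1):
    bald[n - 1] = bald[n]         # if Man n isn't bald, (n*) rules out Man n-1 being bald
print(bald[0])                    # False: Man 0 is not bald. Contradiction.
```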

Whichever way we go, a contradiction follows from our premises, so we must either find some way of rejecting seemingly compelling premises, or find a flaw in the seemingly obviously valid reasoning.

We turn next to the puzzle of borderlineness: that given Harry is intermediate between clear cases and clear non-cases of baldness, “Is Harry bald?” seems to have no good, direct, answer.

There are familiar cases where we cannot answer such questions: if I’ve never seen Jimmy I might be in no position to say whether he’s bald, simply because I don’t know one way or the other. And indeed, ignorance is one model one could appeal to in the case of borderlineness. If we simply don’t know whether or not Harry is bald, that’d explain why we can’t answer the question directly!

This simply moves the challenge one stage back, however. Why do we lack knowledge? After all, it seems we can know all the relevant underlying facts (the number and distribution of hairs on a man’s head). Nor does there seem to be any way of mounting a sensible inquiry into the question, to resolve the ignorance, unlike in the case of Jimmy. What kind of status is this, where the question of baldness is not only something we’re in no position to answer, but where we can’t even conceive of how to go about getting in a position to answer it? What explains this seemingly inevitable absence of knowledge?

A final note on borderline cases. It’s often said that if Harry is a borderline case of baldness, then it’s indefinite or there’s no fact of the matter or it is indeterminate whether Harry is bald—and I’ll talk this way myself. Now, indeterminacy may be a more general phenomenon than vagueness (it’s appealed to in cases that seem to have little to do with sorites series: future contingents, conditionals, quantum physics, set theory); but needless to say, labeling borderline cases as “indeterminate” doesn’t explain what’s going on with them unless one has a general account of indeterminacy to appeal to.

Williamson on vague states of affairs

In connection with the survey article mentioned below, I was reading through Tim Williamson’s “Vagueness in reality”. It’s an interesting paper, though I find its conclusions very odd.

As I’ve mentioned previously, I like a way of formulating claims of metaphysical indeterminacy that’s semantically similar to supervaluationism (basically, we have ontic precisifications of reality, rather than semantic sharpenings of our meanings. It’s similar to ideas put forward by Ken Akiba and Elizabeth Barnes).

Williamson formulates the question of whether there is vagueness in reality, as the question of whether the following can ever be true:

(EX)(Ex)Vague[Xx]

Here X is a property-quantifier, and x an object quantifier. His answer is that the semantics force this to be false. The key observation is that, as he sets things up, the value assigned to a variable at a precisification and a variable assignment depends only on the variable assignment, and not at all on the precisification. So at all precisifications, the same value is assigned to the variable. That goes for both X and x; with the net result that if “Xx” is true relative to some precisification (at the given variable assignment) it’s true at all of them. That means there cannot be a variable assignment that makes Vague[Xx] true.

You might think this is cheating. Why shouldn’t variables receive different values at different precisifications (formally, it’s very easy to do)? Williamson says that, if we allow this to happen, we’d end up making things like the following come out true:

(Ex)Def[Fx&~Fx’]

It’s crucial to the supervaluationist’s explanatory programme that this come out false (it’s supposed to explain why we find the sorites premise compelling). But consider a variable assignment to x which at each precisification maps x to that object which marks the F/non-F cutoff relative to that precisification. It’s easy to see that on this “variable assignment”, Def[Fx&~Fx’] comes out true, underpinning the truth of the existential.

Again, suppose that we were taking the variable assignment to X to be a precisification-relative matter. Take some object o that intuitively is perfectly precise. Now consider the assignment to X that maps X at precisification 1 to the whole domain, and X at precisification 2 to the null set. Consider “Vague[Xx]”, where o is assigned to x at every precisification, and the assignment to X is as above. The sentence will be true relative to these variable assignments, and so we have “(EX)Vague[Xx]” relative to an assignment of o to x which is supposed to “say” that o has some vague property.

Although Williamson’s discussion is about the supervaluationist, the semantic point equally applies to the (pretty much isomorphic) setting that I like, and which is supposed to capture vagueness in reality. If one makes the variable assignments non-precisification relative, then trivially the quantified indeterminacy claims go false. If one makes the variable assignments precisification-relative, then it threatens to make them trivially true.
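The dilemma can be rendered schematically (a toy encoding of my own, not Williamson’s formalism):

```python
# Schematic model of the dilemma. Two precisifications; "Vague" holds of
# a sentence iff it is true at one precisification and false at the other.
PRECS = (1, 2)

def vague(truth_at):                  # truth_at: precisification -> bool
    return {truth_at(p) for p in PRECS} == {True, False}

# Horn 1: assignments ignore precisifications. The value of "Xx" is then
# fixed once X and x are assigned, so Vague[Xx] is false here -- and
# likewise on every such assignment, making (EX)(Ex)Vague[Xx] come out false.
rigid_X = {"o"}                       # one extension, used at every precisification
x = "o"
print(vague(lambda p: x in rigid_X))  # False

# Horn 2: assignments are precisification-relative. Let o be perfectly
# precise, and let X be the whole domain at precisification 1 and the
# empty set at precisification 2: Vague[Xx] comes out trivially true.
relative_X = {1: {"o"}, 2: set()}
print(vague(lambda p: x in relative_X[p]))  # True
```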

The thought I have is that the problem here is essentially one of mixing up abundant and natural properties. At least for property-quantification, we should go for the precisification-relative notion. It will indeed turn out that “(EX)Vague[Xx]” will be trivially true for every choice of x. But that’s no more surprising than the analogous result in the modal case: quantifying over abundant properties, it turns out that every object (even things like numbers) has a great range of contingent properties: being such that grass is green, for example. Likewise, in the vagueness case, everything has a great many vague properties: being such that the cat is alive, for example (or whatever else is your favourite example of ontic indeterminacy).

What we need to get a substantive notion is to restrict these quantifiers to interesting properties. So for example, the way to ask whether o has some vague sparse property is to ask whether the following is true: “(EX:Natural(X))Vague[Xx]”. The extrinsically specified properties invoked above won’t count.

If the question is formulated in this way, then we can’t read off from the semantics whether there will be an object and a property such that it is vague whether the former has the latter. For this will turn, not on the semantics for quantifiers alone, but upon which among the variable assignments correspond to natural properties.

Something similar goes for the case of quantification over states of affairs. (ES)Vague[S] would be either vacuously true or vacuously false depending on what semantics we assign to the variable “S”. But if our interest is in whether there are sparse states of affairs which are such that it is vague whether they obtain, what we should do is e.g. let the assignment of values to S be functions from precisifications to truth values, and then ask the question:

(ES:Natural(S))Vague[S].

where a function from precisifications to truth values is “natural” if it corresponds to some relatively sparse state of affairs (e.g. there being a live cat on the mat). So long as there’s a principled story about which states of affairs these are (and it’s the job of metaphysics to give us that), everything works fine.

A final note. It’s illuminating to think about the exactly analogous point that could be made in the modal case. If values are assigned to variables independently of the world, we’ll be able to prove that the following is never true on any variable assignment:

Contingently[Xx].

Again, the extensions assigned to X and x are non-world dependent, so if “Xx” is true relative to one world, it’s true at them all. Is this really an argument that there is no contingent instantiation of properties? Surely not. To capture the intended sense of the question, we have to adopt something like the tactic just suggested: first allow world-relative variable assignment, and then restrict the quantifiers to the particular instances of this that are metaphysically interesting.

Ontic vagueness

I’ve been frantically working this week on a survey article on metaphysical indeterminacy and ontic vagueness. Mind bending stuff: there really is so much going on in the literature, and people are working with *very* different conceptions of the thing. Just sorting out what might be meant by the various terms “vagueness de re”, “metaphysical vagueness”, “ontic vagueness”, “metaphysical indeterminacy” was a task (I don’t think there are any stable conventions in the literature). And that’s not to mention “vague objects” and the like.

I decided in the end to push a particular methodology, if only as a stalking horse to bring out the various presuppositions that other approaches will want to deny. My view is that we should think of “indefinitely” roughly parallel to the way we do “possibly”. There are various disambiguations one can make: “possibly” might mean metaphysical possibility, epistemic possibility, or whatever; “indefinitely” might mean linguistic indeterminacy, epistemic unclarity, or something metaphysical. To figure out whether you should buy into metaphysical indeterminacy, you should (a) get yourself in a position to at least formulate coherently theories involving that operator (i.e. specify what its logic is); and (b) run the usual Quinean cost/benefit analysis on a case-by-case basis.

The view of metaphysical indeterminacy most opposed to this is one that would identify it strongly with vagueness de re, paradigmatically there being some object and some property such that it is indeterminate whether the former instantiates the latter (this is how Williamson seems to conceive of matters in a 2003 article). If we had some such syntactic criterion for metaphysical indeterminacy, perhaps we could formulate everything without postulating a plurality of disambiguations of “definitely”. However, it seems that this de re formulation would miss out some of the most paradigmatic examples of putative metaphysical vagueness, such as the de dicto formulation: It is indeterminate whether there are exactly 29 things. (The quantifiers here to be construed unrestrictedly).

I also like to press the case against assuming that all theories of metaphysical indeterminacy must be logically revisionary (endorsing some kind of multi-valued logic). I don’t think the implication works in either direction: multi-valued logics can be part of a semantic theory of indeterminacy; and some settings for thinking about metaphysical indeterminacy are fully classical.

I finish off with a brief review of the basics of Evans’ argument, and the sort of arguments (like the one from Weatherson in the previous post) that might convert metaphysical vagueness of apparently unrelated forms into metaphysically vague identity, arguably susceptible to Evans’ argument.

From vague parts to vague identity

(Update: as Dan notes in the comment below, I should have clarified that the initial assumption is supposed to be that it’s metaphysically vague what the parts of Kilimanjaro (Kili) are. Whether we should describe the conclusion as deriving a metaphysically vague identity is a moot point.)

I’ve been reading an interesting argument that Brian Weatherson gives against “vague objects” (in this case, meaning objects with vague parts) in his paper “Many many problems”.

He gives two versions. The easiest one is the following. Suppose it’s indeterminate whether Sparky is part of Kili, and let K+ and K- be the usual minimal variations of Kili (K+ differs from Kili only in determinately containing Sparky, K- only by determinately failing to contain Sparky).

Further, endorse the following principle (scp): if A and B coincide mereologically at all times, then they’re identical. (Weatherson’s other arguments weaken this assumption, but let’s assume we have it, for the sake of argument).

The argument then runs as follows:
1. either Sparky is part of Kili, or she isn’t. (LEM)
2. If Sparky is part of Kili, Kili coincides at all times with K+ (by definition of K+)
3. If Sparky is part of Kili, Kili=K+ (by 2, scp)
4. If Sparky is not part of Kili, Kili coincides at all times with K- (by definition of K-)
5. If Sparky is not part of Kili, Kili=K- (by 4, scp).
6. Either Kili=K+ or Kili=K- (1, 3, 5).

At this point, you might think that things are fine. As my colleague Elizabeth Barnes puts it in this discussion of Weatherson’s argument, you might simply think that only the following has been established: that it is determinate that either Kili=K+ or Kili=K-, but that it is indeterminate which.

I think we might be able to get an argument for this. First of all, presumably all the premises of the above argument hold determinately. So the conclusion holds determinately. We’ll use this in what follows.

Suppose that D(Kili=K+). Then it would follow that Sparky was determinately a part of Kili, contrary to our initial assumption. So ~D(Kili=K+). Likewise ~D(Kili=K-).

Can it be that they are determinately distinct? If D(~Kili=K+), then assuming that (6) holds determinately, D(Kili=K+ or Kili=K-), we can derive D(Kili=K-), which contradicts what we’ve already proven. So ~D(~Kili=K+) and likewise ~D(~Kili=K-).

So the upshot of the Weatherson argument, I think, is this: it is indeterminate whether Kili=K+, and indeterminate whether Kili=K-. The moral: vagueness in composition gives rise to vague identity.

Of course, there are well known arguments against vague identity. Weatherson doesn’t invoke them, but once he reaches (6) he seems to think the game is up, for what look to be Evans-like reasons.

My working hypothesis at the moment, however, is that whenever we get vague identity in the sort of way just illustrated (inherited from other kinds of ontic vagueness), we can wriggle out of the Evans reasoning without significant cost. (I go through some examples of this in this forthcoming paper). The over-arching idea is that the vagueness in parthood, or whatever, can be plausibly viewed as inducing some referential indeterminacy, which would then block the abstraction steps in the Evans proof.

Since Weatherson’s argument is supposed to be a general one against vague parthood, I’m at liberty to fix the case in any way I like. Here’s how I choose to do so. Let’s suppose that the world contains two objects, Kili and Kili*. Kili* is just like Kili, except that determinately, Kili and Kili* differ over whether they contain Sparky.

Now, think of reality as indeterminate between two ways: one in which Kili contains Sparky, the other where it doesn’t. What of our terms “K+” and “K-”? Well, if Kili contains Sparky, then “K+” denotes Kili. But if it doesn’t, then “K+” denotes Kili*. Mutatis mutandis for “K-”. Since it is indeterminate which option obtains, “K+” and “K-” are referentially indeterminate, and one of the abstraction steps in the Evans proof fails.

Now, maybe it’s built into Weatherson’s assumptions that the “precise” objects like K+ and K- exist, and perhaps we could still cause trouble. But I’m not seeing cleanly how to get it. (Notice that you’d need more than just the axioms of mereology to secure the existence of [objects determinately denoted by] K+ and K-: Kili and Kili* alone would secure the truth that there are fusions including Sparky and fusions not including Sparky). But at this point I think I’ll leave it for others to work out exactly what needs to be added…

Seduction and the sorites

Consider a red-yellow sorites sequence. Famously, “There is a red patch right next to a non-red patch” looks awful. But deny it (assert its negation) and you have the major premise of the sorites paradox. Plenty of theorists want to say that the “sharp boundary” sentence turns out to be true. They then face the burden of saying why it’s unacceptable. Call that the burden of explaining the seductiveness of the sorites paradox.
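(To fix ideas for what follows, suppose the sequence runs through patches 1, ..., N, with patch 1 clearly red and patch N clearly not, and write R(n) for "patch n is red".)

\[
\begin{aligned}
&\text{Sharp boundary claim (the existential):} && \exists n\,(R(n) \land \lnot R(n{+}1))\\
&\text{Major premise (its negation):} && \forall n\,(R(n) \to R(n{+}1))\\
&\text{The short classical proof:} && R(1),\ \lnot R(N)\ \vdash\ \exists n\,(R(n) \land \lnot R(n{+}1))
\end{aligned}
\]

The theorists in question count the first line true, and so owe us a story about why it nonetheless looks awful.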

There is a fair amount of discussion of this kind of thing, and I have my own favourites. But in reading the literature, I keep coming across one particular line. It is to explain, on the basis of your favoured theory of vagueness, why we should think that each instance of the existential is false. So, theorists explain why we’d be confident that this isn’t a red patch next to a non-red patch, and that isn’t a red patch next to a non-red patch. And so on throughout the series.

However, there’s something suspicious about that strategy. Consider the situation that generates the preface “paradox”. Of each sentence I write in my book, I’m highly confident that it’s true. But on the basis of general considerations, I’m highly confident that there’s some sentence somewhere in it that’s false.

Suppose we accept that, of each pair in the sorites series, we have grounds for thinking that the red/non-red boundary is not located there. Still, we have excellent general grounds (e.g. a short logical proof, from obvious premises using apparently uncontroversial principles) for the truth of the existential claim that the boundary is located somewhere. So far, it looks like we should be in something like the preface situation: comfortable with the existential claim that there is a cut-off somewhere (/there is an error somewhere in the book) while disbelieving each instance, that the cut-off is here (/the error occurs in this sentence).
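(A toy calculation, just to illustrate the structure; the independence assumption and the numbers are mine, for illustration only. With 500 instances of "the cut-off is not here", each believed to degree 0.99:)

\[
\Pr(\text{no cut-off anywhere}) = 0.99^{500} \approx 0.0066,
\qquad
\Pr(\text{a cut-off somewhere}) \approx 0.993.
\]

So high confidence, of each pair, that the boundary isn't there is perfectly compatible with near-certainty that it is somewhere. That is the preface structure.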

But, of course, the situation with the sorites is strikingly not like this. Despite the apparently compelling general grounds we can give for the truth of the existential, most of us find it really hard to believe.

The trouble is this: the simple fact that each instance of an existential appears false does not in general lead us to believe that the existential itself is false (the preface situation illustrates this). So there must be something special about the sorites case that makes the move seem compelling in this case. And I can’t see that the authors that I’ve been reading explain what that is.

(A variation on this theme occurs in Graff Fara's "Shifting sands". Roughly, she gives a contextualist(-ish) story about why each instance asserting that the cut-off is not here will be true. She then says that it is "no wonder" that we count the universal generalization (the major premise of the sorites) as true.

But again, it's hard to see what general pattern of inference this falls into (remembering that it has to be one so compelling that it survives confrontation with a short proof of the truth of the existential). After all, as I look around my room, the following are successively true: "my chair is currently visible", "my table is currently visible", "my cabinet is currently visible", etc. I feel no temptation to generalize to "all of the medium-sized objects in my room are currently visible". I have reasons to think this general statement false, and that totally swamps any tendency to generalize from the various instances. So again, the real question here is to explain why something similar doesn't happen in the sorites. And I don't see that question being addressed.)

Against against against vague existence

Carrie Jenkins recently posted on Ted Sider's paper "Against Vague Existence".

Suppose you think it's vague whether some collection of cat-atoms composes some further thing (perhaps because you're an organicist about composition, and it's vague whether kitty is still living). It's then natural to think that there'll be corresponding vagueness in the range of the (unrestricted) first-order quantifier: it might be vague whether it ranges over one billion and fifty-five things or one billion and fifty-six things, for example, with the putative one-billion-and-fifty-sixth entity being kitty, if she still exists. Sider thinks there are insuperable problems for this view; Carrie thinks the problems can be avoided. Below the fold, I present a couple of problems for (what I take to be) Carrie's way of addressing the Sider-challenge.

Sider's interested in "precisificational" theories of vagueness, such as supervaluationism and (he urges) epistemicism. The vagueness of an expression E consists in there being multiple ways in which the term could be made precise: ways between which, perhaps, the semantic facts don't select (supervaluationism), or among which we can't discriminate the uniquely correct one (epistemicism). (On my account, ontic vagueness turns out to be precisificational too.) The trouble is alleged to be that vague existence claims can't fit this model. One underlying idea is that multiple precisifications of an unrestricted existential quantifier would have to involve different domains: perhaps precisification E1 has domain D1, whereas precisification E2 has domain D2, which is larger since it includes everything in D1, plus one extra thing: kitty.

But wait! If it is indeterminate whether kitty exists, how can we maintain that the story I just gave is true? When I say “D2 contains one extra thing: kitty”, it seems it should be at best indeterminate whether that is true: for it can only be true if kitty exists. Likewise, it will be indeterminate whether or not the name “kitty” suffers reference-failure.

Ok, so that's what I think of as the core of Sider's argument. Carrie's response is very interesting. I'm not totally sure whether what I'm going to say is really what Carrie intends, so, following standard philosophical practice, I'll attribute what follows to Carrie*. Whereas you'd standardly formulate a semantics using relativized semantic relations, e.g. "N refers to x relative to world w, time t, precisification p", Carrie* proposes that we replace the relativization with an operator. So the clause for the expression N might look like: "At world w, at time t, at precisification p, N refers to x". In particular, we'll say:

“At precisification 1, “E” ranges over the domain D1;
At precisification 2, “E” ranges over the domain D1+{kitty}.”

In the metalanguage, “At p” works just as it does in the object language, binding any quantifiers within its scope. So, when within the scope of the “At precisification 2” operator, the metalinguistic name “kitty” will have reference, and, again within the scope of that operator, the unrestricted existential quantifier will have kitty within its range.

This seems funky so far as it goes. It’s a bit like a form of modalism that takes “At w” as the primitive modal operator. I’ve got some worries though.

Here's the first. A burden on Carrie*'s approach (as I'm understanding it) will be to explain under what circumstances a sentence is true simpliciter. Usually, this is just done by quantifying into the parameter position of the parameterized truth-predicate, i.e.

“S” is true simpliciter iff for all precisifications p, “S” is true relative to p.

What’s the translation of this into the operator account? Maybe something like:

“S” is true simpliciter iff for all precisifications p, At p “S” is true.

For this to make sense, "p" has to be a genuine metalinguistic variable. And this undermines some of the attractions of Carrie*'s account: it looks like we'll now have the burden of explaining what "precisifications" are (the sort of thing that Sider is pushing for in his comments on Carrie's post). More attractive is the "modalist" position where "At p" is a primitive idiom. Perhaps then the following could be offered:

“S” is true simpliciter iff for all precisification-operators O, [O: “S” is true].

My second concern is the following: I’m not sure how the proposal would deal with quantification into a “precisification” context. E.g. how do we evaluate the following metalanguage sentence?

“on precisification 2, there is an x such that x is in the range of “E”, and on precisification 1, x is not within the range of “E””

The trouble is that, for this to be true, it looks like kitty has to be assigned as the value of "x". But the third occurrence of "x" lies within the scope of "on precisification 1". On the most natural formulation, for "on precisification 1, x is not within the range of "E"" to be true on an assignment, the object assigned to "x" would have to be within the range of the unrestricted existential quantifier at precisification 1. But kitty isn't! There might be a technical fix here, but I can't see it at the moment. Here's the modal analogue: let a be the actual world, and b be a merely possible world where I don't exist. What should the modalist say about the following?

“At a, there is an object x (identical to Robbie) and At b, nothing is identical to x”

Again, for this to be true, we require the open sentence "At b, nothing is identical to x" to be true relative to an assignment on which some object not existing at b is the value of "x". And I'm just not sure that we can make sense of this without allowing ourselves the resources to define a "precisification-neutral" quantifier within the metalanguage, in terms of which Sider's original complaint could be reintroduced.
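(To put the worry in symbols: here is the modal analogue with the satisfaction conditions made explicit. The clause sketched below is my reconstruction of the natural option, not something Carrie* or the modalist is committed to.)

\[
\text{At } a{:}\ \exists x\,\big(x=\text{Robbie} \ \land\ \text{At } b{:}\ \lnot\exists y\,(y=x)\big)
\]

For this to come out true, the open formula "At b: ¬∃y(y=x)" must be satisfied by an assignment σ with σ(x) = Robbie. But within the scope of "At b", quantifiers range over b's domain, and if satisfaction under "At b" is defined only for assignments whose values are drawn from that domain, the clause goes silent on this case. The obvious repair, letting assignments take values from a domain-neutral pool over which the metalanguage can quantify, just is the kind of "precisification-neutral" (here, world-neutral) quantification that lets Sider's original complaint back in.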