
Must, Might and Moore.

I’ve just been enjoying reading a paper by Thony Gillies. One thing that’s very striking is the dilemma he poses—quite generally—for “iffy” accounts of “if” (i.e. accounts that see English “if” as expressing a sentential connective, pace Kratzer’s restrictor account).

The dilemma is constructed around finding a story that handles the interaction between modals and conditionals. The prima facie data is that the following pairs are equivalent:

  • If p, must q
  • If p, q

and

  • If p, might q
  • Might (p&q)

The dilemma proceeds by first looking at whether you want to say that the modals scope over the conditional or vice versa, and then (on the view where the modal is wide-scoped) looking into the details of how the “if” is supposed to work and showing that one or other of the pairs comes out inequivalent. The suggestion in the paper is that if we have the right theory of context-shiftiness and narrow-scope the modals, then we can be faithful to the data. I don’t want to take issue with that positive proposal. I’m just a bit worried about the alleged data itself.
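
To fix ideas, here’s a schematic rendering (mine, not Gillies’s), with ⇒ standing in for whatever connective the iffy theorist posits. The prima facie equivalences are

\[
(p \Rightarrow \Box q) \dashv\vdash (p \Rightarrow q)
\qquad\text{and}\qquad
(p \Rightarrow \Diamond q) \dashv\vdash \Diamond(p \wedge q),
\]

and the scope question is whether the right logical form is the wide-scope \(\Box(p \Rightarrow q)\) or the narrow-scope \(p \Rightarrow \Box q\).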

It’s a really familiar tactic, when presented with a putative equivalence that causes trouble for your favourite theory, to say that the pairs aren’t equivalent at all, but can be “reasonably inferred” from each other (think of various ways of explaining away “or-to-if” inferences). But taken cold such pragmatic explanations can look a bit ad hoc.

So it’d be nice if we could find independent motivation for the inequivalence we need. In a related setting, Bob Stalnaker uses the acceptability of Moorean-patterns to do this job. To me, the Stalnaker point seems to bear directly on the Gillies dilemma above.

Before we even consider conditionals, notice that “p but it might be that not p” sounds terrible. Attractive story: this is because you shouldn’t assert something unless you know it to be true; and to say that p might not be the case is (inter alia) to deny you know it. One way of bringing out the pretty obviously pragmatic nature of the tension in uttering the conjunction here is to note that asserting the following sort of thing looks much much better:

  • it might be that not p; but I believe that p

(“I might miss the train; but I believe I’ll just make it”). The point is that whereas asserting “p” is appropriate only if you know that p, asserting “I believe that p” (arguably) is appropriate even if you know you don’t know it. So looking at these conjunctions and figuring out whether they sound “Moorean” seems like a nice way of filtering out some of the noise generated by knowledge-rules for assertion.

(I can sometimes still hear a little tension in the example: what are you doing believing that you’ll catch the train if you know you might not? But for me this goes away if we replace “I believe that” with “I’m confident that” (which still, in vanilla cases, gives you Moorean phenomena). I think that in the examples given below, residual tension can be eliminated in the same way. I’m sure the folks who work on norms of assertion have explored this sort of territory extensively.)

That’s the prototypical case. Let’s move on to examples where there are more moving parts. David Lewis famously alleged that the following pair are equivalent:

  • it’s not the case that: if it were that p, it would have been that q
  • if it were that p, it might have been that ~q
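
(For reference, this is just duality in Lewis’s system: writing □→ for the would-counterfactual and ◇→ for the might-counterfactual, Lewis defines the might-conditional by

\[
(p \mathbin{\Diamond\!\!\to} q) =_{df} \neg(p \mathbin{\Box\!\!\to} \neg q),
\qquad\text{whence}\qquad
\neg(p \mathbin{\Box\!\!\to} q) \equiv (p \mathbin{\Diamond\!\!\to} \neg q).
\]

The LaTeX rendering here is mine.)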

Stalnaker thinks that this is wrong, since instances of the following sound ok:

  • if it were that p, it might have been that not q; but I believe that if it were that p, it would have been that q.

Consider for example: “if I’d left only 5 mins to walk down the hill, (of course!) I might have missed the train; but I believe that, even if I’d only left 5 mins, I’d have caught it.” That sounds totally fine to me. There are a few decorations to that speech (“even”, “of course”, “only”). But I think the general pattern here is robust, once we fill in the background context. Stalnaker thinks this cuts against Lewis, since if mights and woulds were obvious contradictories, then the latter speech would be straightforwardly equivalent to something of the form “A and I don’t believe that A”. But things like that sound terrible, in a way that the speech above doesn’t.

We find pretty much the same cases for “must” and indicative “if”.

  • It’s not true that if p, then it must be that q; but I believe that if p, q.

(“it’s not true that if Gerry is at the party, Jill must be too—Jill sometimes gets called away unexpectedly by her work. But nevertheless I believe that if Gerry’s there, Jill’s there.”). Again, this sounds ok to me; but if the bare conditional and the must-conditional were straightforwardly equivalent, surely this should sound terrible.

These sorts of patterns make me very suspicious of claims that “if p, must q” and “if p, q” are equivalent, just as the analogous patterns make me suspicious of the Lewis idea that “if p, might ~q” and “if p, would q” are contradictories when the “if” is subjunctive. So I’m thinking the horns of Gillies’ dilemma aren’t equally costly: denying the must-conditional/bare-conditional equivalence is independently motivated.

None of this is meant to undermine the positive theory that Thony Gillies is presenting in the paper: his way of accounting for lots of the data looks super-interesting, and I’ve got no reason to suppose his positive story won’t have something to say about everything I’ve raised here. I’m just wondering whether the dilemma that frames the debate should suck us in.

Degrees of belief and supervaluations

Suppose you’ve got an argument with one premise and one conclusion, and you think it’s valid. Call the premise p and the conclusion q. Plausibly, constraints on rational belief follow: in particular, you can’t rationally have a lesser degree of belief in q than you have in p.

The natural generalization of this to multi-premise cases is that if p1…pn|-q, then your degree of disbelief in q can’t rationally exceed the sum of your degrees of disbelief in the premises.

FWIW, there’s a natural generalization to the multi-conclusion case too (a multi-conclusion argument is valid, roughly, if the truth of all the premises secures the truth of at least one conclusion). If p1…pn|-q1…qm, then the sum of your degrees of disbelief in the conclusions can’t rationally exceed the sum of your degrees of disbelief in the premises.
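
In symbols (my formulation of the constraints just stated): writing Cr for rational credence and d(x) = 1 − Cr(x) for degree of disbelief,

\[
\begin{aligned}
p \vDash q &\;\Longrightarrow\; d(q) \le d(p)\\
p_1,\dots,p_n \vDash q &\;\Longrightarrow\; d(q) \le \textstyle\sum_{i=1}^{n} d(p_i)\\
p_1,\dots,p_n \vDash q_1,\dots,q_m &\;\Longrightarrow\; \textstyle\sum_{j=1}^{m} d(q_j) \le \textstyle\sum_{i=1}^{n} d(p_i)
\end{aligned}
\]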

What I’m interested in at the moment is to what extent this sort of connection can be extended to non-classical settings. In particular (and connected with the last post) I’m interested in what the supervaluationist should think about all this.

There’s a fundamental choice to be made at the get-go. Do we think that “degrees of belief” in sentences of a vague language can be represented by a standard classical probability function? Or do we need to be a bit more devious?

Let’s take a simple case. Construct the artificial predicate B(x), so that numbers less than 5 satisfy B, and numbers greater than 5 fail to satisfy it. We’ll suppose that it is indeterminate whether 5 itself is B, and that supervaluationism gives the right way to model this.

First observation. It’s generally accepted that, for the standard supervaluationist:

p&~Det(p) |- absurdity.

Given this and the constraints on rational credence mentioned earlier, we’d have to conclude that my credence in B(5)&~Det(B(5)) must be 0. I have credence 0 in absurdity; and my degree of disbelief in the conclusion of this valid argument (namely, 1) must not exceed my degree of disbelief in its premise.

Let’s think that through. Notice that in this case, my credence in ~Det(B(5)) can be taken to be 1. So given minimal assumptions about the logic of credences, my credence in B(5) must be 0.
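
Spelled out, on the assumption that Cr is a classical probability function:

\[
Cr(B(5)) = Cr\big(B(5) \wedge \neg Det(B(5))\big) + Cr\big(B(5) \wedge Det(B(5))\big) \le 0 + Cr\big(Det(B(5))\big) = 0,
\]

where the first summand is 0 by the validity above, and Cr(Det(B(5))) = 0 because Cr(~Det(B(5))) = 1.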

A parallel argument running from ~B(5)&~Det(~B(5)) |- absurdity gives us that my credence in ~B(5) must be 0.

Moreover, supervaluationism endorses all classical tautologies. So in particular we have the validity: |- B(5) v ~B(5). The standard constraint in this case tells us that rational credence in this disjunction must be 1. And so we have a disjunction in which we have credence 1, each disjunct of which we have credence 0 in. (Compare the standard observation that supervaluational disjunctions can be non-prime: the disjunction can be true when neither disjunct is.)
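
To see the non-primeness concretely, here’s a tiny computational sketch (my own toy encoding, not anything from the literature): model the two admissible sharpenings of B as the two classical cut-offs around 5, and define supertruth as truth on every sharpening.

# Each sharpening assigns B a classical extension; they differ only over 5.
sharpenings = [
    lambda x: x < 5,   # sharpening 1: 5 is not B
    lambda x: x <= 5,  # sharpening 2: 5 is B
]

def supertrue(sentence):
    """A sentence is supertrue iff it is true on every sharpening."""
    return all(sentence(s) for s in sharpenings)

b5     = lambda s: s(5)               # "B(5)"
not_b5 = lambda s: not s(5)           # "~B(5)"
disj   = lambda s: s(5) or not s(5)   # "B(5) v ~B(5)"

print(supertrue(b5))      # False -- B(5) is not supertrue
print(supertrue(not_b5))  # False -- ~B(5) is not supertrue either
print(supertrue(disj))    # True  -- yet the disjunction is supertrue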

This is a fairly direct argument that something non-classical has to be going on with the probability calculus. One move at this point is to consider Shafer functions (which I know little about: but see here). Now maybe that works out nicely, maybe it doesn’t. But I find it kinda interesting that the little constraint on validity and credences gets us so quickly into a position where something like this is needed if the constraint is to work. It also gives us a recipe for arguing against standard supervaluationism: argue against the Shafer-function-like behaviour in our degrees of belief, and you’ll ipso facto have an argument against supervaluationism. For this, the probabilistic constraint on validity is needed (as far as I can see): for it’s this that makes the distinctive features mandatory.
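
For what it’s worth, here’s the basic shape of a Shafer-style belief function (this is just the textbook definition; whether it really serves the supervaluationist is exactly what’s at issue): beliefs are generated from a mass function m over subsets of the space of possibilities by

\[
Bel(A) = \sum_{X \subseteq A} m(X).
\]

With just two possibilities, a p-world and a ~p-world, and all the mass on the two-element set, we get Bel(p) = Bel(~p) = 0 while Bel(p v ~p) = 1: exactly the pattern just derived.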

I’d like to connect this to two other issues I’ve been working on. One is the paper on the logic of supervaluationism cited below. The key thing here is that it raises the prospect of p&~Dp |- absurdity not holding, even for your standard “truth=supertruth” supervaluationist. If that works, the key premise of the argument that forces you to have degree of belief 0 in both an indeterminate sentence ‘p’ and its negation goes missing.

Maybe we can replace it by some other argument. If you read “D” as “it is true that…” as the standard supervaluationist encourages you to, then “p&~Dp” should be read “p&it is not true that p”. And perhaps that sounds to you just like an analytic falsity (it sure sounded to me that way); and analytic falsities are the sorts of things one should paradigmatically have degree of belief 0 in.

But here’s another observation that might give you pause (I owe this point to discussions with Peter Simons and John Hawthorne). Suppose p is indeterminate. Then we have ~Dp&~D~p. And given supervaluationism’s conservatism, we also have pv~p. So by a bit of jiggery-pokery, we’ll get ((p&~Dp) v (~p&~D~p)). But in moods where I’m hyped up thinking that “p&~Dp” is analytically false and terrible, I’m equally worried by this disjunction. But that suggests that the source of my intuitive repulsion here isn’t the sort of thing that the standard supervaluationist should be buying. Of course, the friend of Shafer functions could just say that this is another case where our credence in the disjunction is 1 while our credence in each disjunct is 0. That seems dialectically stable to me: after all, they’ll have *independent* reason for thinking that p&~Dp should have credence 0. All I want to insist is that the “it sounds really terrible” reason for assigning p&~Dp credence 0 looks like it overgeneralizes, and so should be distrusted.

I also think that if we set aside truth-talk, there’s some plausibility in the claim that “p&~Dp” should get non-zero credence. Suppose you’re initially in a mindset where you should be about half-confident of a borderline case. Well, one thing that you absolutely want to say about borderline cases is that they’re neither true nor false. So why shouldn’t you be at least half-confident in the combination of these?

And yet, and yet… there’s the fundamental implausibility of “p&it’s not true that p” (the standard supervaluationist’s reading of “p&~Dp”) having anything other than credence 0. But ex hypothesi, we’ve lost the standard positive argument for that claim. So we’re left, I think, with the bare intuition. But it’s a powerful one, and something needs to be said about it.

Two defensive maneuvers for the standard supervaluationist:

(1) Say that what you’re committed to is just “p & it’s not supertrue that p”. Deny that the ordinary concept of truth can be identified with supertruth (something that, as many have emphasized, is anyway quite plausible given the non-disquotational nature of supertruth). But crucially, don’t seek to replace this with some other gloss on supertruth: just say that supertruth, superfalsity and the gap between them are appropriate successor concepts, and that ordinary truth-talk is appropriate only when we’re ignoring the possibility of the third case. If we disclaim conceptual analysis in this way, then it won’t be appropriate to appeal to intuitions about the English word “true” to kick away independently motivated theoretical claims about supertruth. In particular, we can’t appeal to intuitions to argue that “p & ~supertrue that p” should be assigned credence 0. (There’s a question of whether this should be seen as an error-theory about English “truth”-ascriptions. I don’t see that it needs to be. It might be that the English word “true” latches on to supertruth because supertruth is what best fits the truth-role. On this model, “true” stands to supertruth as “dephlogisticated air”, according to some, stands to oxygen. And so this is still a “truth=supertruth” standard supervaluationism.)

(2) The second maneuver is to appeal to supervaluational degrees of truth. Let the degree of supertruth of S be, roughly, the measure of the precisifications on which S is true. S is supertrue simpliciter when it is true on all the precisifications, i.e. on measure 1 of the precisifications. If we then identify degrees of supertruth with degrees of truth, the contention that truth is supertruth becomes something that many find independently attractive: that in the context of a degree theory, truth simpliciter should be identified with truth to degree 1. (I think that this tendency has something deeply in common with the temptation (following Unger) to think that nothing can be flatter than a flat thing: nothing can be truer than a true thing. I’ve heard people claim that Unger was right to think that a certain class of adjectives in English work this way.)
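
Schematically (my gloss on the proposal): where Π is the space of precisifications and μ a suitable measure over it,

\[
deg(S) = \mu\big(\{\pi \in \Pi : S \text{ is true on } \pi\}\big),
\qquad
S \text{ is supertrue iff } deg(S) = 1.
\]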

I think when we understand the supertruth=truth claim in that way, the idea that “p&~true that p” should be something in which we should always have degree of belief 0 loses much of its appeal. After all, compatibly with “p” not being absolutely perfectly true (=true), it might be something that’s almost absolutely perfectly true. And it doesn’t sound bad or uncomfortable to me to think that one should conform one’s credences to the known degree of truth: indeed, that seems to be a natural generalization of the sort of thing that originally motivated our worries.

In summary. If you’re a supervaluationist who takes the orthodox line on supervaluational logic, then it looks like there’s a strong case for a non-classical take on what degrees of belief look like. That’s a potentially vulnerable point for the theory. If you’re a (standard, global, truth=supertruth) supervaluationist who’s open to the sort of position I sketch in the paper below, prima facie we can run with a classical take on degrees of belief.

Let me finish off by mentioning a connection between all this and some material on probability and conditionals I’ve been working on recently. I think a pretty strong case can be constructed for thinking that for some conditional sentences S, we should be all-but-certain that S&~DS. But that’s exactly of the form that we’ve been talking about throughout: and here we’ve got *independent* motivation to think that this should be high-probability, not probability zero.

Now, one reaction is to take this as evidence that “D” shouldn’t be understood along standard supervaluationist lines. That was my first reaction too (in fact, I couldn’t see how anyone but the epistemicist could deal with such cases). But now I’m thinking that this may be too hasty. What seems right is that (a) the standard supervaluationist with the Shafer-esque treatment of credences can’t deal with this case; but (b) the standard supervaluationist articulated in one of the ways just sketched shouldn’t think there’s an incompatibility here.

My own preference is to go for the degrees-of-truth explication of all this. Perhaps, once we’ve bought into that, the “truth=degree 1 supertruth” element starts to look less important, and we’ll find other useful things to do with supervaluational degrees of truth (a la Kamp, Lewis, Edgington). But I think the “phlogiston” model of supertruth is just about stable too.

[P.S. Thanks to Daniel Elstein, for a paper today at the CMM seminar which started me thinking again about all this.]

Supervaluations and logical revisionism paper

Happy news today: the Journal of Philosophy is going to publish my paper on the logic of supervaluationism. Swift moral. It ain’t logically revisionary; and if it is, it doesn’t matter.

This previous post gives an overview, if anyone’s interested…

Now I’ve just got to figure out how to transmute my beautiful LaTeX symbols into Word…

Fundamental and derivative truths

I’ve posted a new version of my paper “Fundamental and derivative truths“. The new version notes a few more uses for the fundamental/derivative distinction, and clears up a few points.

As before, the paper is concerned with a way of understanding the—initially pretty hard to take—claim that tables exist, but don’t really exist. I think that that claim at least makes good sense, and arguably the distinction between what is really/fundamentally the case and what is merely the case is something we should believe in whether or not we endorse the particular claim about tables. I think in particular that it leads to a particularly attractive view on the nature of set theory, since it really does seem that we do want to be able to “postulate sets into existence” (y’know how things form sets? well consider the set of absolutely everything. On pain of contradiction that set can’t be something that existed beforehand…) The framework I like lets us make sober sense of that.

The current version tidies up a bunch of things; in particular, it pinpoints more explicitly the difference between comparatively “easy cases”—defending the compatibility of set-theoretic truths with a nominalist ontology—and “hard cases”—defending the compatibility of the Moorean corpus with a microphysical mereological nihilist ontology. I’ve got another paper focusing on some of the technicalities of the composition case.

This project causes me much grief, since it involves many many different philosophically controversial areas: philosophy of maths, metaphysics of composition, theory of ontological commitment, philosophy of language and in particular metasemantics, and so forth. That makes it exciting to work on, but hard to present to people in a digestible way. Nevertheless, I’m going to have another go at the CSMN workshop in Oslo later this month, focusing on the philosophy of language/theory of meaning aspects.

Kripkenstein’s monster

Though I’ve thought a lot about inscrutability and indeterminacy (well, I wrote my PhD thesis on it) I’ve always run a bit scared from the literature on Kripkenstein. Partly this is because the literature is so huge and sometimes intimidatingly complex. Partly it’s because I was a bit dissatisfied/puzzled with some of the foundational assumptions that seemed to be around, and was setting it aside until I had time to think things through.

Anyway, I’m now thinking about making a start on thinking about the issue. So this post is something in the way of a plea for information: I’m going to set out how I understand the puzzle involved, and invite people to disabuse me of my ignorance, recommend good readings, or point out where these ideas have already been worked out.

To begin with, let’s draw a rough divide between three types of facts:

  • (A) Paradigmatically naturalistic facts (patterns of assent and dissent, causal relationships, dispositions, etc.).
  • (B) Meaning-facts. (Of the form: “+” means addition, “67+56=123” is true, “Dobbin” refers to Dobbin.)
  • (C) Linguistic norms. (Of the form: one should utter “67+56=123” in such-and-such circs.)

Kripkenstein’s strategy is to ask us to show how facts of kind (A) can constitute facts of kinds (B) and (C). (An oddity here: the debate seems to have centred on a “dispositionalist” account of the move from (A) to (B). But that’s hardly a popular option in the literature on naturalistic treatments of content, where variants of radical interpretation (Lewis, Davidson), causal (Fodor, Field) and teleological (Millikan) theories are far more prominent. Boghossian in his state-of-the-art article in Mind seems to say that these can all be seen as variants of the dispositionalist idea. But I don’t quite understand how. Anyway…)

One of the major strategies in Kripkenstein is to raise doubts about whether this or that constitutive story can really found facts of kind (C). Notice that if one assumes that (B) and (C) are a joint package, then this will simultaneously throw into doubt naturalistic stories about (B).

In what sense might they be a joint package? Well, maybe some sort of constraint like the following is proposed: unless putative meaning-facts make immediately intelligible the corresponding linguistic norms, then they don’t deserve the name “meaning facts” at all.

To see an application, suppose that some of Kripke’s “technical” objections to the dispositionalist position were patched (e.g. suppose one could non-circularly identify a disposition of mine to return the intuitively correct verdicts to each and every arithmetical sum). Still, then, there’s the “normative” objection: why are those verdicts the ones one should return in those circumstances? And (rightly or wrongly) the Kripkenstein challenge is that this normative explanation is missing. So (according to the Kripkean) these ain’t the meaning-facts at all.

There’s one purely terminological issue I’d like to settle at this point. I think we shouldn’t just build it into the definition of meaning-facts that they correspond to linguistic norms in this way. After all, there are lots of other theoretical roles for meaning besides supporting linguistic norms (e.g. a predicative/explanatory role wrt understanding). I propose to proceed as follows. Firstly, let’s speak of “semantic” or “meaning” facts in general (picked out, if you like, via other aspects of the theoretical role of meaning). Secondly, we’ll look for arguments for or against the substantive claim that part of the job of a theory of meaning is to subserve, or make immediately intelligible, or whatever, facts like (C).

Onto details. The Kripkenstein paradox looks like it proceeds on the following assumptions. First, three principles are taken as target (we can think of them as part of a “folk theory” of meaning):

  1. the meaning-facts are exactly as we take them to be: i.e. arithmetical truths are determinate “to infinity”; and
  2. the corresponding linguistic norms are determinate “to infinity” as well; and
  3. (1) and (2) are connected in the obvious way: if S is true, then in appropriate circumstances, we should utter S.

The “straight solutions” seem to tacitly assume that our story should take the following form. First, give some constitutive story about what fixes facts of kind (B), supposing there are no obvious counterexamples, i.e. that the technical challenge is met. Then the Kripkensteinian looks to see whether this “really gives you meaning”, in the sense that we’ve also got a story underpinning (C). Given our earlier discussion, the Kripkensteinian challenge needs to be rephrased somewhat. Put the challenge as follows. First, the straight solution gives a theory of semantic facts, which is evaluated for success on grounds that set aside putative connections to facts of kind (C). Next, we ask: can we give an adequate account of facts of kind (C) on the basis of what we have so far? The Kripkensteinian suggests not.

The “sceptical solution” starts in the other direction. It takes as groundwork facts of kinds (A) and (C) (perhaps explaining facts of kind (C) on the basis of those of kind (A)?) and then uses this in constructing an account of (something like) (B). One Kripkensteinian thought here is to base some kind of vindication of (B)-talk on the (C)-style claim that one ought to utter sentences involving semantic vocabulary such as “‘+’ means addition”.

The basic idea one should be having at this point is more general, however. Rather than start by assuming that facts like (B) are prior in the order of explanation to facts like (C), why not consider other explanatory orderings? Two spring to mind: linguistic normativity and meaning-facts are explained independently; or linguistic normativity is prior in the order of explanation to meaning-facts.

One natural thought in the latter direction is to run a “radical interpretation” line. The first element of a radical interpretation proposal is to identify a “target set” of T-sentences, which the meaning-fixing T-theory for a language is (ceteris paribus) constrained to generate. Davidson suggests we pick the T-sentences by looking at what sentences people de facto hold true in certain circumstances. But, granted (C)-facts, when identifying the target set of T-sentences one might instead appeal to what persons ought to utter in such-and-such circs.

There’s no obvious reason why such normative facts need be construed as themselves “semantic” in nature, nor any obvious reason why the naturalistically minded shouldn’t look for reductions of this kind of normativity (e.g. it might be a normativity on a par with that involved in weak hypothetical imperatives, as in the claim that I should eat this food in order to stay alive, which I take to be pretty unscary). So there’s no need to give up on the reductionist project in doing things this way. Nor is it only radical interpretation that could build in this sort of appeal to (C)-type facts in the account of meaning.

One nice thing about building normativity into the subvening base for semantic facts in this way is that we make it obvious that we’ll get something like (a perhaps restricted and hedged form of) (3). Running accounts of (B) and (C) separately would make the convergence of meaning-facts and linguistic norms seem like a coincidence, if it in fact holds in any form at all.

Is there anything particularly sceptical about the setup, so construed? Not in the sense in which Kripke’s own suggestion is. Two things about the Kripke proposal (as I suggested we read it): first, it’s clear that we’ve got some kind of projectionist/quasi-realist treatment of the semantic going on (it’s only the acceptability of semantic claims that’s being vindicated, not “semantic facts” as most naturalistic theories of meaning would conceive them). Further, the sort of norms to which we can reasonably appeal will be grounded in practices of praise and blame in a linguistic community to which we belong, and given the sheer absence of people doing very long sums, there just won’t be a practice of praising and blaming people for uttering “x+y=z” for sufficiently large choices of x, y and z. The linguistic norms we can ground in this way might be much more restricted than one might at first think: maybe only finitely many sentences S are such that something of the following form holds: we should assert S in circs c. Though there might be norms governing apparently infinitary claims, there is no reason to suppose in this setup that there are infinitely many type-(C) facts. That’ll mean that (2) and (3) are dropped.

In sum, Kripke’s proposal is sceptical in two senses: it is projectionist, rather than realist, about meaning-facts. And it drops what one might take to be a central plank of folk-theory of meaning, (2) and (3) above.

On the other hand, the modified radical interpretation or causal theory proposal I’ve been sketching can perfectly well be a realist about meaning-facts, having them “stretch out to infinity” as much as you like (I’d be looking to combine the radical interpretation setting sketched earlier with something like Lewis’s eligibility constraints on correct interpretation, to secure semantic determinacy). So it’s not “sceptical” in the first sense in which Kripke’s theory is: it doesn’t involve any dodgy projectivism about meaning-facts. But it is a “sceptical solution” in the other sense, since it gives up the claims that linguistic norms “stretch out” to infinity, and that truth-conditions of sentences are invariably paired with some such norm.

[Thanks (I think) are owed to Gerald Lang for the title to this post. A quick google search reveals that others have had the same idea…]

Supervaluations and revisionism once more

I’ve just spent the afternoon thinking about an error I found in my paper “supervaluational consequence” (see this previous post). I’ve figured out how to patch it now, so thought I’d blog about it.

The background is the orthodox view that supervaluational consequence will lead to revisions of classical logic. The strongest case I know for this (due to Williamson) is the following. Consider the claim “p&~Determinately(p)”. This (it is claimed) cannot be true on any serious supervaluational model of our language. Equivalently, you can’t have p and ~Determinately(p) both true in a single model. If classical reductio were an OK rule of inference, therefore, you’d be able to argue from ~Determinately(p) to ~p. But nobody thinks that’s supervaluationally valid: any indeterminate sentence will be a counterexample to it. So classical reductio should be given up.
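
Schematically (my rendering of the argument just given):

\[
p,\ \neg Det(p) \vDash \bot
\quad\Longrightarrow\quad
\neg Det(p) \vDash \neg p
\quad\text{(by classical reductio)},
\]

and since any indeterminate sentence is a counterexample to the right-hand entailment, reductio has to go.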

This is stronger than the more commonly cited argument: that supervaluational semantics vindicates the move from p to Determinately(p), but not the material conditional “if p then Determinately(p)” (a counterexample to conditional proof). The reason is that, if “Determinately” itself is vague, arguably the supervaluationist won’t be committed to the former move. The key here is the thought that as well as things that are determinately sharpenings of our language, there may be interpretations which are borderline sharpenings. Perhaps interpretation X is an “admissible interpretation of our language” on some sharpenings, but not on others. If p is true at all the definite sharpenings, but false at X, then that may lead to a situation where p is supertrue, but Determinately(p) isn’t.

But orthodoxy says that this sort of situation (non-transitivity in the accessibility relation among interpretations of our language) does nothing to undermine the case for revisionism I mentioned in the first paragraph.

One thing I do in the paper is construct what seems to me a reasonable-looking toy semantics for a language, on which one can have both p and ~Determinately p. Here it is.

Suppose you have five colour patches, ranging from red to orange (non-red). Call them A,B,C,D,E.

Suppose that our thought and talk makes it the case that only interpretations which put the cut-off between B and D are determinately “sharpenings” of the language we use. Suppose, however, that there’s some fuzziness around in what it is to be an “admissible interpretation”. For example, an interpretation that places the cut-off between B and C thinks that both interpretations placing the cut-off between C and D and interpretations placing the cut-off between A and B are admissible. And likewise, an interpretation that places the cut-off between C and D thinks that interpretations that place the cut-off between B and C are admissible, but also that interpretations that place the cut-off between D and E are admissible.

Modelling the situation with four interpretations, labelled AB, BC, CD, DE, for where they place the red/non-red cut-off, we can express the thought like this: each interpretation accesses (thinks admissible) itself and its immediate neighbours, but nothing else. But BC and CD are the sharpenings.

My first claim is that all this is a perfectly coherent toy model for the supervaluationist: nothing dodgy or “unintended” is going on.

Now let’s think about the truth values assigned to particular claims. Notice, to start with, that the claim “B is red” will be true at each sharpening. The claim “Determinately, B is red” will be true at the sharpening CD, but it won’t be true at the sharpening BC, for that accesses an interpretation on which B counts as non-red (viz. AB).

Likewise, the claim “D is not red” will be true at each sharpening, but “Determinately, D is not red” will be true at the sharpening BC, but fails at CD, due to the latter seeing the (non-sharpening) interpretation DE, at which D counts as red.

In neither of these atomic cases do we find “p and ~Det(p)” coming out true (that’s where I made a mistake previously). But by considering the following, we can find such a case:

Consider “B is red and D is not red”. It’s easy to see that this is true at each of the sharpenings, from what’s been said above. But also “Determinately(B is red and D is not red)” is false at each of the sharpenings. It’s false at BC because of the accessible interpretation AB at which B counts as non-red. It’s false at CD because of the accessible interpretation DE at which D counts as red.

So we’ve got “B is red and D is not red, & ~Determinately(B is red and D is not red)”. And we’ve got that in a perfectly reasonable toy model for a language of colour predicates.
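
To make the claims above easy to check, here’s a small computational sketch of the toy model (my own encoding): patches A–E get indices 0–4, interpretation “XY” places the red/non-red cut-off between patches X and Y, each interpretation accesses itself and its immediate neighbours, and BC and CD are the sharpenings.

# Interpretations named by where they place the cut-off; the number stored
# is the index of the first non-red patch (A=0, B=1, C=2, D=3, E=4).
cutoff = {"AB": 1, "BC": 2, "CD": 3, "DE": 4}
order = ["AB", "BC", "CD", "DE"]
sharpenings = ["BC", "CD"]

def red(patch, interp):
    return patch < cutoff[interp]

def accessible(interp):
    """Each interpretation deems admissible itself and its immediate neighbours."""
    i = order.index(interp)
    return order[max(0, i - 1): i + 2]

def det(sentence, interp):
    """Det(S) holds at an interpretation iff S holds everywhere it accesses."""
    return all(sentence(j) for j in accessible(interp))

B, D = 1, 3
conj = lambda i: red(B, i) and not red(D, i)   # "B is red and D is not red"

for s in sharpenings:
    print(s, conj(s), det(conj, s))
# BC True False  (accessible AB makes B non-red)
# CD True False  (accessible DE makes D red)
# So the conjunction is true, and its Det false, at every sharpening.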

(Why do people think otherwise? Well, the standard way of modelling the consequence relation in settings where the accessibility relation is non-transitive is to think of the sharpenings as *all the interpretations accessible from some designated interpretation*. And that imposes additional structure which, for example, the model just sketched doesn’t satisfy. But the additional structure seems to me totally unmotivated, and I provide an alternative framework in the paper for freeing oneself from those assumptions. The key thing is not to try to define “sharpening” in terms of the accessibility relation.)

The conclusion: the best extant case for (global) supervaluational consequence being revisionary fails.

Gavagai again again

A new version of my discussion of Quine’s “argument from below” is now up online (shorter! punchier! better!) Turns out it was all to do with counterpart theory all along.

Here’s the blurb: Gavagai gets discussed all the time. But (unless I’m missing something in the literature) I’ve never seen an advocate of gavagai-style indeterminacy spell out in detail what exactly the deviant interpretations or translations are that incorporate the different ways of dividing reference (over rabbits, rabbit-stages or undetached rabbit-parts). And without this it is, to say the least, a bit hard to evaluate the supposed counterexamples to such interpretations! So the main job of the paper is to spell out, for a significant fragment of language, what the rival accounts of reference-division amount to.

One audience for the paper (who might not realize they are an audience for it initially) are folks interested in the stage theory/worm theory debate in the philosophy of persistence. The nouveau-Gavagai guy, according to me, is claiming that there’s no fact of the matter whether our semantics is stage-theoretic or worm-theoretic. I think there’s a reasonable chance that he’s right.

Stronger than this: so long as there are both 4D worms and instantaneous temporal parts thereof around (even if they’re “dependent entities” or “rabbit histories” or “mere sums” as opposed to Real Objects), the Gavagai guy asks you to explain why our words don’t refer to those worms or stages rather than whatever entities you think *really are* rabbits (say, enduring objects wholly present at each time).

By the way, even if these semantic indeterminacy results were right, I don’t think that this forecloses the metaphysical debate about which of endurance, perdurance or exdurance is the right account of *persistence*. But I do think that it forces us to think hard about what the difference is between semantic and metaphysical claims, and what sort of reasons we might offer for either.