I’ve just spent the afternoon thinking about an error I found in my paper “supervaluational consequence” (see this previous post). I’ve figured out how to patch it now, so I thought I’d blog about it.

The background is the orthodox view that supervaluational consequence will lead to revisions of classical logic. The strongest case I know for this (due to Williamson) is the following. Consider the claim “p&~Determinately(p)”. This (it is claimed) cannot be true on any serious supervaluational model of our language. Equivalently, you can’t have p and ~Determinately(p) both true in a single model. If classical reductio were an ok rule of inference, therefore, you’d be able to argue from ~Determinately(p) to ~p. But nobody thinks that’s supervaluationally valid: any indeterminate sentence will be a counterexample to it. So classical reductio should be given up.

This is stronger than the more commonly cited argument: that supervaluational semantics vindicates the move from p to Determinately(p), but not the material conditional “if p then Determinately(p)” (a counterexample to conditional proof). The reason is that, if “Determinately” itself is vague, arguably the supervaluationist won’t be committed to the former move. The key thought here is that as well as interpretations that are determinately sharpenings of our language, there may be interpretations which are borderline-sharpenings. Perhaps interpretation X is an “admissible interpretation of our language” on some sharpenings, but not on others. If p is true at all the definite sharpenings, but false at X, then that may lead to a situation where p is supertrue, but Determinately(p) isn’t.

But orthodoxy says that this sort of situation (non-transitivity in the accessibility relation among interpretations of our language) does nothing to undermine the case for revisionism I mentioned in the first paragraph.

One thing I do in the paper is construct what seems to me a reasonable-looking toy semantics for a language, on which one can have both p and ~Determinately p. Here it is.

Suppose you have five colour patches, ranging from red to orange (non-red). Call them A,B,C,D,E.

Suppose that our thought and talk makes it the case that only interpretations which put the cut-off between B and D are determinately “sharpenings” of the language we use. Suppose, however, that there’s some fuzziness around what it is to be an “admissible interpretation”. For example, an interpretation that places the cut-off between B and C thinks that both interpretations placing the cut-off between C and D, and interpretations placing the cut-off between A and B, are admissible. And likewise, an interpretation that places the cut-off between C and D thinks that interpretations placing the cut-off between B and C are admissible, but also that interpretations placing the cut-off between D and E are admissible.

Modelling the situation with four interpretations, labelled AB, BC, CD, DE, for where they place the red/non-red cut-off, we can express the thought like this: each interpretation accesses (thinks admissible) itself and its immediate neighbours, but nothing else. But only BC and CD are the sharpenings.

My first claim is that all this is a perfectly coherent toy model for the supervaluationist: nothing dodgy or “unintended” is going on.

Now let’s think about the truth values assigned to particular claims. Notice, to start with, that the claim “B is red” will be true at each sharpening. The claim “Determinately, B is red” will be true at the sharpening CD, but it won’t be true at the sharpening BC, for that accesses an interpretation on which B counts as non-red (viz. AB).

Likewise, the claim “D is not red” will be true at each sharpening, but “Determinately, D is not red” will be true at the sharpening BC, but fails at CD, due to the latter seeing the (non-sharpening) interpretation DE, at which D counts as red.

In neither of these atomic cases do we find “p and ~Det(p)” coming out true (that’s where I made a mistake previously). But by considering the following, we can find such a case:

Consider “B is red and D is not red”. It’s easy to see that this is true at each of the sharpenings, from what’s been said above. But also “Determinately(B is red and D is not red)” is false at each of the sharpenings. It’s false at BC because of the accessible interpretation AB at which B counts as non-red. It’s false at CD because of the accessible interpretation DE at which D counts as red.

So we’ve got “B is red and D is not red, & ~Determinately(B is red and D is not red)”. And we’ve got that in a perfectly reasonable toy model for a language of colour predicates.
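For readers who like to see such things checked mechanically, here is a minimal Python sketch of the toy model just described. All the names here (`red_under`, `access`, `sharpenings`, and so on) are my own labels for the purposes of illustration, not anything from the paper.

```python
# Each interpretation is named by where it places the red/non-red
# cut-off; red_under[i] is the set of patches counting as red on i.
red_under = {
    "AB": {"A"},
    "BC": {"A", "B"},
    "CD": {"A", "B", "C"},
    "DE": {"A", "B", "C", "D"},
}

# Each interpretation deems admissible itself and its immediate
# neighbours, and nothing else: a non-transitive accessibility relation.
access = {
    "AB": {"AB", "BC"},
    "BC": {"AB", "BC", "CD"},
    "CD": {"BC", "CD", "DE"},
    "DE": {"CD", "DE"},
}

# Only BC and CD are sharpenings.
sharpenings = {"BC", "CD"}

# Sentences are modelled as functions from interpretations to booleans.
def det(sentence):
    """Determinately(s): true at i iff s is true at everything i accesses."""
    return lambda i: all(sentence(j) for j in access[i])

def supertrue(sentence):
    """Supertruth: truth at every sharpening."""
    return all(sentence(i) for i in sharpenings)

b_red = lambda i: "B" in red_under[i]
d_not_red = lambda i: "D" not in red_under[i]
conj = lambda i: b_red(i) and d_not_red(i)

# The atomic cases: each conjunct is supertrue, but each Det-claim
# holds at one sharpening and fails at the other.
assert supertrue(b_red) and supertrue(d_not_red)
assert det(b_red)("CD") and not det(b_red)("BC")
assert det(d_not_red)("BC") and not det(d_not_red)("CD")

# The conjunction: supertrue, yet Determinately(conj) fails at BOTH
# sharpenings, so "conj & ~Det(conj)" is itself supertrue.
assert supertrue(conj)
assert not det(conj)("BC") and not det(conj)("CD")
assert supertrue(lambda i: conj(i) and not det(conj)(i))
```

Running this, every assertion passes: the conjunction is supertrue while its Determinately-claim fails at both sharpenings, exactly as claimed above.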

(Why do people think otherwise? Well, the standard way of modelling the consequence relation in settings where the accessibility relation is non-transitive is to think of the sharpenings as *all the interpretations accessible from some designated interpretation*. And that imposes additional structure which, for example, the model just sketched doesn’t satisfy. But the additional structure seems to me totally unmotivated, and I provide an alternative framework in the paper for freeing oneself from those assumptions. The key thing is not to try to define “sharpening” in terms of the accessibility relation.)

The conclusion: the best extant case for (global) supervaluational consequence being revisionary fails.

I don’t know this literature, so I could be missing something obvious, but I was wondering about something you say on p.3:

“In a supervaluational language without the D-operator and its relatives, there is no special threat to the classical modes of inference.”

But what about the following case: let p be some indeterminate sentence.

(P1) p

(P2) ~p

Therefore, p & ~p

In classical logic, we have what could be called (and I think has been somewhere) “backwards falsehood preservation,” i.e., if the conclusion of an argument is false, then at least one of the premises is false. But in the argument above, the conclusion is superfalse, but none of the premises are (super)false.
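To make the counterexample concrete, here is a self-contained sketch using the two sharpenings of the colour-patch model from the post, with p the indeterminate sentence “C is red” (true on CD, false on BC); the names are my own.

```python
# Supertruth/superfalsity over the two sharpenings BC and CD.
sharpenings = {"BC", "CD"}

# p = "C is red": true on interpretation CD, false on BC.
p = lambda i: i == "CD"
not_p = lambda i: not p(i)
p_and_not_p = lambda i: p(i) and not p(i)   # the conclusion of the argument

def supertrue(s):
    return all(s(i) for i in sharpenings)

def superfalse(s):
    return all(not s(i) for i in sharpenings)

# The conclusion p & ~p is superfalse (false on every sharpening)...
assert superfalse(p_and_not_p)
# ...but neither premise is superfalse: backwards falsity
# preservation fails, with no D-operator in sight.
assert not superfalse(p)
assert not superfalse(not_p)
```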

So it seems to me that this is a ‘threat to our classical modes of inference’ that makes no use of the D-operator.

Interesting. We do have backwards preservation of untruth, but (as you note) not backwards preservation of falsity.

I’ll have to think about this some more. One issue concerns what we mean by “revisionism”. I guess the crude picture about the standardly alleged revisionism is that there’s a bit of inferential practice (e.g. believing a conditional on the basis of a logical argument from antecedent to consequent) which is “licensed” by the classical system, but not licensed by the supervaluational system. What we’d need to do for a clean hit here is to find a corresponding bit of inferential practice (not involving reasoning from propositions involving “D” or “false” etc.) which somehow trades on backwards-falsity preservation.

I’m not sure exactly what the bit of cognitive practice is that is “licensed” by facts about falsity-preservation. So I don’t know how to think about the case very well.

One thing I do make something of in the paper is the (well-known) point that supervaluational consequence departs from classical consequence in a multi-conclusion setting. And I think we can identify something that makes that deserve the name “revision”. The practice in question: if you reject all of c1, c2, c3, what else are you committed to rejecting? And here the classicist licenses transitions (e.g. from the rejection of each particular cut-off claim in a sorites, to the rejection of one or other of the minor premises of the sorites) which the supervaluationist thinks are bad. That’s the kind of revision I do think supervaluationists are committed to.

I wonder whether the backwards-falsity issue might correspond to the same thing. We’re thinking of falsity here as truth of the negation, I guess. And so “If C is false then one of p1,…,pn is false” is equivalent to “If ~C is true, then one of ~p1,…,~pn is true”. And that is the intuitive characterization of what it is for ~p1,…,~pn to follow from ~C in a multi-conclusion logic. So it looks to me that a classical inference will have backwards falsity preservation iff the dual argument in the multi-conclusion setting preserves truth. So the failure of backwards falsity preservation might be seen as a reflection of what happens in the multi-conclusion setting. But it does seem to me that to explain why this might be “revisionary” of inferential practice, the way to think about it is via the multi-conclusion setting.
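The duality can be checked on the earlier example. Below is a hedged sketch (names and helper functions are mine, not from the paper): the argument p, ~p ⊢ p&~p fails backwards falsity preservation, and its dual, ~(p&~p) ⊢ ~p, ~~p, fails multi-conclusion truth preservation, over supertruth on the two sharpenings.

```python
# Supertruth over the two sharpenings BC and CD of the colour model.
sharpenings = {"BC", "CD"}
p = lambda i: i == "CD"          # an indeterminate sentence ("C is red")

def supertrue(s):
    return all(s(i) for i in sharpenings)

def superfalse(s):
    return all(not s(i) for i in sharpenings)

def backwards_falsity(premises, conclusion):
    """Holds iff: if the conclusion is superfalse, some premise is superfalse."""
    return (not superfalse(conclusion)) or any(superfalse(q) for q in premises)

def multi_conclusion(premises, conclusions):
    """Global consequence: if every premise is supertrue, some conclusion is."""
    return (not all(supertrue(q) for q in premises)) \
        or any(supertrue(c) for c in conclusions)

neg = lambda s: (lambda i: not s(i))
conj = lambda a, b: (lambda i: a(i) and b(i))

# The argument p, ~p |- p & ~p ...
premises, conclusion = [p, neg(p)], conj(p, neg(p))
# ... fails backwards falsity preservation, and its dual
# ~(p & ~p) |- ~p, ~~p fails multi-conclusion truth preservation:
# the two failures go together, as the duality claim predicts.
assert not backwards_falsity(premises, conclusion)
assert not multi_conclusion([neg(conclusion)], [neg(q) for q in premises])
```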

But maybe I’m missing something, and there’s some way of identifying the “cash value” of backwards falsity preservation more directly. I’d be very interested to hear about that…

All this isn’t to deny that there’re senses of “departures from classical logic” on which the way to put the point is more obvious. E.g. suppose one were to *define* consequence as backwards-falsity rather than forwards-truth preservation (as you can in the classical setting), then you wouldn’t get classical multi-premise logic out. And of course—most obviously—there’s a property of the classical consequence relation which isn’t a property of the supervaluational one: that of backwards-preserving falsity. That’s a “revision” of the metatheory of consequence rather than at the level of e.g. what arguments and modes of inference are classified as good or bad (as are the standard results). So it’s a bit different from failures of reductio and the like.

One danger here is to try to make a connection from backwards falsity preservation to inferential practice directly by the following pattern: “if one knows that C is false, and knows that C globally follows from P, then classically one could infer that one of P is False”. That’s (meta) reasoning, alright. But it uses a falsity-predicate or operator which is tantamount to using D-operator (think of False(p)=D~p). So this kind of “inferential practice” isn’t D-free in the way required to make the point.

Thanks for raising the issue: bunch of interesting things to think about!