Having just finished the final revisions to my Phil Compass survey article on Metaphysical indeterminacy and ontic vagueness (penultimate draft available here), I started thinking some more about how those who favour non-classical logics think of their proposal (in particular, people who think that something like the Kleene 3-valued logic, or some continuum-valued generalization of it, is the appropriate setting for analyzing vagueness or indeterminacy).

The way I’ve thought of non-classical treatments in the past is, I think, a natural interpretation of one non-classical picture, and one that’s reasonably widely shared. In this post, I’m going to lay out some of that folklore-y conception of non-classicism. (I won’t attribute views to authors, since I’m starting to wonder whether elements of the folklore conception are characterizations offered by opponents rather than something the nonclassicists should accept; ultimately I want to go back through the literature and check exactly what people really do say in defence of non-classicism.)

Here’s my take on folklore nonclassicism. Where classicists think there are two truth-statuses, non-classicists believe in three, four, or continuum-many truth-statuses (let’s focus on the 3-valued system for now). They might have various opinions about the structure of these truth-statuses; the most common view is that they’re linearly ordered, so that for any two distinct truth-statuses, one is truer than the other. Some sentences (say, “Jimmy is bald”) get a status that’s intermediate between perfect truth and perfect falsity. And if we want to understand the operator “it is indeterminate whether” in such settings, we can basically treat it as a one-place extensional connective: “indeterminate(p)” is perfectly true just in case p has the intermediate status; otherwise it is perfectly false.
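To make the picture concrete, here is a minimal sketch of the strong Kleene connectives and the extensional “indeterminate” operator just described. The numeric encoding of the three statuses (0 for perfect falsity, 0.5 for the intermediate status, 1 for perfect truth) is my own illustrative choice, not anyone’s official formulation.

```python
# Strong Kleene 3-valued connectives, with truth-statuses encoded as
# 0 (perfect falsity), 0.5 (intermediate status), 1 (perfect truth).

def neg(p):
    return 1 - p

def conj(p, q):
    return min(p, q)

def disj(p, q):
    return max(p, q)

def indeterminate(p):
    # One-place extensional connective: perfectly true iff p has the
    # intermediate status, perfectly false otherwise.
    return 1 if p == 0.5 else 0

# A borderline sentence like "Jimmy is bald":
jimmy_bald = 0.5
print(disj(jimmy_bald, neg(jimmy_bald)))  # excluded middle gets the middle status: 0.5
print(indeterminate(jimmy_bald))          # "it is indeterminate whether..." is perfectly true: 1
```

Note that on this encoding “Jimmy is bald or he isn’t” and “Jimmy is both bald and not bald” receive the same value (0.5) at the borderline, which is exactly the parity complaint raised below.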

So interpreted, non-classicism generalizes classicism smoothly. Just as the classicist can think there is an intended interpretation of language (a two-valued model which gets the representational properties of words right), the non-classicist can think there’s an intended interpretation (say, a three-valued model getting the representational features right). And that then dovetails very nicely with a model-theoretic characterization of consequence as truth-preservation under (almost) arbitrary reinterpretations of the language. For if one knows that some pattern is truth-preserving under arbitrary reinterpretations of the language, then that pattern is truth-preserving in the intended interpretation in particular, which is just to say that it preserves truth simpliciter. This forges a connection between validity and the preservation of a status we have all sorts of reason to be interested in: truth. (Of course, one only has to write down this thought to start worrying about the details. Personally, I think this integrated package is tremendously powerful and interesting, deserves detailed scrutiny, and should be given up only as a last resort; but maybe others take a different view.) All this carries over to the non-classicist position described. So, for example, on a Kleene system, validity is a matter of preserving perfect truth under arbitrary reinterpretations; and to the extent we’re interested in reasoning which preserves that status, we’ve got the same reasons as before to be interested in consequence. Of course, one might also think that reasoning which preserves non-perfect-falsity is an interesting thing to think about. And very nicely, we have a systematic story about that too: this non-perfect-falsity sense of validity would be the paraconsistent logic LP (though of course not under an interpretation on which contradictions get to be true).
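Both notions of validity can be checked by brute force, since there are only finitely many 3-valued valuations over a given stock of sentence letters. A sketch, assuming the same numeric encoding of the three statuses as above; the helper names are my own:

```python
from itertools import product

VALUES = (0, 0.5, 1)  # perfect falsity, intermediate status, perfect truth

def neg(p): return 1 - p
def conj(p, q): return min(p, q)
def disj(p, q): return max(p, q)

def valid(premises, conclusion, designated, n_vars):
    """An argument is valid iff every valuation making all premises
    designated also makes the conclusion designated."""
    for vals in product(VALUES, repeat=n_vars):
        if all(prem(*vals) in designated for prem in premises):
            if conclusion(*vals) not in designated:
                return False
    return True

K3 = {1}        # Kleene validity: preserve perfect truth
LP = {0.5, 1}   # LP validity: preserve non-perfect-falsity

# Excluded middle: a theorem of LP but not of K3.
lem = lambda p: disj(p, neg(p))
print(valid([], lem, K3, 1))  # False (fails at the middle value)
print(valid([], lem, LP, 1))  # True

# Explosion, p & ~p entails q: K3-valid (vacuously), LP-invalid.
contradiction = lambda p, q: conj(p, neg(p))
print(valid([contradiction], lambda p, q: q, K3, 2))  # True
print(valid([contradiction], lambda p, q: q, LP, 2))  # False
```

The same truth tables yield both logics; only the choice of designated values differs, which is one way of seeing why LP here needn’t involve any contradictions being true.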

With this much on board, one can set out various familiar gambits from the literature.

1. One could say that allowing contradictions to be half-true (i.e. to be indeterminate, to have the middle status) is just terrible. Or that allowing a parity of truth-status between “Jimmy is bald or he isn’t” and “Jimmy’s both bald and not bald” just gets the intuitions wrong. (The most dialectically powerful way to deploy this is against a non-classicist who backs their position primarily with intuitions about cases, e.g. our reluctance to endorse the first sentence in borderline cases. The accusation is that if the game is taking intuitions about sentences at face value, it’s not at all clear that the non-classicist is doing a good job.)
2. One could point out that “indeterminacy” for the nonclassicist will trivially iterate. If one defines Determinate(p) as p & ~indeterminate(p) (or directly as the one-place connective that is perfectly true if p is, and perfectly false otherwise), then we’ll quickly see that determinately determinately p follows from determinately p, and that determinately indeterminate whether p follows from indeterminate whether p. And so on.
3. In reaction to this, one might abandon the 3-valued setting for a smooth, “fuzzy” setting. It’s not quite so clear what value “indeterminate p” should take there (though there are actually some very funky options out there). Perhaps we might just replace such talk with direct talk of “degrees of determinacy”, thought of as degrees of truth, with “D(p)=n” being again a one-place extensional operator: perfectly true iff p has degree of truth n, and otherwise perfectly false.
4. One might complain that all this multiplying of truth-values is fundamentally misguided. Think of people saying that the “third status” view of indeterminacy is all wrong, that indeterminacy is not a status that competes with truth and falsity; or the quip (maybe due to Mark Sainsbury?) that one does “not improve a bad idea by iterating it”, i.e. by introducing finer and finer distinctions.
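The triviality of iteration can be verified by direct computation: with the extensional definitions, “determinately p” and “determinately determinately p” receive the same value on every input, and likewise indeterminacy is always determinate. A sketch, again assuming the illustrative 0 / 0.5 / 1 encoding:

```python
def det(p):
    # Determinate(p): perfectly true iff p is perfectly true,
    # perfectly false otherwise.
    return 1 if p == 1 else 0

def indet(p):
    # Indeterminate(p): perfectly true iff p has the middle status.
    return 1 if p == 0.5 else 0

for p in (0, 0.5, 1):
    assert det(det(p)) == det(p)      # DDp and Dp always coincide
    assert det(indet(p)) == indet(p)  # indeterminacy is always determinate
print("iteration adds nothing")
```

Since det and indet only ever output 0 or 1, any stack of such operators collapses after the first application, which is why there is no room for higher-order indeterminacy in this setting.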

I don’t think these are knock-down worries. (1) I do find persuasive, but I don’t think it’s very dialectically forceful: I wouldn’t know how to argue against someone who claimed their intuitions systematically followed, say, the Kleene tables. (I also think that the nonclassicist can’t effectively appeal to intuitions against the classicist.) Maybe some empirical surveying could break the deadlock. But pursued in this way the debate seems rather dull to me.

(2) seems pretty interesting. It looks like the non-classicist’s treatment of indeterminacy, if they stick with the 3-valued setting, doesn’t allow for “higher-order” indeterminacy at all. Now, if the nonclassicist is aiming to treat indeterminacy *in general* rather than vagueness specifically (say, if they’re giving an account of the indeterminacy purportedly characteristic of the open future, or of the status of personal identity across fission cases), then it’s not clear one needs to posit higher-order indeterminacy.

I should say that there’s one response to the “higher-order” issues that I don’t really understand. That’s the move of saying that, strictly, the semantics should be done in a non-classical metalanguage, where we can’t assume that “x is true or x is indeterminate or x is false” itself holds. I think Williamson’s complaints here, in the relevant chapter of his vagueness book, are justified. I just don’t know what the “non-classical theory” being appealed to here is, or how one would write it down in order to assess its merits (this is of course just a technical challenge: maybe it could be done).

I’d like to point out one thing here (probably not original to me!). The “nonclassical metalanguage” move at best evades the charge that, by saying there’s an intended 3-valued interpretation, one is committed to denying higher-order indeterminacy. But we achieve this, supposedly, by saying that the intended interpretation needs to be described non-classically (or perhaps notions like “the intended interpretation” need to be replaced by some more nuanced characterization). The 3-valued logic is standardly defined in terms of what preserves truth over all 3-valued interpretations describable in a classical metalanguage. We might continue with that classical model-theoretic characterization of the logic. But then (a) if the real interpretation is describable only non-classically, it’s not at all clear why truth-preservation in all classical models should entail truth-preservation in the real, non-classical interpretation. And (b) our object-language “determinacy” operator, treated extensionally, will still trivially iterate: that was a feature of the *logic* itself. This last point in particular might suggest that we should really be characterizing the logic as truth-preservation under all interpretations, including those describable only non-classically. But then we don’t even have a fix on the *logic*, for who knows what will turn out to be truth-preserving on these non-classical models (if only because I just don’t know how to think about them).

To emphasize again: maybe someone could convince me this could all be done. But I’m inclined to think it’d be much neater for this view to deny higher-order indeterminacy, which, as I mentioned above, may just not be a cost in some cases. My suggested answer to (4), therefore, is to take it on directly: to provide motivation for positing however many values one posits that is independent of accommodating higher-order indeterminacy (I think Nick J.J. Smith’s AJP paper “Vagueness as closeness” pretty explicitly takes this tack for the fuzzy-logic folk).

Anyway, I take these to be some of the folklore and the dialectical moves that people try out in this setting. Certainly it’s the way I once thought of the debate as shaping up, and it’s still, I think, worth thinking about. But in the next post I’m going to say why I think there’s a far more attractive way of being a non-classicist.
