In the previous post, I set out what I took to be one folklore conception of a non-classicist treatment of indeterminacy. Essential elements were (a) the postulation of not two, but several truth statuses; (b) the treatment of “it is indeterminate whether” (or degreed variants thereof) as an extensional operator; (c) the generalization to this setting of a classicist picture, where logic is defined as truth preservation over a range of reinterpretations, one amongst which is the interpretation that gets things right.

I said in that post that I thought that folklore non-classicism was a defensible position, though there are some fairly common maneuvers which I think the folklore non-classicist would be better off ditching. One of these is the idea that the intended interpretation is describable “only non-classically”.

However, there’s a powerful alternative way of being a non-classicist. The last couple of weeks I’ve had a sort of road to Damascus moment about this, through thinking about non-classicist approaches to the Liar paradox—and in particular, by reading Hartry Field’s articles and new book where he defends a “paracomplete” (excluded-middle rejecting) approach to the semantic paradoxes and work by JC Beall on a “paraconsistent” (contradiction-allowing) approach.

One interpretative issue with the non-classical approaches to the Liar and the like is that a crucial element is a truth-predicate that works very unlike the notion of “truth” or “perfect truth” (“semantic value 1”, if you want neutral terminology) that features in the many-valued semantics. But that’s not necessarily a reason by itself to start questioning the folklore picture. For it might be that “truth” is ambiguous—sometimes picking up on a disquotational notion, sometimes tracking the perfect-truth notion featuring in the nonclassicist’s semantics. But in fact there are tensions here, and they run deep.

Let’s warm up with a picky point. I was loosely throwing around terms like “3-valued logic” in the last post, and mentioned the (strong) Kleene system. But then I said that we could treat “indeterminate whether p” as an extensional operator (the “tertium operator” that makes “indet p” true when p is third-valued, and otherwise false). But that operator doesn’t exist in the Kleene system—the Kleene system isn’t expressively complete with respect to the truth functions definable over three values, and this operator is one of the truth-functions that isn’t there. (Actually, I believe if you add this operator, you do get something that is expressively complete with respect to the three valued truth-functions).
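
To make this concrete, here is a minimal Python sketch (the 1.0/0.5/0.0 encoding of true/indeterminate/false is mine, purely for illustration) of the strong Kleene connectives and the tertium operator, together with the standard reason the latter isn’t definable from the former:

```python
# Strong Kleene three-valued connectives; encoding (my choice):
# true = 1.0, indeterminate = 0.5, false = 0.0.

def neg(p):      # "choice" negation: |~p| = 1 - |p|
    return 1 - p

def conj(p, q):  # strong Kleene conjunction: minimum of the values
    return min(p, q)

def disj(p, q):  # strong Kleene disjunction: maximum of the values
    return max(p, q)

def tertium(p):  # the operator discussed above: true iff p is third-valued
    return 1.0 if p == 0.5 else 0.0

# Why tertium isn't Kleene-definable: neg, conj and disj all map
# all-0.5 inputs to 0.5, so any composition of them does too --
# but tertium(0.5) = 1.0.
assert neg(0.5) == conj(0.5, 0.5) == disj(0.5, 0.5) == 0.5
assert tertium(0.5) == 1.0
```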

One might take this to be just an expressive limitation of the Kleene system. After all, one might think, in the intended interpretation there is a truth-function behaving in the way just described lying around, and we can introduce an expression that picks up on it if we like.

But it’s absolutely crucial to the nonclassical treatments of the Liar that we can’t do this. The problem is that if we have this operator in the language, then “exclusion negation” is definable—an operator “neg” such that “neg p” is true when p is false or indeterminate, and otherwise false (this will correspond to “not determinately p”—i.e. ~(p&~indeterminate p), where ~ is so-called “choice” negation, i.e. |~p|=1-|p|). “p v neg p” will be a tautology; and arbitrary q will follow from the pair {p, neg p}. But this is exactly the sort of device that leads to so-called “revenge” puzzles—Liar paradoxes that are paradoxical even in the 3-valued system. Very roughly, it looks as if on reasonable assumptions a system with exclusion negation can’t have a transparent truth predicate in it (something where p and T(p) are intersubstitutable in all extensional contexts). It’s the whole point of Field and Beall’s approaches to retain something with this property. So they can’t allow that there is such a notion around (so, for example, Beall calls such notions “incoherent”).
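
As a sketch (with true/indeterminate/false encoded as 1.0/0.5/0.0—my illustrative choice, not notation from the post), here is how exclusion negation falls out once the tertium operator is available, and why excluded middle for it holds on every assignment:

```python
# Exclusion negation becomes definable once tertium is in the language.

def neg_choice(p):  # choice negation: |~p| = 1 - |p|
    return 1 - p

def tertium(p):     # true iff p is indeterminate
    return 1.0 if p == 0.5 else 0.0

def neg_excl(p):    # exclusion negation: ~p or indet(p), i.e. "not determinately p"
    return max(neg_choice(p), tertium(p))

# neg_excl is true exactly when p is false or indeterminate...
assert [neg_excl(v) for v in (1.0, 0.5, 0.0)] == [0.0, 1.0, 1.0]
# ...so "p v neg p" takes value 1 on every assignment: a tautology again.
assert all(max(v, neg_excl(v)) == 1.0 for v in (1.0, 0.5, 0.0))
```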

What’s going on? Aren’t these approaches just denying us the resources to express the real Liar paradox? The key, I think, is a part of the nonclassicist picture that Beall and Field are quite explicit about and which totally runs against the folklore conception. They do not buy into the idea that model theory is ranging over a class of “interpretations” of the language among which we might hope to find the “intended” interpretation. The core role of the model theory is to give an extensionally adequate characterization of the consequence relation. But the significance of this consequence relation is not to be explained in model-theoretic terms (in particular, in terms of one among the models being intended, so that truth-preservation on every model automatically gives us truth-preservation simpliciter).

(Field sometimes talks about the “heuristic value” of this or that model and explicitly says that there is something more going on than just the use of model theory as an “algebraic device”. But while I don’t pretend to understand exactly what is being invoked here, it’s quite clear that the “added value” doesn’t consist in some classical 3-valued model being “intended”.)

Without appeal to the intended interpretation, I just don’t see how the revenge problem could be argued for. The key thought was that there is a truth-function hanging around just waiting to be given a name, “neg”. But without the intended interpretation, what does this even mean? Isn’t the right thought simply that we’re characterizing a consequence relation using rich set-theoretic resources—resources in terms of which we can draw distinctions that correspond to nothing in the phenomenon being modelled?

So it’s absolutely essential to the nonclassicist treatment of the Liar paradox that we drop the “intended interpretation” view of language. Field, for one, has a ready-made alternative approach to suggest—a Quinean combination of deflationism about truth and reference, with perhaps something like translatability being invoked to explain how such predicates can be applied to expressions in a language other than ones own.

I’m therefore inclined to think of the non-classicism—at least about the Liar—as a position that *requires* something like this deflationist package. Whereas the folklore non-classicist I was describing previously is clearly someone who takes semantics seriously, and who buys into a generalization of the powerful connections between truth and consequence that a semantic theory of truth affords.

When we come to the analysis of vagueness and other (non-semantic-paradox related) kinds of indeterminacy, it’s now natural to consider this “no interpretation” non-classicism. (Field does exactly this—he conceives of his project as giving a unified account of the semantic paradoxes and the paradoxes of vagueness. So at least *this* kind of nonclassicism we can confidently attribute to a leading figure in the field.) All the puzzles described previously for the non-classicist position are thrown into a totally new light once we make this move.

To begin with, there’s no obvious place for the thought that there are multiple truth statuses. For you get that by looking at a many-valued model, and imagining that to be an image of what the intended interpretation of the language must be like. And that is exactly the move that’s now illegitimate. Notice that this undercuts one motivation for going towards a fuzzy logic—the idea that one represents vague predicates as smoothly varying in truth status. Likewise, the idea that we’re just “iterating a bad idea” in multiplying truth values doesn’t hold water on this conception—since the many values assigned to sentences in models just don’t correspond to truth statuses.

Connectedly, one shouldn’t say that contradictions can be “half true” (nor that excluded middle is “half true”). It’s true that (on, say, the Kleene approach) you won’t have ~(p&~p) as a tautology. Maybe you could object to *that* feature. But that to me doesn’t seem nearly as difficult to swallow as a contradiction having “some truth to it” despite the fact that from a contradiction, everything follows.

One shouldn’t assume that “determinately” should be treated as the tertium operator. Indeed, if you’re shooting for a combined non-classical theory of vagueness and semantic paradoxes, you *really* shouldn’t treat it this way, since as noted above this would give you paradox back.

There is therefore a central and really important question: what is the non-classical treatment of “determinately” to be? Sample answer (lifted from Field’s discussion of the literature): define D(p) as p&~(p->~p), where -> is a certain fuzzy logic conditional. This, Field argues, has many of the features we’d intuitively want a determinately operator to have; in particular, it allows for non-trivial iterations. So if something like this treatment of “determinately” were correct, then higher-order indeterminacy wouldn’t be obviously problematic (Field himself thinks this proposal is on the right lines, but that one must use another kind of conditional to make the case).
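
Here’s a hedged sketch of that definition, taking the Łukasiewicz conditional as the fuzzy “->” (one standard choice of mine; as just noted, the conditional Field himself favours is different):

```python
# D(p) := p & ~(p -> ~p), with values in [0, 1], & as min,
# ~ as 1 - x, and -> as the Lukasiewicz conditional.

def lneg(p):      # |~p| = 1 - |p|
    return 1 - p

def limp(p, q):   # Lukasiewicz conditional: min(1, 1 - |p| + |q|)
    return min(1, 1 - p + q)

def D(p):         # the candidate "determinately" operator
    return min(p, lneg(limp(p, lneg(p))))

# Algebraically D(p) = max(0, 2p - 1), so iterating D is non-trivial:
assert D(1.0) == 1.0 and D(0.5) == 0.0
assert D(0.75) == 0.5 and D(D(0.75)) == 0.0  # "DD p" is strictly stronger than "D p"
```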

“No interpretation” nonclassicism is an utterly, completely different position from the folklore nonclassicism I was talking about before. For me, the reason to think about indeterminacy and the semantic and vagueness-related paradoxes in the first place is that they shed light on the nature of language, representation, logic and epistemology. And on these sorts of issues, no-interpretation nonclassicism and the folklore version take diametrically opposed positions, and, flowing from this, the appropriate ways of arguing for or against the two views are just very different.

Could you explain what you mean by talking about the intended interpretation of a truth function in terms of models? Usually the truth functions, which I had thought you were treating as logical vocabulary, aren’t reinterpreted between models. They are fixed by the recursion clauses. The “intended interpretation” talk is something that I had thought only applied to relations, functions and constants which receive possibly different extensions in different models, so the intended interpretation is the class of models giving the intuitive extension to some non-logical symbols.

Are Field’s book and articles good entry points into vagueness and the liar?

Hey Shawn,

Yes, I should think about how I express this. The background setting I was thinking of is a “general” semantics where every expression in the language gets an extension (and extensional connectives like “and” get assigned a certain function from truth-values to truth-values). I see that “truth function” might then be ambiguous between the expression and the extension—I was using “truth function” for the latter.

The alternative course, as you say, is to only assign extensions to non-logical expressions, and then lay down an axiom for each connective saying how truth of compounds depends on the truth of their parts. In the setting I mention, you can get away with a single axiom.

Of course, if you allow reinterpretation of the logical particles in the models over which you generalize, then you’re not going to get a sensible characterization of logical consequence. So in this setting, you have to declare some of the models logically inadmissible—and say that B follows from A if the argument is truth-preserving at all logically admissible models.
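
A toy illustration of that characterization—consequence as preservation of value 1 across all admissible valuations, here just the strong Kleene valuations over three values, with my own 1.0/0.5/0.0 encoding of true/indeterminate/false:

```python
from itertools import product

VALS = (0.0, 0.5, 1.0)  # false, indeterminate, true

def follows(premises, conclusion, nvars):
    # B follows from the premises iff every valuation giving all
    # premises value 1 also gives the conclusion value 1.
    for vs in product(VALS, repeat=nvars):
        if all(p(*vs) == 1.0 for p in premises) and conclusion(*vs) != 1.0:
            return False
    return True

# In strong Kleene: p, q |= p & q holds ...
assert follows([lambda p, q: p, lambda p, q: q], lambda p, q: min(p, q), 2)
# ... but excluded middle "p v ~p" is not a tautology (it fails at p = 0.5).
assert not follows([], lambda p: max(p, 1 - p), 1)
```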

So that’s the background to what I was saying. According to folklore classicism, “and” etc do get extensions on the intended interpretation—but it’s just they get the same extension on this and every other admissible interpretation.

I hope nothing hangs on this framework assumption of mine—but I should definitely think about how to formulate the issues when we handle the connectives through axioms rather than through assignment of extensions. Thanks for pointing it out…

I think the Field book is really excellent. I’m no expert in the truth literature, but it surveyed and compared and contrasted loads of really important and interesting material on the semantic paradoxes. One quarter of the book is Field putting forward his own positive paracomplete view. There’s only a couple of short chapters on vagueness in it—though they’re pretty interesting. The papers I have in mind were “Indeterminacy, Degree of Belief and Excluded Middle” from his collection Truth and the Absence of Fact and something called, I think, “The Semantic Paradoxes and the Paradoxes of Vagueness” from JC Beall’s collection “Liars and Heaps”. They aren’t really survey pieces, but the latter in particular articulates the main things he wants to push for.

This would be an excellent contribution to Arché’s philosophy of logic seminar this semester: on “Truth after Kripke”, more precisely, Maudlin and Field.

Originally, the idea was to read a series of Field’s articles after the Maudlin book (and perhaps the relevant chapter from JC’s manuscript), but now that the book is out that is perhaps the better option. Have you had time to check if he has shifted his position from the articles (2003, 2006 especially) to the book?

Looking forward to seeing you in St Andrews.

I’m curious about your claim that the “no interpretation” account is absolutely essential to the many-valued approach to semantic paradox.

It seems to me that Priest takes the model theory much more seriously; it’s not merely a heuristic device for defining an extensionally correct consequence relation. (Actually, I think of Field and Beall’s approach as more of a heuristic device for defining a T-predicate with certain target features.)

Both Beall and Field think mathematics is entirely classical, including set theory. Priest, however, doesn’t. He makes much of the demand that the paraconsistentist should be able to provide models in a paraconsistent meta-theory. (And he’s done quite a bit of work to get it done.)

In any case, I’m with you: much of Beall and Field’s response to ‘revenge’ worries comes from the “no interpretation” view. But I’m not at all sure it’s essential to a many-valued approach.

Hi Aaron,

I think you’re absolutely right, I was overstating the case. I think what I should have said is that what’s essential to the nonclassicist treatment is that no *classical* model is intended (even then, I think that may be too strong—I’ve been trying to figure out what the argument would look like over the weekend, and it’s not absolutely obvious).

Anyway, the point you make is that if the interpretation is nonclassical (perhaps by having a nonclassical set theory) then it’s not clear why we can’t have an intended interpretation.

That seems right, and seems to be Priest’s view (actually, I think Field mentions it too at some points, and the idea that nonclassical models are approximating to the real, nonclassical, intended interpretation is one way of understanding some of Field’s talk of wanting models that are more than a merely algebraic device). And it does seem much more natural if you think that mathematics is in the limit nonclassical anyway. I need to think about this stuff some more, and particularly whether anything he says can be dualized for the use of the paracompletist.

Hi Ole,

Hey that sounds fun! We’re about to set up a reading group here on the Field book.

I think you’d need to ask someone with a prior appreciation of the subtleties of the literature about whether Field has shifted his position. I was taking it to be an elaboration of the views in the paper in JC’s Liars and Heaps volume…

It’d actually be really useful for someone who knows all this stuff to blog something on what the various Field papers do and how they fit together.

See you soon!
