So this was the biggest selection problem I faced: there are so many many-valued systems out there, and so many ways to think about them, that it was hard to decide which to choose.

I would have liked to talk a bunch about the interpretation of “third truth values”. It seems to me this is often glossed over badly. In the vagueness literature, it’s often assumed that once we’ve got a third truth value, we might as well be degree theorists. But it seems to me that “gap” interpretations of the third truth value are super-different from the “half-true” interpretation. But to make the case that this is more than a verbal dispute, I think we have to say a whole lot more about the cognitive role of indeterminacy, the role of logic, etc etc. All good stuff (in fact, very close to what I’m working on right now). But I chose not to go that way.

Another thing I could have done is talk directly about degree theories. Nick Smith has a new book-length treatment of them, which makes some really nice moves both in motivating and defending the account. And of course they’re historically popular—and “fuzzy logic” is what you always hear talked about in non-philosophical treatments. In Williamson’s big vagueness book, degree theories are really the focus of the chapter corresponding to this section.

On the other hand, I felt it was really important to get a representative of the “logic first” view into the picture—someone who really treated semantics kind of instrumentally, and who saw the point of talking about vagueness in a rather different way to the way it’s often presented in the extant survey books. And the two that sprang to mind here were Hartry Field and Crispin Wright. Of these, Crispin’s intuitionism is harder to set up, and has fewer connections to other many-valued theories. And his theory of a quandary cognitive role, while really interesting, just takes longer to explain than Hartry’s rejectionist suggestion. Wright’s agnosticism is a bit hard to explain too—I take it the view is supposed to be that we’re poised between the Williamsonian style picture, and an option where you assert the negation of bivalence—and the first seems unbelievable and the second incoherent, so we remain agnostic. But if something is incoherent, how can we remain agnostic about it? (So, actually, I think the better way to present the view is as agnosticism between bivalence-endorsing views and Field-style rejectionist views, albeit carried out in an intuitionist rather than Kleene-based system. But if that’s the way to work things out, rejectionism is conceptually prior to agnosticism.)

So in the end I started with a minimal intro to the many-valued truth tables, a brief pointer in the direction of extensions and interpretations, and then concentrated on the elements of the Field view—the Quinean translate-and-deflate theory of language, the logical revisionism (and instrumentalism about model theory), and the cognitive role that flows from it.

Just as with the epistemicism section, there are famous objections I just didn’t have room for. The whole issue over the methodological OK-ness of revising logic, and the burdens that entails—nothing will remain of that but a few references.

**Vagueness survey paper: section V**

**REVISIONARY LOGIC: MANY-VALUED SETTINGS**

A distinctive feature of supervaluationism was that while it threw out *bivalence* (“Harry is bald” is either true or false) it preserved the corresponding instance of *excluded middle* (“Harry is either bald or not bald”). Revising the *logic* in a more thorough-going way would allow for a coherent picture where we can finally reject the claim “there is a single hair that makes the difference between bald and non-bald” without falling into paradox.

“Many-valued” logics can be characterized by *increasing the number of truth-values we work with*—perhaps to three, perhaps infinitely many—and offering generalizations of the familiar stories of how logical constants behave to accommodate this tweak. There are all sorts of ways in which this can be developed, and even more choice points in extracting notions of “consequence” out of the mass of relations that then become available.

Here is a sample many-valued logic, for a propositional language with conjunctions, disjunctions and negations. To characterize the logic, we postulate three values (let’s call them, neutrally, “1”, “1/2” and “0”). For the propositional case, the idea will be that each atomic sentence is assigned some one of the truth values; truth values are then assigned to complex sentences recursively. Thus, a conjunction gets the truth value which is the minimum of the truth values of its conjuncts; a disjunction gets the truth value which is the maximum of the truth values of its disjuncts; and a negation gets assigned 1 minus the truth value of the claim negated (you can easily check that, ignoring the value 1/2 altogether, we get back exactly the classical truth tables).
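The recursive clauses just described can be sketched in a few lines of code. This is purely illustrative: the numeric values and the function names are my own choices, not anything from the text.

```python
# A minimal sketch of the three-valued truth tables described above,
# with the values "1", "1/2" and "0" represented as the numbers 1, 0.5, 0.

def conj(a, b):
    # A conjunction gets the minimum of its conjuncts' values.
    return min(a, b)

def disj(a, b):
    # A disjunction gets the maximum of its disjuncts' values.
    return max(a, b)

def neg(a):
    # A negation gets 1 minus the value of the claim negated.
    return 1 - a

# Restricted to the values 0 and 1, these recover the classical tables:
assert conj(1, 0) == 0 and disj(1, 0) == 1 and neg(0) == 1

# With the middle value in play, "Av~A" can itself take value 1/2:
assert disj(0.5, neg(0.5)) == 0.5
```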

A many-valued logic (the *strong Kleene logic*) is defined by looking at the class of those arguments that are “1-preserving”, i.e. such that whenever all the premises have value 1, the conclusion has value 1 too. It has some distinctive features; e.g. excluded middle “Av~A” is no longer a tautology, since it can have value 1/2 when A has value 1/2. “A&~A” is still treated as a logical contradiction, on the other hand (every sentence whatsoever follows from it), since it will never attain value 1, no matter what value A is assigned.
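Since there are only finitely many assignments of the three values to a finite stock of atoms, the 1-preservation definition can be checked by brute force. A minimal sketch, assuming the min/max/1-minus truth tables from the text (the function name and the representation of sentences as Python functions are my own illustrative choices):

```python
from itertools import product

VALUES = (0, 0.5, 1)  # the three truth values, as plain numbers

def one_preserving(premises, conclusion, n_atoms):
    """Check whether, under every assignment of the three values to the
    atoms, the conclusion has value 1 whenever all premises do.
    Sentences are represented as functions from atom values to a value."""
    for vals in product(VALUES, repeat=n_atoms):
        if all(p(*vals) == 1 for p in premises):
            if conclusion(*vals) != 1:
                return False
    return True

# Excluded middle "Av~A" is not a tautology: with no premises, it must
# have value 1 on every assignment, but at A = 0.5 it has value 0.5.
assert not one_preserving([], lambda a: max(a, 1 - a), 1)

# "A&~A" never attains value 1, so any conclusion whatever -- even an
# unrelated atom B -- is (vacuously) 1-preserved from it:
assert one_preserving([lambda a, b: min(a, 1 - a)], lambda a, b: b, 2)
```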

One option at this point is to *take this model theory seriously*—much as the classicist and supervaluationist do, and hypothesise that natural language has (or is modelled by?) some many-valued interpretation (or set of interpretations?). This view is a major player in the literature (cite cite cite).

For the remainder of this section, I focus on a different framework in which to view the proposal to revise logic. This begins with the rejection of the *very idea* of an “intended interpretation” of a language, or a semantic treatment of truth. Rather, one treats truth as a “device of disquotation”—perhaps *introduced* by means of the kind of disquotational principles mentioned earlier (such a device proves *useful*, argue its fans, in increasing our expressive power—allowing us to endorse or deny such claims as “everything the pope says is true”). The disquotation principles capture all that there is to be said about truth, and the notion doesn’t need any “model theoretic foundation” in an “intended interpretation” to be in good standing.

In the first instance, such a truth-predicate is “local”—only carving the right distinctions in the very language for which it was introduced (via disquotation). To allow us to speak sensibly of true *French *sentences (for example), Field (cite) following Quine (cite) analyzes

“la neige est blanc” is true

as:

there’s a good translation of “la neige est blanc” into English, such that the translated version is disquotationally true.

Alongside this disquotationism is a distinctive attitude to logic. On Field’s view, logical consequence does not need to be “analyzed” in model-theoretic terms. Consequence is taken as primitive, and model theory seen as a useful *instrument* for characterizing the extension of this relation.

Such disquotationism elegantly *avoids* the sort of worries plaguing the classical, supervaluational, and traditional many-valued approaches—in particular, since there’s no explanatory role for an “intended interpretation”, we simply avoid worries about how such an intended interpretation might be settled on. Moreover, if model theory is a mere instrument, there’s no pressure to say anything about the nature of the “truth values” it uses.

So far, no appeal to revisionary logic has been made. The utility of hypothesizing a non-classical (Kleene-style) logic rather than a classical one comes in explaining the puzzles of vagueness. For Field, when indeterminacy surfaces, we should reject the relevant instance of excluded middle. Thus we should reject “either Harry is bald or he isn’t”—and consequently, also reject the claim that Harry is bald (from which the former follows, in any sensible logic). We then must reject “Harry is bald and the man with one hair less isn’t” (again because something we reject follows from it). So, from our rejection of excluded middle, we derive the core data behind the “little by little” worry—rejection of the horrible conjunctions (N).

So, like one form of supervaluationism, Field sees borderline cases of vague predicates as characterized by *forced rejection*. No wonder further inquiry into whether Harry is bald seems pointless. Again in parallel, Field accommodates rejection of (N) without accepting (N*)—and this is at least a start on explaining where our perplexity over the sorites comes from. Unlike the supervaluationist, he isn’t committed to the generalization “there is some bald guy next to a non-bald guy”—the Kleene logic (extended to handle quantifiers) enforces rejection of this claim.

One central concern about this account of vagueness (setting aside very *general* worries about the disquotational setting) is whether in weakening our logic we have thrown the baby out with the bathwater. Some argue that it is *methodologically objectionable* to revise logic without *overwhelming reason to do so*, given the way that classical assumptions are built into successful, progressive science even when vagueness is clearly in play (think of applications of physics, or the classical assumptions in probability and decision theory, for example). This is an important issue: but let’s set it aside for now.

More locally, the logic we’ve looked at so far seems excessively *weak* in expressive power. It’s not clear, for example, how one should capture platitudes like “if someone is balder than a bald person, that person too is bald” (translating the “if” here as a kind of disjunction or negated conjunction, as is standard in the classical case, we get something entailing instances of excluded middle we want to reject—we do not seem to yet have in the system a suitable *material* conditional). For another thing, we haven’t yet said anything about how the central notion “it is determinate whether” fits in. It seems to have interesting logical behaviour—for example, the key connection between excluded middle and indeterminacy would be nicely captured if from Av~A one could infer “it is determinate whether A”. Much of Field’s *positive* project involves extending the basic Kleene logic to accommodate a suitable conditional and determinacy operators, in particular to capture thoroughly “higher-order” kinds of vagueness (borderline cases of borderline cases, and so on).

Hi Robbie,

just as a report of my experience on reading the section: I didn’t find it immediately obvious how the move from disquotationalism to ‘no explanatory role for intended interpretations’ and ‘no worries re how to settle on one’ goes. The following (naive) line of thought is, I think, what gives me pause.

Languages, or sentences of languages, vague or not, have meanings. A semantic theory for a language is a theory about the meanings of the sentences of that language. Interpretations are (assignments of) meanings that could be correct for a language, the intended one is the one that is correct. It’s puzzling what interpretation could be intended for a vague language.

So far I haven’t said anything about truth. So how does the disquotational theory of truth suddenly dispose of these questions?

Perhaps all that’s needed (if anything is) is a brief remark stating the assumed connection between views on truth and semantics, something which entails that if the T-schema is all there is to be said about ‘true’, it’s all there is to be said re semantics in the sense of ‘theory of meaning’.

Then again, it’s early in the morning and perhaps that’s why I lost track of how the argument goes 🙂

Best, Stephan

Minor quibble: it should be “la neige est blanche” since “neige” is feminine.