Category Archives: Logic

Psychology without semantics or psychologically loaded semantics?

Here’s a naive view of classical semantics, but one worth investigating. According to this view, semantics is a theory that describes a function from sentences to semantic properties (truth and falsity) relative to a given possible circumstance (“world”). Let’s suppose it does this via a two-step method. First, it assigns to each sentence a proposition. Second, it assigns to each proposition a function from worlds to {True, False} (“truth conditions”).

Let’s focus on the bit where propositions (at a world) are assigned truth-values. One thing that leaps out is that the “truth values” assigned have significance beyond semantics. For propositions, we may assume, are the objects of attitudes like belief. It’s natural to think that in some sense, one should believe what’s true, and reject what’s false. So the statuses attributed to propositions as part of the semantic theory (the part that describes the truth-conditions of propositions) are psychologically loaded, in that propositions that have one of the statuses are “to be believed” and those that have the other are “to be rejected”. The sort of normativity involved here is extremely externalistic, of course — it’s not a very interesting criticism of me that I happen to believe p, just on the basis that p is false, if my evidence overwhelmingly suggested p. But the idea of an external ought here is familiar and popular. It is often reported, somewhat metaphorically, as the idea that belief aims at truth (for discussion, see e.g. Ralph Wedgwood on the aim of belief).

Suppose we are interested in one of the host of non-classical semantic theories that are thrown around when discussing vagueness. Let’s pick a three-valued Kleene theory, for example. On this view, we have three different semantic statuses that propositions (relative to a circumstance) are mapped to. Call them neutrally A, B and C (much of the semantic theory is then spent telling us how these abstract “statuses” are distributed around the propositions, or sentences which express the propositions). But what, if any, attitude is it appropriate to take to a proposition that has one of these statuses? If we have an answer to this question, we can say that the semantic theory is psychologically loaded (just as the familiar classical setting was).

Rarely do non-classical theorists tell us explicitly what the psychological loading of the various statuses is. But you might think an answer is implicit in the names they are given. Suppose that status A is called “truth”, and status C “falsity”. Then, surely, propositions that are A are to be believed, and propositions with C are to be rejected. But what of the “gaps”, the propositions that have status B, the ones that are neither true nor false? It’s rather unclear what to say; and without explicit guidance about what the theorist intends, we’re left searching for a principled generalization. One thought is that they’re at least untrue, and so are intended to have the normative role that all untrue propositions had in the classical setting—they’re to be rejected. But of course, we could equally have reasoned that, as propositions that are not false, they might be intended to have the status that all unfalse propositions have in the classical setting—they are to be accepted. Or perhaps they’re to have some intermediate status—maybe a proposition that has B is to be half-believed (and we’d need some further details about what half-belief amounts to). One might even think (as Maudlin has recently explicitly urged) that in leaving a gap between truth and falsity, the propositions are devoid of psychological loading—that there’s nothing general to say about what attitude is appropriate to the gappy cases.

But notice that these kinds of questions are, at heart, exegetical—that we face them just reflects the fact that the theorist hasn’t told us enough to fix what theory is intended. The real insight here is to recognize that differences in psychological loading give rise to very different theories (at least as regards what attitudes to take to propositions), which should each be considered on their own merits.

Now, Stephen Schiffer has argued for some distinctive views about what the psychology of borderline cases should be like. As John Macfarlane and Nick Smith have recently urged, there’s a natural way of using Schiffer’s descriptions to fill out in detail one fully “psychologically loaded” degree-theoretic semantics. To recap, Schiffer distinguishes “standard” partial beliefs (SPBs), which we can assume behave in familiar (probabilistic) ways and have their familiar functional role when there’s no vagueness or indeterminacy at issue, from special “vagueness-related” partial beliefs (VPBs), which come into play for borderline cases. Intermediate standard partial beliefs allow for uncertainty, but are “unambivalent” in the sense that when we are 50/50 over the result of a fair coin flip, we have no temptation to all-out judge that the coin will land heads. By contrast, VPBs exclude uncertainty, but generate ambivalence: when we say that Harry is smack-bang borderline bald, we are pulled to judge that he is bald, but also (conflictingly) pulled to judge that he is not bald.

Let’s suppose this gives us enough for an initial fix on the two kinds of state. The next issue is to associate them with the numbers a degree-theoretic semantics assigns to propositions (with Edgington, let’s call these numbers “verities”). Here is the proposal: a verity of 1 for p is ideally associated with (standard) certainty that p—an SPB of 1. A verity of 0 for p is ideally associated with (standard) utter rejection of p—an SPB of 0. Intermediate verities are associated with VPBs. Generally, a verity of k for p is associated with a VPB of degree k in p. [Probably, we should say for each verity, both what the ideal VPB and SPB are. This is easy enough: one should have VPBs of zero when the verity is 1 or 0; and SPB of zero for any verity other than 1.]
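To fix ideas, here is a minimal sketch of the proposed verity-to-attitude pairing in Python; the function name and the dictionary representation are mine, purely for illustration.

```python
def ideal_attitudes(verity):
    """Map a verity in [0,1] to the ideal (SPB, VPB) pair described above."""
    if verity in (0.0, 1.0):
        # extremal verities: ordinary certainty or utter rejection, and no VPB
        return {'SPB': verity, 'VPB': 0.0}
    # intermediate verities: an SPB of zero, and a VPB matching the verity
    return {'SPB': 0.0, 'VPB': verity}

print(ideal_attitudes(1.0))   # {'SPB': 1.0, 'VPB': 0.0}
print(ideal_attitudes(0.6))   # {'SPB': 0.0, 'VPB': 0.6}
```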

Now, Schiffer’s own theory doesn’t make play with all these “verities” and “ideal psychological states”. He does use various counterfactual idealizations to describe a range of “VPB*s”—so that e.g. relative to a given circumstance, we can talk about which VPB an idealized agent would take to a given proposition (though it shouldn’t be assumed that the idealization gives definitive verdicts in any but a small range of paradigmatic cases). But his main focus is not on the norms that the world imposes on psychological attitudes, but on norms that concern what combinations of attitudes we may properly adopt—requirements of “formal coherence” on partial belief.

How might a degree theory psychologically loaded with Schifferian attitudes relate to formal coherence requirements? Macfarlane and Smith, in effect, observe that something approximating Schiffer’s coherence constraints arises if we insist that the total partial belief in p (SPB+VPB) is always representable as an expectation of verity (relative to a classical credence distribution over possible situations). We might also observe that the component corresponding to Schifferian SPB within this is always representable as the expectation of verity 1 (relative to the same credence). That’s suggestive, but it doesn’t do much to explain the connection between the external norms that we fed into the psychological loading, and the formal coherence norms that we’re now getting out. And what’s the “underlying credence over worlds” doing? If all the psychological loading of the semantics is doing is enabling a neat description of the coherence norms, that may have some interest, but it’s not terribly exciting—what we’d like is some kind of explanation for the norms from facts about psychological loading.
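Here is a toy illustration of the representation just mentioned, with made-up numbers, reading “the expectation of verity 1” as the credence that the verity is exactly 1; the situation names and values are my own.

```python
# Toy numbers: a classical credence over three situations, and the verity of p in each.
credence = {'w1': 0.5, 'w2': 0.3, 'w3': 0.2}
verity_of_p = {'w1': 1.0, 'w2': 0.4, 'w3': 0.0}

# Total partial belief in p: the expectation of verity.
total = sum(credence[w] * verity_of_p[w] for w in credence)
# The SPB component: the credence that p has verity exactly 1.
spb = sum(credence[w] for w in credence if verity_of_p[w] == 1.0)
vpb = total - spb

print(total, spb, vpb)   # roughly 0.62, 0.5, 0.12
```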

There’s a much more profound way of making the connection: a way of deriving coherence norms from psychologically loaded semantics. Start with the classical case. Truth (truth value 1) is associated with certainty (credence 1). Falsity (truth value 0) is associated with utter rejection (credence 0). Think of inaccuracy as a way of measuring how far a given partial belief is from the actual truth value; and interpret the “external norm” as telling you to minimize overall inaccuracy in this sense.

If we make suitable (but elegant and arguably well-motivated) assumptions about how “accuracy” is to be measured, then it turns out that probabilistic belief states emerge as a special class in this setting. Every improbabilistic belief state can be shown to be accuracy-dominated by a probabilistic one—there’s some particular probabilistic belief state that’ll be necessarily more accurate than the improbabilistic one you started with. No probabilistic belief state is dominated in this sense.
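Here is a toy check of the dominance phenomenon in the simplest case, using the Brier score over a proposition and its negation; the particular credences are mine, chosen for illustration.

```python
def brier(credences, truth_values):
    """Total squared distance of credences (in A, in ~A) from a truth-value assignment."""
    return sum((c - v) ** 2 for c, v in zip(credences, truth_values))

worlds = [(1, 0), (0, 1)]       # classical truth values for (A, ~A)
incoherent = (0.2, 0.2)         # credences in A and ~A that fail to sum to 1
coherent = (0.5, 0.5)           # a probabilistic rival

for w in worlds:
    print(w, brier(incoherent, w), brier(coherent, w))
# In both worlds the coherent pair is strictly more accurate (0.5 vs 0.68),
# so the incoherent pair is accuracy-dominated.
```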

Any violation of formal coherence norms thus turns out to be needlessly far from the ideal aim. And this moral generalizes. Taking the same accuracy measures, but applying them to verities as the ideals, we can prove exactly the same theorem. Anything other than the Smith-Macfarlane belief states will be needlessly distant from the ideal aim. (This is generated by an adaptation of Joyce’s 1998 work on accuracy and classical probabilism—see here for the generalization).

There’s an awful lot of philosophy to be done to spell out the connection in the classical case, let alone its non-classical generalization. But I think even the above sketch gives a view on how we might not only psychologically load a non-classical semantics, but also use that loading to give a semantically-driven rationale for requirements of formal coherence on belief states—and with the Schiffer loading, we get the Macfarlane-Smith approximation to Schifferian coherence constraints.

Suppose we endorsed the psychologically-loaded, semantically-driven theory just sketched. Compare our stance to a theorist who endorsed the psychology without semantics—that is, they endorsed the same formal coherence constraints, but disclaimed commitment to verities and their accompanying ideal states. They thus give up on the prospect of giving the explanation of the coherence constraints sketched above. We and they would agree on what kinds of psychological states are rational to hold together—including what kind of VPB one could rationally take to p when you judge p to be borderline. So both parties could agree on the doxastic role of the concept of “borderlineness”, and in that sense give a psychological specification of the concept of indeterminacy. We and they would be allied against rival approaches—say, the claims of the epistemicists (thinking that borderlineness generates uncertainty) and Field (thinking that borderlineness merits nothing more than straight rejection). The fan of psychology-without-semantics might worry about the metaphysical commitments of his friend’s postulate of a vast range of fine-grained verities (attaching to propositions in circumstances)—metasemantic explanatory demands and higher-order-vagueness puzzles are two familiar ways in which this disquiet might be made manifest. In turn, the fan of the psychologically loaded, semantically driven theory might question his friend’s refusal to give any underlying explanation of the source of the requirements of formal coherence he postulates. Can explanatory bedrock really be certain formal patterns amongst attitudes? Don’t we owe an illuminating explanation of why those patterns are sensible ones to adopt? (Kolodny mocks this kind of attitude, in recent work, as picturing coherence norms as a mere “fetish for a certain kind of mental neatness”). That explanation needn’t take a semantically-driven form—but it feels like we need something.

To repeat the basic moral. Classical semantics, as traditionally conceived, is already psychologically loaded. If we go in for non-classical semantics at all (with more than instrumentalist ambitions in mind) we underspecify the theory until we’re told what the psychological loading of the new semantic values is to be. That’s one kind of complaint against non-classical semantics. It’s always possible to kick away the ladder—to take the formal coherence constraints motivated by a particular elaboration of this semantics, and endorse only these without giving a semantically-driven explanation of why these constraints in particular are in force. Repeating this stance, we can find pairs of views that, while distinct, are importantly allied on many fronts. I think in particular this casts doubt on the kind of argument that Schiffer often sounds like he’s giving—i.e. to argue from facts about appropriate psychological attitudes to borderline cases, to the desirability of a “psychology without semantics” view.

Intuitionism and truth value gaps

I spent some time last year reading through Dummett on non-classical logics. One aim was to figure out what sorts of arguments there might be against combining a truth-value gap view with intuitionistic logic. The question is whether in an intuitionist setting it might be ok to endorse ~T(A)&~T(~A). (The characteristic intuitionistic feature, hereabouts, is a refusal to assert T(A)vT(~A)—which is certainly weaker than asserting its negation. Indeed, when it comes to the law of excluded middle, the intuitionist refuses to assert Av~A in general, but ~(Av~A) is an intuitionistic inconsistency).

On the motivational side: it is striking that in Kripke tree semantics for intuitionistic logic, there are sentences such that neither they nor their negation are “forced”. And if we think of forcing in a Kripke tree as an analogue of truth, that looks like we’re modelling truth value gaps.

A familiar objection to the very idea of truth-value gaps (which appears early on in Dummett—though I can’t find the reference right now) is that asserting the existence of truth value gaps (i.e. endorsing ~T(A)&~T(~A)) is inconsistent with the T-scheme. For if we have “T(A) iff A”, then contraposing and applying modus ponens, we derive from the above ~A and ~~A—contradiction. However, this does require the T-scheme, and you might let the reductio fall on that rather than the denial of bivalence. (Interestingly, Dummett in his discussion of many-valued logics talks about them in terms of truth value gaps without appealing to the above sort of argument—so I’m not sure he’d rest all that much on it).

Another idea I’ve come across is that an intuitionistic (Heyting-style) reading of what “~T(A)” says will allow us to infer from it that ~A (this is based around the thought that intuitionistic negation says “any proof of A can be transformed into a proof of absurdity”). That suffices to reduce a denial of bivalence to absurdity. There are a few places to resist this argument too (and it’s not quite clear to me how to set it up rigorously in the first place) but I won’t go into it here.

Here’s one line of thought I was having. Suppose that we could argue that Av~A entailed the corresponding instance of bivalence: T(A)vT(~A). It’s clear that the latter entails ~(~T(A)&~T(~A))—i.e. given the claim above, the law of excluded middle for A will entail that A is not gappy.

So now suppose we assert that it is gappy. For reductio, suppose Av~A. By the above, this entails that A is not gappy. Contradiction. Hence ~(Av~A). But we know that this is itself an intuitionistic inconsistency. Hence we have derived absurdity from the premise that A is gappy.
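For what it’s worth, the little reductio does go through intuitionistically; here is a minimal sketch in Lean 4, where TA and TnA are propositional stand-ins for T(A) and T(~A), and the entailment from excluded middle to bivalence is simply taken as a hypothesis, as in the text.

```lean
example (A TA TnA : Prop)
    (lem_to_biv : (A ∨ ¬A) → (TA ∨ TnA))   -- LEM for A entails bivalence for A
    (gap : ¬TA ∧ ¬TnA) :                   -- "A is gappy"
    False := by
  -- If A ∨ ¬A held, bivalence would clash with the gap; so ¬(A ∨ ¬A)
  have notlem : ¬(A ∨ ¬A) := fun lem => (lem_to_biv lem).elim gap.1 gap.2
  -- But ¬(A ∨ ¬A) is intuitionistically inconsistent
  exact notlem (Or.inr (fun a => notlem (Or.inl a)))
```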

So it seems that to argue against gaps, we just need the minimal claim that LEM entails bivalence. Now, it’s a decent question what grounds we might give for this entailment claim; but it strikes me as sufficiently “conceptually central” to the intuitionistic idea about what’s going on that it’s illuminating to have this argument around.

I guess the last thing to point out is that the T-scheme argument may be a lot more impressive in an intuitionistic context in any case. A standard maneuver when denying the T-scheme is to keep the T-rules: to say that A entails T(A), for example (this is consistent with rejecting the T-scheme if you drop conditional proof, as supervaluational and many-valued logicians often do). But in an intuitionistic context, the T-rule contraposes (again, a metarule that’s not good in supervaluational and many-valued settings) to give an entailment from ~T(A) to ~A, which is sufficient to reduce the denial of bivalence to absurdity. This perhaps explains why Dummett is prepared to deny bivalence in non-classical settings in general, but seems wary of this in an intuitionistic setting.

The two cleanest starting points for arguing against gaps for the intuitionist, it seems to me, are either to start with the T-rule, “A entails T(A)” or with the claim “Av~A entails T(A)vT(~A)”. Clearly the first allows you to derive the second. I can’t see at the moment an argument that the second entails the first (if someone can point to one, I’d be very interested), so perhaps basing the argument against gaps on the second is the optimal strategy. (It does leave me with a puzzle—what is “forcing” in a Kripke tree supposed to model, since that notion seems clearly gappy?)

Motivating material conditionals

Soul Physics has a post that raises a vexed issue: how to say something to motivate the truth-table account of the material conditional, for people first encountering it.

They give a version of one popular strategy: argue by elimination for the familiar material truth table. The broad outline they suggest (which I think is a nice way to divide matters up) goes like this.

(1) Argue that “if A, B” is true when both are true, and false if A is true and B is false. This leaves two remaining cases to be considered—the cases where A is false.

(2) Argue that none of the three remaining rivals to the material conditional truth table works.

I won’t say much about (1), since the issues that arise aren’t that different from what you anyway have to deal with for motivating truth tables for disjunction, say.

(2) is the problem case. The way Soul Physics suggests presenting this is as following from two minimal observations about the material conditional: (i) it isn’t trivial (i.e. it doesn’t just have the same truth values as one of the component sentences), and (ii) it isn’t symmetric—“if A then B” and “if B then A” can come apart.

In fact, all four of the options that remain at this stage can be informatively described. There’s a truth-function equivalent to A (this is the trivial one); the conjunction A&B; the biconditional between A and B (these two are both symmetric); and finally the material conditional itself.

But there’s something structurally odd about this sort of motivation. We argue by elimination of three options, leaving the material conditional account the winner. But the danger is, of course, that we find something that looks equally bad or worse with the remaining option, leaving us back where we started, with no truth table better motivated than the others.

And the trouble, notoriously, is that this is fairly likely to happen the moment people get wind of the paradoxes of material implication. It’s pretty hard to explain why we put so much weight on symmetry, while (to our students) seeming to ignore the fact that the account says silly things—for instance, that “If I’m in the US, I’m in the UK” comes out true.

One thing that’s missing is a justification for the whole truth-table approach—if there’s something wrong with every option, shouldn’t we be questioning our starting points? And of course, if someone raises this sort of question, we’re a little stuck, since many of us think that the truth table account really is misguided as a way to treat the English indicative. But intro logic is perhaps not the place to get into that too much!

So I’m a bit stuck at this point—at least in intro logic. Of course, you can emphasize the badness of the alternatives, and just try to avoid getting into the paradoxes of material implication—but that seems like smoke and mirrors to me, and I’m not very good at carrying it off. So if I haven’t got more time to go into the details, I’m back to saying things like: it’s not that there’s a perfect candidate, but it happens that this works better than the others—trust me—so let’s go with it. When I was taught this stuff, I was told about Grice at this point, and I remember that pretty much washing over my head. And it’s a bit odd to defend the widespread practice of using the material conditional by pointing to one possible defence of it as an interpretation of the English indicative that most of us think is wrong anyway. I wish I had a more principled fallback.

When I’ve got more time—and once the students are more familiar with basic logical reasoning and so on—I take a different approach, one that seems to me far more satisfactory. The general strategy, which replaces (2) of the above at least, is to argue directly for the equivalence of the conditional “if A, B” with the corresponding disjunction ~AvB. And if you want to give a truth table for the former, you just read it off the latter.

Now, there are various ways of doing this—say by pointing to inferences that “sound good”, like the one from “A or B” to “if not A, then B”. The trouble is that we’re in a similar situation to the one we faced earlier—there are inferences that sound bad just nearby. A salient one is the contrapositive: “It’s not true that if A, then B” doesn’t sound like it implies “A and ~B”. So there’s a bit of a stand-off between or-to-if and not-if-to-and-not.

My favourite starting point is therefore with inferences that don’t just sound good, but for which you see an obvious rationale—and here the obvious candidates are the classic “in” and “out” rules for the conditional: modus ponens and conditional proof. You can really see how the conditional is functioning if it obeys those rules—allowing you to capture good reasoning from assumptions, store it, and then release it when needed. It’s not just reasonable—it’s the sort of thing we’d want to invent if we didn’t have it!

Given these, there’s a straightforward little argument by conditional proof (using disjunctive syllogism, which is easy enough to read off the truth table for “or”) for the controversial direction of equivalence between the English conditional “if A, B” and ~AvB. Our premise is ~AvB. To show the conditional follows, we use conditional proof. Assume A. By disjunctive syllogism, B. So by conditional proof, if A then B.
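That little derivation can be written out formally; here is a Lean 4 sketch, with the obvious caveat that it treats the target conditional simply as the arrow of the formal language. The disjunctive-syllogism step is the Or-elimination, and conditional proof is the lambda.

```lean
-- From ~A v B, derive "if A then B" by conditional proof plus disjunctive syllogism.
example (A B : Prop) (h : ¬A ∨ B) : A → B :=
  fun a =>                     -- conditional proof: assume A
    h.elim
      (fun na => absurd a na)  -- left disjunct ¬A clashes with the assumption A
      id                       -- right disjunct B gives B outright
```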

If you’ve already motivated the top two lines of the truth table for “if”, then this is enough to fill out the rest of the truth table—that ~AvB entails “if A then B” tells you how the bottom two lines should be filled out. Or you could argue (using modus ponens and reasoning by cases) for the converse entailment, getting the equivalence, at which point you really can read off the truth table.

An alternative is to start from scratch motivating the truth table. We’ve argued that ~AvB entails “if A then B”. This forces the latter to be true whenever the former is.  Hence the three “T” lines of the material conditional truth table—which are the controversial bits. In order that modus ponens hold, we can’t have the conditional true when the antecedent is true and the consequent false, so we can see that the remaining entry in the truth table must be “F”. So between them, conditional proof (via the above argument) and modus ponens (directly) fix each line of the material truth table.
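A quick brute-force check of that last claim, in Python: among the sixteen candidate truth tables for “if A then B”, requiring (i) truth wherever ~AvB is true (so the conditional-proof argument goes through) and (ii) that modus ponens never takes us from truths to a falsehood leaves exactly the material conditional. The helper names are mine.

```python
from itertools import product

rows = list(product([True, False], repeat=2))        # values for (A, B)

def entailed_by_not_a_or_b(table):
    """The candidate is true whenever ~A v B is true."""
    return all(table[(a, b)] for a, b in rows if (not a) or b)

def validates_modus_ponens(table):
    """The candidate is never true with A true and B false."""
    return not any(a and table[(a, b)] and not b for a, b in rows)

survivors = []
for values in product([True, False], repeat=4):      # all 16 binary truth functions
    table = dict(zip(rows, values))
    if entailed_by_not_a_or_b(table) and validates_modus_ponens(table):
        survivors.append(table)

print(len(survivors))    # 1
print(survivors[0])      # the material conditional: false only at (A=True, B=False)
```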

Now I suspect that—for people who’ve already got the idea of a logical argument, assumptions, conclusions and so on—this sort of idea will seem pretty accessible. And the idea that conditionals are something to do with reasoning under suppositions is very easy to sell.

Most of all though, what I like about this way of presenting things is that there’s something deeply *right* about it. It really does seem to me that the reason for bothering with a material conditional at all is its “inference ticket” behaviour, as expressed via conditional proof and modus ponens. So there’s something about this way of putting things that gets to the heart of the matter (to my mind).

But, further, this way of looking at things provides a nice comparison and contrast with other theories of the English indicative, since you can view famous options as essentially giving different ways of cashing out the relationship between conditionals and reasoning under a supposition. If we don’t like the conditional-proof idea about how they are related, an obvious next thing to reach for is the Ramsey test—which in a probabilistic version gets you ultimately into the Adams tradition. Stalnakerian treatments of conditionals can be given a similar gloss. Presented this way, I feel that the philosophical issues and the informal motivations are in sync.

I’d really like to hear about other strategies/ways of presenting this—in particular, ideas for how to get it across at “first contact”.

Gradational accuracy; Degree supervaluational logic

In lieu of new blogposts, I thought I’d post up drafts of two papers I’m working on. They’re both in fairly early stages (in particular, the structure of each needs quite a bit of sorting out). But as they’re fairly techy, I think I’d really benefit from any trouble-shooting people are willing to do!

The first is “Degree supervaluational logic“. This is the kind of treatment of indeterminacy that Edgington has long argued for, and it also features in work from the 70’s by Lewis and Kamp. Weirdly, it isn’t that common, though I think there’s a lot going for it. But it’s arguably implicit in a lot of people’s thinking about supervaluationism. Plenty of people like the idea that the “proportion of sharpenings on which a sentence is true” tells us something pretty important about that sentence—maybe even serving to fix what degree of belief we should have in it. If proportions of sharpenings play this kind of “expert function” role for you, then you’re already a degree-supervaluationist in the sense I’m concerned with, whether or not you want to talk explicitly about “degrees of truth”.

One thing I haven’t seen done is to look systematically at its logic. Now, if we look at a determinacy-operator free object language, the headline news is that everything is classical—and that’s pretty robust under a number of ways of defining “validity”. But it’s familiar from standard supervaluationism that things can become tricky when we throw in determinacy operators. So I look at what happens when we add things like “it is determinate to degree 0.5 that…” into our object-language. What happens now depends *very much* on how validity is defined. I think there’s a lot to be said for “degree of truth preservation” validity—i.e. the conclusion has to be at least as true as the premises. This is classical in the determinacy-free language. And it’s “supraclassical” even when those operators are present—every classically valid argument is still valid. But in terms of metarules, all hell breaks loose. We get failures of conjunction introduction, for example; and of structural rules such as Cut. Despite this, I think there’s a good deal to be said for the package.
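To see the sort of thing that goes wrong with conjunction introduction, here is a tiny toy model of my own: two sharpenings, degrees as proportions of sharpenings, and validity read (on the natural interpretation) as the conclusion being at least as true as each premise.

```python
# Two sharpenings; A is true on one, B on the other.
sharpenings = [
    {'A': True, 'B': False},
    {'A': False, 'B': True},
]

def degree(sentence):
    """Degree of truth = proportion of sharpenings on which the sentence is true."""
    return sum(sentence(s) for s in sharpenings) / len(sharpenings)

deg_A = degree(lambda s: s['A'])                  # 0.5
deg_B = degree(lambda s: s['B'])                  # 0.5
deg_AandB = degree(lambda s: s['A'] and s['B'])   # 0.0

print(deg_A, deg_B, deg_AandB)
# Each premise is true to degree 0.5, but the conclusion A&B only to degree 0,
# so the inference from {A, B} to A&B is not degree-of-truth preserving.
```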

The second paper, “Gradational accuracy and non-classical semantics”, is on Joyce’s work on scoring functions. I look at what happens to his 1998 argument for probabilism, when we’ve got non-classical truth-value assignments in play. From what I can see, his argument generalizes very nicely. For each kind of truth-value assignment, we can characterize a set of “coherent” credences, and show that for any incoherent credence there is a single coherent credence which is more accurate than it, no matter what the truth-values turn out to be.

In certain cases, we can relate this to kinds of “belief functions” that are familiar. For example, I think the class of supervaluationally coherent credences can be shown to be the Dempster-Shafer belief functions—at least if you define supervaluational “truth values” as I do in the paper.

As I mentioned, there are certainly some loose ends in this work—I’d be really grateful for any thoughts! I’m going to be presenting something from the degree supervaluational paper at the AAP in July, and also on the agenda is to write up some ideas about the metaphysics of radical interpretation (as a kind of fictionalism about semantics) for the Fictionalism conference in Manchester this September.

[Update: I’ve added an extra section to the gradational accuracy paper, just showing that “coherent credences” for the various kinds of truth-value assignments I discuss satisfy the generalizations of classical probability theory suggested in Brian Weatherson’s 2003 NDJFL paper. The one exception is supervaluationism, where only a weakened version of the final axiom is satisfied—but in that case, we can show that the coherent credences must be Dempster-Shafer functions. So I think that gives us a pretty good handle on the behaviour of non-accuracy-dominated credences for the non-classical case.]

[Update 2: I’ve tightened up some of the initial material on non-classical semantics, and added something on intuitionism, which the generalization seems to cover quite nicely. I’m still thinking that kicking off the whole thing with lists of non-classical semantics ain’t the most digestible/helpful way of presenting the material, but at the moment I just want to make sure that the formal material works.]

Kripkean conservativeness?

Suppose you have some theory R, formulated in that fragment of English that is free of semantic vocabulary. The theory, we can assume, is at least “effectively” classical—e.g. we can assume excluded middle and so forth for each predicate that it uses. Now think of total theory—which includes not just this theory but also, e.g. a theory of truth.

It would be nice if truth in this widest theory could work “transparently”—so that we could treat “p” and “T(p)” as intersubstitutable at least in all extensional contexts. To get that, something has to go. E.g. the logic for the wider language might have to be non-classical, to avoid the Liar paradox.

One question is whether weakening logic is enough to avoid problems. For all we’ve said so far, it might be that one can have a transparent truth-predicate—but only if one’s non-semantic theories are set up just right. In the case at hand, the worry is that R cannot be consistently embedded within a total theory that includes a transparent truth predicate. Maybe in order to ensure consistency of total theory, we’d have to play around with what we say in the non-semantic fragment. It’d be really interesting if we could get a guarantee that we never need to do that. And this is one thing that Kripke’s fixed point construction seems to give us.

Think of Kripke’s techniques as a “black box”, which takes as input classical models of the semantics-free portion of our language, and outputs non-classical models of the language as a whole—and in such a way as to make “p” and “Tp” always coincide in semantic value. Crucially, the non-classical model coincides with the classical model taken as input when it comes to the semantics-free fragment. So if “S” is in the semantics-free language and is true-on-the-input-model, then it will be true-on-the-output-model.
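Just to make the black box a little less black, here is a toy sketch of the least-fixed-point idea in Python—not Kripke’s general transfinite construction, and the three sentence names and one-atom base model are entirely my own invention.

```python
# Sentences: ('atom', name), ('T', sentence_name), or ('neg', sentence).
SENTS = {
    'snow':  ('atom', 'snow'),         # base-language sentence, true in the input model
    'truth': ('T', 'snow'),            # says that 'snow' is true
    'liar':  ('neg', ('T', 'liar')),   # says of itself that it is not true
}
BASE = {'snow': 1}                     # the classical input model for the T-free fragment

def val(s, ext, anti):
    """Strong Kleene value of a sentence (1, 0, or 0.5) given T's (anti-)extension."""
    kind = s[0]
    if kind == 'atom':
        return BASE[s[1]]
    if kind == 'T':
        return 1 if s[1] in ext else 0 if s[1] in anti else 0.5
    if kind == 'neg':
        return 1 - val(s[1], ext, anti)

def jump(ext, anti):
    """One Kripke jump: a name enters the (anti-)extension iff its sentence gets 1 (0)."""
    return ({n for n, s in SENTS.items() if val(s, ext, anti) == 1},
            {n for n, s in SENTS.items() if val(s, ext, anti) == 0})

ext, anti = set(), set()
while (ext, anti) != jump(ext, anti):  # iterate the monotone jump to a fixed point
    ext, anti = jump(ext, anti)

for name, s in SENTS.items():
    print(name, val(s, ext, anti))
# snow -> 1, truth -> 1 (matching the input model), liar -> 0.5 (a gap)
```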

This result seems clearly relevant to the question of whether we disrupt theories like R by embedding them within a total theory incorporating transparent truth. The most obvious thought is to let M be the intended (classical) model of our base language—and then view the Kripke construction as outputting a candidate to be the intended interpretation of total language. And the result just given tells us that if R is true relative to M, it remains true relative to the outputted Kripkean (non-classical model).

But this is a contentious characterization. For example, if our semantics-free language contains absolutely unrestricted quantifiers, there won’t be a (traditional) model that can serve as the “intended interpretation”. For (traditional) models assign sets as the range of quantifiers, and no set contains absolutely everything—in particular no set can contain all sets. And even if somehow we could finesse this (e.g. if we could argue that our quantifiers can never be absolutely unrestricted), it’s not clear that we should be identifying true-on-the-output-model with truth, which is crucial to the above suggested moral.

Field suggests we take a different moral from the Kripkean construction. Focus on the question of whether theories like R (which ex hypothesi are consistent taken alone), might turn out to be inconsistent in the light of total theory—in particular, might turn out to be inconsistent once we’ve got a transparent truth predicate in our language. He argues that the Kripkean construction gives us this.

Here’s the argument. Suppose that R is classically consistent. We want to know whether R+T is consistent, where R+T is what you get from R when you add in a transparent truth-predicate. The consistency of R means that there’s a classical model on which it is true. Input that into Kripke’s black box. And what we get out the other end is a (non-classical) model of R+T. And the existence of such a model (whether or not it’s an “intended one”) means that R+T is consistent.

Field explicitly mentions one worry about this–that it might equivocate over “consistent”. If consistent just means “has a model (of such-and-such a kind)” then the argument goes through as it stands. But in the present setting it’s not obvious what all this talk of models is doing for us. After all, we’re not supposed to be assuming that one among the models is the “intended” one. In fact, we’re supposed to be up for the thesis that the very notion of “intended interpretation” should be ditched, in which case there’d be no space even for viewing the various models as possibly, though not actually, intended interpretations.

This is the very point at which Kreisel’s squeezing argument is supposed to help us. For it forges a link between intuitive consistency, and the model-theoretic constructions. So we could reconstruct the above line of thought in the following steps:

  1. R is consistent (in the intuitive sense)
  2. So: R is consistent (in the model-theoretic sense). [By a squeezing argument]
  3. So: R+T is consistent (in the model-theoretic sense). [By the Kripkean construction]
  4. So: R+T is consistent (in the intuitive sense). [By the squeezing argument again]

Now, I’m prepared to think that the squeezing argument works to bridge the gap between (1) and (2). For here we’re working within the classical fragment of English, and I see the appeal of the premises of the squeezing argument in that setting (actually, for this move we don’t really need the premise I’m most concerned with—just the completeness result and intuitive soundness suffice).

But the move from (3) to (4) is the one that I find dodgy. For this directly requires the principle that if there is a formal (3-valued) countermodel to a given argument, then that argument is invalid (in the intuitive sense). And that is exactly the point over which I voiced scepticism in the previous post. Why should the recognition that there’s an assignment of values to R+T on which an inference isn’t value-1 preserving suggest that the argument from R+T to absurdity is invalid? Without illegitimately sneaking in some thoughts about what value-1 represents (e.g. truth, or determinate truth) I can’t even begin to get a handle on this question.

In the previous post I sketched a fallback option (and it really was only a sketch). I suggested that you might run a squeezing argument for Kleene logic using probabilistic semantics, rather than 3-valued ones, since we do have a sense of what a probabilistic assignment represents, and why failure to preserve probability might be an indicator of intuitive invalidity. Now maybe if this were successful, we could bridge the gap—but in a very indirect way. One would argue from the existence of a 3-valued model, via completeness, to the non-existence of a derivation of absurdity from R+T. And then, by a second completeness result, one would argue that there had to exist a probabilistic model for R+T. Finally, one would appeal to the general thought that such probabilistic models secured consistency (in the intuitive sense).

To summarize. The Kripkean constructions obviously secure a technical conservativeness result. As Field mentions, we should be careful to distinguish this from a true conservativeness result: the result that no inconsistency can arise from adding transparent truth to a classically consistent base theory. But whether the technical result we can prove gives us reason (via a Kreisel-like argument) to believe the true conservativeness result turns on exactly the issue of whether a 3-valued countermodel to an argument gives us reason to think that that argument is intuitively invalid. And it’s not obvious at all where that last part is coming from—so for me, for now, it remains open whether the Kripkean constructions give us reason to believe true conservativeness.

Squeezing arguments

Kreisel gave a famous and elegant argument for why we should be interested in model-theoretic validity. But I’m not sure who can use it.

Some background. Let’s suppose we can speak unproblematically about absolutely all the sets. If so, then there’s something strange about model theoretic definitions of validity. The condition for an argument to be model-theoretically valid is that, relative to any interpretation, if the premises are true then the conclusion is true. It’s natural to think that one way or another, the reason to be interested in such a property of arguments is that if an argument is valid in this sense, then it preserves truth. And one can see why this would be—if it is truth-preserving on every interpretation, then in particular it should be truth-preserving on the correct interpretation, but that’s just to say that it guarantees that whenever the premises are true, the conclusion is so too.

Lots of issues about the intuitive line of thought arise when you start to take the semantic paradoxes seriously. But the one I’m interested in here is a puzzle about how to think about it when the object-language in question is (on the intended interpretation) talking about absolutely all the sets. The problem is that when we spell out the formal details of the model-theoretic definition of validity, we appeal to “truth on an interpretation” in a very precise sense—and one of the usual conditions on that is that the domain of quantification is a set. But notoriously there is no set of all sets, and so the “intended interpretation” of discourse about absolutely all sets isn’t something we find in the space of interpretations relative to which the model-theoretic definition of validity for that language is defined. But then the idea that actual truth is a special case of truth-on-an-interpretation is well and truly broken, and without that, it’s sort of obscure what significance the model-theoretic characterization has.

Now, Kreisel suggested the following way around this (I’m following Hartry Field’s presentation here). First of all, distinguish between (i) model theoretic validity, defined as above as preservation of truth-on-set-sized-interpretations (call that MT-validity); and (ii) intuitive validity (call that I-validity)—expressing some property of arguments that has philosophical significance to us. Also suppose that we have available a derivability relation.

Now we argue:

(1) [Intuitive soundness] If q is derivable from P, then the argument from P to q is I-valid.

(2) [Countermodels] If the argument from P to q is not MT-valid, then the argument from P to q is not I-valid.

(3) [Completeness] If the argument from P to q is MT-valid, then q is derivable from P.

From (1)-(3) it follows that an argument is MT-valid iff it is I-valid.

Now (1) seems like a decent constraint on the choice of a deductive system. Friends of classical logic will just be saying that whatever that philosophically significant sense of validity is that I-valid expresses, classical syntactic consequences (e.g. from A&B to A, from ~~A to A) should turn out I-valid. Of course, non-classicists will disagree with the classicist over the I-validity of classical rules—but they will typically have a different syntactic relation and it should be that with which we’re running the squeezing argument, at least in the general case. Let’s spot ourselves this.

(3) is the technical “completeness” theorem for a given syntactic consequence relation and model-theory. Often we have this. Sometimes we don’t—for example, for second order languages where the second order quantifiers are logically constrained to be “well-behaved”, there are arguments which are MT-valid but not derivable in the standard ways. But e.g. classical logic does have this result.

Finally, we have (2). Now, what this says is that if an argument has a set-sized interpretation relative to which the premises are true and the conclusion false, then it’s not I-valid.

Now this premise strikes me as delicate. Here’s why for the case of classical set theory we started with, it seems initially compelling to me. I’m still thinking of I-validity as a matter of guaranteed truth-preservation—i.e. truth-preservation no matter what the (non-logical) words involved mean. And I look at a given set-sized model and think—well, even though I was actually speaking in an unrestricted language, I could very well have been speaking in a language where my quantifiers were restricted. And what the set-sized countermodel shows is that on that interpretation of what my words mean, the argument wouldn’t be truth-preserving. So it can’t be I-valid.

However, suppose you adopt the stance where I-validity isn’t to be understood as “truth-preservation no matter what the words mean”—and for example, Field argues that the hypothesis that the two are biconditionally related is refutable. Why then should you think that the presence of countermodels has anything to do with I-invalidity? I just don’t get why I should see this as intuitively obvious (once I’ve set aside the usual truth-preservation understanding of I-validity), nor do I see what an argument for it would be. I’d welcome enlightenment/references!

We’ve been talking so far about the case of classical set theory. But I reckon the point surfaces with respect to other settings.

For example, Field favours a nonclassical logic (an extension of the strong Kleene logic) for dealing with the paradoxes. His methodology is to describe the logic model-theoretically. So what he gives us is a definition of MT-validity for a language containing a transparent truth-predicate. But of course, it’d be nice if we could explain why we’re interested in MT-validity so-characterized, and one attractive route is to give something like a Kreisel squeezing argument.

What would this look like? Well, we’d need to endorse (1)—to pick out a syntactic consequence relation and judge the basic principles to be I-valid. Let’s spot ourselves that. We’d also need (3), the completeness result. That’s tricky. For the strong Kleene logic itself, we have a completeness result relative to a 3-valued semantics. So relative to K3 and the usual 3-valued semantics, we’ve got (3). But Field’s own system adds to the K3 base a strong conditional, and the model theory is far more complex than a 3-valued one. And completeness just might not be available for this system—see p.305 of Field’s book.

But even if we have completeness (suppose we were working directly with K3, rather than Field’s extension) to me the argument seems puzzling. The problem again is with (2). Suppose a given argument, from P to q, has a 3-valued countermodel. What do we make of this? Well, this means there’s some way of assigning semantic values to expressions such that the premises all get value 1, and the conclusion gets value less than 1 (0.5, or 0). But what does that tell us? Well, if we were allowed to identify having-semantic-value-1 with being-true, then we’d forge a connection between countermodels and failure-to-preserve-truth. And so we’d be back to the situation that faced us in set-theory, in that countermodels would display interpretations relative to which truth isn’t preserved. I expressed some worries before about why, if I-validity is officially primitive, this should be taken to show that the argument is not I-valid. But let’s spot ourselves an answer to that question—we can suppose that even if I-validity is primitive, failure of truth-preservation on some interpretation is a sufficient condition for failure to be I-valid.

The problem is that in the present case we can’t even get as far as this. For we’re not supposed to be thinking of semantic value 1 as truth, and nor are we supposed to be thinking of the formal models as specifying “meanings” for our words. If we do start thinking in this way, we open ourselves up to a whole heap of nasty questions—e.g. it looks very much like sentences with value 1/2 will be thought of as truth-value gaps, whereas the target was to stabilize a transparent notion of truth—a side-effect of which is that we will be able to reduce truth-value gaps to absurdity.

Field suggests a different gloss in some places—think of semantic value 1 as representing determinate truth, semantic value 0 as representing determinate falsity, and semantic value 1/2 as representing indeterminacy. OK: so having a 3-valued countermodel to an argument should be glossed as displaying a case where the premises are all determinately true, and the conclusion is at best indeterminate. But recall that “indeterminacy” here is *not* supposed to be a status incompatible with truth—otherwise we’re back to truth-value gaps—so we’ve not got any reason here to think that we’ve got a failure of truth-preservation. So whereas holding that failure of truth-preservation is a sufficient condition for I-invalidity would be ok to give us (2) for the case of classical set theory, in the non-classical cases we’re thinking about it just isn’t enough to patch the argument. What we need instead is that failure of determinate-truth-preservation is a sufficient condition for I-invalidity. But where is that coming from? And what’s the rationale for it?

Here’s a final thought about how to make progress on these questions. Notice that the Kreisel squeezing argument is totally schematic—we don’t have to pack in anything about the particular model theory or proof theory involved, so long as (1-3) are satisfied. As an illustration, suppose you’re working with some model-theoretically specified consequence relation where there isn’t a nice derivability relation which is complete wrt it—where a derivability relation is nice if it is “practically implementable”–i.e. doesn’t appeal to essentially infinitary rules (like the omega-rule). Well, nothing in the squeezing argument required the derivability relation to be *nice*. Add in whatever infinitary rule you like to beef up the derivability relation until it is complete wrt the model theory, and so long as what you end up with is intuitively sound—i.e. (1) is met—then the Kreisel argument can be run.

A similar point goes if we hold fixed the proof theory and vary the model theory that defines MT-validity. The condition is that we need a model theory that (a) makes the derivability relation complete; and (b) is such that countermodels entail I-invalidity. So long as something plays that role, we’re home and dry. Suppose, for example, you give a probabilistic semantics for classical logic (in the fashion that Field does, via Popper functions, in his 1977 JP paper), and interpret an assignment of probabilities as a possible distribution of partial beliefs over sentences in the language. An argument is MT-valid, on this semantics, just in case whenever the premises have probability 1 (conditionally on anything), so does the conclusion. Slogan: MT-validity is certainty-preservation. A countermodel is then some representation of partial beliefs whereby one is certain of all the premises, but less than certain of the conclusion. Just as with non-probabilistic semantics, there’ll be a question of whether the presence of a countermodel in this sense is sufficient for I-invalidity—but it doesn’t seem to me that we weaken our case by this shift.

But what seems significant about this move is that, in principle, we might be able to do the same for the non-classical cases. Rather than do a 3-valued semantics and worry about what to make of “semantic value 1 preservation” and its relation to I-validity, one searches for a complete probabilistic semantics. The advantage is that we’ve got an interpretation standing by of what individual assignments of probabilities mean (in terms of degrees of belief)—and so I don’t envisage new interpretative problems arising for this choice of semantics, as they did for the 3-valued way of doing things.

[Of course, to carry out the version of the squeezing argument itself, we’ll need to actually have such a semantics—and maybe to keep things clean, we need an axiomatization of what a probability function is that doesn’t tacitly appeal to the logic itself (that’s the role that Popper’s axiomatization of conditional probability functions plays in Field’s 1977 interpretation). I don’t know of such an axiomatization—advice gratefully received.]

Defending conditional excluded middle

So things have been a little quiet on this blog lately. This is a combination of (a) trips away, (b) doing administration-stuff for the Analysis Trust, and (c) the fact that I’m entering the “writing up” phase of my current research leave.

I’ve got a whole heap of papers in various stages of completion that I want to get finished up. As I post drafts online, the blogging should become more regular. So here’s the first installment—a new version of an older paper that discusses conditional excluded middle, and in particular, a certain style of argument that Lewis deploys against it, and which Bennett endorses (in an interestingly varied form) in his survey book.

What I try to do in the present version—apart from setting out some reasons for being interested in conditional excluded middle for counterfactuals that I think deserve more attention—is to try to disentangle two elements of Bennett’s discussion. One element is a certain narrow-scope analysis of “might”-counterfactuals (roughly: “if it were that P it might be that Q” has the form P → ◊Q, where the modal expresses an idealized ignorance). The second is an interesting epistemic constraint on true counterfactuals I call “Bennett’s Hypothesis”.

One thing I argue is that Bennett’s Hypothesis all on its own conflicts with conditional excluded middle. And without Bennett’s Hypothesis, there’s really no argument from the narrow-scope analysis alone against conditional excluded middle. So really, if counterfactuals work the way Bennett thinks they do, we can forget about the fine details of analyzing epistemic modals when arguing against conditional excluded middle. All the action is with whether or not we’ve got grounds to endorse the epistemic constraint on counterfactual truth.

The second thing I argue is that there are reasons to be severely worried about Bennett’s Hypothesis—it threatens to lead us straight into an error theory about ordinary counterfactual judgements.

If people are interested, the current version of the paper is available here. Any thoughts gratefully received!

Paracompleteness and credences in contradictions.

The last few posts have discussed non-classical approaches to indeterminacy.

One of the big stumbling blocks about “folklore” non-classicism, for me, is the suggestion that contradictions (A&~A) be “half true” where A is indeterminate.

Here’s a way of putting a constraint that appeals to me: I’m inclined to think that an ideal agent ought to fully reject such contradictions.

(Actually, I’m not quite as unsympathetic to contradictions as this makes it sound. I’m interested in the dialethic/paraconsistent package. But in that setting, the right thing to say isn’t that A&~A is half-true, but that it’s true (and probably also false). Attitudinally, the ideal agent ought to fully accept it.)

Now the no-interpretation non-classicist has the resources to satisfy this constraint. She can maintain that the ideal degree of belief in A&~A is always 0. Given that:

p(A)+p(B)=p(AvB)+p(A&B)

we have:

p(A)+p(~A)=p(Av~A)

And now, whenever we fail to fully accept Av~A, it will follow that our credences in A and ~A don’t sum to one. That’s the price we pay for continuing to utterly reject contradictions.

The *natural* view in this setting, it seems to me, is that accepting indeterminacy of A corresponds to rejecting Av~A. So someone fully aware that A is indeterminate should fully reject Av~A. (Here and in the above I’m following Field’s “No fact of the matter” presentation of the nonclassicist).

But now consider the folklore nonclassicist, who does take talk of indeterminate propositions being “half true” (or more generally, degree-of-truth talk) seriously. This is the sort of position that the Smith paper cited in the last post explores. The idea there is that indeterminacy corresponds to half-truth, and fully informed ideal agents should set their partial beliefs to match the degree-of-truth of a proposition (e.g. in a 3-valued setting, an indeterminate A should be believed to degree 0.5). [NB: obviously partial beliefs aren’t going to behave like a probability function if truth-functional degrees of truth are taken as an “expert function” for them.]

Given the usual min/max take on how these multiple truth values get settled over conjunction, disjunction and negation, for the fully informed agent we’ll get p(Av~A) set equal to the degree of truth of Av~A, i.e. 0.5. And exactly the same value will be given to A&~A. So contradictions, far from being rejected, are appropriately given the same doxastic attitude as I assign to “this fair coin will land heads”.
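In code, the point is just this—a minimal sketch of the min/max clauses, with 1-minus for negation:

```python
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

A = 0.5                    # A is indeterminate: "half true"
print(disj(A, neg(A)))     # Av~A comes out 0.5
print(conj(A, neg(A)))     # and so does A&~A
# So on the expert-function picture, the fully informed agent has credence 0.5
# in the contradiction A&~A -- the same attitude as towards a fair coin toss.
```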

Another way of putting this: the difference between our overall attitude to “the coin will land heads” and “Jim is bald and not bald” only comes out when we consider attitudes to contents in which these are embedded. For example, I fully disbelieve B&~B when B=the coin lands heads; but I half-accept it for B=A&~A. That doesn’t at all ameliorate the implausibility of the initial identification, for me, but it’s something to work with.

In short, the Field-like nonclassicist sets A&~A to 0; and that seems exactly right. Given this and one or two other principles, we get a picture where our confidence in Av~A can take any value—right down to 0; and as flagged before, the probabilities of A and ~A carve up this credence between them, so in the limit where Av~A has probability 0, they take probability 0 too.

But the folklore nonclassicist I’ve been considering, for whom degrees-of-truth are an expert function for degrees-of-belief, has 0.5 as a pivot. For the fully informed, Av~A always exceeds this by exactly the amount that A&~A falls below it—and where A is indeterminate, we assign them all probability 0.5.

As will be clear, I’m very much on the Fieldian side here (if I were to be a nonclassicist in the first place). It’d be interesting to know whether folklore nonclassicists do in general have a picture about partial beliefs that works as Smith describes. Consistently with taking semantics seriously, they might think of the probability of A as equal to the measure of the set of possibilities where A is perfectly true. And that will always make the probability of A&~A 0 (since it’s never perfectly true); and it will meet various other of the Fieldian descriptions of the case. What it does put pressure on is the assumption (more common in degree theorists than 3-value theorists perhaps) that we should describe degree-of-truth-0.5 as a way of being “half true”—why, in a situation where we know A is half true, would we be compelled to fully reject it? So it does seem to me that the rhetoric of folklore degree theorists fits a lot better with Smith’s suggestions about how partial beliefs work. And I think it’s objectionable on that account.

[Just a quick update. First observation. To get a fix on the “pivot” view, think of the constraint being that P(A)+P(~A)=1. Then we can see that P(Av~A)=1-P(A&~A), which summarizes the result. Second observation. I mentioned above that something that treated the degrees of truth as an expert function “won’t behave like a probability function”. One reflection of that is that the logic-probability link will be violated, given certain choices for the logic. E.g. suppose we require valid arguments to preserve perfect truth (e.g. we’re working with the K3 logic). Then A&~A will be inconsistent. And, for example, P(A&~A) can be 0.5, while for some unrelated B, P(B) is 0. But in the logic A&~A|-B, so probability has decreased over a valid argument. Likewise if we’re preserving non-perfect-falsity (e.g. we’re working with the LP system). Av~A will then be a validity, but P(Av~A) can be 0.5, yet P(B) be 1. These are for the 3-valued case, but clearly that point generalizes to the analogous definitions of validity in a degree valued setting. One of the tricky things about thinking about the area is that there are lots of choice-points around, and one is the definition of validity. So, for example, one might demand that valid arguments preserve both perfect truth and non-perfect falsity; and then the two arguments above drop away since neither |-Av~A nor A&~A|- on this logic. The generalization to this in the many-valued setting is to demand e-truth preservation for every e. Clearly these logics are far more constrained than the K3 or LP logics, and so there’s a better chance of avoiding violations of the logic-probability link. Whether one gets away with it is another matter.]
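A small check of the K3 point in that update, with toy values of my own: the entailment from A&~A to an arbitrary B holds vacuously (the premise never takes value 1), yet on the expert-function picture the premise can get credence 0.5 while the conclusion gets 0.

```python
from itertools import product

VALS = (0.0, 0.5, 1.0)

def conj(a, b): return min(a, b)
def neg(a): return 1 - a

# K3-validity of A&~A |- B: no valuation gives the premise value 1
# while the conclusion gets less than 1.
counterexamples = [(a, b) for a, b in product(VALS, repeat=2)
                   if conj(a, neg(a)) == 1 and b < 1]
print(counterexamples)          # [] -- so the argument is K3-valid

# But with A indeterminate and B plain false:
a, b = 0.5, 0.0
print(conj(a, neg(a)), b)       # 0.5 vs 0.0: credence drops across a valid argument
```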

Non-classical logics: the no interpretation account

In the previous post, I set out what I took to be one folklore conception of a non-classicist treatment of indeterminacy. Essential elements were (a) the postulation of not two, but several truth statuses; (b) the treatment of “it is indeterminate whether” (or degreed variants thereof) as an extensional operator; (c) the generalization to this setting of a classicist picture, where logic is defined as truth preservation over a range of reinterpretations, one amongst which is the interpretation that gets things right.

I said in that post that I thought folklore non-classicism was a defensible position, though there are some fairly common maneuvers which I think the folklore non-classicist would be better off ditching. One of these is the idea that the intended interpretation is describable "only non-classically".

However, there's a powerful alternative way of being a non-classicist. Over the last couple of weeks I've had a sort of road-to-Damascus moment about this, through thinking about non-classicist approaches to the Liar paradox—in particular, by reading Hartry Field's articles and new book, where he defends a "paracomplete" (excluded-middle-rejecting) approach to the semantic paradoxes, and work by JC Beall on a "paraconsistent" (contradiction-allowing) approach.

One interpretative issue with the non-classical approaches to the Liar and the like is that a crucial element is a truth-predicate that works in a way very unlike the notion of "truth" or "perfect truth" ("semantic value 1", if you want neutral terminology) that features in the many-valued semantics. But that's not necessarily a reason by itself to start questioning the folklore picture. For it might be that "truth" is ambiguous—sometimes picking up on a disquotational notion, sometimes tracking the perfect-truth notion featuring in the nonclassicist's semantics. But in fact there are tensions here, and they run deep.

Let's warm up with a picky point. I was loosely throwing around terms like "3-valued logic" in the last post, and mentioned the (strong) Kleene system. But then I said that we could treat "indeterminate whether p" as an extensional operator (the "tertium operator" that makes "indet p" true when p is third-valued, and otherwise false). But that operator doesn't exist in the Kleene system—the Kleene system isn't expressively complete with respect to the truth-functions definable over three values, and this operator is one of the truth-functions that isn't there. (Adding this operator gets you much closer to expressive completeness, though I don't think it gets you all the way: nothing built from the Kleene connectives plus the tertium operator takes the middle value when all of its inputs are classical, so, for example, the constant middle-valued function is still not definable.)
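
A quick way to convince yourself of the gap (a brute-force sketch of my own, using 0/0.5/1 for the three values): close the one-place truth-functions under the Kleene connectives and note that the tertium function never shows up, because everything Kleene-definable sends the middle value to the middle value.

    # Which one-place truth-functions over (0, 0.5, 1) are definable from the
    # strong Kleene connectives alone? Represent each function by its values
    # on the inputs (0, 0.5, 1), starting from the identity and closing off.
    def neg(x): return 1 - x
    def conj(x, y): return min(x, y)
    def disj(x, y): return max(x, y)

    identity = (0.0, 0.5, 1.0)
    definable = {identity}
    changed = True
    while changed:
        changed = False
        for f in list(definable):
            for g in list(definable):
                for h in (tuple(neg(x) for x in f),
                          tuple(conj(x, y) for x, y in zip(f, g)),
                          tuple(disj(x, y) for x, y in zip(f, g))):
                    if h not in definable:
                        definable.add(h)
                        changed = True

    tertium = (0.0, 1.0, 0.0)    # perfectly true just when the input is the middle value
    print(sorted(definable))     # every definable function maps 0.5 to 0.5
    print(tertium in definable)  # False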

One might take this to be just an expressive limitation of the Kleene system. After all, one might think, in the intended interpretation there is a truth-function behaving in the way just described lying around, and we can introduce an expression that picks up on it if we like.

But it's absolutely crucial to the nonclassical treatments of the Liar that we can't do this. The problem is that if we have this operator in the language, then "exclusion negation" is definable—an operator "neg" such that "neg p" is true when p is false or indeterminate, and otherwise false. (This corresponds to "not determinately p", i.e. ~(p&~indeterminate p), where ~ is so-called "choice" negation, i.e. |~p|=1-|p|.) "p v neg p" will be a tautology; and arbitrary q will follow from the pair {p, neg p}. But this is exactly the sort of device that leads to so-called "revenge" puzzles—Liar paradoxes that are paradoxical even in the 3-valued system. Very roughly, it looks as if, on reasonable assumptions, a system with exclusion negation can't have a transparent truth predicate in it (something where p and T(p) are intersubstitutable in all extensional contexts). It's the whole point of Field's and Beall's approaches to retain something with this property. So they can't allow that there is such a notion around (so, for example, Beall calls such notions "incoherent").
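
To see the definability point concretely, here's the same toy setup again (mine, purely illustrative): with the tertium operator on board, exclusion negation is just the choice negation of "determinately p", and it behaves exactly as advertised.

    # Exclusion negation, definable once the tertium operator is available.
    def neg(x): return 1 - x                        # choice negation
    def conj(x, y): return min(x, y)
    def disj(x, y): return max(x, y)
    def indet(x): return 1.0 if x == 0.5 else 0.0   # the tertium operator

    def det(x): return conj(x, neg(indet(x)))       # "determinately p": p & ~indet(p)
    def neg_ex(x): return neg(det(x))               # exclusion negation: ~(p & ~indet(p))

    for v in (0.0, 0.5, 1.0):
        print(v, neg_ex(v), disj(v, neg_ex(v)))
    # neg_ex(p) is perfectly true whenever p is false or indeterminate;
    # "p v neg p" is perfectly true on every valuation (a genuine tautology);
    # and p and neg_ex(p) are never perfectly true together, so arbitrary q
    # follows from the pair on the perfect-truth-preservation definition of consequence.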

What’s going on? Aren’t these approaches just denying us the resources to express the real Liar paradox? The key, I think, is a part of the nonclassicist picture that Beall and Field are quite explicit about and which totally runs against the folklore conception. They do not buy into the idea that model theory is ranging over a class of “interpretations” of the language among which we might hope to find the “intended” interpretation. The core role of the model theory is to give an extensionally adequate characterization of the consequence relation. But the significance of this consequence relation is not to be explained in model-theoretic terms (in particular, in terms of one among the models being intended, so that truth-preservation on every model automatically gives us truth-preservation simpliciter).

(Field sometimes talks about the "heuristic value" of this or that model, and explicitly says that there is something more going on than just the use of model theory as an "algebraic device". But while I don't pretend to understand exactly what is being invoked here, it's quite clear that the "added value" doesn't consist in some classical 3-valued model being "intended".)

Without appeal to the intended interpretation, I just don't see how the revenge problem could be argued for. The key thought was that there is a truth-function hanging around just waiting to be given a name, "neg". But without the intended interpretation, what does this even mean? Isn't the right thought simply that we're characterizing a consequence relation using rich set-theoretic resources—resources in terms of which we can draw distinctions that correspond to nothing in the phenomenon being modelled?

So it's absolutely essential to the nonclassicist treatment of the Liar paradox that we drop the "intended interpretation" view of language. Field, for one, has a ready-made alternative approach to suggest—a Quinean combination of deflationism about truth and reference, with perhaps something like translatability being invoked to explain how such predicates can be applied to expressions in a language other than one's own.

I'm therefore inclined to think of this non-classicism—at least about the Liar—as a position that *requires* something like the deflationist package. The folklore non-classicist I was describing previously, by contrast, is clearly someone who takes semantics seriously, and who buys into a generalization of the powerful connections between truth and consequence that a semantic theory of truth affords.

When we come to the analysis of vagueness and other (non-semantic-paradox-related) kinds of indeterminacy, it's now natural to consider this "no interpretation" non-classicism. (Field does exactly this—he conceives of his project as giving a unified account of the semantic paradoxes and the paradoxes of vagueness. So at least *this* kind of nonclassicism we can confidently attribute to a leading figure in the field.) Once we make this move, all the puzzles described previously for the non-classicist position are thrown into a totally new light.

To begin with, there's no obvious place for the thought that there are multiple truth statuses. For you get that by looking at a many-valued model, and imagining it to be an image of what the intended interpretation of the language must be like. And that is exactly the move that's now illegitimate. Notice that this undercuts one motivation for moving to a fuzzy logic—the idea that one represents vague predicates as somehow smoothly varying in truth status. Likewise, the idea that we're just "iterating a bad idea" in multiplying truth values doesn't hold water on this conception—since the many values assigned to sentences in models just don't correspond to truth statuses.

Connectedly, one shouldn't say that contradictions can be "half true" (nor that instances of excluded middle are "half true"). It's true that (on, say, the Kleene approach) you won't have ~(p&~p) as a tautology. Maybe you could object to *that* feature. But that to me doesn't seem nearly as difficult to swallow as a contradiction having "some truth to it", despite the fact that from a contradiction everything follows.

One shouldn’t assume that “determinately” should be treated as the tertium operator. Indeed, if you’re shooting for a combined non-classical theory of vagueness and semantic paradoxes, you *really* shouldn’t treat it this way, since as noted above this would give you paradox back.

There is therefore a central and really important question: what is the non-classical treatment of "determinately" to be? Sample answer (lifted from Field's discussion of the literature): define D(p) as p&~(p->~p), where -> is a certain fuzzy-logic conditional. This, Field argues, has many of the features we'd intuitively want a determinately operator to have; and in particular, it allows for non-trivial iterations. So if something like this treatment of "determinately" were correct, then higher-order indeterminacy wouldn't be obviously problematic (Field himself thinks this proposal is on the right lines, but that one must use another kind of conditional to make the case).
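
Purely for illustration, here's that definition with the Lukasiewicz conditional |p->q| = min(1, 1-|p|+|q|) plugged in (not Field's own conditional, just a familiar fuzzy conditional that makes the structure visible):

    # Toy model of D(p) = p & ~(p -> ~p), with -> read as the Lukasiewicz conditional.
    def neg(x): return 1 - x
    def conj(x, y): return min(x, y)
    def cond(x, y): return min(1.0, 1 - x + y)   # Lukasiewicz ->

    def D(x): return conj(x, neg(cond(x, neg(x))))

    v = 0.9                      # nearly, but not perfectly, true
    for i in range(5):
        print(i, round(v, 2))    # value of D applied i times
        v = D(v)
    # The printed values run 0.9, 0.8, 0.6, 0.2, 0.0: D(1) is 1, anything at or
    # below 0.5 gets D-value 0, and each further iteration of D is strictly more
    # demanding, so "determinately", "determinately determinately", ... come apart.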

"No interpretation" nonclassicism is an utterly different position from the folklore nonclassicism I was talking about before. For me, the reason to think about indeterminacy and the semantic and vagueness-related paradoxes in the first place is that they shed light on the nature of language, representation, logic and epistemology. On these sorts of issues, no-interpretation nonclassicism and the folklore version take diametrically opposed positions, and, flowing from this, the appropriate ways of arguing for or against the two views are just very different.

Non-classical logics: some folklore

Having just finished the final revisions to my Phil Compass survey article on Metaphysical indeterminacy and ontic vagueness (penultimate draft available here), I started thinking some more about how those who favour non-classical logics think of their proposal (in particular, people who think that something like the Kleene 3-valued logic, or some continuum-valued generalization of it, is the appropriate setting for analyzing vagueness or indeterminacy).

The way I've thought of non-classical treatments in the past is, I think, a natural interpretation of one non-classical picture, and one that's reasonably widely shared. In this post, I'm going to lay out some of that folklore-y conception of non-classicism (I won't attribute views to authors, since I'm starting to wonder whether elements of the folklore conception are characterizations offered by opponents, rather than something that the nonclassicists should accept—ultimately I want to go back through the literature and check exactly what people really do say in defence of non-classicism).

Here's my take on folklore nonclassicism. While classicists think there are two truth-statuses, non-classicists believe in three, four, or continuum-many truth-statuses (let's focus on the 3-valued system for now). They might have various opinions about the structure of these truth-statuses—the most common being that they're linearly ordered (so that for any two truth-statuses, one is truer than the other). Some sentences (say, "Jimmy is bald") get a status that's intermediate between perfect truth and perfect falsity. And if we want to understand the operator "it is indeterminate whether" in such settings, we can basically treat it as a one-place extensional connective: "indeterminate(p)" is perfectly true just in case p has the intermediate status, and otherwise it is perfectly false.

So interpreted, non-classicism generalizes classicism smoothly. Just as the classicist can think there is an intended interpretation of the language (a two-valued model which gets the representational properties of words right), the non-classicist can think there's an intended interpretation (say, a three-valued model getting the representational features right). And that then dovetails very nicely with a model-theoretic characterization of consequence as truth-preservation under (almost) arbitrary reinterpretations of the language. For if one knows that some pattern is truth-preserving under arbitrary reinterpretations of the language, then that pattern is truth-preserving in particular on the intended interpretation—which is just to say that it preserves truth simpliciter. This forges a connection between validity and the preservation of a status we have all sorts of reason to be interested in—truth. (Of course, one just has to write down this thought to start worrying about the details. Personally, I think this integrated package is tremendously powerful and interesting, deserves detailed scrutiny, and should be given up only as an option of last resort—but maybe others take a different view.) All this carries over to the non-classicist position described. So, for example, on a Kleene system, validity is a matter of preserving perfect truth under arbitrary reinterpretations—and to the extent that we're interested in reasoning which preserves that status, we've got the same reasons as before to be interested in consequence. Of course, one might also think that reasoning which preserves non-perfect-falsity is an interesting thing to think about. And very nicely, we have a systematic story about that too—this non-perfect-falsity sense of validity would be the paraconsistent logic LP (though of course not under an interpretation where contradictions get to be true).

With this much on board, one can set out various familiar gambits from the literature.

  1. One could say that allowing contradictions to be half-true (i.e. to be indeterminate, to have the middle-status) is just terrible. Or that allowing a parity of truth-status between “Jimmy is bald or he isn’t” and “Jimmy’s both bald and not bald” just gets intuitions wrong (the most powerful way dialectically to deploy this is if the non-classicist backs their position primarily by intuitions about cases—e.g. our reluctance to endorse the first sentence in borderline cases. The accusation is that if our game is taking intuitions about sentences at face value, it’s not at all clear that the non-classicist is doing a good job.)
  2. One could point out that "indeterminacy" for the nonclassicist will trivially iterate. If one defines Determinate(p) as p&~indeterminate(p) (or directly as the one-place connective that is perfectly true if p is, and perfectly false otherwise) then we'll quickly see that Determinately determinately p will follow from determinately p; and determinately indeterminate whether p will follow from indeterminate whether p. And so on. (See the toy check just after this list.)
  3. In reaction to this, one might abandon the 3-valued setting for a smooth, “fuzzy” setting. It’s not quite so clear what value “indeterminate p” should take (though there are actually some very funky options out there). Perhaps we might just replace such talk with direct talk of “degrees of determinacy” thought of as degrees of truth—with “D(p)=n” being again a one-place extensional operator perfectly true iff p has degree of truth n; and otherwise perfectly false.
  4. One might complain that all this multiplying of truth-values is fundamentally misguided. Think of people saying that the "third status" view of indeterminacy is all wrong—indeterminacy is not a status that competes with truth and falsity; or the quip (maybe due to Mark Sainsbury?) that one does "not improve a bad idea by iterating it"—i.e. by introducing finer and finer distinctions.
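
Here's the trivial-iteration point from (2) as a toy check (the 0/0.5/1 encoding is just bookkeeping; nothing here beyond the definitions in the text):

    # "Determinately" and "indeterminate whether" as extensional 3-valued operators.
    def indet(x): return 1.0 if x == 0.5 else 0.0
    def det(x):   return 1.0 if x == 1.0 else 0.0

    for v in (0.0, 0.5, 1.0):
        print(v, det(det(v)) == det(v), det(indet(v)) == indet(v))
    # Both columns come out True for every value: DDp collapses into Dp, and
    # "determinately indeterminate whether p" collapses into "indeterminate whether p".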

I don’t think these are knock-down worries. (1) I do find persuasive, but I don’t think it’s very dialectically forceful—I wouldn’t know how to argue against someone who claimed their intuitions systematically followed, say, the Kleene tables. (I also think that the nonclassicist can’t really appeal to intuitions against the classicist effectively). Maybe some empirical surveying could break a deadlock. But pursued in this way the debate seems sort of dull to me.

(2) seems pretty interesting. It looks like the non-classicist's treatment of indeterminacy, if they stick with the 3-valued setting, doesn't allow for "higher-order" indeterminacy at all. Now, if the nonclassicist is aiming to treat indeterminacy in general rather than vagueness in particular (say, if they're giving an account of the indeterminacy purportedly characteristic of the open future, or of the status of personal identity across fission cases) then it's not clear that one needs to posit higher-order indeterminacy.

I should say that there's one response to the "higher-order" issues that I don't really understand. That's the move of saying that, strictly, the semantics should be done in a non-classical metalanguage, where we can't assume that "x is true or x is indeterminate or x is false" itself holds. I think Williamson's complaints about this in the relevant chapter of his vagueness book are justified—I just don't know what the "non-classical theory" being appealed to here is, or how one would write it down in order to assess its merits (this is of course just a technical challenge: maybe it could be done).

I'd like to point out one thing here (probably not original to me!). The "nonclassical metalanguage" move at best evades the challenge that, by saying that there's an intended 3-valued interpretation, one is committed to denying higher-order indeterminacy. But we achieve this, supposedly, by saying that the intended interpretation needs to be described non-classically (or perhaps notions like "the intended interpretation" need to be replaced by some more nuanced characterization). The 3-valued logic is standardly defined in terms of what preserves truth over all 3-valued interpretations describable in a classical metalanguage. We might continue with that classical model-theoretic characterization of the logic. But then (a) if the real interpretation is describable only non-classically, it's not at all clear why truth-preservation in all classical models should entail truth-preservation in the real, non-classical interpretation. And (b) our object-language "determinacy" operator, treated extensionally, will still trivially iterate—that was a feature of the *logic* itself. This last feature in particular might suggest that we should really be characterizing the logic as truth-preservation under all interpretations, including those describable only non-classically. But that means we don't even have a fix on the *logic*, for who knows what will turn out to be truth-preserving on these non-classical models (if only because I just don't know how to think about them).

To emphasize again—maybe someone could convince me this could all be done. But I'm inclined to think that it'd be much neater for this view to deny higher-order indeterminacy—which, as I mentioned above, just may not be a cost in some cases. My suggested answer to (4), therefore, is just to take it on directly—to provide motivation for wanting however many values one posits that is independent of having higher-order indeterminacy around (I think Nick J.J. Smith's AJP paper "Vagueness as closeness" pretty explicitly takes this tack for the fuzzy-logic folk).

Anyway, I take these to be some of the folklore views and dialectical moves that people try out in this setting. Certainly it's the way I once thought of the debate shaping up. It's still, I think, something that's worth thinking about. But in the next post I'm going to say why I think there's a far, far more attractive way of being a non-classicist.