Defending conditional excluded middle

So things have been a little quiet on this blog lately. This is a combination of (a) trips away, (b) doing administration-stuff for the Analysis Trust, and (c) the fact that I’m entering the “writing up” phase of my current research leave.

I’ve got a whole heap of papers in various stages of completion that I want to get finished up. As I post drafts online, the blogging should become more regular. So here’s the first installment—a new version of an older paper that discusses conditional excluded middle, and in particular a certain style of argument that Lewis deploys against it, and which Bennett endorses (in an interestingly varied form) in his survey book.

What I try to do in the present version—apart from setting out some reasons for being interested in conditional excluded middle for counterfactuals that I think deserve more attention—is to disentangle two elements of Bennett’s discussion. One element is a certain narrow-scope analysis of “might”-counterfactuals (roughly: “if it were that P it might be that Q” has the form P\rightarrow \Diamond Q, where the modal expresses an idealized ignorance). The second is an interesting epistemic constraint on true counterfactuals that I call “Bennett’s Hypothesis”.

One thing I argue is that Bennett’s Hypothesis all on its own conflicts with conditional excluded middle. And without Bennett’s Hypothesis, there’s really no argument from the narrow-scope analysis alone against conditional excluded middle. So really, if counterfactuals work the way Bennett thinks they do, we can forget about the fine details of analyzing epistemic modals when arguing against conditional excluded middle. All the action is with whether or not we’ve got grounds to endorse the epistemic constraint on counterfactual truth.

The second thing I argue is that there are reasons to be seriously worried about Bennett’s Hypothesis—it threatens to lead us straight into an error theory about ordinary counterfactual judgements.

If people are interested, the current version of the paper is available here. Any thoughts gratefully received!

CFP: CMM Graduate Conference at Leeds

The Centre for Metaphysics and Mind at the University of Leeds is hosting the 3rd Annual CMM Graduate Conference on Thursday 4th September. This will run immediately before the metaphysics conference, Perspectives on Ontology, being held at the University of Leeds from Friday 5th to Sunday 7th September.

Submissions are welcome on any area of metaphysics. Metaphysics should be broadly construed to include not only traditional metaphysical topics, but also the metaphysical aspects of e.g. philosophy of mind, philosophy of physics, philosophy of religion, and aesthetics.

Submissions of any length up to 5,000 words will be considered.

Each paper presented at the conference will be followed by a response from a member of academic staff from the University of Leeds Department of Philosophy.

As with last year’s conference we hope to be able to pay some or all of the travel and accommodation costs for those people whose papers are accepted. (This is dependent on successful funding applications.)

Please submit complete papers, preferably by e-mail, to Sarah Grant, phl2skg@leeds.ac.uk. Please mark your submission clearly as such. Receipt will be acknowledged asap. Submissions will also be accepted by mail:

Sarah Grant
School of Philosophy
University of Leeds
Woodhouse Lane
LS2 9JT

All papers should be suitable for blind review (we cannot guarantee anonymised refereeing if your paper is not suitably anonymised). Please include a cover page with title, abstract and contact details. Mailed submissions should include two copies.

Deadline for receipt of submissions is Friday 18th July 2008.

Decisions will be made by Friday 8th August 2008.

For more general details on the conference please consult:

http://www.personal.leeds.ac.uk/~phsk/cmmgc08/index.htm

or e-mail Duncan Watson at phl5dw@leeds.ac.uk

Metaphysics at Leeds: Perspectives on ontology conference

Registration details are now available for Perspectives on Ontology. Please see the website here.
Attendance at the conference is limited, so early registration is urged.

Details are also available for the graduate bursaries.

Perspectives on Ontology

A major international conference on metaphysics to be held at the University of Leeds, Sep 5th-7th 2008.

Speakers:
Karen Bennett (Cornell)
John Hawthorne (Oxford)
Jill North (Yale)
Helen Steward (Leeds)
Gabriel Uzquiano (Oxford)
Jessica Wilson (Toronto)

Commentators:
Benj Hellie (Toronto)
Kris McDaniel (Syracuse)
Juha Saatsi (Leeds)
Ted Sider (NYU)
Jason Turner (Leeds)
Robbie Williams (Leeds)

There’s also going to be a graduate conference directly prior to this. Details, including a call for papers, are available here.

Fafblog is back!

Hooray!

[Ht: Crooked Timber]

Probabilities and indeterminacy

I’ve just learned that my paper “Vagueness, Conditionals and Probability” has been accepted for the first formal epistemology festival in Konstanz this summer. It looks like the perfect place for me to get feedback on, and generally learn more about, the issues raised in the paper. So I’m really looking forward to it.

I’m presently presenting some of this work as part of a series of talks at Arche in St Andrews. I’m learning lots here too! One thing that I’ve been thinking about today relates directly to the paper above.

One of the main things I’ve been thinking about is how credences, evidential probability and the like should dovetail with supervaluationism. I’ve written about this a couple of times in the past, so I’ll briefly set out one sort of approach that I’ve been interested in, and then sketch something that just occurred to me today.

The basic question is: what attitude should we take to p, if we are certain that p is indeterminate? Here’s one attractive line of thought. First of all, it’s a familiar thought that logic should impose some rationality constraints on belief. Let’s formulate this minimally as the constraint that, for the rational agent, probability (credence or evidential probability) can never decrease across a valid argument:

A\models B \Rightarrow p(A)\leq p(B)

Now take one of the things that supervaluational logics are often taken to imply, where ‘D’ is read as ‘it is determinate that’:

A\models DA

Then we note that this and the logicality constraint on probabilities entails that

p(A)\leq p(DA)

So in particular, if we fully reject A being determinate (e.g. if we fully accept that it’s indeterminate) then the probability of the RHS will be zero, and so by the inequality, the probability of the LHS is zero too. (The particular supervaluational consequence I’m appealing to is controversial, since it follows only in settings which seem inappropriate for modelling higher-order indeterminacy, but we can argue by adding a couple of extra assumptions for the same result in other ways. This’ll do us for now though.)
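Spelled out step by step (writing p for the rational agent’s probability function, and assuming full certainty that A is indeterminate), the reasoning runs:

```latex
\begin{aligned}
A &\models DA && \text{(supervaluational consequence)}\\
p(A) &\leq p(DA) && \text{(logicality constraint)}\\
p(DA) &= 0 && \text{(full rejection of $A$'s determinacy)}\\
\therefore\quad p(A) &= 0 && \text{(and likewise } p(\neg A) = 0\text{, from } \neg A \models D\neg A\text{)}
\end{aligned}
```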

The result is that if we’re fully confident that A is indeterminate, we should have probability zero in both A and in not-A. That’s interesting, since we’re clearly not in Kansas anymore—this result is incompatible with classical probability theory. Hartry Field has argued in the past for the virtues of this result as giving a fix on what indeterminacy is, and I’m inclined to think that it captures something at the heart of at least one way of conceiving of indeterminacy.

Rather than thinking about indeterminate propositions as having point-valued probabilities, one might instead favour a view whereby they get interval values. One version of this can be defined in this setting. For any A, let u(A) be defined to be 1-p(\neg A). This quantity—how little one accepts the negation of a proposition—might be thought of as the upper bound of an interval whose lower bound is the probability of A itself. So rather than describe one’s doxastic attitudes to known indeterminate A as being “zero credence” in A, one might prefer the description of them as themselves indeterminate—in a range between zero and 1.
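As a toy illustration (the helper name is my own, not anything from the literature), the interval can be read straight off the two credences:

```python
def credal_interval(p_A, p_not_A):
    """Interval [p(A), 1 - p(not-A)] attached to a possibly-indeterminate A.

    The upper bound is u(A) as defined above: how little one accepts
    the negation of A. For determinate A the endpoints coincide.
    """
    lower, upper = p_A, 1 - p_not_A
    assert lower <= upper, "incoherent credence pair"
    return (lower, upper)

# Certain that A is indeterminate: both A and not-A fully rejected,
# and the interval sprawls over the whole unit interval.
print(credal_interval(0.0, 0.0))   # (0.0, 1.0)

# Ordinary (determinate) uncertainty: the endpoints collapse to a point.
print(credal_interval(0.5, 0.5))   # (0.5, 0.5)
```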

There’s a different way of thinking about supervaluational probabilities, though, which is in direct tension with the above. Start with the thought that at least for supervaluationism conceived as a theory of semantic indecision, there should be no problem with the idea of perfectly sharp classical probabilities defined over a space of possible worlds. The ways the world can be, for this supervaluationist, are each perfectly determinate, so there’s no grounds as yet for departing from orthodoxy.

But we also want to talk about the probabilities of what is expressed by sentences such as “that man is bald” where the terms involved are vague (pick your favourite example if this one won’t do). The supervaluationist thought is that this sentence picks out a sharp proposition only relative to a precisification. What shall we say of the probability of what this sentence expresses? Well, there’s no fact of the matter about what it expresses, but relative to each precisification, it expresses this or that sharp proposition—and in each case our underlying probability measure assigns it a probability.

Just as before, it looks like we have grounds for assigning to sentences, not point-like probability values, but range-like values. The range in question will be a subset of [0,1], and will consist of all the probability-values which some precisification of the claim acquires. Again, we might gloss this as saying that when A is indeterminate, it’s indeterminate what degree of belief we should have in A.

But the two recipes deliver utterly different results. Suppose, for example, I introduce a predicate into English, “Teads”, which has two precisifications: on one it applies to all and only coins which land Heads, on the other to all and only coins that land Tails (i.e. not Heads). Consider the claim that the fair coin I’ve just flipped will land Teads. Notice that we can be certain that this sentence will be indeterminate—whichever way the coin lands, Heads or Tails, the claim will be true on one precisification and false on the other.

What would the logic-based argument give us? Since we assign probability 1 to indeterminacy, it’ll say that we should assign probability 0, or a [0,1] interval, to the coin landing Teads.

What would the precisification-based argument give us? Think of the two propositions the claim might express: that the coin will land heads, or that the coin will land tails. Either way, it expresses a proposition that is probability 1/2. So the set of probability values associated with the sentence will be point-like, having value 1/2.

Of course, one might think that the point-like value stands in an interesting relationship to the [0,1] range—namely being its midpoint. But now consider cases where the coin is biased in one way. For example, if the coin is biased to degree 0.8 towards heads, then the story for the logic-based argument will remain the same. But for the precisification-based person the values will change to {0.8,0.2}. So we can’t just read off the values the precisificationist arrives at from what we get from the logic-based argument. Moral: in cases of indeterminacy, thinking of probabilities in the logic-based way wipes out all information other than that the claim in question is indeterminate.
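The contrast is easy to compute. Here is a minimal sketch (function names are mine) of the two recipes applied to the Teads coin:

```python
def precisification_values(p_heads):
    """Precisification-based recipe: the set of probabilities the Teads
    sentence gets, one per precisification ('Teads'=Heads, 'Teads'=Tails)."""
    return {p_heads, 1 - p_heads}

def logic_based_value(certain_of_indeterminacy):
    """Logic-based recipe: certainty of indeterminacy forces credence 0
    (or the degenerate [0,1] interval), whatever the coin's bias."""
    return 0.0 if certain_of_indeterminacy else None

# Fair coin: the precisification values collapse to the single point 1/2,
# which happens to be the midpoint of the logic-based [0,1] range.
print(precisification_values(0.5))        # {0.5}

# Biased coin: the precisification recipe tracks the bias...
print(precisification_values(0.8))        # {0.8, 0.2} (up to floating point)
# ...while the logic-based recipe is unchanged, wiping out that information.
print(logic_based_value(True))            # 0.0
```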

This last observation can form the basis for criticism of supervaluationism in a range of circumstances in which we want to discriminate between attitudes towards equally indeterminate sentences. And *as an argument* I take it seriously. I do think there should be logical constraints on rational credence, and if the logic for supervaluationism is as it’s standardly taken to be, that enforces the result. If we don’t want the result, we need to argue for some other logic. Doing so isn’t cost-free, I think—working within the supervaluational setting, bumps tend to arise elsewhere when one tries to do this. So the moral I’d like to draw from the above discussion is that there must be two very different ways of thinking about indeterminacy that both fall under the semantic indecision model. These two conceptions are manifest in the different attitudes towards indeterminacy described above. (This has convinced me, against my own previous prejudices, that there’s something more-than-merely terminological to the question of “whether truth is supertruth”.)

But let’s set that aside for now. What I want to do is just note that *within* the supervaluational setting that goes with the logic-based argument and thinks that all indeterminate claims should be rejected, there shouldn’t be any objection to the underlying probability measure mentioned above, and given this, one shouldn’t object to introducing various object-language operators. In particular, let’s consider the following definition:

“P(S)=n” is true on i, w iff the measure of {u: “S” is true on u,i}=n

But it’s pretty easy to see that the (super)truths about this operator will reflect the precisification-based probabilities described earlier. So even if the logic-based argument means that our degree of belief in indeterminate A should be zero, still there will be object-language claims we could read as “P(the coin will land Teads)=1/2” that will be supertrue. (The appropriate moral, from the perspective of the theorist in question, would be that whatever this operator expresses, it isn’t a notion that can be identified with degree of belief.)
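To see why the supertruth claim holds, here is a minimal model (the world and precisification labels are my own) implementing the definition of the P-operator above:

```python
# Two equiprobable ways the world can be, each perfectly determinate:
measure = {"Heads": 0.5, "Tails": 0.5}

# The extension of 'the coin lands Teads' relative to each precisification:
extensions = {"i1": {"Heads"}, "i2": {"Tails"}}

def prob_on(i):
    """Per the clause above: the measure of {u: 'Teads' is true on u, i}."""
    return sum(measure[w] for w in extensions[i])

# 'P(Teads)=1/2' comes out true on every precisification -- supertrue --
# even though 'Teads' itself is indeterminate whichever world is actual.
print(all(prob_on(i) == 0.5 for i in extensions))   # True
```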

If this is right, then arguments that I’m interested in using against certain applications of the “certainty of indeterminacy entails credence zero” position have to be handled with extreme care. So, for example, in the paper mentioned right at the beginning of this post, I appeal to empirical data about folk judgements about the probabilities of conditionals. I was assuming that I could take this data as information on what the folk view about credences of conditionals is.

But if, compatibly with taking the “indeterminacy entails zero credence” view of conditionals, one could have within a language a P-operator which behaves in the ways described above, this isn’t so clear anymore. Explicit probability reports might be reporting on the P-operator, rather than subjective credence. So everything becomes rather delicate and very confusing.

A new wordpress home for theories n things

I’ve decided to move this blog to wordpress (I like the extra features it brings—and particularly the ability to use latex commands to write in logical notation—I plan to use that a lot).

Please feel free to leave comments on the functioning and aesthetics of the new blog. (The picture in the title-bar, if you’re wondering, is Mary the colour scientist in her monochrome room—a section from a rather lovely picture that was drawn for me for my first year “Mind” lectures by a friend of one of the graduate students here at Leeds).

Paracompleteness and credences in contradictions.

The last few posts have discussed non-classical approaches to indeterminacy.

One of the big stumbling blocks about “folklore” non-classicism, for me, is the suggestion that contradictions (A&~A) be “half true” where A is indeterminate.

Here’s a way of putting a constraint that appeals to me: I’m inclined to think that an ideal agent ought to fully reject such contradictions.

(Actually, I’m not quite as unsympathetic to contradictions as this makes it sound. I’m interested in the dialetheic/paraconsistent package. But in that setting, the right thing to say isn’t that A&~A is half-true, but that it’s true (and probably also false). Attitudinally, the ideal agent ought to fully accept it.)

Now the no-interpretation non-classicist has the resources to satisfy this constraint. She can maintain that the ideal degree of belief in A&~A is always 0. Given that:

p(A)+p(B)=p(AvB)+p(A&B)

we have:

p(A)+p(~A)=p(Av~A)

And now, whenever we fail to fully accept Av~A, it will follow that our credences in A and ~A don’t sum to one. That’s the price we pay for continuing to utterly reject contradictions.

The *natural* view in this setting, it seems to me, is that accepting indeterminacy of A corresponds to rejecting Av~A. So someone fully aware that A is indeterminate should fully reject Av~A. (Here and in the above I’m following Field’s “No fact of the matter” presentation of the nonclassicist).

But now consider the folklore nonclassicist, who does take talk of indeterminate propositions being “half true” (or more generally, degree-of-truth talk) seriously. This is the sort of position that the Smith paper cited in the last post explores. The idea there is that indeterminacy corresponds to half-truth, and fully informed ideal agents should set their partial beliefs to match the degree-of-truth of a proposition (e.g. in a 3-valued setting, an indeterminate A should be believed to degree 0.5). [NB: obviously partial beliefs aren’t going to behave like a probability function if truth-functional degrees of truth are taken as an “expert function” for them.]

Given the usual min/max take on how these multiple truth values get settled over conjunction and negation, for the fully informed agent we’ll get p(Av~A) set equal to the degree of truth of Av~A, i.e. 0.5. And exactly the same value will be given to A&~A. So contradictions, far from being rejected, are appropriately given the same doxastic attitude as I assign to “this fair coin will land heads”.

Another way of putting this: the difference between our overall attitude to “the coin will land heads” and “Jim is bald and not bald” only comes out when we consider attitudes to contents in which these are embedded. For example, I fully disbelieve B&~B when B=the coin lands heads; but I half-accept it for B=A&~A. That doesn’t at all ameliorate the implausibility of the initial identification, for me, but it’s something to work with.
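The min/max clauses make both points easy to check mechanically. A sketch (treating 0.5 as the middle value of the three-valued setting):

```python
# Truth-functional clauses of the folklore picture:
# degrees of truth settled by min/max/1-minus.
def neg(a):      return 1 - a
def conj(a, b):  return min(a, b)
def disj(a, b):  return max(a, b)

A = 0.5   # a paradigm indeterminate sentence

# Excluded middle and the contradiction both sit at the 0.5 pivot:
print(disj(A, neg(A)))   # 0.5
print(conj(A, neg(A)))   # 0.5

# The contrast with a chancy-but-classical B only emerges when embedded:
# B&~B is 0 at every world for B = 'the coin lands heads'...
for b in (0.0, 1.0):
    assert conj(b, neg(b)) == 0.0
# ...but stays at 0.5 when B is itself the contradiction A&~A.
B = conj(A, neg(A))
print(conj(B, neg(B)))   # 0.5
```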

In short, the Field-like nonclassicist sets A&~A to 0; and that seems exactly right. Given this and one or two other principles, we get a picture where our confidence in Av~A can take any value—right down to 0; and as flagged before, the probabilities of A and ~A carve up this credence between them, so in the limit where Av~A has probability 0, they take probability 0 too.

But the folklore nonclassicist I’ve been considering, for whom degrees-of-truth are an expert function for degrees-of-belief, has 0.5 as a pivot. For the fully informed, Av~A always exceeds this by exactly the amount that A&~A falls below it—and where A is indeterminate, we assign them all probability 0.5.

As will be clear, I’m very much on the Fieldian side here (if I were to be a nonclassicist in the first place). It’d be interesting to know whether folklore nonclassicists do in general have a picture about partial beliefs that works as Smith describes. Consistently with taking semantics seriously, they might think of the probability of A as equal to the measure of the set of possibilities where A is perfectly true. And that will always make the probability of A&~A 0 (since it’s never perfectly true); and meet various other of the Fieldian descriptions of the case. What it does put pressure on is the assumption (more common in degree theorists than 3-value theorists, perhaps) that we should describe degree-of-truth-0.5 as a way of being “half true”—why, in a situation where we know A is half-true, would we be compelled to fully reject it? So it does seem to me that the rhetoric of folklore degree theorists fits a lot better with Smith’s suggestions about how partial beliefs work. And I think it’s objectionable on that account.

[Just a quick update.

First observation: to get a fix on the “pivot” view, think of the constraint as being that P(A)+P(~A)=1. Then we can see that P(Av~A)=1-P(A&~A), which summarizes the result.

Second observation: I mentioned above that something that treated the degrees of truth as an expert function “won’t behave like a probability function”. One reflection of that is that the logic-probability link will be violated, given certain choices for the logic. E.g. suppose we require valid arguments to preserve perfect truth (e.g. we’re working with the K3 logic). Then A&~A will be inconsistent. And, for example, P(A&~A) can be 0.5, while for some unrelated B, P(B) is 0. But in the logic A&~A|-B, so probability has decreased over a valid argument. Likewise if we’re preserving non-perfect-falsity (e.g. we’re working with the LP system): Av~A will then be a validity, but P(Av~A) can be 0.5, yet P(B) be 1.

These are for the 3-valued case, but the point clearly generalizes to the analogous definitions of validity in a degree-valued setting. One of the tricky things about thinking about this area is that there are lots of choice-points around, and one is the definition of validity. So, for example, one might demand that valid arguments preserve both perfect truth and non-perfect falsity; then the two arguments above drop away, since neither |-Av~A nor A&~A|- holds on this logic. The generalization of this in the many-valued setting is to demand e-truth preservation for every e. Clearly these logics are far more constrained than the K3 or LP logics, and so there’s a better chance of avoiding violations of the logic-probability link. Whether one gets away with it is another matter.]
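These validity claims are mechanical enough to verify by brute force. A toy encoding (my own, not from the literature) that enumerates the three truth values and confirms both violations:

```python
from itertools import product

VALUES = (0.0, 0.5, 1.0)          # the three truth values
def neg(a):      return 1 - a
def conj(a, b):  return min(a, b)
def disj(a, b):  return max(a, b)

# Sentences are functions from a valuation of the atoms A, B to a value.
top           = lambda a, b: 1.0               # stands in for an empty premise set
bottom        = lambda a, b: 0.0               # stands in for an empty conclusion
contradiction = lambda a, b: conj(a, neg(a))   # A & ~A
lem           = lambda a, b: disj(a, neg(a))   # A v ~A
atom_B        = lambda a, b: b                 # an unrelated atom B

def k3_valid(prem, concl):
    """K3 validity: preserve perfect truth (value 1)."""
    return all(concl(a, b) == 1.0 for a, b in product(VALUES, repeat=2)
               if prem(a, b) == 1.0)

def lp_valid(prem, concl):
    """LP validity: preserve non-perfect-falsity (value >= 0.5)."""
    return all(concl(a, b) >= 0.5 for a, b in product(VALUES, repeat=2)
               if prem(a, b) >= 0.5)

# K3: A&~A entails anything (vacuously: it never reaches value 1), so
# P(A&~A)=0.5 with P(B)=0 breaks the logic-probability link.
print(k3_valid(contradiction, atom_B))   # True
# LP: Av~A is valid, so P(Av~A)=0.5 with P(B)=1 breaks the link dually.
print(lp_valid(top, lem))                # True
# The more constrained logic that demands both blocks both arguments:
print(k3_valid(top, lem))                # False: |-Av~A fails
print(lp_valid(contradiction, bottom))   # False: A&~A|- fails
```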

Regimentation (x-post).

Here’s something you frequently hear said about ontological commitment. First, that to determine the ontological commitments of some sentence S, one must look not at S, but at a regimentation or paraphrase of S, S*. Second (very roughly), you determine the ontological commitments of S by looking at what existential claims follow from S*.

Leave aside the second step of this. What I’m perplexed about is how people are thinking about the first step. Here’s one way to express the confusion. We’re asked about the sentence S, but to determine the ontological commitments we look at features of some quite different sentence S*. But what makes us think that looking at S* is a good way of finding out about what’s required of the world for S to be true?

Reaction (1). The regimentation may be constrained so as to make the relevance of S* transparent. Silly example: regimentation could be required to be null, i.e. every sentence has to be “regimented” as itself. No mystery there. Less silly example: the regimentation might be required to preserve meaning, or truth-conditions, or something similar. If that’s the case then one could plausibly argue that the OC’s of S and S* coincide, and looking at the OC’s of S* is a good way of figuring out what the OC’s of S are.

(The famous “symmetry” objections are likely to kick in here; i.e. if certain existential statements follow from S but not from S*, and what we know is that S and S* have the same OC’s, why take it that S* reveals those OC’s better than S?—so for example if S is “prime numbers exist” and S* is a nominalistic paraphrase, we have to say something about whether S* shows that S is innocent of OC to prime numbers, or whether S shows that S* is in a hidden way committed to prime numbers).

Obviously this isn’t plausibly taken as Quine’s view—the appeal to synonymy is totally unQuinean (moreover, in Word and Object he’s pretty explicit that the regimentation relationship is constrained by whether S* can play the same theoretical role we initially thought S played—and that’ll allow for lots of paraphrases where the sentences don’t even have the appearance of being truth-conditionally equivalent).

Reaction (2). Adopt a certain general account of the nature of language. In particular, adopt a deflationism about truth and reference. Roughly: T- and R-schemes are in effect introduced into the object language as defining a disquotational truth-predicate. Then note that a truth-predicate so introduced will struggle to explain the predications of truth for sentences not in one’s home language. So appeal to translation, and let the word “true” apply to a sentence in a non-home language iff that sentence translates to some sentence of the home language that is true in the disquotational sense. Truth for non-home languages is then the product of translation and disquotational truth. (We can take the “home language” for present purposes to be each person’s idiolect).

I think from this perspective the regimentation steps in the Quinean characterization of ontological commitment have an obvious place. Suppose I’m a nominalist, and refuse to speak of numbers. But the mathematicians go around saying things like “prime numbers exist”. Do I have to say that what they say is untrue (am I going to go up to them and tell them this?) Well, they’re not speaking my idiolect; so according to the deflationary conception under consideration, what I need to do is figure out whether their sentences translate to something that’s deflationarily true in my idiolect. And if I translate them according to a paraphrase on which their sentences pair with something that is “nominalistically acceptable”, then it’ll turn out that I can call what they say true.

This way of construing the regimentation step of ontological commitment identifies it with the translation step of the translation-disquotation treatment of truth sketched above. So obviously what sorts of constraints we have on translation will transfer directly to constraints on regimentation. One *could* appeal to a notion of truth-conditional equivalence to ground the notion of translatability—and so get back to a conception whereby synonymy (or something close to it) was central to our analysis of language.

It’s in the Quinean spirit to take translatability to stand free of such notions (to make an intuitive case for separation here, one might note, for example, that synonymy should be an equivalence relation, whereas translatability is plausibly non-transitive). There are several options. Quine I guess focuses on preservation of patterns of assent and dissent to translated pairs; Field appeals to his projectivist treatment of norms and takes “good translation” as something to be explained in projective terms. No doubt there are other ways to go.

This way of defending the regimentation step in treatments of ontological commitment turns essentially on deflationism about truth; and more than that, on a non-universal part of the deflationary project: the appeal to translation as a way to extend usage of the truth-predicate to non-home languages. If one has some non-translation story about how this should go (and there are some reasons for wanting one, to do with applying “true” to languages whose expressive power outstrips that of one’s own) then the grounding for the regimentation step falls away.

So the Quinean regimentation-involving treatment of ontological commitment makes perfect sense within a Quinean translation-involving treatment of language in general. But I can’t imagine that people who buy into the received view of ontological commitment really mean to be taking a stance on deflationism vs. its rivals; or about the exact implementation of deflationism.

Of course, regimentation or translatability (in a more Quinean, preservation-of-theoretical-role sense, rather than a synonymy-sense) can still be significant for debates about ontological commitments. One might think that arithmetic was ontologically committing, but the existence of some nominalistic paraphrase that was suited to play the same theoretical role gave one some reassurance that one doesn’t *have* to use the committing language, and maybe overall these kind of relationships will undermine the case for believing in dubious entities—not because ordinary talk isn’t committed to them, but because for theoretical purposes talk needn’t be committed to them. But unlike the earlier role for regimentation, this isn’t a “hermeneutic” result. E.g. on the Quinean way of doing things, some non-home sentence “there are prime numbers” can be true, despite there being no numbers—just because the best translation of the quoted sentence translates it to something other than the home sentence “there are prime numbers”. This kind of flexibility is apparently lost if you ditch the Quinean use of regimentation.

Arche talks

In a few weeks time (31st March-5th April) I’m going to be visiting the Arche research centre in St Andrews, and giving a series of talks. I studied at Arche for my PhD, so it’ll be really good to go back and see what’s going on.

The talks I’m giving relate to the material on indeterminacy and probability (in particular, evidential probability or partial belief). The titles are as follows:

  • Indeterminacy and partial belief I: The open future and future-directed belief.
  • Indeterminacy and partial belief II: Conditionals and conditional belief.
  • Indeterminacy and partial belief III: Vague survival and de se belief.

A lot of these are based around exploring the consequences of the view that if p is indeterminate, and one knows this (or is certain of it), then one shouldn’t invest any probability in p. In the case of the open future, of conditionals, and of vague survival—for rather different reasons in each case—this seems highly problematic.

But why should you believe that key principle about how attitudes to indeterminacy constrain attitudes to p? The case I’ve been focussing on up till now has concerned a truth-value gappy position on indeterminacy. With a broadly classical logic governing the object language, one postulates truth-value gaps in indeterminate cases. There’s then an argument directly from this to the sort of revisionism associated with supervaluationist positions in vagueness. And from there, and a certain consistency requirement on rational partial belief (or evidence) we get the result. The consistency requirement is simply the claim, for example, that if q follows from p, one cannot rationally invest more confidence in p than one invests in q (given, of course, that one is aware of the relevant facts).

The only place I appeal to what I’ve previously called the “Aristotelian” view of indeterminacy (truth value gaps but LEM retained) is in arguing for the connection between attitudes to determinately p and attitudes to p. But I’ve just realized something that should have been obvious all along—which is that there’s a quick argument to something similar for someone who thinks determinacy is marked by a rejection of excluded middle. Assume, to begin with, that the paracompletist nonclassicist will think in borderline cases, characteristically, one should reject the relevant instance of excluded middle. So if one is fully convinced that p is borderline, one should utterly reject pv~p.

It’s dangerous to generalize about non-classical systems, but the ones I’m thinking of all endorse the claim p|-pvq—i.e. disjunction introduction. So in particular, an instance of excluded middle will follow from p.

But if we utterly reject pv~p in a borderline case (assign it credence 0), then by the probability-logic link we should utterly reject (assign credence 0) anything from which it follows.
In particular, we should assign credence 0 to p. And by parallel reasoning, we should assign credence 0 to ~p.
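Compactly, with the probability-logic link doing the work:

```latex
\begin{aligned}
p &\models p \vee \neg p && \text{(disjunction introduction)}\\
P(p) &\leq P(p \vee \neg p) = 0 && \text{(probability-logic link; LEM fully rejected)}\\
\therefore\quad P(p) &= 0, \qquad P(\neg p) = 0 && \text{(the second by parallel reasoning)}
\end{aligned}
```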

[Edit: there’s a question, I think, about whether the non-classicist should take us to utterly reject LEM in a borderline case (i.e. degree of partial belief=0). The folklore non-classicist, at least, might suggest that on her conception degrees of truth should be expert functions for partial beliefs—i.e. absent uncertainty about what the degrees of truth are, one should conform the partial beliefs to the degrees of truth. Nick J. J. Smith has a paper where he works out a view that has this effect, from what I can see. It’s available here and is well worth a read. If a paradigm borderline case for the folklore nonclassicist is one where degree of truth of p, not p and pv~p are all 0.5, then one’s degree of belief in all of them should be 0.5. And there’s no obvious violation of the probability-logic link here. (At least in this specific case. The logic will have to be pretty constrained if it isn’t to violate probability-logic connection somewhere).]

If all this is correct, then I don’t need to restrict myself to discussing the consequences of the Aristotelian/supervaluational sort of view. Everything will generalize to cover the nonclassical cases—and will cover both the folklore nonclassicist and the no interpretation nonclassicist discussed in the previous posts (here’s a place where there’s convergence).

[A folklore non-classicist might object that for them, there isn’t a unique “logic” for which to run the argument. If one focuses on truth-preservation, one gets, say, a Kleene logic; if one focuses on non-falsity preservation, one gets an LP logic. But I don’t think this thought really goes anywhere…]
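For concreteness, the two candidate logics mentioned here really do come apart over excluded middle, even though they share one set of three-valued tables. A quick sketch (my illustration, with 0/0.5/1 standing in for the three values):

```python
# One set of 3-valued tables, two consequence relations.
# K3 ("truth-preservation") counts an argument valid when premisses
# valued 1 always yield a conclusion valued 1; LP ("non-falsity
# preservation") requires only that value >= 0.5 be preserved.
def neg(x): return 1 - x
def disj(x, y): return max(x, y)

lem = [disj(p, neg(p)) for p in (0, 0.5, 1)]
# LEM is LP-valid (never falls below 0.5) but not K3-valid
# (it takes value 0.5, not 1, at the borderline).
assert all(v >= 0.5 for v in lem)
assert not all(v == 1 for v in lem)
```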

Non-classical logics: the no interpretation account

In the previous post, I set out what I took to be one folklore conception of a non-classicist treatment of indeterminacy. Essential elements were (a) the postulation of not two, but several truth statuses; (b) the treatment of “it is indeterminate whether” (or degreed variants thereof) as an extensional operator; (c) the generalization to this setting of a classicist picture, where logic is defined as truth preservation over a range of reinterpretations, one amongst which is the interpretation that gets things right.

I said in that post that I thought that folklore non-classicism was a defensible position, though there are some fairly common maneuvers which I think the folklore non-classicist would be better off ditching. One of these is the idea that the intended interpretation is describable “only non-classically”.

However, there’s a powerful alternative way of being a non-classicist. The last couple of weeks I’ve had a sort of road to Damascus moment about this, through thinking about non-classicist approaches to the Liar paradox—and in particular, by reading Hartry Field’s articles and new book where he defends a “paracomplete” (excluded-middle rejecting) approach to the semantic paradoxes and work by JC Beall on a “paraconsistent” (contradiction-allowing) approach.

One interpretative issue with the non-classical approaches to the Liar and the like is that a crucial element is a truth-predicate that works in a way very unlike the notion of “truth” or “perfect truth” (“semantic value 1”, if you want neutral terminology) that features in the many-valued semantics. But that’s not necessarily a reason by itself to start questioning the folklore picture. For it might be that “truth” is ambiguous—sometimes picking up on a disquotational notion, sometimes tracking the perfect truth notion featuring in the nonclassicist’s semantics. But in fact there are tensions here, and they run deep.

Let’s warm up with a picky point. I was loosely throwing around terms like “3-valued logic” in the last post, and mentioned the (strong) Kleene system. But then I said that we could treat “indeterminate whether p” as an extensional operator (the “tertium operator” that makes “indet p” true when p is third-valued, and otherwise false). But that operator doesn’t exist in the Kleene system—the Kleene system isn’t expressively complete with respect to the truth functions definable over three values, and this operator is one of the truth-functions that isn’t there. (Actually, I believe if you add this operator, you do get something that is expressively complete with respect to the three valued truth-functions).
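The non-definability claim has a neat two-line proof: every strong Kleene connective outputs the middle value when all its inputs are the middle value, so no composition of them can behave like the tertium operator, which sends the middle value to “true”. A sketch in Python (my illustration):

```python
# Strong Kleene connectives over 0 (false), 0.5 (indeterminate), 1 (true).
def neg(x): return 1 - x
def conj(x, y): return min(x, y)
def disj(x, y): return max(x, y)
def cond(x, y): return max(1 - x, y)   # Kleene conditional

def tertium(x): return 1 if x == 0.5 else 0  # "indet p"

# Every Kleene connective is "fixed" at 0.5: feed it 0.5s, get 0.5 out.
assert neg(0.5) == 0.5
assert conj(0.5, 0.5) == 0.5
assert disj(0.5, 0.5) == 0.5
assert cond(0.5, 0.5) == 0.5
# Hence any formula built from them takes value 0.5 when all its
# atoms do -- but tertium(0.5) == 1, so it isn't Kleene-definable.
assert tertium(0.5) == 1
```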

One might take this to be just an expressive limitation of the Kleene system. After all, one might think, in the intended interpretation there is a truth-function behaving in the way just described lying around, and we can introduce an expression that picks up on it if we like.

But it’s absolutely crucial to the nonclassical treatments of the Liar that we can’t do this. The problem is that if we have this operator in the language, then “exclusion negation” is definable—an operator “neg” such that “neg p” is true when p is false or indeterminate, and false otherwise (this will correspond to “not determinately p”—i.e. ~(p & ~indet p), where ~ is so-called “choice” negation, i.e. |~p| = 1 - |p|). “p v neg p” will be a tautology; and arbitrary q will follow from the pair {p, neg p}. But this is exactly the sort of device that leads to so-called “revenge” puzzles—Liar paradoxes that are paradoxical even in the 3-valued system. Very roughly, it looks as if on reasonable assumptions a system with exclusion negation can’t have a transparent truth predicate in it (something where p and T(p) are intersubstitutable in all extensional contexts). It’s the whole point of Field’s and Beall’s approaches to retain something with this property. So they can’t allow that there is such a notion around (Beall, for example, calls such notions “incoherent”).
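The definability claim is easy to verify: given the tertium operator, exclusion negation falls out, and with it a tautologous instance of excluded middle. A minimal check (my illustration; values 0/0.5/1 as before):

```python
def neg(x): return 1 - x
def conj(x, y): return min(x, y)
def disj(x, y): return max(x, y)
def tertium(x): return 1 if x == 0.5 else 0

# Exclusion negation, built from choice negation plus tertium:
# true exactly when p is false or indeterminate.
def excl_neg(x): return disj(neg(x), tertium(x))

for p in [0, 0.5, 1]:
    # "p v neg p" is a tautology: value 1 on every assignment ...
    assert disj(p, excl_neg(p)) == 1
    # ... and p, neg p are never both true, so {p, neg p} entails
    # arbitrary q vacuously (explosion).
    assert not (p == 1 and excl_neg(p) == 1)
```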

What’s going on? Aren’t these approaches just denying us the resources to express the real Liar paradox? The key, I think, is a part of the nonclassicist picture that Beall and Field are quite explicit about and which totally runs against the folklore conception. They do not buy into the idea that model theory is ranging over a class of “interpretations” of the language among which we might hope to find the “intended” interpretation. The core role of the model theory is to give an extensionally adequate characterization of the consequence relation. But the significance of this consequence relation is not to be explained in model-theoretic terms (in particular, in terms of one among the models being intended, so that truth-preservation on every model automatically gives us truth-preservation simpliciter).

(Field sometimes talks about the “heuristic value” of this or that model and explicitly says that there is something more going on than just the use of model theory as an “algebraic device”. But while I don’t pretend to understand exactly what is being invoked here, it’s quite clear that the “added value” doesn’t consist in some classical 3-valued model being “intended”.)

Without appeal to the intended interpretation, I just don’t see how the revenge problem could be argued for. The key thought was that there is a truth-function hanging around just waiting to be given a name, “neg”. But without the intended interpretation, what does this even mean? Isn’t the right thought simply that we’re characterizing a consequence relation using rich set-theoretic resources, in terms of which we can draw distinctions that correspond to nothing in the phenomenon being modelled?

So it’s absolutely essential to the nonclassicist treatment of the Liar paradox that we drop the “intended interpretation” view of language. Field, for one, has a ready-made alternative approach to suggest—a Quinean combination of deflationism about truth and reference, with perhaps something like translatability being invoked to explain how such predicates can be applied to expressions in a language other than one’s own.

I’m therefore inclined to think of the non-classicism—at least about the Liar—as a position that *requires* something like this deflationist package. Whereas the folklore non-classicist I was describing previously is clearly someone who takes semantics seriously, and who buys into a generalization of the powerful connections between truth and consequence that a semantic theory of truth affords.

When we come to the analysis of vagueness and other (non-semantic-paradox-related) kinds of indeterminacy, it’s now natural to consider this “no interpretation” non-classicism. (Field does exactly this—he conceives of his project as giving a unified account of the semantic paradoxes and the paradoxes of vagueness. So at least *this* kind of nonclassicism we can confidently attribute to a leading figure in the field.) Once we make this move, all the puzzles described previously for the non-classicist position are thrown into a totally new light.

To begin with, there’s no obvious place for the thought that there are multiple truth statuses. For you get that by looking at a many-valued model and imagining it to be an image of what the intended interpretation of the language must be like—and that is exactly the move that’s now illegitimate. Notice that this undercuts one motivation for going towards a fuzzy logic—the idea that one represents vague predicates as somehow smoothly varying in truth status. Likewise, the idea that we’re just “iterating a bad idea” in multiplying truth values doesn’t hold water on this conception—since the many values assigned to sentences in models just don’t correspond to truth statuses.

Connectedly, one shouldn’t say that contradictions can be “half true” (nor that excluded middle is “half true”). It’s true that (on, say, the Kleene approach) you won’t have ~(p&~p) as a tautology. Maybe you could object to *that* feature. But that to me doesn’t seem nearly as difficult to swallow as a contradiction having “some truth to it”, despite the fact that from a contradiction, everything follows.
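To see the point about ~(p&~p) concretely, here is a quick check on the Kleene tables (my illustration): the formula takes the middle value at a borderline point, so it is no tautology, but a contradiction itself never climbs above the middle value either:

```python
def neg(x): return 1 - x
def conj(x, y): return min(x, y)

# ~(p & ~p) fails to be a tautology: value 0.5 at the borderline point.
assert neg(conj(0.5, neg(0.5))) == 0.5
assert neg(conj(0, neg(0))) == 1
assert neg(conj(1, neg(1))) == 1
# But a contradiction p & ~p never takes a value above 0.5.
assert max(conj(p, neg(p)) for p in [0, 0.5, 1]) == 0.5
```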

One shouldn’t assume that “determinately” should be treated as the tertium operator. Indeed, if you’re shooting for a combined non-classical theory of vagueness and semantic paradoxes, you *really* shouldn’t treat it this way, since as noted above this would give you paradox back.

There is therefore a central and really important question: what is the non-classical treatment of “determinately” to be? Sample answer (lifted from Field’s discussion of the literature): define D(p) as p & ~(p -> ~p), where -> is a certain fuzzy logic conditional. This, Field argues, has many of the features we’d intuitively want a determinately operator to have; and in particular, it allows for non-trivial iterations. So if something like this treatment of “determinately” were correct, then higher-order indeterminacy wouldn’t be obviously problematic (Field himself thinks this proposal is on the right lines, but that one must use another kind of conditional to make the case).
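For concreteness, here is that sample definition computed over degrees in [0, 1] (my illustration; I use the Łukasiewicz conditional |p -> q| = min(1, 1 - |p| + |q|) as the fuzzy conditional, which is an assumption on my part—the text doesn’t pin down which conditional is meant):

```python
# D(p) = p & ~(p -> ~p), with & as min, ~ as 1 - x, and the
# Lukasiewicz conditional (my assumed choice of fuzzy conditional).
def neg(x): return 1 - x
def conj(x, y): return min(x, y)
def cond(x, y): return min(1, 1 - x + y)

def det(x): return conj(x, neg(cond(x, neg(x))))

# This works out to max(0, 2|p| - 1): borderline cases go to 0,
# perfect truth stays put.
assert det(1) == 1
assert det(0.5) == 0
assert det(0.75) == 0.5
# Iterations are non-trivial: DD(p) differs from D(p) in general,
# which is what makes higher-order indeterminacy expressible.
assert det(det(0.75)) == 0
```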

“No interpretation” nonclassicism is an utterly, completely different position from the folklore nonclassicism I was talking about before. For me, the reason to think about indeterminacy and the semantic and vagueness-related paradoxes in the first place is that they shed light on the nature of language, representation, logic and epistemology. And on these sorts of issues, the no interpretation nonclassicism and the folklore version take diametrically opposed positions; flowing from this, the appropriate ways of arguing for or against these views are just very different.