Category Archives: Logic

Psychology without semantics or psychologically loaded semantics?

Here’s a naive view of classical semantics, but one worth investigating. According to this view, semantics is a theory that describes a function from sentences to semantic properties (truth and falsity) relative to a given possible circumstance (a “world”). Let’s suppose it does this via a two-step method. First, it assigns to each sentence a proposition. Second, it assigns to each proposition a function from worlds to {True, False} (“truth conditions”).

Let’s focus on the bit where propositions (at a world) are assigned truth-values. One thing that leaps out is that the “truth values” assigned have significance beyond semantics. For propositions, we may assume, are the objects of attitudes like belief. It’s natural to think that in some sense, one should believe what’s true, and reject what’s false. So the statuses attributed to propositions as part of the semantic theory (the part that describes the truth-conditions of propositions) are psychologically loaded, in that propositions that have one of the statuses are “to be believed” and those that have the other are “to be rejected”. The sort of normativity involved here is extremely externalistic, of course—it’s not a very interesting criticism of me that I happen to believe p, just on the basis that p is false, if my evidence overwhelmingly supported p. But the idea of an external ought here is familiar and popular. It is often reported, somewhat metaphorically, as the idea that belief aims at truth (for discussion, see e.g. Ralph Wedgwood on the aim of belief).

Suppose we are interested in one of the host of non-classical semantic theories that are thrown around when discussing vagueness. Let’s pick a three-valued Kleene theory, for example. On this view, we have three different semantic statuses that propositions (relative to a circumstance) are mapped to. Call them, neutrally, A, B and C (much of the semantic theory is then spent telling us how these abstract “statuses” are distributed over the propositions, or the sentences which express them). But what, if any, attitude is it appropriate to take to a proposition that has one of these statuses? If we have an answer to this question, we can say that the semantic theory is psychologically loaded (just as the familiar classical setting was).

Rarely do non-classical theorists tell us explicitly what the psychological loading of the various statuses is. But you might think an answer is implicit in the names they are given. Suppose that status A is called “truth”, and status C “falsity”. Then, surely, propositions that are A are to be believed, and propositions with C are to be rejected. But what of the “gaps”, the propositions that have status B, the ones that are neither true nor false? It’s rather unclear what to say; and without explicit guidance about what the theorist intends, we’re left searching for a principled generalization. One thought is that they’re at least untrue, and so are intended to have the normative role that all untrue propositions had in the classical setting—they’re to be rejected. But of course, we could equally have reasoned that, as propositions that are not false, they might be intended to have the status that all unfalse propositions have in the classical setting—they are to be accepted. Or perhaps they’re to have some intermediate status—maybe a proposition that has B is to be half-believed (and we’d need some further details about what half-belief amounts to). One might even think (as Maudlin has recently explicitly urged) that in leaving a gap between truth and falsity, the propositions are devoid of psychological loading—that there’s nothing general to say about what attitude is appropriate to the gappy cases.

But notice that these kinds of questions are, at heart, exegetical—that we face them just reflects the fact that the theorist hasn’t told us enough to fix which theory is intended. The real insight here is to recognize that differences in psychological loading give rise to very different theories (at least as regards what attitudes to take to propositions), which should each be considered on their own merits.

Now, Stephen Schiffer has argued for some distinctive views about what the psychology of borderline cases should be like. As John Macfarlane and Nick Smith have recently urged, there’s a natural way of using Schiffer’s descriptions to fill out in detail one fully “psychologically loaded” degree-theoretic semantics. To recap: Schiffer distinguishes “standard” partial beliefs (SPBs), which we can assume behave in familiar (probabilistic) ways and have their familiar functional role when there’s no vagueness or indeterminacy at issue, from special “vagueness-related” partial beliefs (VPBs), which come into play for borderline cases. Intermediate standard partial beliefs allow for uncertainty, but are “unambivalent” in the sense that when we are 50/50 over the result of a fair coin flip, we have no temptation to all-out judge that the coin will land heads. By contrast, VPBs exclude uncertainty, but generate ambivalence: when we say that Harry is smack-bang borderline bald, we are pulled to judge that he is bald, but also (conflictingly) pulled to judge that he is not bald.

Let’s suppose this gives us enough for an initial fix on the two kinds of state. The next issue is to associate them with the numbers a degree-theoretic semantics assigns to propositions (with Edgington, let’s call these numbers “verities”). Here is the proposal: a verity of 1 for p is ideally associated with (standard) certainty that p—an SPB of 1. A verity of 0 for p is ideally associated with (standard) utter rejection of p—an SPB of 0. Intermediate verities are associated with VPBs. Generally, a verity of k for p is associated with a VPB of degree k in p. [Probably, we should say, for each verity, both what the ideal VPB and the ideal SPB are. This is easy enough: one should have a VPB of zero when the verity is 1 or 0, and an SPB of zero for any verity other than 1.]
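
A minimal sketch of the proposed loading, in toy Python (the encoding and function name are my own, purely for illustration):

```python
def ideal_attitudes(verity):
    """Ideal standard (SPB) and vagueness-related (VPB) partial beliefs
    for a proposition with the given verity, on the proposal above."""
    if verity == 1.0:
        return {"SPB": 1.0, "VPB": 0.0}   # certainty, no ambivalence
    if verity == 0.0:
        return {"SPB": 0.0, "VPB": 0.0}   # utter rejection, no ambivalence
    # intermediate verity k: no standard partial belief, VPB of degree k
    return {"SPB": 0.0, "VPB": verity}

print(ideal_attitudes(1.0))   # {'SPB': 1.0, 'VPB': 0.0}
print(ideal_attitudes(0.5))   # {'SPB': 0.0, 'VPB': 0.5}
```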

Now, Schiffer’s own theory doesn’t make play with all these “verities” and “ideal psychological states”. He does use various counterfactual idealizations to describe a range of “VPB*s”—so that e.g. relative to a given circumstance, we can talk about which VPB an idealized agent would take to a given proposition (though it shouldn’t be assumed that the idealization gives definitive verdicts in any but a small range of paradigmatic cases). But his main focus is not on the norms that the world imposes on psychological attitudes, but on norms that concern what combinations of attitudes we may properly adopt—requirements of “formal coherence” on partial belief.

How might a degree theory psychologically loaded with Schifferian attitudes relate to formal coherence requirements? Macfarlane and Smith, in effect, observe that something approximating Schiffer’s coherence constraints arises if we insist that the total partial belief in p (SPB+VPB) is always representable as an expectation of verity (relative to a classical credence distribution over possible situations). We might also observe that the component corresponding to Schifferian SPB within this is always representable as the expectation of verity 1 (relative to the same credence). That’s suggestive, but it doesn’t do much to explain the connection between the external norms that we fed into the psychological loading, and the formal coherence norms that we’re now getting out. And what’s the “underlying credence over worlds” doing? If all that the psychological loading of the semantics is doing is enabling a neat description of the coherence norms, that may have some interest, but it’s not terribly exciting—what we’d like is some kind of explanation of the norms from facts about psychological loading.
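
Here is a toy numerical rendering of that representation (the situations, credences and verities are made up; “expectation of verity 1” is read as the credence lent to situations where the verity is exactly 1):

```python
worlds = ["w1", "w2", "w3"]
credence = {"w1": 0.5, "w2": 0.3, "w3": 0.2}   # classical credence over situations
verity_p = {"w1": 1.0, "w2": 0.4, "w3": 0.0}   # verity of p at each situation

# Total partial belief (SPB+VPB) as the expectation of verity:
total = sum(credence[w] * verity_p[w] for w in worlds)          # 0.62

# SPB component: the credence lent to situations where p has verity 1:
spb = sum(credence[w] for w in worlds if verity_p[w] == 1.0)    # 0.5

vpb = total - spb                                               # 0.12
print(total, spb, vpb)
```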

There’s a much more profound way of making the connection: a way of deriving coherence norms from psychologically loaded semantics. Start with the classical case. Truth (truth value 1) is associated with certainty (credence 1). Falsity (truth value 0) is associated with utter rejection (credence 0). Think of inaccuracy as a way of measuring how far a given partial belief is from the actual truth value; and interpret the “external norm” as telling you to minimize overall inaccuracy in this sense.

If we make suitable (but elegant and arguably well-motivated) assumptions about how “accuracy” is to be measured, then it turns out that probabilistic belief states emerge as a special class in this setting. Every improbabilistic belief state can be shown to be accuracy-dominated by a probabilistic one—there’s some particular probabilistic state that’ll be necessarily more accurate than the improbabilistic one you started with. No probabilistic belief state is dominated in this sense.
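
A worked toy instance of the dominance claim, using the Brier score as the accuracy measure (one standard choice, not forced by anything above): assigning 0.3 to both A and its negation is beaten, whatever the truth values turn out to be, by the coherent 0.5/0.5 assignment.

```python
def brier(beliefs, truth_values):
    """Inaccuracy of a belief state at a world: summed squared distance
    from the truth values (1 for true, 0 for false)."""
    return sum((beliefs[p] - truth_values[p]) ** 2 for p in beliefs)

incoherent = {"A": 0.3, "notA": 0.3}   # credences sum to 0.6: not a probability
coherent   = {"A": 0.5, "notA": 0.5}   # a genuine probability function

for world in ({"A": 1, "notA": 0}, {"A": 0, "notA": 1}):
    print(brier(incoherent, world), brier(coherent, world))
    # roughly 0.58 vs 0.5 at each world: the coherent state is more accurate come what may
```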

Any violation of the formal coherence norms thus turns out to be needlessly far from the ideal aim. And this moral generalizes. Taking the same accuracy measures, but applying them to verities as the ideals, we can prove exactly the same theorem. Anything other than the Smith-Macfarlane belief states will be needlessly distant from the ideal aim. (This is generated by an adaptation of Joyce’s 1998 work on accuracy and classical probabilism—see here for the generalization).

There’s an awful lot of philosophy to be done to spell out the connection in the classical case, let alone its non-classical generalization. But I think even the above sketch gives a view on how we might not only psychologically load a non-classical semantics, but also use that loading to give a semantically-driven rationale for requirements of formal coherence on belief states—and with the Schiffer loading, we get the Macfarlane-Smith approximation to Schifferian coherence constraints.

Suppose we endorsed the psychologically-loaded, semantically-driven theory just sketched. Compare our stance to that of a theorist who endorses the psychology without the semantics—that is, one who endorses the same formal coherence constraints, but disclaims commitment to verities and their accompanying ideal states. They thus give up on the prospect of giving the explanation of the coherence constraints sketched above. We and they would agree on what kinds of psychological states are rational to hold together—including what kind of VPB one could rationally take to p when one judges p to be borderline. So both parties could agree on the doxastic role of the concept of “borderlineness”, and in that sense give a psychological specification of the concept of indeterminacy. We and they would be allied against rival approaches—say, the claims of the epistemicists (thinking that borderlineness generates uncertainty) and of Field (thinking that borderlineness merits nothing more than straight rejection). The fan of psychology-without-semantics might worry about the metaphysical commitments of his friend’s postulate of a vast range of fine-grained verities (attaching to propositions in circumstances)—metasemantic explanatory demands and higher-order-vagueness puzzles are two familiar ways in which this disquiet might be made manifest. In turn, the fan of the psychologically loaded, semantically driven theory might question his friend’s refusal to give any underlying explanation of the source of the requirements of formal coherence he postulates. Can explanatory bedrock really be certain formal patterns amongst attitudes? Don’t we owe an illuminating explanation of why those patterns are sensible ones to adopt? (Kolodny mocks this kind of attitude, in recent work, as picturing coherence norms as a mere “fetish for a certain kind of mental neatness”.) That explanation needn’t take a semantically-driven form—but it feels like we need something.

To repeat the basic moral. Classical semantics, as traditionally conceived, is already psychologically loaded. If we go in for non-classical semantics at all (with more than instrumentalist ambitions in mind) we underspecify the theory until we’re told what the psychological loading of the new semantic values is to be. That’s one kind of complaint against non-classical semantics. It’s always possible to kick away the ladder—to take the formal coherence constraints motivated by a particular elaboration of this semantics, and endorse only these, without giving a semantically-driven explanation of why these constraints in particular are in force. Repeating this move, we can find pairs of views that, while distinct, are importantly allied on many fronts. I think in particular this casts doubt on the kind of argument that Schiffer often sounds like he’s giving—i.e. an argument from facts about appropriate psychological attitudes to borderline cases, to the desirability of a “psychology without semantics” view.

Intuitionism and truth value gaps

I spent some time last year reading through Dummett on non-classical logics. One aim was to figure out what sorts of arguments there might be against combining a truth-value gap view with intuitionistic logic. The question is whether, in an intuitionist setting, it might be ok to endorse ~T(A)&~T(~A). (The characteristic intuitionistic feature, hereabouts, is a refusal to assert T(A)vT(~A)—which is certainly weaker than asserting its negation. Indeed, when it comes to the law of excluded middle, the intuitionist refuses to assert Av~A in general, but ~(Av~A) is an intuitionistic inconsistency.)

On the motivational side: it is striking that in Kripke tree semantics for intuitionistic logic, there are sentences such that neither they nor their negations are “forced”. And if we think of forcing in a Kripke tree as an analogue of truth, it looks like we’re modelling truth-value gaps.

A familiar objection to the very idea of truth-value gaps (which appears early on in Dummett—though I can’t find the reference right now) is that asserting the existence of truth value gaps (i.e. endorsing ~T(A)&~T(~A)) is inconsistent with the T-scheme. For if we have “T(A) iff A”, then contraposing and applying modus ponens, we derive from the above ~A and ~~A—contradiction. However, this does require the T-scheme, and you might let the reductio fall on that rather than the denial of bivalence. (Interestingly, Dummett in his discussion of many-valued logics talks about them in terms of truth value gaps without appealing to the above sort of argument—so I’m not sure he’d rest all that much on it).
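
Spelled out (my reconstruction; the layout and numbering are mine), the derivation runs:

```latex
\begin{align*}
&1.\ \neg T(A) \wedge \neg T(\neg A)         && \text{the denial of bivalence for } A\\
&2.\ T(A) \leftrightarrow A                  && \text{T-scheme}\\
&3.\ \neg T(A) \rightarrow \neg A            && \text{from 2, contraposing } A \rightarrow T(A)\\
&4.\ \neg A                                  && \text{1, 3, modus ponens}\\
&5.\ T(\neg A) \leftrightarrow \neg A        && \text{T-scheme}\\
&6.\ \neg T(\neg A) \rightarrow \neg\neg A   && \text{from 5, contraposing } \neg A \rightarrow T(\neg A)\\
&7.\ \neg\neg A                              && \text{1, 6, modus ponens}\\
&8.\ \bot                                    && \text{4, 7}
\end{align*}
```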

Another idea I’ve come across is that an intuitionistic (Heyting-style) reading of what “~T(A)” says will allow us to infer from it that ~A (this is based around the thought that intuitionistic negation says “any proof of A can be transformed into a proof of absurdity”). That suffices to reduce a denial of bivalence to absurdity. There are a few places to resist this argument too (and it’s not quite clear to me how to set it up rigorously in the first place) but I won’t go into it here.

Here’s one line of thought I was having. Suppose that we could argue that Av~A entailed the corresponding instance of bivalence: T(A)vT(~A). It’s clear that the latter entails ~(~T(A)&~T(~A))—i.e. given the claim above, the law of excluded middle for A will entail that A is not gappy.

So now suppose we assert that it is gappy. For reductio, suppose Av~A. By the above, this entails that A is not gappy. Contradiction. Hence ~(Av~A). But we know that this is itself an intuitionistic inconsistency. Hence we have derived absurdity from the premise that A is gappy.
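
For definiteness, here is the argument laid out as a derivation (again, my own reconstruction; every move used is intuitionistically acceptable):

```latex
\begin{align*}
&1.\ \neg T(A) \wedge \neg T(\neg A)   && \text{assumption: } A \text{ is gappy}\\
&2.\ A \vee \neg A                     && \text{assumed for reductio}\\
&3.\ T(A) \vee T(\neg A)               && \text{from 2, by the claim that LEM entails bivalence}\\
&4.\ \bot                              && \text{from 1 and 3: either disjunct of 3 contradicts 1}\\
&5.\ \neg(A \vee \neg A)               && \text{2--4, reductio}\\
&6.\ \bot                              && \text{5 contradicts the intuitionistic theorem } \neg\neg(A \vee \neg A)
\end{align*}
```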

So it seems that to argue against gaps, we just need the minimal claim that LEM entails bivalence. Now, it’s a decent question what grounds we might give for this entailment claim; but it strikes me as sufficiently “conceptually central” to the intuitionistic idea about what’s going on that it’s illuminating to have this argument around.

I guess the last thing to point out is that the T-scheme argument may be a lot more impressive in an intuitionistic context in any case. A standard maneuver when denying the T-scheme is to keep the T-rules: to say that A entails T(A), for example (this is consistent with rejecting the T-scheme if you drop conditional proof, as supervaluational and many-valued logicians often do). But in an intuitionistic context, the T-rule contraposes (again, a metarule that’s not good in supervaluational and many-valued settings) to give an entailment from ~T(A) to ~A, which is sufficient to reduce the denial of bivalence to absurdity. This perhaps explains why Dummett is prepared to deny bivalence in non-classical settings in general, but seems wary of this in an intuitionistic setting.

The two cleanest starting points for arguing against gaps for the intuitionist, it seems to me, are either to start with the T-rule, “A entails T(A)” or with the claim “Av~A entails T(A)vT(~A)”. Clearly the first allows you to derive the second. I can’t see at the moment an argument that the second entails the first (if someone can point to one, I’d be very interested), so perhaps basing the argument against gaps on the second is the optimal strategy. (It does leave me with a puzzle—what is “forcing” in a Kripke tree supposed to model, since that notion seems clearly gappy?)

Motivating material conditionals

Soul Physics has a post that raises a vexed issue: how to say something to motivate the truth-table account of the material conditional for people first encountering it.

They give a version of one popular strategy: argue by elimination for the familiar material truth table. The broad outline they suggest (which I think is a nice way to divide matters up) goes like this.

(1) Argue that “if A, B” is true when both are true, and false if A is true and B is false. This leaves two remaining cases to be considered—the cases where A is false.

(2) Argue that none of the three remaining rivals to the material conditional truth table works.

I won’t say much about (1), since the issues that arise aren’t that different from what you anyway have to deal with for motivating truth tables for disjunction, say.

(2) is the problem case. The way Soul Physics suggests presenting this is as following from two minimal observations about the material conditional: (i) it isn’t trivial (i.e. it doesn’t just have the same truth values as one of the component sentences), and (ii) it isn’t symmetric—“if A then B” and “if B then A” can come apart.

In fact, all of the four options that remain at this stage can be informatively described. There’s a truth-function equivalent to B (this is the trivial one, simply taking the same truth value as the consequent); the conjunction A&B; the biconditional between A and B (these two are both symmetric); and finally the material conditional itself.
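
For reference, here are the four candidates tabulated (my tabulation; the top two rows are the ones already fixed at step (1)):

```latex
\begin{array}{cc|cccc}
A & B & B & A \wedge B & A \leftrightarrow B & A \supset B \\
\hline
T & T & T & T & T & T \\
T & F & F & F & F & F \\
F & T & T & F & F & T \\
F & F & F & F & T & T \\
\end{array}
```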

But there’s something structurally odd about this sort of motivation. We argue by elimination of three options, leaving the material conditional account the winner. But the danger is, of course, that we find something that looks just as bad or worse with the remaining option, leaving us back where we started, with no truth table better motivated than the others.

And the trouble, notoriously, is that this is fairly likely to happen the moment people get wind of the paradoxes of material implication. It’s pretty hard to explain why we put so much weight on non-symmetry, while (to our students) seeming to ignore the fact that the account counts silly things like “If I’m in the US, I’m in the UK” as true.

One thing that’s missing is a justification for the whole truth-table approach—if there’s something wrong with every option, shouldn’t we be questioning our starting points? And of course, if someone raises this sort of question, we’re a little stuck, since many of us think that the truth table account really is misguided as a way to treat the English indicative. But intro logic is perhaps not the place to get into that too much!

So I’m a bit stuck at this point—at least in intro logic. Of course, you can emphasize the badness of the alternatives, and just try to avoid getting into the paradoxes of material implication—but that seems like smoke and mirrors to me, and I’m not very good at carrying it off. So if I haven’t got more time to go into the details, I’m back to saying things like: it’s not that there’s a perfect candidate, but it happens that this works better than the others—trust me—so let’s go with it. When I was taught this stuff, I was told about Grice at this point, and I remember that pretty much washing over my head. And it’s a bit odd to defend the widespread practice of using the material conditional by pointing to one possible defence of it as an interpretation of the English indicative that most of us think is wrong anyway. I wish I had a more principled fallback.

When I’ve got more time—and once the students are more familiar with basic logical reasoning and so on—I take a different approach, one that seems to me far more satisfactory. The general strategy, which replaces at least (2) above, is to argue directly for the equivalence of the conditional “if A, B” with the corresponding disjunction ~AvB. And if you want to give a truth table for the former, you just read it off the latter.

Now, there are various ways of doing this—say by pointing to inferences that “sound good”, like the one from “A or B” to “if not A, then B”. The trouble is that we’re in a similar situation to the one we were in earlier—there are inferences that sound bad just nearby. A salient one is the contrapositive: “It’s not true that if A, then B” doesn’t sound like it implies “A and ~B”. So there’s a bit of a stand-off between or-to-if and not-if-to-and-not.

My favourite starting point is therefore with inferences that don’t just sound good, but for which you see an obvious rationale—and here the obvious candidates are the classic “in” and “out” rules for the conditional: modus ponens and conditional proof. You can really see how the conditional is functioning if it obeys those rules—allowing you to capture good reasoning from assumptions, store it, and then release it when needed. It’s not just reasonable—it’s the sort of thing we’d want to invent if we didn’t have it!

Given these, there’s a straightforward little argument by conditional proof (using disjunctive syllogism, which is easy enough to read off the truth table for “or”) for the controversial direction of equivalence between the English conditional “if A, B” and ~AvB. Our premise is ~AvB. To show the conditional follows, we use conditional proof. Assume A. By disjunctive syllogism, B. So by conditional proof, if A then B.

If you’ve already motivated the top two lines of the truth table for “if”, then this is enough to fill out the rest of the truth table—that ~AvB entails “if A then B” tells you how the bottom two lines should be filled out. Or you could argue (using modus ponens and reasoning by cases) for the converse entailment, getting the equivalence, at which point you really can read off the truth table.

An alternative is to start from scratch in motivating the truth table. We’ve argued that ~AvB entails “if A then B”. This forces the latter to be true whenever the former is. Hence the three “T” lines of the material conditional truth table—which are the controversial bits. In order that modus ponens hold, we can’t have the conditional true when the antecedent is true and the consequent false, so we can see that the remaining entry in the truth table must be “F”. So between them, conditional proof (via the above argument) and modus ponens (directly) fix each line of the material truth table.
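
Putting the two rules together, the whole table is fixed (again, my tabulation of the reasoning just given):

```latex
\begin{array}{cc|c|l}
A & B & \text{if } A \text{ then } B & \text{fixed by}\\
\hline
T & T & T & \neg A \vee B \text{ is true here; the conditional proof argument}\\
T & F & F & \text{modus ponens: true antecedent, false consequent}\\
F & T & T & \neg A \vee B \text{ is true here; the conditional proof argument}\\
F & F & T & \neg A \vee B \text{ is true here; the conditional proof argument}\\
\end{array}
```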

Now I suspect that—for people who’ve already got the idea of a logical argument, assumptions, conclusions and so on—this sort of idea will seem pretty accessible. And the idea that conditionals are something to do with reasoning under suppositions is very easy to sell.

Most of all though, what I like about this way of presenting things is that there’s something deeply *right* about it. It really does seem to me that the reason for bothering with a material conditional at all is its “inference ticket” behaviour, as expressed via conditional proof and modus ponens. So there’s something about this way of putting things that gets to the heart of the matter (to my mind).

But, further, this way of looking at things provides a nice comparison and contrast with other theories of the English indicative, since you can view famous options as essentially giving different ways of cashing out the relationship between conditionals and reasoning under a supposition. If we don’t like the conditional-proof idea about how they are related, an obvious next thing to reach for is the Ramsey test—which in a probabilistic version gets you ultimately into the Adams tradition. Stalnakerian treatments of conditionals can be given a similar gloss. Presented this way, I feel that the philosophical issues and the informal motivations are in sync.

I’d really like to hear about other strategies/ways of presenting this—in particular, ideas for how to get it across at “first contact”.

Gradational accuracy; Degree supervaluational logic

In lieu of new blogposts, I thought I’d post up drafts of two papers I’m working on. They’re both in fairly early stages (in particular, the structure of each needs quite a bit of sorting out). But as they’re fairly techy, I think I’d really benefit from any trouble-shooting people are willing to do!

The first is “Degree supervaluational logic”. This is the kind of treatment of indeterminacy that Edgington has long argued for, and it also features in work from the 1970s by Lewis and Kamp. Weirdly, it isn’t that common, though I think there’s a lot going for it. But it’s arguably implicit in a lot of people’s thinking about supervaluationism. Plenty of people like the idea that the “proportion of sharpenings on which a sentence is true” tells us something pretty important about that sentence—maybe even serving to fix what degree of belief we should have in it. If proportions of sharpenings play this kind of “expert function” role for you, then you’re already a degree-supervaluationist in the sense I’m concerned with, whether or not you want to talk explicitly about “degrees of truth”.
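
A toy illustration of the idea (Python; the cut-offs and numbers are invented, not from the paper): the degree of truth of a sentence is the proportion of sharpenings on which it comes out true, and on the “expert function” thought that proportion fixes the degree of belief to have in it.

```python
# One sharpening per admissible cut-off for "bald" (numbers of hairs).
cutoffs = [3000, 3500, 4000, 4500, 5000]

def degree(true_on):
    """Proportion of sharpenings on which the sentence is true."""
    return sum(1 for c in cutoffs if true_on(c)) / len(cutoffs)

# "Harry (3,800 hairs) is bald" is true on a sharpening iff 3800 falls below its cut-off.
print(degree(lambda c: 3800 < c))   # 0.6: true on three of the five sharpenings
```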

One thing I haven’t seen done is to look systematically at its logic. Now, if we look at a determinacy-operator-free object language, the headline news is that everything is classical—and that’s pretty robust under a number of ways of defining “validity”. But it’s familiar from standard supervaluationism that things can become tricky when we throw in determinacy operators. So I look at what happens when we add things like “it is determinate to degree 0.5 that…” to our object-language. What happens now depends *very much* on how validity is defined. I think there’s a lot to be said for “degree of truth preservation” validity—i.e. the conclusion has to be at least as true as the premises. This is classical in the determinacy-free language. And it’s “supraclassical” even when those operators are present—every classically valid argument is still valid. But in terms of metarules, all hell breaks loose. We get failures of conjunction introduction, for example, and of structural rules such as Cut. Despite this, I think there’s a good deal to be said for the package.

The second paper, “Gradational accuracy and non-classical semantics”, is on Joyce’s work on scoring functions. I look at what happens to his 1998 argument for probabilism when we’ve got non-classical truth-value assignments in play. From what I can see, his argument generalizes very nicely. For each kind of truth-value assignment, we can characterize a set of “coherent” credences, and show that for any incoherent credence there is a single coherent credence which is more accurate than it, no matter what the truth-values turn out to be.

In certain cases, we can relate this to kinds of “belief functions” that are familiar. For example, the class of supervaluationally coherent credences can, I think, be shown to be the Dempster-Shafer belief functions—at least if you define supervaluational “truth values” as I do in the paper.

As I mentioned, there are certainly some loose ends in this work—I’d be really grateful for any thoughts! I’m going to be presenting something from the degree supervaluational paper at the AAP in July, and also on the agenda is to write up some ideas about the metaphysics of radical interpretation (as a kind of fictionalism about semantics) for the Fictionalism conference in Manchester this September.

[Update: I’ve added an extra section to the gradational accuracy paper, just showing that “coherent credences” for the various kinds of truth-value assignments I discuss satisfy the generalizations of classical probability theory suggested in Brian Weatherson’s 2003 NDJFL paper. The one exception is supervaluationism, where only a weakened version of the final axiom is satisfied—but in that case, we can show that the coherent credences must be Dempster-Shafer functions. So I think that gives us a pretty good handle on the behaviour of non-accuracy-dominated credences for the non-classical case.]

[Update 2: I’ve tightened up some of the initial material on non-classical semantics, and added something on intuitionism, which the generalization seems to cover quite nicely. I’m still thinking that kicking off the whole thing with lists of non-classical semantics ain’t the most digestible/helpful way of presenting the material, but at the moment I just want to make sure that the formal material works.]

Kripkean conservativeness?

Suppose you have some theory R, formulated in that fragment of English that is free of semantic vocabulary. The theory, we can assume, is at least “effectively” classical—e.g. we can assume excluded middle and so forth for each predicate that it uses. Now think of total theory—which includes not just this theory but also, e.g. a theory of truth.

It would be nice if truth in this widest theory could work “transparently”—so that we could treat “p” and “T(p)” as intersubstitutable at least in all extensional contexts. To get that, something has to go. E.g. the logic for the wider language might have to be non-classical, to avoid the Liar paradox.

One question is whether weakening logic is enough to avoid problems. For all we’ve said so far, it might be that one can have a transparent truth-predicate—but only if one’s non-semantic theories are set up just right. In the case at hand, the worry is that R cannot be consistently embedded within a total theory that includes a transparent truth predicate. Maybe in order to ensure consistency of total theory, we’d have to play around with what we say in the non-semantic fragment. It’d be really interesting if we could get a guarantee that we never need to do that. And this is one thing that Kripke’s fixed point construction seems to give us.

Think of Kripke’s techniques as a “black box”, which takes as input classical models of the semantics-free portion of our language, and outputs non-classical models of language as a whole—and in such a way as to make “p” and “Tp” always coincide in semantic value. Crucially, the non-classical model coincides with the classical model taken as input when it comes to the semantics-free fragment. So if “S” is in the semantics-free language and is true-on-input-model, then it will be true-on-the-output model.

This result seems clearly relevant to the question of whether we disrupt theories like R by embedding them within a total theory incorporating transparent truth. The most obvious thought is to let M be the intended (classical) model of our base language—and then view the Kripke construction as outputting a candidate to be the intended interpretation of the total language. And the result just given tells us that if R is true relative to M, it remains true relative to the outputted Kripkean (non-classical) model.

But this is a contentious characterization. For example, if our semantics-free language contains absolutely unrestricted quantifiers, there won’t be a (traditional) model that can serve as the “intended interpretation”. For (traditional) models assign sets as the range of quantifiers, and no set contains absolutely everything—in particular no set can contain all sets. And even if somehow we could finesse this (e.g. if we could argue that our quantifiers can never be absolutely unrestricted), it’s not clear that we should be identifying true-on-the-output-model with truth, which is crucial to the above suggested moral.

Field suggests we take a different moral from the Kripkean construction. Focus on the question of whether theories like R (which ex hypothesi are consistent taken alone) might turn out to be inconsistent in the light of total theory—in particular, might turn out to be inconsistent once we’ve got a transparent truth predicate in our language. He argues that the Kripkean construction gives us an answer to this.

Here’s the argument. Suppose that R is classically consistent. We want to know whether R+T is consistent, where R+T is what you get from R when you add in a transparent truth-predicate. The consistency of R means that there’s a classical model on which it is true. Input that into Kripke’s black box. And what we get out the other end is a (non-classical) model of R+T. And the existence of such a model (whether or not it’s an “intended one”) means that R+T is consistent.

Field explicitly mentions one worry about this–that it might equivocate over “consistent”. If consistent just means “has a model (of such-and-such a kind)” then the argument goes through as it stands. But in the present setting it’s not obvious what all this talk of models is doing for us. After all, we’re not supposed to be assuming that one among the models is the “intended” one. In fact, we’re supposed to be up for the thesis that the very notion of “intended interpretation” should be ditched, in which case there’d be no space even for viewing the various models as possibly, though not actually, intended interpretations.

This is the very point at which Kreisel’s squeezing argument is supposed to help us. For it forges a link between intuitive consistency and the model-theoretic constructions. So we could reconstruct the above line of thought in the following steps:

  1. R is consistent (in the intuitive sense)
  2. So: R is consistent (in the model-theoretic sense). [By a squeezing argument]
  3. So: R+T is consistent (in the model-theoretic sense). [By the Kripkean construction]
  4. So: R+T is consistent (in the intuitive sense). [By the squeezing argument again]

Now, I’m prepared to think that the squeezing argument works to bridge the gap between (1) and (2). For here we’re working within the classical fragment of English, and I see the appeal of the premises of the squeezing argument in that setting (actually, for this move we don’t really need the premise I’m most concerned with—just the completeness result and intuitive soundness suffice).

But the move from (3) to (4) is the one that I find dodgy. For this directly requires the principle that if there is a formal (3-valued) countermodel to a given argument, then that argument is invalid (in the intuitive sense). And that is exactly the point over which I voiced scepticism in the previous post. Why should the recognition that there’s an assignment of values to R+T on which an inference isn’t value-1 preserving suggest that the argument from R+T to absurdity is invalid? Without illegitimately sneaking in some thoughts about what value-1 represents (e.g. truth, or determinate truth) I can’t even begin to get a handle on this question.

In the previous post I sketched a fallback option (and it really was only a sketch). I suggested that you might run a squeezing argument for Kleene logic using probabilistic semantics, rather than 3-valued ones, since we do have a sense of what a probabilistic assignment represents, and why failure to preserve probability might be an indicator of intuitive invalidity. Now maybe if this were successful, we could bridge the gap—but in a very indirect way. One would argue from the existence of a 3-valued model, via completeness, to the non-existence of a derivation of absurdity from R+T. And then, by a second completeness result, one would argue that there had to exist a probabilistic model for R+T. Finally, one would appeal to the general thought that such probabilistic models secured consistency (in the intuitive sense).

To summarize. The Kripkean constructions obviously secure a technical conservativeness result. As Field mentions, we should be careful to distinguish this from a true conservativeness result: the result that no inconsistency can arise from adding transparent truth to a classically consistent base theory. But whether the technical result we can prove gives us reason (via a Kreisel-like argument) to believe the true conservativeness result turns on exactly the issue of whether a 3-valued countermodel to an argument gives us reason to think that that argument is intuitively invalid. And it’s not obvious at all where that last part is coming from—so for me, for now, it remains open whether the Kripkean constructions give us reason to believe true conservativeness.

Squeezing arguments

Kreisel gave a famous and elegant argument for why we should be interested in model-theoretic validity. But I’m not sure who can use it.

Some background. Let’s suppose we can speak unproblematically about absolutely all the sets. If so, then there’s something strange about model-theoretic definitions of validity. The condition for an argument to be model-theoretically valid is that, relative to any interpretation, if the premises are true then the conclusion is true. It’s natural to think that, one way or another, the reason to be interested in such a property of arguments is that if an argument is valid in this sense, then it preserves truth. And one can see why this would be—if it is truth-preserving on every interpretation, then in particular it should be truth-preserving on the correct interpretation, but that’s just to say that it guarantees that whenever the premises are true, the conclusion is so too.
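
In the usual notation, with interpretations required to have set-sized domains, the definition in question is just:

```latex
\Gamma \models \varphi
\quad\text{iff}\quad
\text{for every (set-sized) interpretation } \mathcal{M}:
\text{ if } \mathcal{M} \models \gamma \text{ for all } \gamma \in \Gamma,
\text{ then } \mathcal{M} \models \varphi.
```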

Lots of issues about the intuitive line of thought arise when you start to take the semantic paradoxes seriously. But the one I’m interested in here is a puzzle about how to think about it when the object-language in question is (on the intended interpretation) talking about absolutely all the sets. The problem is that when we spell out the formal details of the model-theoretic definition of validity, we appeal to “truth on an interpretation” in a very precise sense—and one of the usual conditions on that is that the domain of quantification is a set. But notoriously there is no set of all sets, and so the “intended interpretation” of discourse about absolutely all sets isn’t something we find in the space of interpretations relative to which the model-theoretic definition of validity for that language is defined. But then the idea that actual truth is a special case of truth-on-an-interpretation is well and truly broken, and without that, it’s sort of obscure what significance the model-theoretic characterization has.

Now, Kreisel suggested the following way around this (I’m following Hartry Field’s presentation here). First of all, distinguish between (i) model theoretic validity, defined as above as preservation of truth-on-set-sized-interpretations (call that MT-validity); and (ii) intuitive validity (call that I-validity)—expressing some property of arguments that has philosophical significance to us. Also suppose that we have available a derivability relation.

Now we argue:

(1) [Intuitive soundness] If q is derivable from P, then the argument from P to q is I-valid.

(2) [Countermodels] If the argument from P to q is not MT-valid, then the argument from P to q is not I-valid.

(3) [Completeness] If the argument from P to q is MT-valid, then q is derivable from P.

From (1)-(3) it follows that an argument is MT-valid iff it is I-valid.
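
The “squeeze” is just the resulting chain of inclusions, writing ⊢ for the derivability relation: (1) gives the first inclusion, the contrapositive of (2) gives the second, and (3) gives the third, so the three notions coincide.

```latex
\vdash \;\subseteq\; \text{I-valid} \;\subseteq\; \text{MT-valid} \;\subseteq\; \vdash
```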

Now (1) seems like a decent constraint on the choice of a deductive system. Friends of classical logic will just be saying that whatever that philosophically significant sense of validity is that I-valid expresses, classical syntactic consequences (e.g. from A&B to A, from ~~A to A) should turn out I-valid. Of course, non-classicists will disagree with the classicist over the I-validity of classical rules—but they will typically have a different syntactic relation and it should be that with which we’re running the squeezing argument, at least in the general case. Let’s spot ourselves this.

(3) is the technical “completeness” theorem for a given syntactic consequence relation and model-theory. Often we have this. Sometimes we don’t—for example, for second order languages where the second order quantifiers are logically constrained to be “well-behaved”, there are arguments which are MT-valid but not derivable in the standard ways. But e.g. classical logic does have this result.

Finally, we have (2). Now, what this says is that if an argument has a set-sized interpretation relative to which the premises are true and the conclusion false, then it’s not I-valid.

Now this premise strikes me as delicate. Here’s why, for the case of classical set theory we started with, it seems initially compelling to me. I’m still thinking of I-validity as a matter of guaranteed truth-preservation—i.e. truth-preservation no matter what the (non-logical) words involved mean. And I look at a given set-sized model and think—well, even though I was actually speaking in an unrestricted language, I could very well have been speaking in a language where my quantifiers were restricted. And what the set-sized countermodel shows is that on that interpretation of what my words mean, the argument wouldn’t be truth-preserving. So it can’t be I-valid.

However, suppose you adopt the stance on which I-validity isn’t to be understood as “truth-preservation no matter what the words mean”—and, for example, Field argues that the hypothesis that the two are biconditionally related is refutable. Why then should you think that the presence of countermodels has anything to do with I-invalidity? I just don’t get why I should see this as intuitively obvious (once I’ve set aside the usual truth-preservation understanding of I-validity), nor do I see what an argument for it would be. I’d welcome enlightenment/references!

We’ve been talking so far about the case of classical set theory. But I reckon the point surfaces with respect to other settings.

For example, Field favours a nonclassical logic (an extension of the strong Kleene logic) for dealing with the paradoxes. His methodology is to describe the logic model-theoretically. So what he gives us is a definition of MT-validity for a language containing a transparent truth-predicate. But of course, it’d be nice if we could explain why we’re interested in MT-validity so-characterized, and one attractive route is to give something like a Kreisel squeezing argument.

What would this look like? Well, we’d need to endorse (1)—to pick out a syntactic consequence relation and judge the basic principles to be I-valid. Let’s spot ourselves that. We’d also need (3), the completeness result. That’s tricky. For the strong Kleene logic itself, we have a completeness result relative to a 3-valued semantics. So relative to K3 and the usual 3-valued semantics, we’ve got (3). But Field’s own system adds to the K3 base a strong conditional, and the model theory is far more complex than a 3-valued one. And completeness just might not be available for this system—see p.305 of Field’s book.

But even if we have completeness (suppose we were working directly with K3, rather than Field’s extension), the argument seems puzzling to me. The problem again is with (2). Suppose a given argument, from P to q, has a 3-valued countermodel. What do we make of this? Well, it means there’s some way of assigning semantic values to expressions such that the premises all get value 1, and the conclusion gets value less than 1 (0.5, or 0). But what does that tell us? Well, if we were allowed to identify having-semantic-value-1 with being-true, then we’d forge a connection between countermodels and failure-to-preserve-truth. And so we’d be back to the situation that faced us in set theory, in that countermodels would display interpretations relative to which truth isn’t preserved. I expressed some worries before about why, if I-validity is officially primitive, this should be taken to show that the argument is I-invalid. But let’s spot ourselves an answer to that question—we can suppose that even if I-validity is primitive, failure of truth-preservation on some interpretation is a sufficient condition for failure to be I-valid.

The problem is that in the present case we can’t even get as far as this. For we’re not supposed to be thinking of semantic value 1 as truth, nor are we supposed to be thinking of the formal models as specifying “meanings” for our words. If we do start thinking in this way, we open ourselves up to a whole heap of nasty questions—e.g. it looks very much like sentences with value 1/2 will be thought of as truth-value gaps, whereas the target was to stabilize a transparent notion of truth—a side-effect of which is that we will be able to reduce truth-value gaps to absurdity.

Field suggests a different gloss in some places—think of semantic value 1 as representing determinate truth, semantic value 0 as representing determinate falsity, and semantic value 1/2 as representing indeterminacy. OK: so having a 3-valued countermodel to an argument should be glossed as displaying a case where the premises are all determinately true, and the conclusion is at best indeterminate. But recall that “indeterminacy” here is *not* supposed to be a status incompatible with truth—otherwise we’re back to truth-value gaps—so we’ve not got any reason here to think that we’ve got a failure of truth-preservation. So whereas holding that failure of truth-preservation is a sufficient condition for I-invalidity would be enough to give us (2) for the case of classical set theory, in the non-classical cases we’re thinking about it just isn’t enough to patch the argument. What we need instead is that failure of determinate-truth-preservation is a sufficient condition for I-invalidity. But where is that coming from? And what’s the rationale for it?

Here’s a final thought about how to make progress on these questions. Notice that the Kreisel squeezing argument is totally schematic—we don’t have to pack in anything about the particular model theory or proof theory involved, so long as (1-3) are satisfied. As an illustration, suppose you’re working with some model-theoretically specified consequence relation where there isn’t a nice derivability relation which is complete wrt it—where a derivability relation is nice if it is “practically implementable”–i.e. doesn’t appeal to essentially infinitary rules (like the omega-rule). Well, nothing in the squeezing argument required the derivability relation to be *nice*. Add in whatever infinitary rule you like to beef up the derivability relation until it is complete wrt the model theory, and so long as what you end up with is intuitively sound—i.e. (1) is met—then the Kreisel argument can be run.

A similar point goes if we hold fixed the proof theory and vary the model theory that defines MT-validity. The condition is that we need a model theory that (a) makes the derivability relation complete; and (b) is such that countermodels entail I-invalidity. So long as something plays that role, we’re home and dry. Suppose, for example, you give a probabilistic semantics for classical logic (in the fashion that Field does, via Popper functions, in his 1977 JP paper), and interpret an assignment of probabilities as a possible distribution of partial beliefs over sentences in the language. An argument is MT-valid, on this semantics, just in case whenever the premises have probability 1 (conditionally on anything), so does the conclusion. Slogan: MT-validity is certainty-preservation. A countermodel is then some representation of partial beliefs whereby one is certain of all the premises, but less than certain of the conclusion. Just as with non-probabilistic semantics, there’ll be a question of whether the presence of a countermodel in this sense is sufficient for I-invalidity—but it doesn’t seem to me that we weaken our case by this shift.
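
Written out, the probabilistic definition of validity being glossed here is (my formulation, with Pr ranging over Popper-style conditional probability functions):

```latex
P_1, \ldots, P_n \models_{\mathrm{prob}} q
\quad\text{iff}\quad
\text{for every } \Pr \text{ and every sentence } r:\
\text{if } \Pr(P_i \mid r) = 1 \text{ for each } i, \text{ then } \Pr(q \mid r) = 1.
```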

But what seems significant about this move is that, in principle, we might be able to do the same for the non-classical cases. Rather than give a 3-valued semantics and worry about what to make of “semantic value 1 preservation” and its relation to I-validity, one searches for a complete probabilistic semantics. The advantage is that we’ve got an interpretation standing by of what individual assignments of probabilities mean (in terms of degrees of belief)—and so I don’t envisage new interpretative problems arising for this choice of semantics, as they did for the 3-valued way of doing things.

[Of course, to carry out the version of the squeezing argument itself, we’ll need to actually have such a semantics—and maybe, to keep things clean, we need an axiomatization of what a probability function is that doesn’t tacitly appeal to the logic itself (that’s the role that Popper’s axiomatization of conditional probability functions plays in Field’s 1977 interpretation). I don’t know of such an axiomatization—advice gratefully received.]

Defending conditional excluded middle

So things have been a little quiet on this blog lately. This is a combination of (a) trips away, (b) doing administration-stuff for the Analysis Trust, and (c) the fact that I’m entering the “writing up” phase of my current research leave.

I’ve got a whole heap of papers in various stages of completion that I want to get finished up. As I post drafts online, the blogging should become more regular. So here’s the first installment—a new version of an older paper that discusses conditional excluded middle, and in particular a certain style of argument that Lewis deploys against it, and which Bennett endorses (in an interestingly varied form) in his survey book.

What I try to do in the present version—apart from setting out some reasons for being interested in conditional excluded middle for counterfactuals that I think deserve more attention—is to try to disentangle two elements of Bennett’s discussion. One element is a certain narrow-scope analysis of “might”-counterfactuals (roughly: “if it were that P, it might be that Q” has the form P → ◇Q, where the modal expresses an idealized ignorance). The second is an interesting epistemic constraint on true counterfactuals I call “Bennett’s Hypothesis”.

One thing I argue is that Bennett’s Hypothesis all on its own conflicts with conditional excluded middle. And without Bennett’s Hypothesis, there’s really no argument from the narrow-scope analysis alone against conditional excluded middle. So really, if counterfactuals work the way Bennett thinks they do, we can forget about the fine details of analyzing epistemic modals when arguing against conditional excluded middle. All the action is with whether or not we’ve got grounds to endorse the epistemic constraint on counterfactual truth.

The second thing I argue is that there are reasons to be severely worried about Bennett’s Hypothesis—it threatens to lead us straight into an error theory about ordinary counterfactual judgements.

If people are interested, the current version of the paper is available here. Any thoughts gratefully received!