Category Archives: Semantics

Psychology without semantics or psychologically loaded semantics?

Here’s a naive view of classical semantics, but one worth investigating. According to this view, semantics is a theory that describes a function from sentences to semantic properties (truth and falsity) relative to a given possible circumstance (“worlds”). Let’s suppose it does this via a two-step method. First, it assigns to each sentence a proposition. Second, it assigns to each proposition a function from worlds to {True, False} (“truth conditions”).
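
Schematically (a minimal rendering of the two steps; the function names prop and tc are mine, not standard):

$$
\mathrm{prop} : \text{Sentences} \to \text{Propositions}, \qquad
\mathrm{tc} : \text{Propositions} \to \big(W \to \{\mathrm{True}, \mathrm{False}\}\big),
$$

so that the semantic status of a sentence S at a world w is tc(prop(S))(w).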

Let’s focus on the bit where propositions (at a world) are assigned truth-values. One thing that leaps out is that the “truth values” assigned have significance beyond semantics. For propositions, we may assume, are the objects of attitudes like belief. It’s natural to think that in some sense, one should believe what’s true, and reject what’s false. So the statuses attributed to propositions as part of the semantic theory (the part that describes the truth-conditions of propositions) are psychologically loaded, in that propositions that have one of the statuses are “to be believed” and those that have the other are “to be rejected”. The sort of normativity involved here is extremely externalistic, of course — it’s not a very interesting criticism of me that I happen to believe p, just on the basis that p is false, if my evidence overwhelmingly supported p. But the idea of an external ought here is familiar and popular. It is often reported, somewhat metaphorically, as the idea that belief aims at truth (for discussion, see e.g. Ralph Wedgwood on the aim of belief).

Suppose we are interested in one of the host of non-classical semantic theories that are thrown around when discussing vagueness. Let’s pick a three-valued Kleene theory, for example. On this view, we have three different semantic statuses that propositions (relative to a circumstance) are mapped to. Call them neutrally A, B and C (much of the semantic theory is then spent telling us how these abstract “statuses” are distributed around the propositions, or sentences which express the propositions). But what, if any, attitude is it appropriate to take to a proposition that has one of these statuses? If we have an answer to this question, we can say that the semantic theory is psychologically loaded (just as the familiar classical setting was).
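
As a toy rendering of the strong Kleene option (the numerical labels 1, 0.5 and 0 for the statuses A, B and C are mine; nothing hangs on them):

```python
# Strong Kleene connectives over three statuses: 1 ("A"), 0.5 ("B"), 0 ("C").
# The semantic theory distributes these statuses over propositions; what
# attitude, if any, each status merits is the further question raised below.
def neg(x):
    return 1 - x

def conj(x, y):
    return min(x, y)

def disj(x, y):
    return max(x, y)

p = 0.5                     # a "gappy" proposition
print(disj(p, neg(p)))      # 0.5: excluded middle itself comes out gappy
```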

Rarely do non-classical theorists tell us explicitly what the psychological loading of the various statuses is. But you might think an answer is implicit in the names they are given. Suppose that status A is called “truth”, and status C is called “falsity”. Then, surely, propositions that are A are to be believed, and propositions with C are to be rejected. But what of the “gaps”, the propositions that have status B, the ones that are neither true nor false? It’s rather unclear what to say; and without explicit guidance about what the theorist intends, we’re left searching for a principled generalization. One thought is that they’re at least untrue, and so are intended to have the normative role that all untrue propositions had in the classical setting—they’re to be rejected. But of course, we could equally have reasoned that, as propositions that are not false, they might be intended to have the status that all unfalse propositions have in the classical setting—they are to be accepted. Or perhaps they’re to have some intermediate status—maybe a proposition that has B is to be half-believed (and we’d need some further details about what half-belief amounts to). One might even think (as Maudlin has recently explicitly urged) that in leaving a gap between truth and falsity, the propositions are devoid of psychological loading—that there’s nothing general to say about what attitude is appropriate to the gappy cases.

But notice that these kinds of questions are, at heart, exegetical—that we face them just reflects the fact that the theorist hasn’t told us enough to fix which theory is intended. The real insight here is to recognize that differences in psychological loading give rise to very different theories (at least as regards what attitudes to take to propositions), which should each be considered on their own merits.

Now, Stephen Schiffer has argued for some distinctive views about what the psychology of borderline cases should be like. As John Macfarlane and Nick Smith have recently urged, there’s a natural way of using Schiffer’s descriptions to fill out in detail one fully “psychologically loaded” degree-theoretic semantics. To recap, Schiffer distinguishes between “standard” partial beliefs (SPBs), which we can assume behave in familiar (probabilistic) ways and have their familiar functional role when there’s no vagueness or indeterminacy at issue, and special “vagueness-related” partial beliefs (VPBs), which come into play for borderline cases. Intermediate standard partial beliefs allow for uncertainty, but are “unambivalent” in the sense that when we are 50/50 over the result of a fair coin flip, we have no temptation to all-out judge that the coin will land heads. By contrast, VPBs exclude uncertainty, but generate ambivalence: when we say that Harry is smack-bang borderline bald, we are pulled to judge that he is bald, but also (conflictingly) pulled to judge that he is not bald.

Let’s suppose this gives us enough for an initial fix on the two kinds of state. The next issue is to associate them with the numbers a degree-theoretic semantics assigns to propositions (with Edgington, let’s call these numbers “verities”). Here is the proposal: a verity of 1 for p is ideally associated with (standard) certainty that p—an SPB of 1. A verity of 0 for p is ideally associated with (standard) utter rejection of p—an SPB of 0. Intermediate verities are associated with VPBs. Generally, a verity of k for p is associated with a VPB of degree k in p. [Probably, we should say for each verity both what the ideal VPB and the ideal SPB are. This is easy enough: one should have a VPB of zero when the verity is 1 or 0, and an SPB of zero for any verity other than 1.]
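
Collecting the proposal (including the bracketed amendment) into one place, with v(p) the verity of p at the relevant circumstance (my notation):

$$
\mathrm{SPB}_{\text{ideal}}(p) = \begin{cases} 1 & \text{if } v(p)=1 \\ 0 & \text{otherwise,} \end{cases}
\qquad
\mathrm{VPB}_{\text{ideal}}(p) = \begin{cases} v(p) & \text{if } 0 < v(p) < 1 \\ 0 & \text{otherwise.} \end{cases}
$$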

Now, Schiffer’s own theory doesn’t make play with all these “verities” and “ideal psychological states”. He does use various counterfactual idealizations to describe a range of “VPB*s”—so that e.g. relative to a given circumstance, we can talk about which VPB an idealized agent would take to a given proposition (though it shouldn’t be assumed that the idealization gives definitive verdicts in any but a small range of paradigmatic cases). But his main focus is not on the norms that the world imposes on psychological attitudes, but norms that concern what combinations of attitudes we may properly adopt—requirements of “formal coherence” on partial belief.

How might a degree theory psychologically loaded with Schifferian attitudes relate to formal coherence requirements? Macfarlane and Smith, in effect, observe that something approximating Schiffer’s coherence constraints arises if we insist that the total partial belief in p (SPB+VPB) is always representable as an expectation of verity (relative to a classical credence distribution over possible situations). We might also observe that the component corresponding to Schifferian SPB within this is always representable as the expectation that the verity is 1 (relative to the same credence), i.e. as the credence that the proposition has verity 1. That’s suggestive, but it doesn’t do much to explain the connection between the external norms that we fed into the psychological loading, and the formal coherence norms that we’re now getting out. And what’s the “underlying credence over worlds” doing? If all the psychological loading of the semantics is doing is enabling a neat description of the coherence norms, that may have some interest, but it’s not terribly exciting—what we’d like is some kind of explanation of the norms from facts about psychological loading.
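
In symbols (my formulation of the observation), with cr the underlying classical credence over situations and $v_w(p)$ the verity of p at situation w:

$$
b(p) \;=\; \sum_{w} \mathrm{cr}(w)\, v_w(p),
\qquad
\mathrm{SPB}(p) \;=\; \sum_{w\,:\,v_w(p)=1} \mathrm{cr}(w),
$$

with the VPB component being the difference between the two.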

There’s a much more profound way of making the connection: a way of deriving coherence norms from psychologically loaded semantics. Start with the classical case. Truth (truth value 1) is associated with certainty (credence 1). Falsity (truth value 0) is associated with utter rejection (credence 0). Think of inaccuracy as a way of measuring how far a given partial belief is from the actual truth value; and interpret the “external norm” as telling you to minimize overall inaccuracy in this sense.

If we make suitable (but elegant and arguably well-motivated) assumptions about how “accuracy” is to be measured, then it turns out that probabilistic belief states emerge as a special class in this setting. Every non-probabilistic belief state can be shown to be accuracy-dominated by a probabilistic one—there’s some particular probabilistic state that’ll be necessarily more accurate than the non-probabilistic state you started with. No probabilistic belief state is dominated in this sense.

Any violation of formal coherence norms thus turns out to be needlessly far from the ideal aim. And this moral generalizes. Taking the same accuracy measures, but applying them to verities as the ideals, we can prove exactly the same theorem. Anything other than the Smith-Macfarlane belief states will be needlessly distant from the ideal aim. (This is generated by an adaptation of Joyce’s 1998 work on accuracy and classical probabilism—see here for the generalization).
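
Here is a toy numerical illustration of the dominance phenomenon in the classical two-world case, using the Brier score as the inaccuracy measure (an illustration of the idea, not the general theorem; the numbers are made up):

```python
# Toy illustration: an incoherent credence over {p, not-p} is Brier-dominated
# by its projection onto the probabilistic belief states (a de Finetti /
# Joyce-style fact). The "ideal" states are the 0/1 truth-value vectors.
def brier(belief, ideal):
    # total squared distance of the belief state from the ideal state
    return sum((b - t) ** 2 for b, t in zip(belief, ideal))

b = (0.6, 0.7)                        # incoherent: credences sum to 1.3
excess = (sum(b) - 1) / 2
c = (b[0] - excess, b[1] - excess)    # projection onto b1 + b2 = 1

for ideal in [(1, 0), (0, 1)]:        # the two classical ideal states
    print(ideal, round(brier(b, ideal), 4), round(brier(c, ideal), 4))
# at both worlds the coherent state c is strictly less inaccurate than b
```

Replacing the 0/1 ideal vectors with vectors of verities is, as I understand it, the shape of the non-classical generalization gestured at above.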

There’s an awful lot of philosophy to be done to spell out the connection in the classical case, let alone its non-classical generalization. But I think even the above sketch gives a view on how we might not only psychologically load a non-classical semantics, but also use that loading to give a semantically-driven rationale for requirements of formal coherence on belief states—and with the Schiffer loading, we get the Macfarlane-Smith approximation to Schifferian coherence constraints.

Suppose we endorsed the psychologically-loaded, semantically-driven theory just sketched. Compare our stance to a theorist who endorsed the psychology without semantics—that is, they endorsed the same formal coherence constraints, but disclaimed commitment to verities and their accompanying ideal states. They thus give up on the prospect of giving the explanation of the coherence constraints sketched above. We and they would agree on what kinds of psychological states are rational to hold together—including what kind of VPB one could rationally take to p when you judge p to be borderline. So both parties could agree on the doxastic role of the concept of “borderlineness”, and in that sense give a psychological specification of the concept of indeterminacy. We and they would be allied against rival approaches—say, the claims of the epistemicists (thinking that borderlineness generates uncertainty) and Field (thinking that borderlineness merits nothing more than straight rejection). The fan of psychology-without-semantics might worry about the metaphysical commitments of his friend’s postulate of a vast range of fine-grained verities (attaching to propositions in circumstances)—metasemantic explanatory demands and higher-order-vagueness puzzles are two familiar ways in which this disquiet might be made manifest. In turn, the fan of the psychologically loaded, semantically driven theory might question his friend’s refusal to give any underlying explanation of the source of the requirements of formal coherence he postulates. Can explanatory bedrock really be certain formal patterns amongst attitudes? Don’t we owe an illuminating explanation of why those patterns are sensible ones to adopt? (Kolodny mocks this kind of attitude, in recent work, as picturing coherence norms as a mere “fetish for a certain kind of mental neatness”). That explanation needn’t take a semantically-driven form—but it feels like we need something.

To repeat the basic moral. Classical semantics, as traditionally conceived, is already psychologically loaded. If we go in for non-classical semantics at all (with more than instrumentalist ambitions in mind) we underspecify the theory until we’re told what the psychological loading of the new semantic values is to be. That’s one kind of complaint against non-classical semantics. It’s always possible to kick away the ladder—to take the formal coherence constraints motivated by a particular elaboration of this semantics, and endorse only these, without giving a semantically-driven explanation of why these constraints in particular are in force. Repeating this stance, we can find pairs of views that, while distinct, are importantly allied on many fronts. I think in particular this casts doubt on the kind of argument that Schiffer often sounds like he’s giving—i.e. to argue from facts about appropriate psychological attitudes to borderline cases, to the desirability of a “psychology without semantics” view.

Intuitionism and truth value gaps

I spent some time last year reading through Dummett on non-classical logics. One aim was to figure out what sorts of arguments there might be against combining a truth-value gap view with intuitionistic logic. The question is whether in an intuitionist setting it might be ok to endorse ~T(A)&~T(~A). (The characteristic intuitionistic feature, hereabouts, is a refusal to assert T(A)vT(~A)—which is certainly weaker than asserting its negation. Indeed, when it comes to the law of excluded middle, the intuitionist refuses to assert Av~A in general, but ~(Av~A) is an intuitionistic inconsistency.)

On the motivational side: it is striking that in Kripke tree semantics for intuitionistic logic, there are sentences such that neither they nor their negations are “forced”. And if we think of forcing in a Kripke tree as an analogue of truth, that looks like we’re modelling truth value gaps.

A familiar objection to the very idea of truth-value gaps (which appears early on in Dummett—though I can’t find the reference right now) is that asserting the existence of truth value gaps (i.e. endorsing ~T(A)&~T(~A)) is inconsistent with the T-scheme. For if we have “T(A) iff A”, then contraposing and applying modus ponens, we derive from the above ~A and ~~A—contradiction. However, this does require the T-scheme, and you might let the reductio fall on that rather than the denial of bivalence. (Interestingly, Dummett in his discussion of many-valued logics talks about them in terms of truth value gaps without appealing to the above sort of argument—so I’m not sure he’d rest all that much on it).

Another idea I’ve come across is that an intuitionistic (Heyting-style) reading of what “~T(A)” says will allow us to infer from it that ~A (this is based around the thought that intuitionistic negation says “any proof of A can be transformed into a proof of absurdity”). That suffices to reduce a denial of bivalence to absurdity. There are a few places to resist this argument too (and it’s not quite clear to me how to set it up rigorously in the first place) but I won’t go into it here.

Here’s one line of thought I was having. Suppose that we could argue that Av~A entailed the corresponding instance of bivalence: T(A)vT(~A). It’s clear that the latter entails ~(~T(A)&~T(~A))—i.e. given the claim above, the law of excluded middle for A will entail that A is not gappy.

So now suppose we assert that it is gappy. For reductio, suppose Av~A. By the above, this entails that A is not gappy. Contradiction. Hence ~(Av~A). But we know that this is itself an intuitionistic inconsistency. Hence we have derived absurdity from the premise that A is gappy.
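
For what it’s worth, the little argument checks out constructively. Here is a sketch in Lean (no classical axioms; T is just an uninterpreted predicate on propositions, and lem_to_biv is the assumed entailment from excluded middle to bivalence):

```lean
-- ~(Av~A) is an intuitionistic inconsistency: ¬¬(A ∨ ¬A) is a theorem.
theorem not_not_lem (A : Prop) : ¬¬(A ∨ ¬A) :=
  fun h => h (Or.inr (fun a => h (Or.inl a)))

-- Given only that LEM entails bivalence, asserting a gap yields absurdity.
theorem gap_absurd (T : Prop → Prop) (A : Prop)
    (lem_to_biv : (A ∨ ¬A) → T A ∨ T (¬A))
    (gap : ¬T A ∧ ¬T (¬A)) : False :=
  not_not_lem A (fun lem => (lem_to_biv lem).elim gap.1 gap.2)
```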

So it seems that to argue against gaps, we just need the minimal claim that LEM entails bivalence. Now, it’s a decent question what grounds we might give for this entailment claim; but it strikes me as sufficiently “conceptually central” to the intuitionistic idea about what’s going on that it’s illuminating to have this argument around.

I guess the last thing to point out is that the T-scheme argument may be a lot more impressive in an intuitionistic context in any case. A standard maneuver when denying the T-scheme is to keep the T-rules: to say that A entails T(A), for example (this is consistent with rejecting the T-scheme if you drop conditional proof, as supervaluational and many-valued logicians often do). But in an intuitionistic context, the T-rule contraposes (again, a metarule that’s not good in supervaluational and many-valued settings) to give an entailment from ~T(A) to ~A, which is sufficient to reduce the denial of bivalence to absurdity. This perhaps explains why Dummett is prepared to deny bivalence in non-classical settings in general, but seems wary of this in an intuitionistic setting.

The two cleanest starting points for arguing against gaps for the intuitionist, it seems to me, are either to start with the T-rule, “A entails T(A)” or with the claim “Av~A entails T(A)vT(~A)”. Clearly the first allows you to derive the second. I can’t see at the moment an argument that the second entails the first (if someone can point to one, I’d be very interested), so perhaps basing the argument against gaps on the second is the optimal strategy. (It does leave me with a puzzle—what is “forcing” in a Kripke tree supposed to model, since that notion seems clearly gappy?)

Counting delineations

I presented my paper on indeterminacy and conditionals in Konstanz a few days ago. The basic question that paper poses is: if we are highly confident that a conditional is indeterminate, what sorts of confidence in the conditional itself are open to us?

Now, one treatment I’ve been interested in for a while is “degree supervaluationism”. The idea, from the point of view of the semantics, is to replace appeal to a single intended interpretation (with truth=truth at that interpretation) or set of “intended interpretations” (with truth=truth at all of them) with a measure over the set of interpretations (with truth to degree d = being true at exactly measure d of the interpretations). A natural suggestion, given that setting, is that if you know (/are certain) S is true to measure d, then your confidence in S should be d.
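
In the obvious notation (mine, not standard), with μ the measure over the interpretations:

$$
\text{S is true to degree } d \iff \mu(\{ i : S \text{ is true on interpretation } i \}) = d,
$$

and the suggested norm is then that certainty that S is true to degree d should go with credence d in S.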

I’d been thinking of degree-supervaluationism in this sense, and the more standard set-of-intended-interpretations supervaluationism, as distinct options. But (thanks to Tim Williamson) I realize now that there may be an intermediate option.

Suppose that S = “the number 6 is bleh”. And we know that linguistic conventions settle that numbers <5 are bleh, and numbers >7 are not bleh. The available delineations of “bleh”, among the integers, are ones where the first non-bleh number is 5, 6, 7 or 8. These will count as the “intended interpretations” for a standard supervaluational treatment, so “6 is bleh” will be indeterminate—in this context, neither true nor false.

I’ve discussed in the past several things we could say about rational confidence in this supervaluational setting. But one (descriptive) option I haven’t thought much about is to say that you should proportion your confidence to the number of delineations on which “6 is bleh” comes out true. In the present case, our confidence that 6 is bleh should be 0.5, our confidence that 5 is bleh should come out 0.75, and our confidence that 7 is bleh should come out 0.25.
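
A toy calculation of the proposal just described (the encoding of delineations by their first non-bleh number is mine):

```python
# Delineations are encoded by their first non-bleh number: 5, 6, 7 or 8.
# On a delineation d, the number n counts as bleh iff n < d.
delineations = [5, 6, 7, 8]

def credence_bleh(n):
    # proportion of delineations on which "n is bleh" comes out true
    return sum(1 for d in delineations if n < d) / len(delineations)

print(credence_bleh(6))  # 0.5
print(credence_bleh(5))  # 0.75
print(credence_bleh(7))  # 0.25
```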

Notice that this *isn’t* the same as degree-supervaluationism. For that just required some measure or other over the space of interpretations. And even if that was non-zero everywhere apart from ones which place the first non-bleh number in 5-8, there are many options available. E.g. we might have a measure that assigns 0.9 to the interpretation which makes 5 the first non-bleh number, and distributes the remaining 0.1 evenly among the others. In other words, the degree-supervaluationist needn’t think that the measure is a measure *of the number of delineations*. I usually think of it (in the finite case), intuitively, as a measure of the “degree of intendedness” of each interpretation. In a sense, the degree-supervaluationists I was thinking of conceive of the measure as telling us to what extent usage and eligibility and other subvening facts favour one interpretation or another. But the kind of supervaluationists we’re now considering won’t buy into that at all.

I should mention that even if, descriptively, it’s clear what the proposal here is, it’s less clear how the count-the-delineations supervaluationists would go about justifying the rule for assigning credences that I’m suggesting for them. Maybe the idea is that we should seek some kind of compromise between the credences that would be rational if we took D to be the unique intended interpretation, for each D in our set of “intended interpretations” (see this really interesting discussion of compromise for a model of what we might say—the bits at the end on mushy credence are particularly relevant). And there’ll be some oddities that this kind of theorist will have to adopt—e.g. for a range of cases, they’ll be assigning significant credence to sentences of the form “S and S isn’t true”. I find that odd, but I don’t think it blows the proposal out of the water.

Where might this be useful? Well, suppose you believe in B-theoretic branching time, and are going to “supervaluate” over the various future-branches (so “there will be a sea-battle” will have a truth-value gap, since it is true on some branches but not all). (This approach originates with Thomason, and is still present, with tweaks, in recent relativistic semantics for branching time). “Branches” play the role of “interpretations”, in this setting. I’ve argued in previous work that this kind of indeterminacy about branching futures leads to trouble on certain natural “rejectionist” readings of what our attitudes to known indeterminate p should be. But a count-the-branches proposal seems pretty promising here. The idea is that we should proportion our credences in p to the *number* of branches on which p is true.

Of course, there are complicated issues here. Maybe there are just two qualitative possibilities for the future, R and S. We know R has a 2/3 chance of obtaining, and S a 1/3 chance of obtaining. In the B-theoretic branching setting, an R-branch will exist, and an S-branch will exist. Now, one model of the metaphysics at this point is that we don’t allow qualitatively duplicate future branches: so there are just two future-branches in existence, the R one and the S one. On a count-the-branches recipe, we’ll get the result that we should have 1/2 credence that R will obtain. But that conflicts with what the instruction to proportion our credences to the known chances would give us. Maybe R is primitively attached to a “weight” of 2/3—but our count-the-branches recipe didn’t say anything about that.

An alternative is that we multiply indiscernible futures. Maybe there are two indiscernible R futures, and only one S future. Then apportioning the credences in the way mentioned won’t get us into trouble. And in general, if we think that whenever the chance (at moment m) that p is k, the proportion of futures that are p-futures is k, then we’ll have a recipe that coheres nicely with the principal principle.

Let me be clear that I’m not suggesting that we identify chances with numbers-of-branches. Nor am I suggesting that we’ve got some easy route here for justifying the principal principle. The only thing I want to say is that *if* we’ve got a certain match between chances and numbers of future branches, then the two recipes for assigning credences won’t conflict.

(I emphasized earlier that count-the-precisifications supervaluationism had less flexibility than degree-supervaluationism, where the relevant measure was unconstrained by counting considerations. In a sense, what the above little discussion highlights is that when we move from “interpretations” to “branches” as the locus of supervaluational indeterminacy, this difference in flexibility evaporates. For in the case where that role is played by actually existing futures, there’s at least the possibility of multiplying qualitatively indiscernible futures. That sort of maneuver has little place in the original, intended-interpretations setting, since presumably we’ve got an independent fix on what the interpretations are, and we can’t simply postulate that the world gives us intended interpretations in proportions that exactly match the credences we independently want to assign to the cases.)

“Supervaluationism”: the word

I’ve got progressively more confused over the years about the word “supervaluations”. It seems lots of people use it in slightly different ways. I’m going to set out my understanding of some of the issues, but I’m very happy to be contradicted—I’m really in search of information here.

The first occurrence I know of is van Fraassen’s treatment of empty names in a 1960s JP article. IIRC, the view there is that language comes with a partial intended interpretation function, specifying the references of non-empty names. When figuring out what is true in the language, we look at what is true on all the full interpretations that extend the intended partial interpretation. And the result is that “Zeus is blue” will come out neither true nor false, because on some completions of the intended interpretation the empty name “Zeus” will designate a blue object, and on others it won’t.

So that gives us one meaning of a “supervaluation”: a certain technique for defining truth simpliciter out of the model-theoretic notions of truth-relative-to-an-index. It also, so far as I can see, closes off the question of how truth and “supertruth” (=truth on all completions) relate. Supervaluationism, in this original sense, just is the thesis that truth simpliciter should be defined as truth-on-all-interpretations. (Of course, one could argue against supervaluationism in this sense by arguing against the identification; and one could also consistently with this position argue for the ambiguity view that “truth” is ambiguous between supertruth and some other notion—but what’s not open is to be a supervaluationist and deny that supertruth is truth in any sense.)

Notice that there’s nothing in the use of supervaluations in this sense that enforces any connection to “semantic theories of vagueness”. But the technique is obviously suggestive of applications to indeterminacy. So, for example, Thomason in 1970 uses the technique within an “open future” semantics. The idea there is that the future is open between a number of currently-possible histories. And what is true about the future is what happens on all these histories.

In 1975, Kit Fine published a big and technically sophisticated article mapping out a view of vagueness arising from partially assigned meanings, which used, among other things, supervaluational techniques. Roughly, the basic move was to assign each predicate an extension (the set of things to which it definitely applies) and an anti-extension (the set of things to which it definitely doesn’t apply). An interpretation is “admissible” only if it assigns to a predicate a set of objects that is a superset of the extension, and which doesn’t overlap the anti-extension. There are other constraints on admissibility too: so-called “penumbral connections” have to be respected.
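
A minimal sketch of the admissibility constraint just described, setting aside penumbral connections (the example data are made up):

```python
# An interpretation assigns a predicate a precise extension; it is admissible
# (in the sense sketched above) only if that set includes the definite
# extension and excludes the definite anti-extension.
extension = {"patch1", "patch2"}       # things the predicate definitely applies to
anti_extension = {"patch9"}            # things it definitely doesn't apply to

def admissible(candidate):
    return extension <= candidate and not (candidate & anti_extension)

print(admissible({"patch1", "patch2", "patch5"}))  # True: a legitimate sharpening
print(admissible({"patch1"}))                      # False: drops a definite case
print(admissible({"patch1", "patch2", "patch9"}))  # False: includes a definite non-case
```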

Now, Fine does lots of clever stuff with this basic setup, and explores many options (particularly in dealing with “higher-order” vagueness). But one thing that’s been very influential in the folklore is the idea that based on the sort of factors just given, we can get our hands on a set of “admissible” fully precise classical interpretations of the language.

Now the supervaluationist way of working with this would tell you that truth=truth on each admissible interpretation, and falsity=falsity on all such interpretations. But one needn’t be a supervaluationist in this sense to be interested in all the interesting technologies that Fine introduces, or the distinctive way of thinking about semantic indecision that he develops. The supervaluational bit of all this refers only to one stage of the whole process—the step from identifying a set of admissible interpretations to the definition of truth simpliciter.

However, “supervaluationism” has often, I think, been identified with the whole Finean programme. In the context of theories of vagueness, for example, it is often used to refer to the idea that vagueness or indeterminacy arises as a matter of some kind of unsettledness as to what precise extensions our expressions pick out (“semantic indecision”). But even if the topic is indeterminacy, the association with *semantic indecision* wasn’t part of the original conception of supervaluations—Thomason’s use of them in his account of indeterminacy about future contingents illustrates that.

If one understands “supervaluationism” as tied up with the idea of semantic indecision theories of vagueness, then it does become a live issue whether one should identify truth with truth on all admissible interpretations (Fine himself raises this issue). One might think that the philosophically motivated semantic machinery of partial interpretations, penumbral connections and admissible interpretations is best supplemented by a definition of truth in the way that the original VF-supervaluationists favoured. Or one might think that truth-talk should be handled differently, and that the status of “being true on all admissible assignments” shouldn’t be identified with truth simpliciter (say because the disquotational schemes fail).

If you think that the latter is the way to go, you can be a “supervaluationist” in the sense of favouring a semantic indecision theory of vagueness elaborated along Kit Fine’s lines, without being a supervaluationist in the sense of using Van Fraassen’s techniques.

So we’ve got at least these two disambiguations of “supervaluationism”, potentially cross-cutting:

(A) Formal supervaluationism: the view that truth=truth on each of a range of relevant interpretations (e.g. truth on all admissible interpretations (Fine); on all completions (Van Fraassen); or on all histories (Thomason)).
(B) Semantic indeterminacy supervaluationism: the view that (semantic) indeterminacy is a matter of semantic indecision: there being a range of classical interpretations of the language, which, all-in, have equal claim to be the right one.

A couple of comments on each. (A), of course, needs to be tightened up in each case by saying which is the relevant range of classical interpretations quantified over. Notice that a standard way of defining truth in logic books is actually supervaluationist in this sense. Because if you define what it is for a formula “p” to be true as its being true relative to all variable assignments, then open formulae which vary in truth value from variable-assignment to variable-assignment end up exactly analogous to formulae like “Zeus is blue” in Van Fraassen’s setting: they will be neither true nor false.

Even when it’s clear we’re talking about supervaluationism in the sense of (B), there’s continuing ambiguity. Kit Fine’s article is incredibly rich, and as mentioned above, both philosophically and technically he goes far beyond the minimal idea that semantic vagueness has something to do with the meaning-fixing facts not settling on a single classical interpretation.

So there’s room for an understanding of “supervaluationism” in the semantic-indecision sense that is also minimal, and which does not commit itself to Fine’s ideas about partial interpretations, conceptual truths as “penumbral constraints” etc. David Lewis in “Many but also one”, as I read him, has this more minimal understanding of the semantic indecision view—I guess it goes back to Hartry Field’s material on inscrutability and indeterminacy and “partial reference” in the early 1970s, and Lewis’s own brief comments on related ideas in his (1969).

So even if your understanding of “supervaluationism” is the (B)-sense, and we’re thinking only in terms of semantic indeterminacy, then you still owe elaboration of whether you’re thinking of a minimal “semantic indecision” notion a la Lewis, or the far richer elaboration of that view inspired by Fine. Once you’ve settled this, you can go on to say whether or not you’re a supervaluationist in the formal, (A)-sense—and that’s the debate in the vagueness literature over whether truth should be identified with supertruth.

Finally, there’s the question of whether the “semantic indecision” view (B), should be spelled out in semantic or metasemantic terms. One possible view has the meaning-fixing facts picking out not a single interpretation, but a great range of them, which collectively play the role of “semantic value” of the term. That’s a semantic or “first-level” (in Matti Eklund‘s terminology) view of semantic indeterminacy. Another possible view has the meaning-fixing facts trying to fix on a single interpretation which will give the unique semantic value of each term in the language, but it being unsettled which one they favour. That’s a metasemantic or “second-level” view of the case.

If you want to complain that the second view is spelled out quite metaphorically, I’ve some sympathy (I think at least in some settings it can be spelled out a bit more tightly). One might also want to press the case that the distinction between semantic and metasemantic here is somewhat terminological—a matter of what we choose to label the facts “semantic” or not. Again, I think there might be something to this. There are also questions about how this relates to the earlier distinctions—it’s quite natural to think of Fine’s elaboration as being a paradigmatically semantic (rather than metasemantic) conception of semantic supervaluationism. It’s also quite natural to take the metasemantic idea to go with a conception that is non-supervaluational in the (A) sense. (Perhaps the Lewis-style “semantic indecision” rhetoric might be taken to suggest a metasemantic reading all along, in which case it is not a good way to cash out what the common ground among (B)-theorists is.) But there’s room for a lot of debate and negotiation on these and similar points.

Now all this is very confusing to me, and I’m sure I’ve used the terminology confusingly in the past. It kind of seems to me that ideally, we’d go back to using “supervaluationism” in the (A) sense (on which truth=supertruth is analytic of the notion); and that we’d then talk of “semantic indecision” views of vagueness of various forms, with its formal representation stretching from the minimal Lewis version to the rich Fine elaboration, and its semantic/metasemantic status specified. In any case, by depriving ourselves of commonly used terminology, we’d force ourselves to spell out exactly what the subject matter we’re discussing is.

As I say, I’m not sure I’ve got the history straight, so I’d welcome comments and corrections.

Structured propositions and metasemantics

Here is the final post (for the time being) on structured propositions. As promised, this is to be an account of the truth-conditions of structured propositions, presupposing a certain reasonably contentious take on the metaphysics of linguistic representation (metasemantics). It’s going to be compatible with the view that structured propositions are nothing but certain n-tuples: lists of their components. (See earlier posts if you’re getting concerned about other factors, e.g. the potential arbitrariness in the choice of which n-tuples are to be identified with the structured proposition that Dummett is a philosopher.)

Here’s a very natural way of thinking of what the relation between *sentences* and truth-conditions is, on a structured propositions picture. It’s that metaphysically, the relation of “S having truth-conditions C” breaks down into two more fundamental relations: “S denoting struc prop p” and “struc prop p having truth-conditions C”. The thought is something like: primarily, sentences express thoughts (=struc propositions), and thoughts themselves are the sorts of things that have intrinsic/essential representational properties. Derivatively, sentences are true or false of situations, by expressing thoughts that are true or false of those situations. As I say, it’s a natural picture.

In the previous posting, I’ve been talking as though this direction-of-explanation was ok, and that the truth-conditions of structured propositions should have explanatory priority over the truth-conditions of sentences, so we get the neat separation into the contingent feature of linguistic representation (which struc prop a sentence latches onto) and the necessary feature (what the TCs are, given the struc prop expressed).

The way I want to think of things, something like the reverse holds. Here’s the way I think of the metaphysics of linguistic representation. In the beginning, there were patterns of assent and dissent. Assent to certain sentences is systematically associated with certain states of the world (coarse-grained propositions, if you like) perhaps by conventions of truthfulness and trust (cf. Lewis’s “Language and Languages”). What it is for expressions E in a communal language to have semantic value V is for E to be paired with V under the optimally eligible semantic theory fitting with that association of sentences with coarse-grained propositions.

That’s a lot to take in all at one go, but it’s basically the picture of linguistic representation as fixed by considerations of charity/usage and eligibility/naturalness that lots of people at the moment seem to find appealing. The most striking feature—which it shares with other members of the “radical interpretation” approach to metasemantics—is that rather than starting from the referential properties of lexical items like names and predicates, it depicts linguistic content as fixed holistically by how well it meshes with patterns of usage. (There’s lots to say here to unpack these metaphors, and work out what sort of metaphysical story of representation is being appealed to: that’s something I went into quite a bit in my thesis—my take on it is that it’s something close to a fictionalist proposal).

This metasemantics, I think, should be neutral between various semantic frameworks for generating the truth conditions. With a bit of tweaking, you can fit a Davidsonian T-theoretic semantic theory into this picture (as suggested by, um… Davidson). Someone who likes interpretational semantics but isn’t a fan of structured propositions might take the semantic values of names to be objects, and the semantic values of sentences to be coarse-grained propositions, and say that it’s these properties that get fixed via the best semantic theory of the patterns of assent and dissent (that’s Lewis’s take).

However, if you think that to adequately account for the complexities of natural language you need a more sophisticated, structured-proposition theory, this story also allows for it. The meaning-fixing semantic theory assigns objects to names, and structured propositions to sentences, together with a clause specifying how the structured propositions are to be paired up with coarse-grained propositions. Without the second part of the story, we’d end up with an association between sentences and structured propositions, but we wouldn’t make contact with the patterns of assent and dissent, if these take the form of associations of sentences with *coarse grained* propositions (as on Lewis’s convention-based story). So on this radical interpretation story, where the targeted semantic theories take a struc prop form, we get a simultaneous fix on *both* the denotation relation between sentences and struc props, and the relation between struc props and coarse-grained truth-conditions.

Let’s indulge in a bit of “big-picture” metaphor-ing. It’d be misleading to think of this overall story as the analysis of sentential truth-conditions into a prior, and independently understood, notion of the truth-conditions of structured propositions, just as it’s wrong on the radical interpretation picture to think of sentential content as “analyzed in terms of” a prior, and independently understood, notion of subsentential reference. Relative to the position sketched, it’s more illuminating to think of the pairing of structured and coarse-grained propositions as playing a purely instrumental role in smoothing the theory of the representational features of language. It’s language which is the “genuine” representational phenomenon in the vicinity: the truth-conditional features attributed to struc propositions are a mere byproduct.

Again speaking metaphorically, it’s not that sentences get to have truth-conditions in a merely derivative sense. Rather, structured propositions have truth-conditions in a merely derivative sense: the structured proposition has truth-conditions C if it is paired with C under the optimal overall theory of linguistic representation.

For all we’ve said, it may turn out that the same assignment of truth-conditions to set-theoretic expressions will always be optimal, no matter which language is in play. If so, then it might be that there’s a sense in which structured propositions have “absolute” truth-conditions, not relative to this or that language. But, realistically, one’d expect some indeterminacy in which struc props play the role (recall the Benacerraf point King makes, and the equal fitness of [a,F] and [F,a] to play the “that a is F” role). And it’s not immediately clear why the choice to go one way for one natural language should constrain the way this element is deployed in another language. So it’s at least prima facie open that it’s not definitely the case that the same structured propositions, with the same TCs, are used in the semantics of both French and English.

It’s entirely in the spirit of the current proposal to think that we identify [a,F] with the structured proposition that a is F only relative to a given natural language, and that this creature only has the truth-conditions it does relative to that language. This is all of a piece with the thought that the structured proposition’s role is instrumental to the theory of linguistic representation, and not self-standing.

Ok. So with all this on the table, I’m going to return to read the book that prompted all this, and try to figure out whether there’s a theoretical need for structured propositions with representational properties richer than those attributed by the view just sketched.

[update: interestingly, it turns out that King’s book doesn’t give the representational properties of propositions explanatory priority over the representational properties of sentences. His view is that the proposition that Dummett thinks is (very crudely, and suppressing details) the fact that in some actual language there is a sentence of (thus-and-such a structure) of which the first element is a word referring to Dummett and the second element is a predicate expressing thinking. So clearly semantic properties of words are going to be prior to the representational properties of propositions, since those semantic properties are components of the proposition. But more than this, from what I can make out, King’s thought is that if there was a time where humans spoke a language without attitude-ascriptions and the like, then sentences would have truth-conditions, and the proposition-like facts would be “hanging around” them, but the proposition-like facts wouldn’t have any representational role. Once we start making attitude ascriptions, we implicitly treat the proposition-like structure as if it had the same TCs as sentences, and (by something like a charity/eligibility story) the “propositional relation” element acquires semantic significance and the proposition-like structure gets to have truth-conditions for the first time.

That’s very close to the overall package I’m sketching above. What’s significant dialectically, perhaps, is that this story can explain TCs for all sorts of apparently non-semantic entities, like sets. So I’m thinking it really might be the Benacerraf point that’s bearing the weight in ruling out set-theoretic entities as struc propns—as explained previously, I don’t go along with *that*.]

Structured propositions and truth conditions.

In the previous post, I talked about the view of structured propositions as lists, or n-tuples, and the Benacerraf objections against it. So now I’m moving on to a different sort of worry. Here’s King expressing it:

“A final difficulty for the view that propositions are ordered n-tuples concerns the mystery of how or why on that view they have truth conditions. On any definition of ordered n-tuples we are considering, they are just sets. Presumably, many sets have no truth conditions (eg. The set of natural numbers). But then why do certain sets, certain ordered n-tuples, have truth-conditions? Since not all sets have them, there should be some explanation of why certain sets do have them. It is very hard to see what this explanation could be.”

I feel the force of something in this vicinity, but I’m not sure how to capture the worry. In particular, I’m not sure whether it’s right to think of structured propositions’ having truth-conditions as a particularly “deep” fact over which there is mystery in the way King suggests. To get what I’m after here, it’s probably best simply to lay out a putative account of the truth-conditions of structured propositions, and just to think about how we’d formulate the explanatory challenge.

Suppose, for example, one put forward the following sort of theory:

(i) The structured proposition that Dummett is a philosopher = [Dummett, being a philosopher].
(ii) [Dummett, being a philosopher] stands in the T relation to w, iff Dummett is a philosopher according to w.
(iii) bearing the T-relation to w=being true at w

Generalizing,

(i) For all a, F, the structured proposition that a is F = [a, F]
(ii) For all individuals a, and properties F, [a, F] stands in the T relation to w iff a instantiates F according to w.
(iii) bearing the T-relation to w=being true at w

In full generality, I guess we’d semantically ascend for an analogue of (i), and give a systematic account of which structured propositions are associated with which English sentences (presumably a contingent matter). For (ii), we’d give a specification (which there’s no reason to make relative to any contingent facts) of which ordered n-tuples stand in the T-relation to which worlds. (iii) can stay as it is.

The naïve theorist may then claim that (ii) and (iii) amount to a reductive account of what it is for a structured proposition to have truth-conditions. Why does [1,2] not have any truth-conditions, but [Dummett, being a philosopher] does? Because the story about what it is for an ordered pair to stand in the T-relation to a given world, just doesn’t return an answer where the second component isn’t a property. This seems like a totally cheap and nasty response, I’ll admit. But what’s wrong with it? If that’s what truth-conditions for structured propositions are, then what’s left to explain? It doesn’t seem that there is any mystery over (ii): this can be treated as a reductive definition of the new term “bearing the T-relation”. Are there somehow explanatory challenges facing someone who endorses the property-identity (iii)? Quite generally, I don’t see how identities could be the sort of thing that need explaining.
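
Here is a flat-footed sketch of how the naïve theorist might implement clauses (i)-(iii), with worlds modelled crudely as records of who instantiates what (the class names and data are mine, purely for illustration):

```python
# Structured propositions modelled as pairs (individual, property); a world is
# modelled crudely as a dict from individuals to the properties they
# instantiate there.
class Property:
    def __init__(self, name):
        self.name = name

being_a_philosopher = Property("being a philosopher")
w = {"Dummett": {being_a_philosopher}}

def T(prop, world):
    """Clause (ii): only defined when the second component is a property."""
    individual, second = prop
    if not isinstance(second, Property):
        raise ValueError("no truth-conditions assigned: not of the form [a, F]")
    return second in world.get(individual, set())

print(T(("Dummett", being_a_philosopher), w))  # True
# T((1, 2), w) raises an error: the clause simply returns no answer for [1, 2]
```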

(Of course, you might semantically ascend and get a decent explanatory challenge: why should “having truth conditions” refer to the T-relation. But I don’t really see any in principle problem with addressing this sort of challenge in the usual ways: just by pointing to the fact that the T-relation is a reasonably natural candidate satisfying platitudes associated with truth-condition talk.)

I’m not being willfully obstructive here: I’m genuinely interested in what the dialectic should be at this point. I’ve got a few ideas about things one might say to bring out what’s wrong with the flat-footed response to King’s challenge. But none of them persuades me.

Some options:

(a) Earlier, we ended up claiming that it was indefinite which sets structured propositions are identical with. But now, we’ve given a definition of truth-conditions that is committal on this front. For example, [F,a] was supposed to be a candidate precisification of the proposition that a is F. But (ii) won’t assign it truth conditions, since its second component isn’t a property but an individual.

Reply: just as it was indefinite what the structured propositions were, it is indefinite which sets have truth-conditions, and what the specification of those truth-conditions is. The two kinds of indefiniteness are “penumbrally connected”. On a precisification on which the prop that a is F=[a,F], the clause holds as above; on a precisification on which that a is F=[F,a], a slightly twisted version of the clause will hold. But no matter how we precisify structured proposition-talk, there will be a clause defining the truth-conditions for the entities that we end up identifying with structured propositions.

(b) You can’t just offer definitional clauses or “what it is” claims and think you’ve evaded all explanatory duties! What would we think of a philosopher of mind who put forward a reductive account whereby pain-qualia were by definition just some characteristics of C-fibre firing, and then smugly claimed to have no explanatory obligations left?

Reply: one presupposition of the above is that clauses like (ii) “do the job” of truth-conditions for structured propositions, i.e. there won’t be a structured proposition (by the lights of (i)) whose assigned “truth-conditions” (by the lights of (ii)) go wrong. So whatever else happens, the T-relation (defined via (ii)) and the truth-at relation we’re interested in have a sort of constant covariation (and, unlike the attempt to use a clause like (ii) to define truth-conditions for sentences, we won’t get into trouble when we vary the language use and the like across worlds, so the constant covariation is modally robust). The equivalent assumption in the mind case is that pain qualia and the candidate aspect of C-fibre firing are necessarily constantly correlated. Under those circumstances, many would think we would be entitled to identify pain qualia and the physicalistic underpinning. Another way of putting this: worries about the putative “explanatory gap” between pain-qualia and physical states are often argued to manifest themselves in a merely contingent correlation between the former and the latter. And that’d mean that any attempt to claim that pain qualia just are thus-and-such physical state would be objectionable on the grounds that pain qualia and the physical state come apart in other possible worlds.
In the case of the truth-conditions of structured propositions, nothing like this seems in the offing. So I don’t see a parody of the methodology recommended here. Maybe there is some residual objection lurking: but if so, I want to hear it spelled out.

(c) Truth-conditions aren’t the sort of thing that you can just define up as you please for the special case of structured propositions. Representational properties are the sort of things possessed by structured propositions, token sentences (spoken or written) of natural language, tokens of mentalese, pictures and the rest. If truth-conditions were just the T-relation defined by clause (ii), then sentences of mentalese and English, pictures etc couldn’t have truth-conditions. Reductio.

Reply: it’s not clear at all that sentences and pictures “have truth-conditions” in the same sense as do structured propositions. It fits very naturally with the structured-proposition picture to think of sentences standing in some “denotation” relation to a structured proposition, through which they may be said to derivatively have truth-conditions. What we mean when we say that ‘S has truth conditions C’ is that S denotes some structured proposition p and p has truth-conditions C, in the sense defined above. For linguistic representation, at least, it’s fairly plausible that structured propositions can act as a one-stop-shop for truth-conditions.

Pictures are a trickier case. Presumably they can represent situations accurately or non-accurately, and so it might be worth theorizing about them by associating them with a coarse-grained proposition (the set of worlds in which they represent accurately). But presumably, in a painting that represents Napoleon’s defeat at Waterloo, there don’t need to be separable elements corresponding to Napoleon, Waterloo, and being defeated at, which would make for a neat association of the picture with a structured proposition, in the way that sentences are neatly associated with such things. Absent some kind of denotation relation between pictures and structured propositions, it’s not so clear whether we can derivatively define truth-conditions for pictures as the compound of the denotation relation and the truth-condition relation for structured propositions.

None of this does anything to suggest that we can’t give an ok story about pairing pictures with (e.g.) coarse-grained propositions. It’s just that the relation between structured propositions and coarse-grained propositions (=truth conditions) and the relation between pictures and coarse-grained propositions can’t be the same one, on this account, and nor is it even obvious how the two are related (unlike e.g. the sentence/structured proposition case).
So one thing that may cause trouble for the view I’m sketching is if we have both of the following: (A) there is a unified representation relation, such that pictures/sentences/structured propositions stand in the same (or at least intimately related) representation relations to their truth-conditions C; (B) there’s no story about pictorial (and other) representations that routes via structured propositions, and so no hope of a unified account of representation given (ii)+(iii).

The problem here is that I don’t feel terribly uncomfortable denying (A) and (B). But I can imagine debate on this point, so at least here I see some hope of making progress.

Having said all this in defence of (ii), I think there are other ways for the naïve, simple set-theoretic account of structured propositions to defend itself that don’t look quite so flat-footed. But the ways I’m thinking of depend on some rather more controversial metasemantic theses, so I’ll split that off into a separate post. It’d be nice to find out what’s wrong with this, the most basic and flat-footed response I can think of.

Structured propositions and Benacerraf

I’ve recently been reading Jeff King’s book on structured propositions. It’s really good, as you would expect. There’s one thing that’s bothering me though: I can’t quite get my head around what’s wrong with the simplest, most naïve account of the nature of propositions. (Disclaimer: this might all turn out to be very simple-minded to those in the know. I’d be happy to get pointers to the literature. Hey, maybe it’ll be to bits of Jeff’s book I haven’t got to yet…)

The first thing you encounter when people start talking about structured propositions is notation like [Dummett, being a philosopher]. This is supposed to stand for the proposition that Dummett is a philosopher, and highlights the fact that (on the Russellian view) Dummett and the property of being a philosopher are components of the proposition. The big question is supposed to be: what do the brackets and comma represent? What sort of compound object is the proposition? In what sense does it have Dummett and being a philosopher as components? (If you prefer a structured intension view, so be it: then you’ll have a similar beast with the individual concept of Dummett and the worlds-intension associated with “is a philosopher” as ‘constituents’. I’ll stick with the Russellian view for illustrative purposes.)

For purposes of modelling propositions, people often interpret the commas and brackets as the ordered n-tuples of standard set theory. The simplest, most naïve interpretation of what structured propositions are is simply to identify them with n-tuples. What’s the structured proposition itself? It’s a certain kind of set. In what sense are Dummett and the property of being a philosopher constituents of the structured proposition that Dummett is a philosopher? They’re elements of the transitive closure of the relevant set.

So all that is nice and familiar. So what’s the problem? In his ch. 1 (and, in passing, in the SEP article here) King mentions two concerns. In this post, I’ll just set the scene by talking about the first. It’s a version of a famous Benacerraf worry, which anyone with some familiarity with the philosophy of maths will have come across (King explicitly makes the comparison). The original Benacerraf puzzle is something like this: suppose that the only abstract things are set-like, and whatever else they may be, the referents of arithmetical terms should be abstract. Then numerals will stand for some set or other. But there are all sorts of things that behave like the natural numbers within set theory: the constructions known as the (finite) Zermelo ordinals (null, {null}, {{null}}, {{{null}}}…) and the (finite) von Neumann ordinals (null, {null}, {null,{null}}…) are just two. So there’s no non-arbitrary theory of which sets the natural numbers are.

The phenomenon crops up all over the place. Think of ordered n-tuples themselves. Famously, within an ontology of unordered sets, you can define up things that behave like ordered pairs: either [a,b]={{a},{a,b}} or [a,b]={{{a},null},{{b}}}. (For details see http://en.wikipedia.org/wiki/Ordered_pair). It appears there’s no non-arbitrary reason to prefer a theory that ‘reduces’ ordered pairs to unordered sets one way rather than the other.
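
For reference, the sense in which both constructions “behave like” ordered pairs is just that each can be shown to validate the characteristic pair axiom; roughly, in LaTeX notation (the first coding is Kuratowski’s, the second essentially Wiener’s):

  [a,b] = [c,d] \;\leftrightarrow\; (a = c \wedge b = d)
  \text{Kuratowski: } [a,b] =_{\mathrm{df}} \{\{a\},\{a,b\}\}
  \text{Wiener: } [a,b] =_{\mathrm{df}} \{\{\{a\},\emptyset\},\{\{b\}\}\}

Anything satisfying that biconditional will do whatever theoretical work we wanted pairs for, which is exactly why the choice between codings looks arbitrary.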

Likewise, says King, there looks to be no non-arbitrary choice of set-theoretic representation of structured propositions (not even if we spot ourselves ordered sets as primitive to avoid the familiar ordered-pair worries). Sure, we *could* associate the words “the proposition that Dummett is a philosopher” with the ordered pair [Dummett, being a philosopher]. But we could also associate it with the set [being a philosopher, Dummett] (and choices multiply when we get to more complex structured propositions).

One reaction to the Benacerrafian challenge is to take it to be a decisive objection to an ontological story about numbers, ordered pairs or whatever that allows only unordered sets as a basic mathematical ontology. My own feeling is (and this is not uncommon, I think) that this would be an overreaction. More strongly: no argument that I’ve seen from the Benacerraf phenomenon to this ontological conclusion seems to me to be terribly persuasive.

What we should admit, rather, is that if natural numbers or ordered pairs are sets, it’ll be indefinite which sets they are. So, for example, [a,b]={{a},{a,b}} will be neither definitely true nor definitely false (unless we simply stipulatively define the [,] notation one way or another rather than treating it as pre-theoretically understood). Indefiniteness is pervasive in natural language—everyone needs a story about how it works. And the idea is that whatever that story turns out to be, it should be applied here. Maybe some theories of indefiniteness will make these sorts of identifications problematic. But prominent theories like supervaluationism and epistemicism have neat and apparently smooth accounts of what it is we’re saying when we call that identity indefinite: for the supervaluationist, it (may) mean that “[a,b]” refers to {{a},{a,b}} on some but not all precisifications of our set-theoretic language. For the epistemicist, it means that (for certain specific, principled reasons) we can’t know whether the identity claim is true. The epistemicist will also maintain that there’s a fact of the matter about which identity statement connecting ordered and unordered sets is true. And there’ll be some residual arbitrariness here (though we’ll probably have to semantically ascend to find it)—but if there is arbitrariness, it’s the sort of thing we’re independently committed to in order to deal with the indefiniteness rife throughout our language. If you’re a supervaluationist, then you won’t admit there’s any arbitrariness: (standardly) the identity statement is neither true nor false, so our theory won’t be committed to “making the choice”.
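
Put very schematically (a rough sketch, writing Indef for the indefiniteness operator):

  \text{Supervaluationism: } \mathrm{Indef}\big([a,b]=\{\{a\},\{a,b\}\}\big) \text{ iff the identity holds on some but not all admissible precisifications of } [\,\cdot\,,\cdot\,]
  \text{Epistemicism: } \mathrm{Indef}\big([a,b]=\{\{a\},\{a,b\}\}\big) \text{ iff the identity has a determinate truth-value which we cannot, for principled reasons, know}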

If that’s the right way to respond to the general Benacerraf challenge, it’s the obvious thing to say in response to the version of the puzzle that arises for structured propositions. And this sort of generalization of the indefiniteness maneuver to philosophical analysis is pretty familiar: it’s part of the standard machinery of the Lewisian hordes. Very roughly, the programme goes: figure out what you want the Fs to do, Ramsify away the terms for Fs, and you get a way to fix where the Fs are amidst the things you believe in: they are whatever satisfies the open sentence you’re left with. Where there are multiple, equally good satisfiers, deploy the indefiniteness maneuver.
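
Very schematically, the recipe applied to ordered pairs might run (a sketch, with P a candidate pairing operation):

  \text{Pair theory: } T(P) \;=\; \forall a\,\forall b\,\forall c\,\forall d\;\big(P(a,b)=P(c,d) \leftrightarrow (a=c \wedge b=d)\big)
  \text{Ramsey sentence: } \exists X\, T(X)
  \text{Reference-fixing: } [\,\cdot\,,\cdot\,] \text{ denotes the } X \text{ such that } T(X)

And where several candidates (Kuratowski, Wiener, …) satisfy T equally well, it’s indefinite which of them the pairing notation denotes.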

I’m not so worried on this front, for what I take to be pretty routine reasons. But there’s a second challenge King raises for the simple, naïve theory of structured propositions, which I think is trickier. More on this anon.

Must, Might and Moore.

I’ve just been enjoying reading a paper by Thony Gillies. One thing that’s very striking is the dilemma he poses—quite generally—for “iffy” accounts of “if” (i.e. accounts that see English “if” as expressing a sentential connective, pace Kratzer’s restrictor account).

The dilemma is constructed around finding a story that handles the interaction between modals and conditionals. The prima facie data is that the following pairs are equivalent (rendered schematically after the lists):

  • If p, must q
  • If p, q

and

  • If p, might q
  • Might (p & q)
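
In schematic notation, with $\Box$ for “must”, $\Diamond$ for “might”, and $\to$ for the conditional (however it’s ultimately analyzed), the alleged equivalences come to roughly this:

  (p \to \Box q) \;\equiv\; (p \to q)
  (p \to \Diamond q) \;\equiv\; \Diamond(p \wedge q)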

The dilemma proceeds by first looking at whether you want to say that the modals scope over the conditional or vice versa, and then (on the view where the modal is wide-scoped) looking into the details of how the “if” is supposed to work and showing that one or other of the pairs comes out inequivalent. The suggestion in the paper is that if we have the right theory of context-shiftiness, and narrow-scope the modals, then we can be faithful to the data. I don’t want to take issue with that positive proposal. I’m just a bit worried about the alleged data itself.

It’s a really familiar tactic, when presented with a putative equivalence that causes trouble for your favourite theory, to say that the pairs aren’t equivalent at all, but can be “reasonably inferred” from each other (think of various ways of explaining away “or-to-if” inferences). But taken cold such pragmatic explanations can look a bit ad hoc.

So it’d be nice if we could find independent motivation for the inequivalence we need. In a related setting, Bob Stalnaker uses the acceptability of Moorean-patterns to do this job. To me, the Stalnaker point seems to bear directly on the Gillies dilemma above.

Before we even consider conditionals, notice that “p but it might be that not p” sounds terrible. Attractive story: this is because you shouldn’t assert something unless you know it to be true; and to say that p might not be the case is (inter alia) to deny you know it. One way of bringing out the pretty obviously pragmatic nature of the tension in uttering the conjunction here is to note that asserting the following sort of thing looks much much better:

  • it might be that not p; but I believe that p

(“I might miss the train; but I believe I’ll just make it”). The point is that whereas asserting “p” is appropriate only if you know that p, asserting “I believe that p” (arguably) is appropriate even if you know you don’t know it. So looking at these conjunctions and figuring out whether they sound “Moorean” seems like a nice way of filtering out some of the noise generated by knowledge-rules for assertion.

(I can sometimes still hear a little tension in the example: what are you doing believing that you’ll catch the train if you know you might not? But for me this goes away if we replace “I believe that” with “I’m confident that” (which still, in vanilla cases, gives you Moorean phenomena). I think in the examples to be given below, residual tension can be eliminated in the same way. The folks who work on norms of assertion I’m sure have explored this sort of territory lots.)

That’s the prototypical case. Let’s move on to examples where there are more moving parts. David Lewis famously alleged that the following pair are equivalent:

  • it’s not the case that: if it were the case that p, it would have been that q
  • if it were that p, it might have been that ~q

Stalnaker thinks that this is wrong, since instances of the following sound ok:

  • if it were that p, it might have been that not q; but I believe if it were that p it would have been that q.

Consider for example: “if I’d left only 5 mins to walk down the hill, (of course!) I might have missed the train; but I believe that, even if I’d only left 5 mins, I’d have caught it.” That sounds totally fine to me. There are a few decorations to that speech (“even”, “of course”, “only”). But I think the general pattern here is robust, once we fill in the background context. Stalnaker thinks this cuts against Lewis, since if mights and woulds were obvious contradictories, then the latter speech would be straightforwardly equivalent to something of the form “A and I don’t believe that A”. But things like that sound terrible, in a way that the speech above doesn’t.

We find pretty much the same cases for “must” and indicative “if”.

  • It’s not true that if p, then it must be that q; but I believe that if p, q.

(“it’s not true that if Gerry is at the party, Jill must be too—Jill sometimes gets called away unexpectedly by her work. But nevertheless I believe that if Gerry’s there, Jill’s there.”). Again, this sounds ok to me; but if the bare conditional and the must-conditional were straightforwardly equivalent, surely this should sound terrible.

These sorts of patterns make me very suspicious of claims that “if p, must q” and “if p, q” are equivalent, just as the analogous patterns make me suspicious of the Lewis idea that “if p, might ~q” and “if p, would q” are contradictories when the “if” is subjunctive. So I’m thinking the horns of Gillies’ dilemma aren’t equal: denying the must-conditional/bare-conditional equivalence is independently motivated.

None of this is meant to undermine the positive theory that Thony Gillies is presenting in the paper: his way of accounting for lots of the data looks super-interesting, and I’ve got no reason to suppose his positive story won’t have a story about everything I’ve said here. I’m just wondering whether the dilemma that frames the debate should suck us in.

Degrees of belief and supervaluations

Suppose you’ve got an argument with one premise and one conclusion, and you think it’s valid. Call the premise p and the conclusion q. Plausibly, constraints on rational belief follow: in particular, you can’t rationally have a lesser degree of belief in q than you have in p.

The natural generalization of this to multi-premise cases is that if p1…pn|-q, then your degree of disbelief in q can’t rationally exceed the sum of your degrees of disbelief in the premises.

FWIW, there’s a natural generalization to the multi-conclusion case too (a multi-conclusion argument is valid, roughly, if the truth of all the premises secures the truth of at least one conclusion). If p1…pn|-q1…qm, then the sum of your degrees of disbelief in the conclusions can’t rationally exceed the sum of your degrees of disbelief in the premises.
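
Writing Cr for rational credence and taking the degree of disbelief in a sentence to be one minus one’s credence in it, the constraints just stated come to this (a sketch, roughly in the style of Adams’ results connecting uncertainty and validity):

  p \vdash q \;\Rightarrow\; 1-\mathrm{Cr}(q) \;\le\; 1-\mathrm{Cr}(p)
  p_1,\dots,p_n \vdash q \;\Rightarrow\; 1-\mathrm{Cr}(q) \;\le\; \textstyle\sum_{i=1}^{n}\big(1-\mathrm{Cr}(p_i)\big)
  p_1,\dots,p_n \vdash q_1,\dots,q_m \;\Rightarrow\; \textstyle\sum_{j=1}^{m}\big(1-\mathrm{Cr}(q_j)\big) \;\le\; \textstyle\sum_{i=1}^{n}\big(1-\mathrm{Cr}(p_i)\big)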

What I’m interested in at the moment is to what extent this sort of connection can be extended to non-classical settings. In particular (and connected with the last post) I’m interested in what the supervaluationist should think about all this.

There’s a fundamental choice to be made at the get-go. Do we think that “degrees of belief” in sentences of a vague language can be represented by a standard classical probability function? Or do we need to be a bit more devious?

Let’s take a simple case. Construct the artificial predicate B(x), so that numbers less than 5 satisfy B and numbers greater than 5 fail to satisfy it. We’ll suppose that it is indeterminate whether 5 itself is B, and that supervaluationism gives the right way to model this.

First observation. It’s generally accepted that for the standard supervaluationist

p & ~Det(p) |- absurdity.

Given this and the constraints on rational credence mentioned earlier, we’d have to conclude that my credence in B(5)&~Det(B(5)) must be 0. I have credence 0 in absurdity; and the degree of disbelief in the conclusion of this valid argument (namely 1) must not exceed the degree of disbelief in its premise.

Let’s think that through. Notice that in this case, my credence in ~Det(B(5)) can be taken to be 1. So given minimal assumptions about the logic of credences, my credence in B(5) must be 0.

A parallel argument running from ~B(5)&~Det(~B(5))|-absurdity gives us that my credence in ~B(5) must be 0.

Moreover, supervaluational logic validates all classical tautologies. So in particular we have the validity: |- B(5) v ~B(5). The standard constraint in this case tells us that rational credence in this disjunction must be 1. And so we have a disjunction in which we have credence 1, each disjunct of which we have credence 0 in. (Compare the standard observation that supervaluational disjunctions can be non-prime: the disjunction can be true when neither disjunct is.)
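
To spell out the arithmetic (a sketch, assuming only that credences satisfy $\mathrm{Cr}(A \wedge B) \ge \mathrm{Cr}(A)+\mathrm{Cr}(B)-1$ and $\mathrm{Cr}(\bot)=0$, together with the validity constraint above):

  B(5) \wedge \neg\mathrm{Det}(B(5)) \vdash \bot \;\Rightarrow\; \mathrm{Cr}\big(B(5) \wedge \neg\mathrm{Det}(B(5))\big) = 0
  \mathrm{Cr}\big(\neg\mathrm{Det}(B(5))\big) = 1 \;\Rightarrow\; 0 \;\ge\; \mathrm{Cr}(B(5)) + 1 - 1 \;\Rightarrow\; \mathrm{Cr}(B(5)) = 0
  \text{likewise } \mathrm{Cr}(\neg B(5)) = 0, \quad \text{yet } \vdash B(5) \vee \neg B(5) \;\Rightarrow\; \mathrm{Cr}\big(B(5) \vee \neg B(5)\big) = 1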

This is a fairly direct argument that something non-classical has to be going on with the probability calculus. One move at this point is to consider Shafer functions (which I know little about: but see here). Now maybe that works out nicely, maybe it doesn’t. But I find it kinda interesting that the little constraint on validity and credences gets us so quickly into a position where something like this is needed if the constraint is to work. It also gives us a recipe for arguing against standard supervaluationism: argue against the Shafer-function-like behaviour in our degrees of belief, and you’ll ipso facto have an argument against supervaluationism. For this, the probabilistic constraint on validity is needed (as far as I can see): for it’s this that makes the distinctive features mandatory.

I’d like to connect this to two other issues I’ve been working on. One is the paper on the logic of supervaluationism cited below. The key thing here is that it raises the prospect of p&~Dp|-absurdity not holding, even for your standard “truth=supertruth” supervaluationist. If that works, the key premise of the argument that forces you to have degree of belief 0 in both an indeterminate sentence ‘p’ and its negation goes missing.

Maybe we can replace it by some other argument. If you read “D” as “it is true that…” as the standard supervaluationist encourages you to, then “p&~Dp” should be read “p&it is not true that p”. And perhaps that sounds to you just like an analytic falsity (it sure sounded to me that way); and analytic falsities are the sorts of things one should paradigmatically have degree of belief 0 in.

But here’s another observation that might give you pause (I owe this point to discussions with Peter Simons and John Hawthorne). Suppose p is indeterminate. Then we have ~Dp&~D~p. And given supervaluationism’s conservatism, we also have pv~p. So by a bit of jiggery-pokery, we’ll get (p&~Dp v ~p&~D~p). But in moods where I’m hyped up thinking that “p&~Dp” is analytically false and terrible, I’m equally worried by this disjunction. But that suggests that the source of my intuitive repulsion here isn’t the sort of thing that the standard supervaluationist should be buying. Of course, the friend of Shafer functions could just say that this is another case where our credence in the disjunction is 1 while our credence in each disjunct is 0. That seems dialectically stable to me: after all, they’ll have *independent* reason for thinking that p&~Dp should have credence 0. All I want to insist is that the “it sounds really terrible” reason for assigning p&~Dp credence 0 looks like it overgeneralizes, and so should be distrusted.
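
For the record, the jiggery-pokery is just a distribution step, checkable precisification-by-precisification:

  \neg\mathrm{D}p \wedge \neg\mathrm{D}\neg p,\;\; p \vee \neg p \;\vdash\; (p \wedge \neg\mathrm{D}p) \vee (\neg p \wedge \neg\mathrm{D}\neg p)

On any given precisification either p or ~p holds, and the (supertrue) D-claims hold on all of them; so one or other disjunct holds on each precisification, and the disjunction is supertrue.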

I also think that if we set aside truth-talk, there’s some plausibility in the claim that “p&~Dp” should get non-zero credence. Suppose you’re initially in a mindset where you should be about half-confident of a borderline case. Well, one thing that you absolutely want to say about borderline cases is that they’re neither true nor false. So why shouldn’t you be at least half-confident in the combination of these?

And yet, and yet… there’s the fundamental implausibility of “p&it’s not true that p” (the standard supervaluationist’s reading of “p&~Dp”) having anything other than credence 0. But ex hypothesi, we’ve lost the standard positive argument for that claim. So we’re left, I think, with the bare intuition. But it’s a powerful one, and something needs to be said about it.

Two defensive maneuvers for the standard supervaluationist:

(1) Say that what you’re committed to is just “p & it’s not supertrue that p”. Deny that the ordinary concept of truth can be identified with supertruth (something that, as many have emphasized, is anyway quite plausible given the non-disquotational nature of supertruth). But crucially, don’t seek to replace this with some other gloss on supertruth: just say that supertruth, superfalsity and the gap between them are appropriate successor concepts, and that ordinary truth-talk is appropriate only when we’re ignoring the possibility of the third case. If we disclaim conceptual analysis in this way, then it won’t be appropriate to appeal to intuitions about the English word “true” to kick away independently motivated theoretical claims about supertruth. In particular, we can’t appeal to intuitions to argue that “p & ~supertrue that p” should be assigned credence 0. (There’s a question of whether this should be seen as an error theory about English “truth”-ascriptions. I don’t see that it needs to be. It might be that the English word “true” latches on to supertruth because supertruth is what best fits the truth-role. On this model, “true” stands to supertruth as “dephlogisticated air”, according to some, stands to oxygen. And so this is still a “truth=supertruth” standard supervaluationism.)

(2) The second maneuver is to appeal to supervaluational degrees of truth. Let the degree of supertruth of S be, roughly, the measure of the precisifications on which S is true. S is supertrue simpliciter when it is true on all the precisifications, i.e. on measure 1 of the precisifications. If we then identify degrees of supertruth with degrees of truth, the contention that truth is supertruth becomes something that many find independently attractive: that in the context of a degree theory, truth simpliciter should be identified with truth to degree 1. (I think that this tendency has something deeply in common with the temptation (following Unger) to think that nothing can be flatter than a flat thing: nothing can be truer than a true thing. I’ve heard people claim that Unger was right to think that a certain class of adjectives in English work this way.)
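
Spelled out slightly (a sketch, assuming a suitable measure $\mu$ over the space $V$ of admissible precisifications):

  \mathrm{deg}(S) \;=\; \mu\big(\{v \in V : S \text{ is true at } v\}\big), \qquad S \text{ is true simpliciter iff } \mathrm{deg}(S) = 1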

I think when we understand the supertruth=truth claim in that way, the idea that “p&~true that p” should be something in which we should always have degree of belief 0 loses much of its appeal. After all, compatibly with “p” not being absolutely perfectly true (=true), it might be something that’s almost absolutely perfectly true. And it doesn’t sound bad or uncomfortable to me to think that one should conform one’s credences to the known degree of truth: indeed, that seems to be a natural generalization of the sort of thing that originally motivated our worries.

In summary. If you’re a supervaluationist who takes the orthodox line on supervaluational logic, then it looks like there’s a strong case for a non-classical take on what degrees of belief look like. That’s a potentially vulnerable point for the theory. If you’re a (standard, global, truth=supertruth) supervaluationist who’s open to the sort of position I sketch in the paper below, prima facie we can run with a classical take on degrees of belief.

Let me finish off by mentioning a connection between all this and some material on probability and conditionals I’ve been working on recently. I think a pretty strong case can be constructed for thinking that for some conditional sentences S, we should be all-but-certain that S&~DS. But that’s exactly of the form that we’ve been talking about throughout: and here we’ve got *independent* motivation to think that this should be high-probability, not probability zero.

Now, one reaction is to take this as evidence that “D” shouldn’t be understood along standard supervaluationist lines. That was my first reaction too (in fact, I couldn’t see how anyone but the epistemicist could deal with such cases). But now I’m thinking that this may be too hasty. What seems right is (a) that the standard supervaluationist with the Shafer-esque treatment of credences can’t deal with this case; but (b) that the standard supervaluationist articulated in one of the ways just sketched shouldn’t think there’s an incompatibility here.

My own preference is to go for the degrees-of-truth explication of all this. Perhaps, once we’ve bought into that, the “truth=degree 1 supertruth” element starts to look less important, and we’ll find other useful things to do with supervaluational degrees of truth (a la Kamp, Lewis, Edgington). But I think the “phlogiston” model of supertruth is just about stable too.

[P.S. Thanks to Daniel Elstein, for a paper today at the CMM seminar which started me thinking again about all this.]

Supervaluations and logical revisionism paper

Happy news today: the Journal of Philosophy is going to publish my paper on the logic of supervaluationism. Swift moral: it ain’t logically revisionary; and if it is, it doesn’t matter.

This previous post gives an overview, if anyone’s interested…

Now I’ve just got to figure out how to transmute my beautiful LaTeX symbols into Word…