Category Archives: Probability

Paracompleteness and credences in contradictions

The last few posts have discussed non-classical approaches to indeterminacy.

One of the big stumbling blocks about “folklore” non-classicism, for me, is the suggestion that contradictions (A&~A) be “half true” where A is indeterminate.

Here’s a way of putting a constraint that appeals to me: I’m inclined to think that an ideal agent ought to fully reject such contradictions.

(Actually, I’m not quite as unsympathetic to contradictions as this makes it sound. I’m interested in the dialetheic/paraconsistent package. But in that setting, the right thing to say isn’t that A&~A is half-true, but that it’s true (and probably also false). Attitudinally, the ideal agent ought to fully accept it.)

Now the no-interpretation non-classicist has the resources to satisfy this constraint. She can maintain that the ideal degree of belief in A&~A is always 0. Given that:

p(A)+p(B)=p(AvB)+p(A&B)

substituting ~A for B, and using the fact that p(A&~A)=0, we have:

p(A)+p(~A)=p(Av~A)

And now, whenever we fail to fully accept Av~A, it will follow that our credences in A and ~A don’t sum to one. That’s the price we pay for continuing to utterly reject contradictions.

The *natural* view in this setting, it seems to me, is that accepting indeterminacy of A corresponds to rejecting Av~A. So someone fully aware that A is indeterminate should fully reject Av~A. (Here and in the above I’m following Field’s “No fact of the matter” presentation of the nonclassicist).

But now consider the folklore nonclassicist, who does take talk of indeterminate propositions being “half true” (or more generally, degree-of-truth talk) seriously. This is the sort of position that the Smith paper cited in the last post explores. The idea there is that indeterminacy corresponds to half-truth, and fully informed ideal agents should set their partial beliefs to match the degree-of-truth of a proposition (e.g. in a 3-valued setting, an indeterminate A should be believed to degree 0.5). [NB: obviously partial beliefs aren’t going to behave like a probability function if truth-functional degrees of truth are taken as an “expert function” for them.]

Given the usual min/max treatment of how these multiple truth values get settled over conjunction and disjunction (with negation taking degree x to 1-x), for the fully informed agent we’ll get p(Av~A) set equal to the degree of truth of Av~A, i.e. 0.5. And exactly the same value will be given to A&~A. So contradictions, far from being rejected, are appropriately given the same doxastic attitude as I assign to “this fair coin will land heads”.
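To make the arithmetic vivid, here’s a minimal sketch (my own toy encoding in Python, assuming the standard min/max/1-x clauses):

```python
# Degrees of truth with the usual clauses: negation is 1-x,
# conjunction is min, disjunction is max.
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

a = 0.5  # A indeterminate: degree of truth 0.5
print(disj(a, neg(a)))  # Av~A: 0.5
print(conj(a, neg(a)))  # A&~A: 0.5, matching the attitude to a fair coin toss
```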

Another way of putting this: the difference between our overall attitude to “the coin will land heads” and “Jim is bald and not bald” only comes out when we consider attitudes to contents in which these are embedded. For example, I fully disbelieve B&~B when B=the coin lands heads; but I half-accept it for B=A&~A . That doesn’t at all ameliorate the implausibility of the initial identification, for me, but it’s something to work with.

In short, the Field-like nonclassicist sets A&~A to 0; and that seems exactly right. Given this and one or two other principles, we get a picture where our confidence in Av~A can take any value—right down to 0; and as flagged before, the probabilities of A and ~A carve up this credence between them, so in the limit where Av~A has probability 0, they take probability 0 too.

But the folklore nonclassicist I’ve been considering, for whom degrees-of-truth are an expert function for degrees-of-belief, has 0.5 as a pivot. For the fully informed, Av~A always exceeds this by exactly the amount that A&~A falls below it—and where A is indeterminate, we assign them all probability 0.5.

As will be clear, I’m very much on the Fieldian side here (if I were to be a nonclassicist in the first place). It’d be interesting to know whether folklore nonclassicists do in general have a picture about partial beliefs that works as Smith describes. Consistently with taking semantics seriously, they might think of the probability of A as equal to the measure of the set of possibilities where A is perfectly true. That will always make the probability of A&~A 0 (since it’s never perfectly true), and it meets various other of the Fieldian descriptions of the case. What it does put pressure on is the assumption (more common in degree theorists than 3-value theorists perhaps) that we should describe degree-of-truth-0.5 as a way of being “half true”. Why, in a situation where we know A is half true, would we be compelled to fully reject it? So it does seem to me that the rhetoric of folklore degree theorists fits a lot better with Smith’s suggestions about how partial beliefs work. And I think it’s objectionable on that account.
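Here’s a toy version of that measure-theoretic picture (the three possibilities and their weights are illustrative numbers of mine, not anything in Smith or Field):

```python
# A "possibility" fixes a degree of truth for A (1, 0.5 or 0); credence in a
# sentence is the total weight of possibilities where it is *perfectly* true.
poss   = [1.0, 0.5, 0.0]   # degree of truth of A at each possibility
weight = [0.3, 0.4, 0.3]   # illustrative weights summing to 1

def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

def prob(sentence):  # sentence: maps deg(A) to a degree of truth
    return sum(w for a, w in zip(poss, weight) if sentence(a) == 1.0)

print(prob(lambda a: conj(a, neg(a))))  # A&~A: 0, never perfectly true
print(prob(lambda a: disj(a, neg(a))))  # Av~A: 0.6, short of 1, as Field allows
print(prob(lambda a: a))                # A: 0.3 (~A also gets 0.3; they sum to p(Av~A))
```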

[Just a quick update. First observation: to get a fix on the “pivot” view, think of the constraint as being that P(A)+P(~A)=1. Since P(A)+P(~A)=P(Av~A)+P(A&~A), it follows that P(Av~A)=1-P(A&~A), which summarizes the result.

Second observation: I mentioned above that something that treats the degrees of truth as an expert function “won’t behave like a probability function”. One reflection of that is that the logic-probability link will be violated, given certain choices for the logic. Suppose, e.g., that we require valid arguments to preserve perfect truth (i.e. we’re working with the K3 logic). Then A&~A will be inconsistent. But P(A&~A) can be 0.5 while, for some unrelated B, P(B) is 0; and since A&~A|-B in this logic, probability has decreased over a valid argument. Likewise if we require preservation of non-perfect-falsity (i.e. we’re working with the LP system): Av~A will then be a validity, but P(Av~A) can be 0.5 while P(B) is 1. These examples are for the 3-valued case, but the point clearly generalizes to the analogous definitions of validity in a degree-valued setting.

One of the tricky things about thinking about this area is that there are lots of choice-points around, and one of them is the definition of validity. So, for example, one might demand that valid arguments preserve both perfect truth and non-perfect-falsity; then the two arguments above drop away, since neither |-Av~A nor A&~A|- holds in this logic. The generalization of this to the many-valued setting is to demand e-truth preservation for every e. Clearly these logics are far more constrained than K3 or LP, and so there’s a better chance of avoiding violations of the logic-probability link. Whether one gets away with it is another matter.]
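Here’s a quick mechanical check of those two violations (a sketch; the values for A, B and C are illustrative choices of mine):

```python
# Degrees of truth read as the fully informed agent's credences.
def neg(a): return 1 - a

P = {"A": 0.5, "B": 0.0, "C": 1.0}  # A indeterminate; B plainly false; C plainly true

# K3 (preserve perfect truth): A&~A |- B is valid, yet probability drops:
print(min(P["A"], neg(P["A"])), P["B"])   # 0.5 vs 0.0

# LP (preserve non-perfect-falsity): C |- Av~A is valid, yet probability drops:
print(P["C"], max(P["A"], neg(P["A"])))   # 1.0 vs 0.5
```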

Aristotelian indeterminacy and partial beliefs

I’ve just finished a first draft of the second paper of my research leave—title the same as this post. There are a few different ways to think about this material, but since I hadn’t posted for a while I thought I’d write up something about how it connects with/arises from some earlier concerns of mine.

The paper I’m working on ends up with arguments against standard “Aristotelian” accounts of the open future, and standard supervaluational accounts of vague survival. But one starting point was an abstract question in the philosophy of logic: in what sense is standard supervaluationism supposed to be revisionary? So let’s start there.

The basic result—allegedly—is that while all classical tautologies are supervaluational tautologies, certain classical rules of inference (such as reductio, proof by cases, conditional proof, etc) fail in the supervaluational setting.

Now I’ve argued previously that one might plausibly evade even this basic form of revisionism (while sticking to the “global” consequence relation, which preserves traditional connections between logical consequence and truth-preservation). But I don’t think it’s crazy to think that global supervaluational consequence is in this sense revisionary. I just think that it requires an often-unacknowledged premise about what should count as a logical constant (in particular, whether “Definitely” counts as one). So for now let’s suppose that there are genuine counterexamples to conditional proof and the rest.

The standard move at this point is to declare this revisionism a problem for supervaluationists. Conditional proof, argument by cases: all these are theoretical descriptions of widespread, sensible and entrenched modes of reasoning. It is objectionably revisionary to give them up.

Of course some philosophers quite like logical revisionism, and would want to face down the accusation that there’s anything wrong with such revisionism directly. But there’s a more subtle response available. One can admit that the letter of conditional proof etc. is given up, but note that the pieces of reasoning we normally call “instances of conditional proof” are all covered by supervaluationally valid inference principles. So there’s no piece of inferential practice that’s thrown into doubt by the revisionism of supervaluational consequence: it seems that all that happens is that the theoretical representation of that practice has to take a slightly more subtle form than one might expect (but still quite a neat and elegant one).

One thing I mention in that earlier paper but don’t go into is a different way of drawing out consequences of logical revisionism. Forget inferential practice and the like. Another way in which logic connects with the rest of philosophy is in connection to probability (in the sense of rational credences, or Williamson’s epistemic probabilities, or whatever). As I sketched in a previous post, so long as you accept a basic probability-logic constraint, which says that the probability of a tautology should be 1, and the probability of a contradiction should be 0, then the revisionary supervaluational setting quickly forces you to a non-classical theory of probability: one that allows disjunctions to have probability 1 where each disjunct has probability 0. (Maybe we shouldn’t call such a thing “probability”: I take it that’s terminological).

Folk like Hartry Field have argued, completely independently of this connection to supervaluationism, that this is the right and necessary way to handle probabilities in the context of indeterminacy. I’ve heard others say, and argue, that we want something closer to classicism (maybe tweaked to allow sets of probability functions, etc.). And there are Dutch Book arguments to consider in favour of the classical setting (though I think the responses to these from the perspective of non-classical probabilities are quite convincing).

I’ve got the feeling the debate is at a stand-off, at least at this level of generality. I’m particularly unmoved by people swapping intuitions about the degrees of belief it is appropriate to have in borderline cases of vague predicates, and the like (NB: I don’t think that Field ever argues from intuition like this, but others do). Sometimes introspection suggests intriguing things (for example, Schiffer makes the interesting suggestion that one’s degree of belief in a conjunction of two vague propositions typically matches one’s degree of belief in the propositions themselves). But I can’t see any real dialectical force here. In my own case, I don’t have robust intuitions about these cases. And if I’m to go on testimonial evidence about others’ intuitions, it’s just too unclear what people are reporting on for me to feel comfortable taking their word for it. I’m worried, for example, that they might just be reporting the phenomenological level of confidence they have in the proposition in question: surely that needn’t coincide with one’s degree of belief in the proposition (think of an exam you are highly nervous about, but are fairly certain you will pass: your behaviour may well manifest a high degree of belief, even in the absence of the phenomenological trappings of confidence). In paradigm cases of indeterminacy, it’s hard to see how to do better than this.

However, I think in application to particular debates we might be able to make much more progress. Let us suppose that the topic for the day is the open future, construed, minimally, as the claim that while there are definite facts about the past and present, the future is indefinite.

Might we model this indefiniteness supervaluationally? Something like this idea (with possible futures playing the role of precisifications) is pretty widespread, perhaps orthodoxy (among friends of the open future). It’s a feature of MacFarlane’s relativistic take on the open future, for example. Even though he’s not a straightforward supervaluationist, he still has truth-value gaps, and he still treats them in a recognizably supervaluational-style way.

The link between supervaluational consequence and the revisionary behaviour of partial beliefs should now kick in. For if you know with certainty that some P is neither true nor false, we can argue that you should invest no credence at all in P (or in its negation). Likewise, in a framework of evidential probabilities, P gets no evidential probability at all (nor does its negation).

But think what this says in the context of the open future. It’s open which way this fair coin lands: it could be heads, it could be tails. On the “Aristotelian” truth-value conception of this openness, we can know that “the coin will land heads” is gappy. So we should have credence 0 in it, and none of our evidence supports it.

But that’s just silly. This is pretty much a paradigmatic case where we know what partial belief we have and should have in the coin landing heads: one half. And our evidence gives exactly that too. No amount of fancy footwork and messing around with the technicalities of Dempster-Shafer theory leads to a sensible story here, as far as I can see. It’s just plainly the wrong result. (One doesn’t improve matters very much by relaxing the assumptions, e.g. taking the degree of belief in a failure of bivalence in such cases to fall short of one: you can still argue for a clearly incorrect degree of belief in the heads-proposition).

Where does that leave us? Well, you might reject the logic-probability link (I think that’d be a bad idea). Or you might try to argue that supervaluational consequence isn’t revisionary in any sense (I sketched one line of thought in support of this in the paper cited). You might give up on it being indeterminate which way the coin will land—i.e. deny the open future, a reasonably popular option. My own favoured reaction, in moods when I’m feeling sympathetic to the open future, is to go for a treatment of metaphysical indeterminacy where bivalence can continue to hold—my colleague Elizabeth Barnes has been advocating such a framework for a while, and it’s taken a long time for me to come round.

All of these reactions will concede the broader point—that at least in this case, we’ve got an independent grip on what the probabilities should be, and that gives us traction against the Supervaluationist.

I think there are other cases where we can find similar grounds for rejecting the structure of partial beliefs/evidential probabilities that supervaluational logic forces upon us. One is simply a case where empirical data on folk judgements has been collected—in connection with indicative conditionals. I talk about this in some other work in progress here. Another, which I talk about in the current paper and which I’m particularly interested in, concerns cases of indeterminate survival. The considerations here are much more involved than in the indeterminacy we find in connection with the open future or conditionals. But I think the case against the sort of partial beliefs supervaluationism induces can be made out.

All these results turn on very local issues. None, so far as I can see, generalizes to paradigmatic borderline cases of baldness and the rest. I think that makes the arguments even more interesting: potentially, they can serve as a kind of diagnostic: this style of theory of indeterminacy is suitable over here; that theory over there. That’s a useful thing to have in one’s toolkit.

Degrees of belief and supervaluations

Suppose you’ve got an argument with one premise and one conclusion, and you think it’s valid. Call the premise p and the conclusion q. Plausibly, constraints on rational belief follow: in particular, you can’t rationally have a lesser degree of belief in q than you have in p.

The natural generalization of this to multi-premise cases is that if p1…pn|-q, then your degree of disbelief in q can’t rationally exceed the sum of your degrees of disbelief in the premises.

FWIW, there’s a natural generalization to the multi-conclusion case too (a multi-conclusion argument is valid, roughly, if the truth of all the premises secures the truth of at least one conclusion). If p1…pn|-q1…qm, then the sum of your degrees of disbelief in the premises plus the sum of your degrees of belief in the conclusions can’t rationally fall below 1. (With a single conclusion, this reduces to the constraint above.)

What I’m interested in at the moment is to what extent this sort of connection can be extended to non-classical settings. In particular (and connected with the last post) I’m interested in what the supervaluationist should think about all this.

There’s a fundamental choice to be made at the get-go. Do we think that “degrees of belief” in sentences of a vague language can be represented by a standard classical probability function? Or do we need to be a bit more devious?

Let’s take a simple case. Construct the artificial predicate B(x), so that numbers less than 5 satisfy B, and numbers greater than 5 fail to satisfy it. We’ll suppose that it is indeterminate whether 5 itself is B, and that supervaluationism gives the right way to model this.

First observation. It’s generally accepted that for the standard supervaluationist

p & ~Det(p) |- absurdity

Given this and the constraints on rational credence mentioned earlier, we’d have to conclude that my credence in B(5)&~Det(B(5)) must be 0. For I have credence 0 in absurdity, so my degree of disbelief in the conclusion of this valid argument is 1; and that must not exceed my degree of disbelief in its premise.

Let’s think that through. Notice that in this case, my credence in ~Det(B(5)) can be taken to be 1. So given minimal assumptions about the logic of credences (apply the multi-premise constraint to the supervaluationally valid adjunction B(5), ~Det(B(5)) |- B(5)&~Det(B(5)): my disbelief in the conclusion is 1, and my disbelief in the second premise is 0), my credence in B(5) must be 0.

A parallel argument running from ~B(5)&~Det(~B(5))|-absurdity gives us that my credence in ~B(5) must be 0.

Moreover, supervaluational logic delivers all classical tautologies. So in particular we have the validity: |-B(5)v~B(5). The standard constraint in this case tells us that rational credence in this disjunction must be 1. And so we have a disjunction in which we have credence 1, each disjunct of which we have credence 0 in. (Compare the standard observation that supervaluational disjunctions can be non-prime: the disjunction can be true when neither disjunct is.)

This is a fairly direct argument that something non-classical has to be going on with the probability calculus. One move at this point is to consider Shafer functions (which I know little about: but see here). Now maybe that works out nicely, maybe it doesn’t. But I find it kinda interesting that the little constraint on validity and credences gets us so quickly into a position where something like this is needed if the constraint is to work. It also gives us a recipe for arguing against standard supervaluationism: argue against the Shafer-function-like behaviour in our degrees of belief, and you’ll ipso facto have an argument against supervaluationism. For this, the probabilistic constraint on validity is needed (as far as I can see): for it’s this that makes the distinctive features mandatory.
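For what it’s worth, here’s a minimal sketch of the sort of Shafer-style (Dempster-Shafer) behaviour at issue, in my own toy encoding: mass lives on *sets* of precisifications, and belief in a sentence totals the mass of the sets on which it is supertrue:

```python
# Two sharpenings of the borderline case, with all mass on "indeterminate which".
precs = frozenset({"5 is B", "5 is not B"})
mass = {precs: 1.0}

def bel(truth_set):
    # Sum the mass of every set of precisifications wholly inside truth_set.
    return sum(m for s, m in mass.items() if s <= frozenset(truth_set))

print(bel({"5 is B"}))        # Bel(B(5)) = 0
print(bel({"5 is not B"}))    # Bel(~B(5)) = 0
print(bel(precs))             # Bel(B(5) v ~B(5)) = 1.0: credence 1 in the disjunction
```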

I’d like to connect this to two other issues I’ve been working on. One is the paper on the logic of supervaluationism cited below. The key thing here is that it raises the prospect of p&~Dp|-absurdity not holding, even for your standard “truth=supertruth” supervaluationist. If that works, the key premise of the argument that forces you to have degree of belief 0 in both an indeterminate sentence ‘p’ and its negation goes missing.

Maybe we can replace it by some other argument. If you read “D” as “it is true that…” as the standard supervaluationist encourages you to, then “p&~Dp” should be read “p&it is not true that p”. And perhaps that sounds to you just like an analytic falsity (it sure sounded to me that way); and analytic falsities are the sorts of things one should paradigmatically have degree of belief 0 in.

But here’s another observation that might give you pause (I owe this point to discussions with Peter Simons and John Hawthorne). Suppose p is indeterminate. Then we have ~Dp&~D~p. And given supervaluationism’s conservativism, we also have pv~p. So by a bit of jiggery-pokery, we’ll get (p&~Dp v ~p&~D~p). But in moods where I’m hyped up thinking that “p&~Dp” is analytically false and terrible, I’m equally worried by this disjunction. And that suggests that the source of my intuitive repulsion here isn’t the sort of thing that the standard supervaluationist should be buying. Of course, the friend of Shafer functions could just say that this is another case where our credence in the disjunction is 1 while our credence in each disjunct is 0. That seems dialectically stable to me: after all, they’ll have *independent* reason for thinking that p&~Dp should have credence 0. All I want to insist is that the “it sounds really terrible” reason for assigning p&~Dp credence 0 looks like it overgeneralizes, and so should be distrusted.

I also think that if we set aside truth-talk, there’s some plausibility in the claim that “p&~Dp” should get non-zero credence. Suppose you’re initially in a mindset where you should be about half-confident of a borderline case. Well, one thing that you absolutely want to say about borderline cases is that they’re neither true nor false. So why shouldn’t you be at least half-confident in the combination of these?

And yet, and yet… there’s the fundamental implausibility of “p&it’s not true that p” (the standard supervaluationist’s reading of “p&~Dp”) having anything other than credence 0. But ex hypothesi, we’ve lost the standard positive argument for that claim. So we’re left, I think, with the bare intuition. But it’s a powerful one, and something needs to be said about it.

Two defensive maneuvers for the standard supervaluationist:

(1) Say that what you’re committed to is just “p & it’s not supertrue that p”. Deny that the ordinary concept of truth can be identified with supertruth (something that, as many have emphasized, is anyway quite plausible given the non-disquotational nature of supertruth). But crucially, don’t seek to replace this with some other gloss on supertruth: just say that supertruth, superfalsity and the gap between them are appropriate successor concepts, and that ordinary truth-talk is appropriate only when we’re ignoring the possibility of the third case. If we disclaim conceptual analysis in this way, then it won’t be appropriate to appeal to intuitions about the English word “true” to kick away independently motivated theoretical claims about supertruth. In particular, we can’t appeal to intuitions to argue that “p&~supertrue that p” should be assigned credence 0. (There’s a question of whether this should be seen as an error-theory about English “truth”-ascriptions. I don’t see that it needs to be. It might be that the English word “true” latches on to supertruth because supertruth is what best fits the truth-role. On this model, “true” stands to supertruth as “de-phlogistonated air”, according to some, stands to oxygen. And so this is still a “truth=supertruth” standard supervaluationism.)

(2) The second maneuver is to appeal to supervaluational degrees of truth. Let the degree of supertruth of S be, roughly, the measure of the precisifications on which S is true. S is supertrue simpliciter when it is true on all the precisifications, i.e. on measure 1 of the precisifications. If we then identify degrees of supertruth with degrees of truth, the contention that truth is supertruth becomes something that many find independently attractive: that in the context of a degree theory, truth simpliciter should be identified with truth to degree 1. (I think that this tendency has something deeply in common with the temptation (following Unger) to think that nothing can be flatter than a flat thing: nothing can be truer than a true thing. I’ve heard people claim that Unger was right to think that a certain class of adjectives in English work this way.)

I think when we understand the supertruth=truth claim in that way, the idea that “p&~true that p” should be something in which we should always have degree of belief 0 loses much of its appeal. After all, compatibly with “p” not being absolutely perfectly true (=true), it might be something that’s almost absolutely perfectly true. And it doesn’t sound bad or uncomfortable to me to think that one should conform one’s credences to the known degree of truth: indeed, that seems to be a natural generalization of the sort of thing that originally motivated our worries.

In summary. If you’re a supervaluationist who takes the orthodox line on supervaluational logic, then it looks like there’s a strong case for a non-classical take on what degrees of belief look like. That’s a potentially vulnerable point for the theory. If you’re a (standard, global, truth=supertruth) supervaluationist who’s open to the sort of position I sketch in the paper below, prima facie we can run with a classical take on degrees of belief.

Let me finish off by mentioning a connection between all this and some material on probability and conditionals I’ve been working on recently. I think a pretty strong case can be constructed for thinking that for some conditional sentences S, we should be all-but-certain that S&~DS. But that’s exactly of the form that we’ve been talking about throughout: and here we’ve got *independent* motivation to think that this should be high-probability, not probability zero.

Now, one reaction is to take this as evidence that “D” shouldn’t be understood along standard supervaluationist lines. That was my first reaction too (in fact, I couldn’t see how anyone but the epistemicist could deal with such cases). But now I’m thinking that this may be too hasty. What seems right is (a) that the standard supervaluationist with the Shafer-esque treatment of credences can’t deal with this case; but (b) that the standard supervaluationist articulated in one of the ways just sketched shouldn’t think there’s an incompatibility here.

My own preference is to go for the degrees-of-truth explication of all this. Perhaps, once we’ve bought into that, the “truth=degree 1 supertruth” element starts to look less important, and we’ll find other useful things to do with supervaluational degrees of truth (a la Kamp, Lewis, Edgington). But I think the “phlogiston” model of supertruth is just about stable too.

[P.S. Thanks to Daniel Elstein, for a paper today at the CMM seminar which started me thinking again about all this.]

Sleeping bookie

I’ve spent more of this week than is healthy thinking about the Sleeping Beauty puzzle (thanks in large part to this really interesting post by Kenny). I don’t think I’ve got anything terribly novel to say, but I thought I’d set out my current thinking to see if people agree with my take on what the dialectic is on at least one aspect of the puzzle.

Sleeping Beauty is sent to sleep by philosophical experimenters. He (for, in a strike for sexual equality, this Beauty is male) will be woken up on Monday morning, told on Monday afternoon what day it is, and sent to sleep again after being given a drug which will mean that the next time he wakes up, he will have no memories of what transpired. Depending on the result of a fair coin flip, he will either be woken up in exactly similar circumstances on Tuesday morning, or be left to sleep through the day. Beauty is aware of the setup.

How confident should Beauty be on Monday morning that the coin to be flipped in a few hours will land heads (remember, he knows it’s a fair coin)? Halfers say: he should have credence 1/2 that it’ll be heads. Thirders say: the credence should be 1/3. (All sides agree that on Sunday his credence should be 1/2.)

What I’m interested in is whether there are Dutch book arguments for either view. The very simplest takes the following form. Sell Beauty a [$30,T] bet for $15 on Sunday evening. Then, if Beauty’s a halfer, on Monday and (if awoken) Tuesday mornings, sell him [$20,H] bets on each awakening for $10.

If H obtains, Beauty loses the first bet but wins the sole remaining bet (on Monday morning), for a net loss of $5. If T obtains, Beauty wins the first bet, but loses the next two, for a net loss of $5 again. So Beauty is guaranteed to lose money.
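Running the bookkeeping mechanically (a sketch; [$X,E] is the text’s notation for a bet paying $X if E obtains):

```python
def payoff(win, stake, price):
    return stake - price if win else -price

for result in ["H", "T"]:
    total = payoff(result == "T", 30, 15)       # Sunday: [$30,T] for $15
    total += payoff(result == "H", 20, 10)      # Monday waking: [$20,H] for $10
    if result == "T":
        total += payoff(result == "H", 20, 10)  # Tuesday waking: [$20,H] for $10
    print(result, total)                        # H -5, T -5: a sure loss
```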

This is in some sense a diachronic dutch book. But as several people note, it’s not a particularly convincing argument that there’s something wrong with Beauty being a halfer. For notice that the information here is asymmetric: the bookie offering the bets needs to have more information than Beauty, since it is crucial to their strategy to offer twice as many bets if the result of the coin flip is tails, than if it is heads.

Hitchcock aims to give a revised Dutch book argument for the same conclusion that avoids this problem. He suggests that the experimenters put the bookie through the same procedure as they put Beauty through, and the bookie’s strategy should then simply be to offer Beauty the bets every time they both wake. That has the net effect of offering the same set of bets as above for a sure loss for Beauty, but the bookie and Beauty are in the same epistemic state. This is the sleeping bookie argument.

What I’d like to claim (inspired by Bradley and Leitgeb) is that if we concentrate too much on the epistemic state of hypothetical bookies, we’ll get led astray. Looking at the overall mechanism whereby bets are offered to Beauty, we initially described this as one where an agent (the bookie) is offering bets to Beauty each time they are both awake. But I’d prefer to describe this as a case where a complex agency (the bookie and the experimenters in cahoots) is offering bets to Beauty. The second description seems at least as good as the first: after all, without the compliance of the experimenters, the bookie’s dutch book strategy can’t be implemented. But the system constituted by the experimenters and the bookie clearly has access to the information about the result of the coin toss, and arranges for the bets to be made appropriately, even though the bookie alone lacks this information.

Now dutch book arguments are only as good as the results we can extract from them about what credences are rational to have in given circumstances. And clearly, if Beauty knows that the bets coming at him encode information about the outcome on which the bet turns, then he needn’t (perhaps shouldn’t) simply bet according to his credences, but adjust them to take into account the encoded information. That’s why, to get a fix on what Beauty’s credences are, we put a ban on the bookie having excess information. That’s why the first dutch book argument for thirding looks like a bad way to get a fix on what Beauty’s credences are. But this rationale for forbidding the bookie from having excess information generalizes, so that we shouldn’t trust dutch books in any situation where the mechanism whereby bets are offered (whether in the hands of a single individual, or a system) relies on information about the outcome on which the bet turns. (Equally, if the bookie had extra information, but the system of bets doesn’t exploit this in any way, there’s as yet no case against trusting the dutch book argument, it seems to me.)

The moral I take from all this is that what’s going on in the head of some individual we deign to call “bookie” is neither here nor there: what matters is the pattern of bets, and whether that pattern exploits information about the outcomes on which the bets turn. This is effectively what I take Bradley and Leitgeb to argue for in their very nice article. What they suggest (roughly) is that a necessary condition on taking a dutch book argument to give a fix on rational credences is that the pattern of bets be uncorrelated with the outcomes on which the bets turn. I conjecture (tentatively) that this is really what the ban on bookies having extra information was trying to get at all along. The upshot is that Hitchcock’s sleeping bookie argument is problematic in the same way as the initial dutch book argument against halfers.

But more than this. If we refocus attention on the good standing of the pattern of bets, rather than the epistemic states of hypothetical bookies, we can put together a dutch book argument against thirders. For suppose that the experimenters offer Beauty a [$30,H] bet for $15 on Sunday, and then a genuine bet of [$30,T] for $20 on Monday morning no matter what happens, and (so he can’t tell what’s going on) a fake bet, apparently of [$30,T] for $20, on Tuesday, where he’ll automatically get his stake returned. Then he’ll be guaranteed a loss of $5 no matter what happens. Of course, the experimenters here have knowledge of the outcomes. But (arguably) that doesn’t matter, because the bets they offer are uncorrelated with the outcomes of the event on which the bets turn: the system of bets offered is the same no matter what the outcome is, so (it seems to me) the information that the experimenters have isn’t implicit in the pattern of bets in any sense. So I think there’s a better dutch book argument against thirding than there is against halfing. (Or at least, I’d be interested in seeing the case against this in detail.)
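And the arithmetic for this reverse book (same payoff bookkeeping as before):

```python
def payoff(win, stake, price):
    return stake - price if win else -price

for result in ["H", "T"]:
    total = payoff(result == "H", 30, 15)   # Sunday: [$30,H] for $15
    total += payoff(result == "T", 30, 20)  # Monday: genuine [$30,T] for $20
    # Tuesday's apparent [$30,T] bet is fake: the stake comes back, net $0.
    print(result, total)                    # H -5, T -5: a sure loss for the thirder
```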

All this is not to say that the halfer is out of the woods. A quite different dutch book argument is given in a paper by Draper and Pust, which exploits the standard halfer’s story (Lewis’s) about what happens on Monday afternoon, once Beauty has been told what day it is. The Lewisian halfer thinks that once Beauty realizes it’s Monday, he should have credence 2/3 that Heads is the result. And that, it appears, is a dutch-bookable situation.

Notice that this isn’t directly an argument against the thesis that Beauty should have credence 1/2 in Heads on Monday morning. It is, in effect, an argument that he should also have credence 1/2 in Heads on Tuesday. And, with a few other widely accepted assumptions, these combine to give rise to a contradiction (see for example, Cian Dorr’s presentation of the Beauty case as a paradox).

If this is all we say, then we should conclude that we really do have here a puzzling argument for a contradiction, where all the premises look pretty plausible and the two crucial ones both seem prima facie defensible via dutch book strategies. Maybe, as some suggest, we should revise our claims about updating of credences to make halfing in both circumstances appropriate: or maybe there’s something unavoidably irrational in Beauty’s predicament. What will finally come out in the wash as the best response to the puzzle is one matter; whether the dutch book considerations support halfing or thirding on Monday morning is another; and it is only on this narrow point that I’m claiming that there is a pro tanto case to be a halfer.

Thoughts?

Probabilistic multi-conclusion validity

I’ve been thinking a bit recently about how to generalize standard results relating probability to validity to a multi-conclusion setting.

The standard result is the following (where the uncertainty of p is 1-probability of p):

An argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises is at least as great as the uncertainty of the conclusion.

It’ll help if we restate this as follows:

An argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises + the probability of the conclusion is at least 1.

Stated this way, there’s a natural generalization available:

A multi-conclusion argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises + the probabilities of the conclusions is greater than or equal to 1.

And once we’ve got it stated, it’s a corollary of the standard result (I believe).
It’s pretty easy to see directly that this works in the “if” direction, just by considering classical probability functions which only assign 1 or 0 to propositions.
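Here’s a brute-force rendering of that observation (my own encoding, for a two-letter example): at a 0/1 probability function the inequality fails exactly when the corresponding valuation is a countermodel.

```python
from itertools import product

def countermodel(premises, conclusions, v):
    return all(p(v) for p in premises) and not any(c(v) for c in conclusions)

def inequality_fails(premises, conclusions, v):
    # At a 0/1 function: uncertainty of a truth is 0, of a falsehood 1.
    u_prem = sum(0 if p(v) else 1 for p in premises)
    p_conc = sum(1 if c(v) else 0 for c in conclusions)
    return u_prem + p_conc < 1

A = lambda v: v[0]
B = lambda v: v[1]
prems = [A]
concs = [lambda v: A(v) and B(v), lambda v: A(v) and not B(v)]  # A |= A&B, A&~B

for v in product([True, False], repeat=2):
    assert countermodel(prems, concs, v) == inequality_fails(prems, concs, v)
print("inequality fails at a 0/1 function exactly at a countermodel")
```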

In the “only if” direction (writing u for uncertainty and p for probability):

Consider A,B|=C,D. This holds iff A,B,~C,~D|= holds, by a standard premise/conclusion swap result. And we know u(~C)=p(C) and u(~D)=p(D). By the standard result, a single-conclusion argument holds iff the sum of the uncertainties of its premises is at least as great as the uncertainty of its conclusion; since the conclusion here is absurdity, with uncertainty 1, the argument holds iff u(A)+u(B)+u(~C)+u(~D) is greater than or equal to 1. By the above identification, this holds iff u(A)+u(B)+p(C)+p(D) is greater than or equal to 1. This generalizes to arbitrary cases. QED.

Thresholds for belief

I’m greatly enjoying reading David Christensen’s Putting logic in its place at the moment. Some remarks he makes about threshold accounts of the relationship between binary and graded beliefs seemed particularly suggestive. I want to use them here to suggest a certain picture of the relationship between binary and graded belief. No claim to novelty here, of course, but I’d be interested to hear about worries about this specific formulation (Christensen himself argues against the threshold account).

One worry about threshold accounts is that they’ll make constraints on binary beliefs look very weird. Consider, for example, the lottery paradox. I am certain that someone will win, but for each individual ticket, I’m almost certain that it’s a loser. Suppose that having a degree of belief meeting some threshold n sufficed for binary belief. Then, by choosing a big enough lottery, we can make it that I believe the proposition that there will be a winner while also believing, of each ticket, that it loses. So I believe each of a logically inconsistent set of propositions.

This sort of situation is very natural from the graded belief perspective: the beliefs in question meet constraints of probabilistic coherence. But there’s a strong natural thought that binary beliefs should be constrained to be logically consistent. And of course, the threshold account doesn’t deliver this.

What Christensen points to is some observations by Kyburg about limited consistency results that can be derived from the threshold account. Minimally, binary beliefs are required to be weakly consistent: for any threshold above zero, one cannot believe a single contradictory proposition. But there are stronger results too. For example, for any threshold above 0.5, one cannot believe a pair of mutually contradictory propositions. One can see why this is if one remembers the following result: in a logically valid argument, the improbability of the conclusion cannot be greater than the sum of the improbabilities of the premises. For the case where the conclusion is absurd (i.e. the premises are contradictory), the improbability of the conclusion is 1, so the sum of the improbabilities of the premises must be at least 1.

In general, then, what we get is the following: if the threshold for binary belief is above 1-1/n, then one cannot believe each of an inconsistent set of n propositions.
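A quick illustrative check (the lottery size and thresholds are my own example numbers):

```python
# A 100-ticket lottery: the 101 propositions "ticket i loses" (each probability
# 0.99) plus "some ticket wins" (probability 1) are jointly inconsistent.
n = 100
props = [1 - 1/n] * n + [1.0]

for t in [0.95, 1 - 1/(n + 1)]:
    believed = sum(1 for p in props if p >= t)
    print(t, believed)  # at 0.95 all 101 clear the bar; at 1-1/101 only one does
```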

Here’s one thought. Let’s suppose that the threshold for binary belief is context dependent in some way (I mean here to use this broadly, rather than committing to some particular, potentially controversial semantic analysis of belief attributions). The threshold that marks the shift to binary belief can vary depending on aspects of the context. The thought, crudely put, is that there’ll be the following constraint on what thresholds can be set: in a context where n propositions are being entertained, the threshold for binary belief must be above 1-1/n.

There is, of course, lots to clarify about this. But notice that now, relative to every context, we’ll get logical consistency as a constraint on the pattern of binary belief (assuming that to believe that p is in part to entertain that p).

[As Christensen emphasises, this is not the same thing as getting closure holding in every context. Suppose we consider the three propositions A, B, and A&B. Consistency means that we cannot accept the first two and accept the negation of the last. And indeed, with the threshold set at 2/3, we get this result. But closure would tell us that in every situation in which we believe the first two, we should believe the last. And it’s quite consistent to believe A and B (say, by having credence 2/3 in each) and to fail to believe A&B (say, by having credence 1/3 in this proposition; the toy model just after this note makes that concrete). Probabilistic coherence isn’t going to save the extendability of beliefs by deduction, for any reasonable choice of threshold.

Of course, if we allow a strong notion of disbelief or rejection, such that someone disbelieves that p iff their uncertainty of p is past the threshold (the same threshold as for belief), then we’ll be able to read off from the consistency constraint that in a valid argument, if one believes the premises, one should abandon disbelief in the conclusion. This is not closure, but perhaps it might sweeten the pill of giving up on closure.]
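As flagged in the bracketed note, a toy model makes the consistency-without-closure point concrete (three equiprobable worlds; the example numbers are mine):

```python
# A holds at worlds 0 and 1; B holds at worlds 1 and 2.
worlds = {0, 1, 2}
A = {0, 1}
B = {1, 2}

def P(prop):
    return len(prop) / len(worlds)

print(P(A), P(B), P(A & B))  # 2/3, 2/3, 1/3: coherent credences, but closure fails
```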

Without logical consistency being a pro tanto normative constraint on believing, I’m sceptical that we’re really dealing with a notion of binary belief at all. Suppose this is accepted. Then we can use the considerations above to argue (1) that if the threshold account of binary belief is right, then thresholds (if not extreme) must be context dependent, since for no choice of threshold less than 1 will consistency be upheld; and (2) that there’s a natural constraint on thresholds in terms of the number of propositions entertained.

The minimal conclusion, for this threshold theorist, is that the more propositions one entertains, the harder it will be for one’s credences to count as binary beliefs. Consider the lottery paradox construed this way:


1 loses

2 loses

…

N loses

So: everyone loses

Present this as the following puzzle: We can believe all the premises, and disbelieve the conclusion, yet the latter is entailed by the former.

We can answer this version of the lottery paradox using the resources described above. In a context where we’re contemplating this many propositions, the threshold for belief is so high that we won’t count as believing the individual props. But we can explain why it seems so compelling: entertain each individually, and we will believe it (and our credences remain fixed throughout).

Of course, there are other versions of the lottery paradox that we can formulate, e.g. relying on closure, for which we have no answer. Or at least, our answer is just to reject closure as a constraint on rational binary beliefs. But with a contextually variable threshold account, as opposed to a fixed threshold account, we don’t have to retreat any further.

Chances, counterfactuals and similarity

A happy-making feature of today is that Philosophy and Phenomenological Research have just accepted my paper “Chances, Counterfactuals and Similarity”, which has been hanging around for absolutely ages, in part because I got a “revise and resubmit” just as I was finishing my thesis and starting my new job, and in part because I got so much great feedback from a referee that there was lots to think about.

The way I think about it, it is a paper in furtherance of the Lewisian project of reducing counterfactual facts to similarity-facts between worlds, which feeds into a general interest in what kinds of modal structure (cross-world identities, metrics and measures, stronger-than-modal relations etc) you need to appeal to for metaphysical purposes. Lewis has a distinctive project of trying to reduce all this apparent structure to the economical basis of de dicto modality — what’s true at this world or that — and (local) similarity facts. Counterpart theory is one element of this project: showing how cross-world identities might be replaced by similarity relations and de dicto modality. Another element is the reduction of counterfactuals to closeness of worlds, and closeness of worlds is ultimately cashed out in terms of one world’s fitting another’s laws, and there being large areas where the local facts in each world match exactly. Again, we find de dicto modality of worlds and local similarity at the base.

Lewis’s main development of this view looks at a special case, where the actual world is presupposed to have deterministic laws. But to be general (and presumably, to be applicable to the actual world!) we want to have an account that holds for the situation where the laws of nature are objective-chance-laws. Lewis does suggest a way of extending his account to the chancy case. It’s attacked by Hawthorne in a recent paper—ultimately successfully, I think. In any case, Lewis’s ideas in this area always looked (to me) like a bit of a patch-up job, so I suggest a more principled Lewisian treatment, which then avoids the Hawthorne-style objections to the Lewis original.

The basic thought (which I found in Adam Elga’s work on Humean laws of nature) is that “fitting” chancy laws of nature is not just a matter of not violating those laws. Rather, to fit a chancy law is to be objectively typical relative to the probability function those laws determine. Given this understanding, we can give a single Lewisian account of what comparative similarity of worlds amounts to, phrased in terms of fit. The ambition is that when you understand “fit” in the way appropriate to deterministic laws, you get Lewis’s original (unextended) account. And when you understand “fit” in the way I argue is appropriate to chancy laws, you get my revised suggestion. All very satisfying, if you can get it to work!