Regrets? I’ve had a few…

Just a quick note about something that’s puzzling me.

Frank Arntzenius has a really nice paper (“No Regrets”) in which he gives an interesting argument for causal decision theory. The basic thought is this: if you know that you’ll (by rational means) come to desire something later, you should desire that thing now. (Obviously that formulation needs tightening—see the paper for details.) He imposes a “desire reflection principle”: your current level of desire in p should match your expected level of desire in p at a future time t.

He points out the following. If the desirabilities of various propositions are described by evidential decision theory, then desire-reflection is violated. Suppose that in a Newcomb case you desire to 1-box, because desirability goes by EDT value. Suppose you know that before you’re given the money, the distribution of money in the boxes will be revealed. At the point of revelation, you will (by EDT lights) desire that you had 2-boxed earlier—no matter what information you receive. So the current ordering of desirability of 1-boxing vs. 2-boxing is reversed when we look at expected future desirability. Desire-reflection rules such scenarios out. Arntzenius argues that CDT (which recommends 2-boxing from the start) won’t violate desire-reflection.

Why care about desire-reflection? Well, it sounds really compelling, to begin with; and if we’re already fans of van Fraassen’s belief-reflection principle, it’d be very natural to take both attitudes to behave in analogous ways in this regard. To motivate it, Arntzenius writes: “If your future self has more information than you do, surely you should listen to her advice, surely you should trust her assessment of the desirabilities of your possible actions.”

But the problem with this sort of motivation (for me) is that it overgenerates—really, for desirability we could substitute any pro-attitude and we’d find something that sounds equally compelling. If my future self has more information, I should listen to her advice—for example, on what to desire, what to hope for, what to wish for, and so on.

Here’s my puzzle. There are surely some pro-attitudes that violate desire reflection. EDT surely *does* describe how much you’d like to receive this news rather than that (it’s described by some as “news value”, and that seems like a good name). Suppose I faced the Newcomb situation yesterday, and don’t know which way I acted. Caring only about money, the best news I can receive, given my current poor epistemic state, is that I one-boxed—for I expect to find more money in my bank given that information than given the alternative. That—let me assure you—is what I would hope I did (if I cared about being rational, maybe things would be different—but I only care about money in the bank).

But say I’ll be told at breakfast what the distribution of money in fact was in the Newcomb situation I faced—before being told which way I acted. Once I’ve got that extra piece of info, then no matter which way it goes, I’ll be hoping that I two-boxed—for given the extra distribution information (whatever it is), the news value of 2-boxing will be greater than that of 1-boxing.
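
(For concreteness, here’s a toy version of that calculation. It’s just a sketch with made-up numbers: a 99%-reliable predictor, $1,000 in the transparent box, and $1,000,000 in the opaque box if the prediction was 1-boxing. Nothing hangs on the particular figures.)

    # Toy EDT "news value" calculation for the Newcomb case described above.
    # All numbers are illustrative, not from Arntzenius's paper.

    ACCURACY = 0.99  # assumed predictor reliability (made up)

    payoff = {  # money received, keyed by (act, opaque box full?)
        ("one-box", True): 1_000_000,
        ("one-box", False): 0,
        ("two-box", True): 1_001_000,
        ("two-box", False): 1_000,
    }

    def news_value(act, p_full):
        """EDT news value of the proposition that I performed `act`, where
        p_full is my credence that the opaque box is full given that act."""
        return p_full * payoff[(act, True)] + (1 - p_full) * payoff[(act, False)]

    # Before the contents are revealed, my credence in a full box tracks the act:
    print(news_value("one-box", ACCURACY))       # 990,000: the news I hope for
    print(news_value("two-box", 1 - ACCURACY))   # 11,000

    # After the revelation the credence is 1 or 0 whatever I did, and
    # 2-boxing carries the better news in both cases:
    for full in (1.0, 0.0):
        print(news_value("one-box", full), news_value("two-box", full))

Before the revelation, the news value of 1-boxing (990,000) beats that of 2-boxing (11,000); after it, the news value of 2-boxing is higher whichever way the revelation goes.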

So this is basically just to repeat Arntzenius’s setup, and then to ask you to agree that for some pro-attitude—hope, in this case—we violate reflection. We might not like this, but I think it’s pretty pointless to deny it. (After all, it’s not like EDT-values are ill-defined in some way—it’s not like there’s any reason to think it’s *impossible* to adopt propositional attitudes that behave in the way EDT describes—and, as a matter of fact, I think hoping does work this way.)

We needn’t deny there’s some pro-attitude—a different one—that CDT describes. Call that CDT-desire. (I believe David Etlin has a paper arguing that we genuinely have two attitudes hereabouts—I’m looking forward to reading it.) Hoping violates reflection. CDT-desiring satisfies it. Pro-attitudes—and the very notion of desirability—seem disanalogous to belief in this regard. For I take it that if we’re fans of reflection, we really don’t think there’s some kind of representational state, belief*, that is reflection-violating.

So we need some *discriminating* motivation—something that tells us that desirability *in the sense relevant to rationalizing action* should satisfy desire-reflection. If we had something like that, then we could rule out hope, and rule in CDT-desire, as the relevant notion. But I don’t see that we’ve got the tools as yet.

Despite these concerns, there seems to me something deeply illuminating in thinking about the EDT/CDT contrast in terms of desire reflection. The problem is, I can’t yet see its distinctive relevance to action. Is there some kind of diachronic coherence constraint on planning for action, specifically, that “wishful thinking” needn’t involve? Why would it matter?

13 responses to “Regrets? I’ve had a few…”

  1. Maybe we should focus explicitly on action. Here’s a compelling-sounding slogan: “Don’t act in ways you know you’re going to come to regret”. But to get mileage out of this, we need to ask what “regret” means here. Plausibly, you regret p iff you “desire that you did the opposite”. But the critical question is then: in which sense of desire should we understand regret, and thus the slogan? The hoping sort or the CDT-desiring sort?

    The problem is that if we read “regret” in the “hoping” way, then it’s easy to see that we’ll regret 2-boxing—and either way we can get to situations where we’ll regret 1-boxing. So I’m afraid that explicit formulations in terms of regret and action won’t cut it (without begging the question).

    If desire-reflection held for every pro-attitude worth its salt, then it’d hold for regret, and regret would have to be understood in terms of CDT-desire. But then we’re back to the issues raised in the post above.

  2. Suppose there’s a fact about what the best thing to do is (not relative to my information, but objectively, perhaps given my basic preferences). Then, if I’m fully informed and perfectly rational, I know what this thing is. I now want to do the best thing. So if I know what I would want to do if fully informed, then I know that that thing is what it is best to do, so it is practically rational to do that thing. Thus practical rationality can’t violate reflection in the extreme case of perfect information.

    Hoping, however, can violate reflection even in that extreme case. If I were perfectly informed, I would hope that I had 2-boxed, but I now rationally hope that I will 1-box. So the fact that hoping violates reflection shows that hoping doesn’t norm practical rationality.

    Note the crucial point, which I admit is cheap but still think is ok: I could just be informed of what the best thing to do is, and practical rationality can’t violate reflection with respect to that piece of information. But hoping can! Even if I know that I will come to know that 2-boxing is the thing to do (and I thus now infer that 2-boxing is the thing to do), I can carry on hoping that I 1-box. Again, practical rationality can’t work this way. If I knew (per impossibile) that I would later learn that 1-boxing was rational, that would entail trivially that 1-boxing was rational. I think it’s that simple.

    By the way, there are well-known counterexamples to perfect reflection for desire and practical rationality. The most famous is that I might believe that, if I knew more about the workings of my digestive system, I would be less keen on eating my supper; but it isn’t rational for me to become less keen on eating my supper now just to reflect the better-informed attitude. Nor is it rational for me not to eat my supper just because I wouldn’t want to if I were better informed in this way. The explanation is that reflection only holds where the change in desire resulting from better information is one that is rationalised (not just caused) by that better information. Also, it’s inaccurate to say that reflection is diachronic; it’s to do with better information, not future information (think of cases where I anticipate forgetting something).

  3. Hi Daniel,

    That’s an interesting line of thought—I’m going to have to think about it. Am I getting the idea if I say it this way? (1) Suppose there’s a best thing to do (say, calculated by the real current objective chances, given my basic preferences). Then, if I come to know that that’s the best thing to do (as an addition to my otherwise impoverished information), *in that state* it must be the choiceworthy option. (2) But hoping can’t capture this fact. I can still rationally hope to 1-box, if that brings the best news; and in another information state, I can still rationally hope to 2-box. (3) These hopes are compatible with receiving the information that the *other* option is overall the “objectively” best option—for the objectively suboptimal option can still bring me better news (since e.g. it can bring me information about the standing state of the world that’s irrelevant to calculating the “objectively best” option, given the way the world is).

    Is that right, or am I misunderstanding?

    If it is right, I think it’s very interesting. Of course, CDT fans always wanted to appeal to calculations relative to standing states. But the way this goes now, we’re not directly presupposing that as the answer to what the information-relative “best thing to do” is; rather, we appeal to the idea that fully-informed choiceworthiness is a kind of “expert function” for less-than-fully-informed choiceworthiness.

    I’d like to get clear on exactly how this idea about “expert bestness” relates to desire reflection—they seem intimately connected, but I’m not sure I could say what the relationship is.

    I wonder if the EDT fan is going to call this question-begging. After all, a chance formulation of CDT says exactly that the choiceworthy action (in a low-information state) is calculated by the expectation of objective-chance-calculated desirability of actions. So the conception of “fully informed best action” might be seen as tendentious. What would be nice would be to use violations of desire-reflection to argue (for example) that there’s no conception of “information-neutral choiceworthiness, given basic preferences” that plays a kind of expert-function role for EDT. And actually, you didn’t mention the chance calculation in your original (that was me!), so maybe it’ll go through.

    One worry is whether telling you that a certain action is objectively best communicates extra factual information. Suppose I’m convinced that EDT is a guide to action. And suppose someone tells me that the objectively best thing to do in my situation is to 2-box. Now, if objective bestness were just about which option gives me more money, given the current state of the world, that’s no news to the EDT-ist. Thus she still hopes to 1-box, despite having the additional information—and we’re off and running. But dropping that assumption, who knows what information is coded into the claim that “the objectively best thing to do is to 2-box”? It might be rational for the EDT-ist to react by dramatically increasing her credence that this is a situation in which the predictor gets things wrong—and that *kind* of information might then affect what she hopes for. So I worry that we need some premise to the effect that information about objectively best outcomes doesn’t change the credences used to calculate EDT-hope here. (It’s a fairly abstract concern, I admit.)

  4. Ok, so I was pretty drunk on mulled wine when I made my earlier comment and it now all seems much murkier than it did then.

    I agree that we need to say more about how the idea of expertise relates to reflection. I think we’ve got something along DAB lines going on: it’s a constraint on the relevant pro-attitude that it be redescribable as (or intimately linked to) a judgement about desirability (or the analogue of desirability for a pro-attitude other than desire). Now a reason for thinking that better information improves the judgement will be a reason for thinking that it also improves the pro-attitude. Is there such a reason? It depends on whether what the judgement is about is or is not information-relative. That is, is “best action” always relative to an information state? Note: both EDTers and CDTers can agree that one kind of norm is information-relative. But CDTers would say that this phenomenon can’t go all the way down. There must be a practical point to improving our information state; if the only norms of practical rationality were information-relative, then decisions made with bad information would be just as good *in every respect* as those made with better information. The deep intuition is that information-relative norms are designed for coping with non-ideal cases, and that their verdicts are thus inferior to the verdicts of the norms suited to the ideal cases. Reflection is a consequence of the fact that better information serves a practical point.

  5. Hi Robbie,

    I agree about the two kinds of desire. I like to think of them as indicative desire (‘it would be great if Shakespeare didn’t write Hamlet’) and subjunctive desire (‘it would be great if Shakespeare hadn’t written Hamlet’).

    That indicative desire can violate Reflection doesn’t seem too puzzling. The general pattern is that you desire A more than B because A is a stronger indicator of some other good, G. If this is the only reason why you prefer A, then A is better than B, but B&G is better than A&G, and B&~G is better than A&~G. (The structure resembles Simpson’s paradox.) Once you know whether or not G obtains, you will therefore prefer B to A, because the indication advantage of A gets canceled. So there’s nothing wrong with indicatively desiring something that you know your better-informed future self will no longer desire.
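
    Toy numbers, just to make the pattern vivid (my own illustration; the particular values don’t matter):

        # A is preferred to B only because A indicates the good G more strongly.
        V = {("A", "G"): 10, ("B", "G"): 11,    # B&G better than A&G
             ("A", "~G"): 0, ("B", "~G"): 1}    # B&~G better than A&~G
        P_G_given = {"A": 0.9, "B": 0.1}        # A strongly indicates G

        def indicative_desirability(x, p_g):
            """News value of x, given credence p_g that G obtains conditional on x."""
            return p_g * V[(x, "G")] + (1 - p_g) * V[(x, "~G")]

        # Before learning about G: A beats B (9.0 vs 2.0).
        print(indicative_desirability("A", P_G_given["A"]),
              indicative_desirability("B", P_G_given["B"]))

        # Once G (or ~G) is known, the indication advantage is canceled and
        # B beats A in both cases.
        for p_g in (1.0, 0.0):
            print(indicative_desirability("A", p_g), indicative_desirability("B", p_g))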

    BTW, the argument only works with Goldstein’s version of Reflection, not with van Fraassen’s: if you learn not only that you will prefer B to A, but also how strongly you will come to desire B, then you can infer whether or not G obtains, so the indication advantage gets canceled already in the present. That is, indicative desire does not violate a van Fraassen-style Reflection principle. (I blogged about this a while ago here: http://www.umsu.de/wo/2008/517.)

    The other kind of desire, subjunctive desire, seems to satisfy both forms of Reflection. (Though it would be good to actually have a proof of this; one would presumably need to derive Reflection from the imaging semantics.)

    I think ‘regret’ can only be understood as a form of subjunctive desire. Roughly, you regret A if you think things would have been better if not-A. The problem for an indicative reading is that regretting A seems to entail being certain that A, and then it is unclear how you could still judge that things are actually better if not-A than if A. And I do think the no-regret principle has more intuitive pull when put subjunctively than indicatively. But maybe that’s because I already believe that choiceworthiness goes with subjunctive desirability.

  6. “Though it would be good to actually have a proof of this; one would presumably need to derive Reflection from the imaging semantics.” Turns out to be quite simple:

    On the imaging account, the (subjunctive) desirability of A at time i is

    1) D_i(A) = \sum_w V(w^A) P_i(w),

    where V(w^A) is the value of the A-world that’s closest to world w. So the expectation of the time-2 desirability of A is

    2) Exp(D_2(A)) = \sum_w V(w^A) Exp(P_2(w)).

    By Belief Reflection (applied to each world w),

    3) P_1(w) = Exp(P_2(w)).

    So by (2) and (3),

    4) Exp(D_2(A)) = \sum_w V(w^A) P_1(w) = D_1(A).

    Of course we have to assume not only the imaging account, but also Belief Reflection as well as constancy of basic values.
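
    And here is a quick numerical sanity check of the derivation (a made-up four-world model in which the time-2 credences come from conditionalizing the time-1 credences on a partition, which is one way of securing Belief Reflection):

        # Four worlds, made-up values and credences; closest_A[w] is w^A, the
        # closest A-world to w (here A is true at w1 and w3).
        worlds = ["w1", "w2", "w3", "w4"]
        V = {"w1": 10, "w2": 0, "w3": 7, "w4": 3}             # basic values, held fixed
        closest_A = {"w1": "w1", "w2": "w1", "w3": "w3", "w4": "w3"}
        P1 = {"w1": 0.4, "w2": 0.1, "w3": 0.3, "w4": 0.2}     # time-1 credences

        def D(P):
            """Imaging desirability of A: sum over w of V(w^A) * P(w)."""
            return sum(V[closest_A[w]] * P[w] for w in worlds)

        # Between t1 and t2 the agent learns which cell of this partition obtains.
        partition = [["w1", "w2"], ["w3", "w4"]]

        def conditionalize(P, E):
            pE = sum(P[w] for w in E)
            return {w: (P[w] / pE if w in E else 0.0) for w in worlds}

        expected_D2 = sum(sum(P1[w] for w in E) * D(conditionalize(P1, E))
                          for E in partition)
        print(D(P1), expected_D2)  # both 8.5: desire Reflection holds in this toy model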

  7. Hi Wo,

    Very interesting!

    I wonder how essential the imaging account is to your style of argument? Suppose we use Joyce’s general form of CDT:

    (1*) D_i(A) = \sum_w V(w) P_i(w\A),

    where P_i(X\Y) is the “causal conditional probability” at time i, however that is defined (via imaging, chance, etc.). The idea would then be to appeal to a strengthened belief-reflection principle for causal conditional probabilities:

    (3*) P_1(A\B) = Exp(P_2(A\B)).

    The expectation of the time-2 desirability of A is:

    (2*) Exp(D_2(A)) = \sum_w V(w) Exp(P_2(w\A)).

    By (2*) and (3*), we have:

    (4*) Exp(D_2(A)) = \sum_w V(w) P_1(w\A) = D_1(A).

    So the question will be whether we have belief reflection for the quantity P(X\Y). In the case where this is understood via simple imaging, we have P(X\Y) = P(Y > X), where “>” is the Stalnaker conditional given by the world-ordering that defines simple imaging. Belief reflection on that conditional gives our result.

    In other cases, it’s not so obvious—but maybe it’s independently plausible. And we could look at e.g. K-partition and Chance formulations to see whether or not they satisfy the conditional belief reflection principle.

    One reason I’m concerned with this is that I’m a bit suspicious of the imaging formulation—for Lewis-impossibility style reasons I think it’s going to diverge from expected conditional chance, to its detriment…

  8. Hmmm, back of an envelope here, but I think simple belief reflection should allow one to derive the stronger belief reflection principle for expected-chance formulations (at least when the relevant chances are kept fixed—one of the issues here is what happens when it’s later chances that are in play, which’ll depend I guess on constraints between earlier and later chances themselves).
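
    Something like this, I mean (just a back-of-the-envelope sketch, assuming the relevant chance hypotheses Ch = c are held fixed between t1 and t2; the notation is mine):

    (a) P_i(X\A) = \sum_c P_i(Ch = c) c(X|A)   [chance formulation of the causal conditional probability]

    (b) Exp(P_2(X\A)) = \sum_c Exp(P_2(Ch = c)) c(X|A)   [by linearity of expectation]

    (c) Exp(P_2(Ch = c)) = P_1(Ch = c)   [simple Belief Reflection, applied to each chance proposition]

    So Exp(P_2(X\A)) = \sum_c P_1(Ch = c) c(X|A) = P_1(X\A), which is the conditional reflection principle (3*).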

    Does that sound right?

  9. Hey Robbie,

    I see, yes, that sounds right. This would also carry over to the K-partition account, I guess.

    I don’t really have an opinion about imaging versus K-partitioning, but I feel reluctant about the chance account, mainly because I’m afraid there aren’t enough chances to do the work. Interesting that you prefer this route. What are the triviality-style arguments you mentioned to the effect that the chance account comes apart from the imaging account? I don’t know much of that literature.

    BTW, the “Notify me by email” checkbox here seems broken.

  10. Ah, I didn’t see the confirmation link in the email, so I guess the notifications work. Anyway, I found the comments feed in the meantime.

  11. Hi Wo,

    The concern about “not enough chances” sounds interesting. Is there a specific worry that you’re thinking of? I’ve had a collection of vague worries around this area but haven’t really had a structure to put them in. I was wondering a bit about how a chance-based account applies to deterministic worlds. Connectedly, there’s the issue of whether we should all along be going by non-fundamental chances (e.g., for the Newtonian case, statistical-mechanical chances à la Albert/Loewer). But I can imagine worries about situations where fine-grained micro-information comes into play—

    Another thought is that you might get severe underdetermination of the chances if you get them via the Lewisian BSA account, for example. It’d be interesting to think about whether that would be a problem.

    In the light of these sorts of vague worries, I’m a little reluctant to fully commit to the chance approach; but when we’ve got determinate chances around, no inadmissible information, etc., matching causal conditional probabilities to conditional chances seems to deliver exactly the right results, and any deviation from it seems bad. Unless the impossibility results can be finessed, they seem to guarantee that imaging in particular diverges from the conditional chances.

    Here’s one analogy: you could formulate EDT using probabilities of Stalnaker conditionals (imaging probabilities) rather than conditional probabilities. But received opinion is that these will diverge. It seems to me that (for the fan of EDT) the divergences will reflect badly on the imaging formulation, since conditional probabilities *do* seem to be the thing we need to capture evidential preferences.

    A (slightly) less abstract concern. If “imaging” is tied to the English counterfactual, then how imaging behaves is going to be hostage to views about how the counterfactual behaves. In an earlier paper of mine on counterfactuals and chance, I give what you might think of as a fixed-up version of the Lewis quasi-miracle treatment. But one can read off some fairly severe divergences between imaging and conditional chance on this view. And when you implement *my* notion of imaging in a CDT framework, you get what seem to me like obviously silly predictions about what counts as a rational action. Now, that may be seen as a bad result for my view, but I think it does place the burden on people favouring the imaging formulation to say something about why they think the results they’ll get will work out sensibly.

    I’ve got nothing really against the K-partition approach, though of course the devil is in what account we give of the privileged partition. I’d like it to match the chance account in cases where the latter is applicable—and I don’t see why it can’t do that in the abstract, but I wouldn’t want to venture an opinion on it without hearing what the members of K were…

    On the impossibility results: I’ve sent you a short note. The general idea is to consider special cases of agents who are fully opinionated about the chances, or about which element of K is actual, etc. For those special agents, in order for simple imaging and the K-expectation/chance formulations to deliver the same results, it looks to me as if we’re going to have to have the probability of the conditional equal to the conditional probability—and so the original impossibility results start to arise. That’s obviously a bit rough, but you see the idea…

  12. The later sections of the Arntzenius paper argue that CDT vindicates desire reflection—from a brief glance, something like the arguments above may be involved.

  13. Interesting. Have to think about this.

    Re my chance worries: I’m somewhat attracted to a BSA-type theory of chance that looks roughly like what Carl Hoefer is working on. This would treat chances in QM on a par with chances in statistical mechanics or genetics, and the vast majority of propositions (or rather pairs of propositions, since the primitive is conditional chance) would have undefined chance.
