Category Archives: Decision theory

Regrets? I’ve had a few…

Just a quick note about something that’s puzzling me.

Frank Arntzenius has a really nice paper (“No Regrets”) in which he gives an interesting argument for causal decision theory. The basic thought is this: if you know that you’ll (by rational means) come to desire something later, you should desire that thing now. (Obviously that formulation needs tightening—see the paper for details.) He imposes a “desire reflection principle”: your current degree of desire for p should match your expected degree of desire for p at a future time t.

He points out the following. If the desirabilities of various propositions are described by evidential decision theory, then desire-reflection is violated. Suppose that in a Newcomb case you desire to one-box, because desirability goes by EDT value. Suppose you know that before you’re given the money, the distribution of money in the boxes will be revealed. At the point of revelation, you will (by EDT lights) desire that you had two-boxed earlier—no matter what information you receive. So the current ordering of desirability of one-boxing vs. two-boxing is reversed when we look at expected future desirability. Desire-reflection rules such scenarios out. Arntzenius argues that CDT (which recommends two-boxing from the start) won’t violate desire-reflection.

Why care about desire-reflection? Well, it sounds really compelling, to begin with; and if we’re already fans of van Fraassen’s belief-reflection principle, it’d be very natural to take both attitudes to behave in analogous ways in this regard. To motivate it, Arntzenius writes: “If your future self has more information than you do, surely you should listen to her advice, surely you should trust her assessment of the desirabilities of your possible actions.”

But the problem with this sort of motivation (for me) is that it overgenerates—for “desirability” we could substitute any pro-attitude, and we’d get something that sounds equally compelling. If my future self has more information, I should listen to her advice—for example, on what to desire, what to hope for, what to wish for, and so on.

Here’s my puzzle. There are surely some pro-attitudes that violate desire-reflection. EDT surely *does* describe how much you’d like to receive this news rather than that (some describe it as “news value”, and that seems like a good name). Suppose I faced the Newcomb situation yesterday, and don’t know which way I acted. Caring only about money, the best news I can receive, given my current poor epistemic state, is that I one-boxed—for I expect to find more money in my bank given that information than given the alternative. That—let me assure you—is what I would hope I did (if I cared about being rational, maybe things’d be different—but I only care about money in the bank).

But say I’ll be told at breakfast what the distribution of money in fact was in the Newcomb situation I faced—before being told which way I acted. Once I’ve got that extra piece of info, then no matter which way it goes, I’ll be hoping that I two-boxed—for given the distribution information (whatever it is), the news value of two-boxing is greater than that of one-boxing.
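The reversal can be made concrete with a toy calculation. Here’s a minimal sketch, using standard stipulated Newcomb numbers (my own illustration, not drawn from the post or the paper): a predictor of accuracy 0.99, $1,000,000 in the opaque box iff one-boxing was predicted, and $1,000 always in the transparent box.

```python
# Toy illustration of the desire-reflection failure for EDT "news value".
# Assumed numbers (a standard stipulation, not from the post): predictor
# accuracy 0.99; $1,000,000 in the opaque box iff one-boxing was predicted;
# $1,000 always in the transparent box.

ACC = 0.99
M, K = 1_000_000, 1_000

def edt_value(act, p_full):
    """Expected money conditional on `act`, with credence p_full that the
    opaque box is full (EDT news value, caring only about money)."""
    return p_full * M + (K if act == "two-box" else 0)

# Before the contents are revealed, my credence that the box is full tracks
# the act (the predictor is accurate), so one-boxing is the better news.
before = {act: edt_value(act, ACC if act == "one-box" else 1 - ACC)
          for act in ("one-box", "two-box")}
assert before["one-box"] > before["two-box"]

# After the contents are revealed, p_full is settled at 1 or 0 either way,
# and two-boxing is the better news in both cases: the ordering reverses.
for p_full in (1.0, 0.0):
    assert edt_value("two-box", p_full) > edt_value("one-box", p_full)
```

Both assertions pass: before the revelation one-boxing carries the better news, and conditional on either revealed distribution two-boxing does—exactly the pattern desire-reflection forbids.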

So this is basically just to repeat Arntzenius’s setup, and then to ask you to agree that for some pro-attitude—hope, in this case—reflection is violated. We might not like this, but I think it’s pretty pointless to deny it. (After all, it’s not as if EDT-values are ill-defined in some way—there’s no reason to think it’s *impossible* to adopt propositional attitudes that behave in the way EDT describes—and, as a matter of fact, I think hoping does work this way.)

We needn’t deny there’s some pro-attitude—a different one—that CDT describes. Call that CDT-desire. (I believe David Etlin has a paper arguing that we genuinely have two attitudes hereabouts—I’m looking forward to reading it.) Hoping violates reflection; CDT-desiring satisfies it. Pro-attitudes—and the very notion of desirability—seem disanalogous to belief in this regard. For I take it that fans of reflection don’t think there’s some kind of reflection-violating representational state, belief*.

So we need some *discriminating* motivation—something that tells us that desirability *in the sense relevant to rationalizing action* should satisfy desire-reflection. If we had something like that, then we could rule out hope, in favour of CDT-desire, as the relevant notion. But I don’t see that we’ve got the tools as yet.

Despite these concerns, there seems to me something deeply illuminating in thinking about the EDT/CDT contrast in terms of desire-reflection. The problem is, I can’t yet see its distinctive relevance to action. Is there some kind of diachronic coherence constraint on planning for action, specifically, that “wishful thinking” needn’t involve? Why would it matter?