In my paper on accuracy and non-classical logic/semantics, I adapt Jim Joyce’s accuracy-domination theorem to a non-classical setting. His result shows that (under certain assumptions) an improbabilistic belief state is “accuracy dominated” by a probabilistic one (i.e. the latter is closer to the truth, no matter which world is actual). I generalize this to a case where the “worlds” and “truth values” are non-classical, and prove accuracy domination for a notion of “generalized probability”.
Jeff Paris then gave a talk at Leeds, and chatting to him afterwards, it emerged that he’d been interested in something very similar: his 2001 paper “A Note on the Dutch Book Method” shows that belief states that aren’t generalized probabilities are susceptible to a dutch book—again, the results cover non-classical as well as classical settings, and the characterization of generalized probability he uses pretty much coincides with mine. (The paper is great, by the way—well worth checking out.)
This looked like more than coincidence: what lies behind it? I’ve written up a quick note on the relationship between dutch books and accuracy. It turns out that Paris’s core result on the way to proving the dutch book theorem (an application of the separating hyperplane theorem) has both his dutch book theorem and a version of accuracy-domination as easy corollaries. (The version of accuracy domination is one that measures accuracy by the Brier Score—the squared Euclidean distance from the truth-value vector.)
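To fix ideas, here’s a minimal toy illustration of Brier-score domination (classical two-world case; the numbers are my own, not from Paris’s paper or my note):

```python
# Toy illustration: a non-probabilistic credence function is Brier-dominated
# by a probabilistic one. Two propositions (A and not-A), two classical worlds.

worlds = [(1.0, 0.0),   # the world where A is true
          (0.0, 1.0)]   # the world where A is false

def brier(credence, world):
    """Squared Euclidean distance from the credence vector to the truth-value vector."""
    return sum((c - v) ** 2 for c, v in zip(credence, world))

b = (0.6, 0.6)   # improbabilistic: credences in A and not-A sum to 1.2
c = (0.5, 0.5)   # probabilistic

for w in worlds:
    print(f"world {w}: Brier(b) = {brier(b, w):.3f}, Brier(c) = {brier(c, w):.3f}")
# In both worlds, Brier(c) = 0.500 < Brier(b) = 0.520: c accuracy-dominates b.
```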
But that isn’t quite the end of matters—the proof just shows that, in one specific way, we can construct a specific dutch-book/accuracy-dominating belief state. In effect, if we’re at an improbabilistic belief state, it shows how to construct a probabilistic one with a property that turns out to be sufficient for both dutch-booking and accuracy-domination. But the property isn’t necessary for either.
It’s not hard, though, to figure out the more general connection: every accuracy-dominating point corresponds to a dutch book. And although not every dutch book corresponds to an accuracy-dominating point, there’s always some accuracy-dominating point reachable by rescaling the dutch book.
So I feel I now see why the formal connection between the results holds (and remember that these results hold in a very general setting—way beyond the standard classical case). But there remain questions: in particular, what happens when we measure accuracy by something other than the Brier score? Is there some kind of liberalization of the assumptions of the dutch book argument that corresponds to a loosening of the assumptions about how accuracy is measured (and is it philosophically illuminating)?
Thoughts, criticisms, suggestions, most welcome.
Hey Robbie — this is very neat stuff! [I was hoping to catch you in Konstanz — we’ll have to get you to FEW next year for sure.] I worry that there cannot be such a straightforward connection between Dutch Books and Accuracy Dominance. Kenny Easwaran and I are writing a paper which argues that there is an important disanalogy between the two. Basically, it can be explained via the following fanciful story (this is informal and a little sloppy, but it should give the gist).
Suppose John is trying to figure out which credence function he should adopt (or, at least, which ones he should refrain from adopting). He accepts various sorts of principles which constrain the set of acceptable credence functions. For instance, he wants to avoid being susceptible to a Dutch Book, and he also thinks that acceptable credence functions should obey the Principal Principle (given his knowledge of chances, etc.). He might use these constraints to narrow down the set of acceptable credence functions.

John could do this in two ways. He could rule out the Dutch-Bookable credence functions first, and then rule out the functions that violate the Principal Principle (given his current knowledge about objective chance). Or he could apply the two filters in the opposite order. Either way, the same set of credence functions will be ruled out. Specifically, the resulting set will contain no non-probabilistic functions. The Dutch-Book “filter” always eliminates all non-probabilistic functions from a set — even if that set has already been “filtered” by another constraint (e.g., the Principal Principle).

Interestingly, not all contemporary arguments for probabilism trade on such a feature. Specifically, “accuracy-based” arguments for probabilism trade on a property called “inadmissibility”, which behaves very differently from the property of being Dutch-Bookable. The credence functions that are ruled out by considerations of “inadmissibility” will depend on which credence functions have already been ruled out (e.g., which coherent alternative credence functions are available to the incoherent agent). In this sense, “inadmissibility” considerations do not have as robust a connection to probabilism as (e.g.) “Dutch-Bookability” considerations do.
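A rough sketch of the contrast in code (the toy space, the helper functions, and the particular credence values here are all just illustrative assumptions, chosen to make the point vivid):

```python
# Dutch-Bookability is an intrinsic property of a credence function;
# inadmissibility is relative to which alternatives remain available.
# Toy setting: credences over {A, not-A}, two classical worlds.

worlds = [(1.0, 0.0), (0.0, 1.0)]

def brier(p, w):
    return sum((pi - wi) ** 2 for pi, wi in zip(p, w))

def dutch_bookable(p):
    # Over {A, not-A}, Dutch-Bookability reduces to the credences not summing
    # to 1. Note: no reference to any other credence function is needed.
    return abs(p[0] + p[1] - 1.0) > 1e-9

def inadmissible(p, available):
    # p is inadmissible iff some *available* alternative is at least as
    # accurate in every world and strictly more accurate in some world.
    return any(all(brier(q, w) <= brier(p, w) for w in worlds) and
               any(brier(q, w) < brier(p, w) for w in worlds)
               for q in available if q != p)

b = (0.6, 0.6)                        # non-probabilistic
full_space = [b, (0.5, 0.5), (0.7, 0.3)]
filtered   = [b, (0.7, 0.3)]          # suppose another filter removed (0.5, 0.5)

print(dutch_bookable(b))              # True, regardless of what else is ruled out
print(inadmissible(b, full_space))    # True: (0.5, 0.5) Brier-dominates b
print(inadmissible(b, filtered))      # False: no available function dominates b
```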
This may be compatible with the strength of analogy you had in mind, but I thought it was worth getting out there (since our paper is not yet ready for prime time). Note, also, that this disanalogy does not depend on the choice of scoring rule, etc. It is much more general than that. And I suspect it also holds in non-classical settings (but I’m less sure about how that should be set up, etc.).
I’d be curious to hear what you think about this disanalogy/phenomenon.
Hey Branden, great to hear from you (I was really disappointed I couldn’t make it to Konstanz—since I’m going to be away in NY for a couple of months this fall, I cut down on summer travel. It looked great!).
The formal connection between dutch-booking and accuracy-domination I was looking at is this: for each belief state c that accuracy-dominates b, the vector b-c turns out to be a dutch book for b. (And, as a partial converse, if we’ve only got finitely many worlds in play, for each dutch book d, considered as a vector, b-kd accuracy-dominates b for some small positive scalar k.) That formal result assumed a kind of “plenitude” of both credences and books of bets: each vector can be construed either as a potential belief state, or as a book of bets.
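For concreteness, here’s the algebra behind both directions as I reconstruct it, on the convention that a stakes vector s, accepted at prices b, nets the agent s·(v-b) in the world with truth-value vector v:

```latex
% Direction 1: if c Brier-dominates b, then s = b - c is a dutch book for b.
% Domination says \|v-c\|^2 < \|v-b\|^2 at every world v; expanding both sides
% gives 2\, v \cdot (b-c) < \|b\|^2 - \|c\|^2, and hence
\[
  (b-c)\cdot(v-b) \;<\; \tfrac{1}{2}\bigl(\|b\|^2 - \|c\|^2\bigr) - (b-c)\cdot b
  \;=\; -\tfrac{1}{2}\|b-c\|^2 \;<\; 0 ,
\]
% i.e. the agent suffers a sure loss of at least \|b-c\|^2/2 in every world.
%
% Direction 2: with finitely many worlds, a dutch book d satisfies
% d \cdot (v-b) \le -\varepsilon < 0 for some \varepsilon > 0 and every v.
% Then for c = b - kd,
\[
  \|v-c\|^2 - \|v-b\|^2 \;=\; 2k\, d\cdot(v-b) + k^2\|d\|^2
  \;\le\; -2k\varepsilon + k^2\|d\|^2 \;<\; 0
  \quad\text{whenever } 0 < k < 2\varepsilon/\|d\|^2 ,
\]
% so c accuracy-dominates b. (Finiteness supplies the uniform \varepsilon.)
```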
Now suppose we wiped out some of credal space somehow, so that the point c (among others) is simply no longer part of the space. What you and Kenny are pointing out, I take it, is that in that circumstance there’s no guarantee that b remains accuracy-dominated. But refining credal space doesn’t change what dutch books are around, so dutch-bookability is unaffected. In the gloss above, the thing to object to is the claim that the “point” defined by something like b-kd is an “available” belief state—it might be one of those potential belief states that have been filtered away.
Here’s a quick thought… maybe there’s a dual to this in the opposite direction. The recipe for constructing a dutch book from an accuracy-dominating point is to look at a certain book, described by the vector b-c. Is there any room for a move analogous to the one you make against accuracy-domination?
Well, suppose we’re not figuring out just what to believe, but what to do. And maybe there’s a set of principles our agent uses to filter out some courses of action as opposed to others. Perhaps he discards those that are morally repugnant, and chooses those that maximize value among the rest (that’s a kind of Nozickian picture). On that view, action-space is “refined” before we start doing decision-theoretic calculations to figure out e.g. which bets to take. And there’s at least a formal possibility now that certain courses of action that initially seem well-defined (e.g. accepting the book of bets d) are no longer, in the relevant sense, in action-space. So potentially we could have an improbabilistic belief state b which can’t be dutch-booked, simply because taking that action is already beyond the pale.
I’m not very sure whether this dualization really makes sense (for a start, it’s not so much action as preference that seems relevant to dutch-book arguments in their best form, and I’m unsure how the above would adapt). But I find it an intriguing idea to think through! And the general point seems a decent one: to fairly compare accuracy-domination and dutch-bookability, we might want to look at filters on rational mental states as a whole, including both preferences and credences.
Anyway, I hope that makes sense! I gotta dash now, but I hope to think more about this soon.
Very interesting, Robbie — Thanks! Food for thought, indeed… Hey — we should get together when you’re in NY. Please look me up!
Two very brief platitudes about the connection between Dutch bookability and accuracy:
1) Having accurate beliefs will in general increase how successful you are when you bet (modulo luck). This isn’t directly connected to Dutch books, but still, both involve betting. (I did say these were platitudes…)
2) Learning that I am Dutch bookable isn’t the kind of thing that should lead me to change my beliefs, but being accuracy dominated might be. If I have nonprobabilistic beliefs, and you tell me “hey, you can be Dutch booked!”, I will certainly change how I bet. But why should this information change the strength of any of my beliefs? It isn’t the sort of evidence that should serve as the basis for updating my credences: it seems irrelevant to the content of each and every one of my beliefs. However, being told I am accuracy dominated does seem to point more directly at a defect in my beliefs themselves: it might not tell me where to go, but it tells me I should move my credence function. (I’m not sure whether this is a new point or just a restating of Joyce’s argument against Dutch books…)
Also, Richard Pettigrew has a paper in preparation on Joyce’s argument for probabilism and the Principal Principle, available on his website here:
Click to access acpp1stdraft.pdf
As Branden says, this is a really nice observation, Robbie. In case you are interested in the Principal Principle paper, I’ve been updating it recently to make it palatable to Humeans, reductionists, and generally those who will be bothered by undermining futures — these are absent from the current paper. The new version of the paper is a few weeks off, but here are some slides that might give the basic idea better than the paper anyway:
Click to access WiPOct2010.pdf
I’d really like to know more about how your observation interacts with Branden and Kenny’s lovely point about accuracy domination arguments when some of the space of possible credence functions is removed.
Hi Seamus, thanks for the link to Richard’s paper! I’m not sure I buy what you say in (2), though I guess this will depend on how we deploy the dutch book theorem in an argument for probabilism. I’ve been convinced by recent (rather brief) reading on the topic that it’s best presented as an argument that your overall mental state is irrational—that you’re committed to inconsistent desires. And the only way to avoid this irrationality is to shift your beliefs. I would have thought that the most natural disanalogy around here would come if you thought that accuracy domination gives you at least a pointer as to what belief state to shift to (to one of the dominating but not-itself-dominated ones), whereas dutch books don’t hold out the promise of anything similar. But it didn’t sound like you wanted to appeal to that.
Is the thought that being in a belief state that guarantees that your overall mental state is irrational, isn’t the right kind of reason to change your beliefs?
Kenny and I have a paper coming out (in Dialectica) on the aforementioned oddity of “accuracy-dominance” arguments for coherence norms (in a more general setting). Here’s a link to the manuscript:
Click to access dialectica.pdf