I’m continuing my side-interest in thinking about reinterpretations of the social choice literature. Today I want to talk about applying another part of this to the question of how a group of people can agree on a collective set of opinions.
The background here: I’ll take it that each member of the group has degrees of belief over a set of propositions. And I’ll adopt an accuracy-first framework according to which there is a way of evaluating an arbitrary set of credences by measuring, to put it roughly, how far those degrees of belief are from the actual truth values. To be concrete (though it won’t matter for this post), I’ll use the Brier score, and assume the distance from the truth of a belief state b is given by the sum (over propositions p) of the square of the differences between the degree of belief in p (a real number in [0,1]) and the truth value (0 if false, 1 if true). As is familiar from that literature, we can then start thinking of accuracy as a kind of epistemic value, and then each person’s credences—which assign probabilities to each world—allow us to calculate the expected accuracy of any other belief state, from their perspective. (This construction makes sense and is of interest whether we think of the epistemic value modelled by the Brier score as objective or as an aspect of personal preferences.)
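To fix ideas, here is a minimal sketch of that bookkeeping (a toy example of my own, not anything from the post itself): two propositions generate four worlds, a credal state is represented as a probability assignment over those worlds, credences in the propositions are read off from it, and expected accuracy is the negative of expected Brier inaccuracy.

```python
# A toy illustration of Brier-style (in)accuracy and expected accuracy.
# Worlds are pairs of truth values for two propositions (p, q); a credal
# state is a probability assignment over worlds.  All numbers are made up.

def credences(state):
    """Credence in each proposition = total probability of the worlds where it is true."""
    return [sum(pr for world, pr in state.items() if world[i] == 1) for i in range(2)]

def brier_inaccuracy(creds, world):
    """Sum of squared gaps between credences and the 0/1 truth values at a world."""
    return sum((c - tv) ** 2 for c, tv in zip(creds, world))

def expected_accuracy(evaluator, candidate):
    """Expected (negative) inaccuracy of `candidate`'s credences, by `evaluator`'s lights."""
    cand = credences(candidate)
    return -sum(pr * brier_inaccuracy(cand, world) for world, pr in evaluator.items())

ann = {(1, 1): 0.4, (1, 0): 0.4, (0, 1): 0.1, (0, 0): 0.1}   # Ann's probabilities over worlds
bob = {(1, 1): 0.3, (1, 0): 0.2, (0, 1): 0.3, (0, 0): 0.2}   # Bob's probabilities over worlds
print(expected_accuracy(ann, bob))   # how accurate Ann expects Bob's credences to be
```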
One fact about the Brier score (in common with the vast majority of scoring rules that are discussed in the accuracy literature) is that it’s “proper”. This technical property means that for any agent whose credences are probabilistic, the credences that maximize expected accuracy, from their perspective, are the credences that they themselves possess. On the other hand, they can rank others’ credences as better or worse. If a group fully discloses its credences, each member will expect the most accurate credence to be the one that they themselves already have, but they may expect, for example, Ann’s credences to be more accurate than Bob’s.
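A quick numerical check of propriety, in the simplest one-proposition case (again a sketch of mine): by the lights of credence c in p, reporting credence x has expected Brier inaccuracy c(x−1)² + (1−c)x², and this is smallest exactly when x = c.

```python
# Propriety of the one-proposition Brier score: an agent with credence c
# expects the report x to have inaccuracy c*(x-1)**2 + (1-c)*x**2, and a
# grid search confirms the expectation is best when x equals c itself.
def expected_brier_inaccuracy(c, x):
    return c * (x - 1) ** 2 + (1 - c) * x ** 2

c = 0.7
grid = [i / 100 for i in range(101)]
best = min(grid, key=lambda x: expected_brier_inaccuracy(c, x))
print(best)   # 0.7 -- the agent expects her own credence to do best
```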
Once we’re thinking about accuracy in groups, we can get to work. For example, Kevin Zollman has some very interesting work constructing epistemic versions of prisoner’s dilemmas and other game-theoretic strategic problems by starting with the kind of setup just sketched, and then considering situations where agents altruistically care not just about the accuracy of their own beliefs, but also about the accuracy of other group members’ beliefs. And in previous posts, I’ve discussed Richard Pettigrew’s work that grounds particular ways of “credence pooling”, i.e. picking a single compromise credal state, based on minimizing aggregate inaccuracy.
But today, I want to do something a bit different. Like Pettigrew, I’m going to think about a situation where the task of the group is to pick a single compromise credal state–a compromise or “team credence”. Like Zollman, I’m going to think about this through the lens of game theory. But for today I’ll be thinking about the relevance of results from game theory/social choice theory I haven’t seen explored in this connection: Nash’s theory of bargaining.
Here’s the setup. We have our group of agents, and they need to choose a team credence for various practical purposes (maybe they’re a group of scientists who need to agree on what experiments to do next, and who are looking for a consensus on what they have learned on relevant matters so far, on the basis of which to evaluate the options. Or maybe they’re a committee facing some practical decisions about how to allocate roles next year, and they need to resolve disagreements on relevant matters ahead of time, to feed into decision making). Now, any probability function could in principle be adopted as the team credence (we’ll assume). And of course they could fail to reach a consensus. Indeed, some possible credences are worse than giving up on consensus altogether—a team credence with high credence in wrongheaded or racist propositions is definitely worse than just splitting and going separate ways. But we’ll assume that each group member i can pick a credence such that they’d be indifferent between having that as the team credence, and giving up altogether. In accordance with accuracy-first methodology, we’ll assume that credences are better and worse by the lights of an agent exactly in proportion to how accurate the agent expects that credence to be. The expected accuracy of that credence
by i’s lights is a measure of i’s “breaking point”—a candidate team credence that i expects to be less accurate than that is one that i would rather give up altogether than agree to. Finally, we’ll assume that there is a set of credences S which are above everyone’s breaking point–everyone will think that it’s better to let some member of S stand for the team than give up altogether. We assume this set is convex and compact.
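Here’s a toy picture of that setup (my illustrative numbers and an arbitrary choice of breaking points, just to make the structure vivid): three agents with credences in a single proposition, where each agent’s breaking point is, say, the expected accuracy of the mirror image of their own credence, and S collects the candidate team credences that everyone expects to beat their breaking point.

```python
# Toy feasible set S: one proposition, Brier-based expected accuracy, three
# agents, and (purely for illustration) breaking points given by the expected
# accuracy, by each agent's lights, of the mirror image 1-c of their credence c.
def expected_accuracy(evaluator, candidate):
    """Negative expected Brier score of credence `candidate` in p, by the lights of `evaluator`."""
    return -(evaluator * (candidate - 1) ** 2 + (1 - evaluator) * candidate ** 2)

agents = [0.9, 0.6, 0.3]                                    # individual credences in p
threat = [expected_accuracy(c, 1 - c) for c in agents]      # illustrative breaking points d_i

grid = [i / 100 for i in range(101)]
S = [x for x in grid
     if all(expected_accuracy(a, x) > d for a, d in zip(agents, threat))]
print(min(S), max(S))   # here S comes out as an interval of credences
```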
The choice of a team credence now fits the template of a bargaining problem. There is a “threat point” d which measures the (here, epistemic) utility of failing to reach agreement. And there is a range of possible compromises, Pareto-superior to the profile of breaking points, with the different parties to the bargaining problem having in principle very different views about which compromise to go for. (Notice that in this case all parties to the bargaining problem agree on the fundamental value—they want accuracy maximized. But their different starting credences map candidate team credences to very different expected accuracies, and this leads to divergent evaluations of the options.) Crucially, we are assuming that in this bargaining situation the agents stay steadfast–they do not compromise their own credence in light of learning about the views of other team members. Rather, they agree to disagree on an interpersonal level, but look for a team-level compromise.
Our problem now is to characterize what a good compromise would be. And this is where adapting Nash’s work on practical bargaining problems might help. I will state his assumptions informally, adapted to the epistemic scenario.
First, a “Pareto” assumption: x in S is a good compromise (a solution to the bargaining problem) only if there’s no y in S such that everyone expects y to be more accurate than x.
Second, “contraction consistency”: if you have a bargaining problem (d,T) which differs from (d,S) only by eliminating some candidates for team credence, then if x is a good compromise in S and x is within T, x is a good compromise within T. Eliminating some alternatives that are not selected doesn’t change what a good team credence is, unless it eliminates that credence itself!
A third assumption concerns symmetric bargaining situations specifically. Let S* be the set of expected accuracy profiles generated by S, i.e. the set of n-tuples whose ith element is the expected accuracy of a candidate team credence by the ith person’s lights. A symmetric bargaining situation is a very special one where the set of candidates as a whole looks the same from everyone’s perspective—S* is invariant under permutations of the group members (and the same goes for the threat point d). This third assumption says that in this special symmetrical case, the epistemic utility for each person of a good compromise will be the same. No asymmetry out without asymmetry in!
The final assumption is an interesting one. It says, essentially, that the character of the bargaining solution cannot depend on certain aspects of individuals’ evaluations of the candidates. Formally, it is this: if the ith agent evaluates credences not by accuracy, but by accuracy*, where accuracy* is a positive affine transformation of accuracy (i.e. takes the form a·accuracy(c)+b, with a>0), then the identity of the bargaining solution is essentially unchanged. Rather than the original profile of epistemic utilities associated with each potential team credence, the profile of the solution will now have a different number in the ith spot–the image under the affine transformation of what was there originally. The underlying team credence that is the solution remains the same (that’s the real content of the assumption), but its evaluation by the ith member, as you’d expect, is transformed with the move from accuracy to accuracy*.
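For reference, here is one compact way of writing the four conditions down (my paraphrase of Nash’s axioms, transposed to the epistemic setting; write EA_i(c) for the expected accuracy of candidate team credence c by i’s lights, d for the profile of breaking points, and f(S,d) for the chosen compromise):

```latex
% My paraphrase of Nash's four conditions, in the epistemic notation above.
\begin{itemize}
  \item \textbf{Pareto.} There is no $y \in S$ with $EA_i(y) > EA_i(f(S,d))$ for every $i$.
  \item \textbf{Contraction consistency.} If $T \subseteq S$ and $f(S,d) \in T$, then $f(T,d) = f(S,d)$.
  \item \textbf{Symmetry.} If $d$ and $S^{*} = \{(EA_1(c), \dots, EA_n(c)) : c \in S\}$ are invariant
        under permutations of the agents, then $EA_i(f(S,d)) = EA_j(f(S,d))$ for all $i, j$.
  \item \textbf{Invariance.} If each $EA_i$ is replaced by $a_i\,EA_i + b_i$ with $a_i > 0$
        (with $d_i$ rescaled in the same way), the same team credence is selected.
\end{itemize}
```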
There’s a metaphysical assumption about accuracy that would entail this. It is that epistemic utility or (expected) accuracy itself is a measure which (for each person) is invariant under positive scaling and addition of constants, i.e. positive affine transformation. On this conception, there is no good sense to be made of questions like “is the accuracy of this proposition greater than zero?”, though there is decent sense to be made of questions like “does this proposition have accuracy greater than the least possible accuracy value?”. It allows us to ask and answer questions like: is the difference in accuracy between credal states a and b greater than that between c and d? And crucially: is the accuracy (by i’s lights) of credal state a greater than that of b at every world? But also crucially, on the reading in play here, there would not be any good sense to be made of interpersonal accuracy comparisons. There is no good question about whether I rank a credal state c as more accurate than you do.
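One way to see which comparisons survive this invariance (a standard observation, in my notation): if EA′ = a·EA + b with a > 0 is the rescaled measure, then

```latex
% Differences only pick up the common positive factor a under EA' = a*EA + b:
\[
  EA'(x) - EA'(y) \;=\; a\,\bigl(EA(x) - EA(y)\bigr),
\]
```

so orderings and comparisons of differences within one person’s evaluation are preserved, while the sign of a single EA value, and comparisons across two people rescaled by different constants, carry no invariant information.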
This is a little strange, perhaps. The absence of meaningful interpersonal comparisons is almost unintelligible if you think of accuracy as some objective feature of the relation between credences and truth values, a value that is the same for all people. But suppose accuracy (relative to a person) is an aspect of the way that the person values the true beliefs of others. Then each of us might value accuracy, but have idiosyncratic tradeoffs between accuracy and other matters. I, a scholar, care about accuracy much more than mere practical benefits. You, a knave, weight practical benefits more heavily. That gives one a sense of why interpersonal comparisons are not automatic (e.g. we should not simply assume that the epistemic value of having credence 1 in a truth is the same for you as for me, even if for both of us it is maximal so far as accuracy goes). It is orthodoxy, in some circles, that ordinary utilities do not allow meaningful interpersonal comparisons—the thinking being that there is no basis for eliciting such comparisons from choices, or for settling a scale or zero point. Once we get out of the mindset on which comparisons are easy and automatic, it seems to me that there’s no obvious reason to insist on interpersonal epistemic utility/accuracy comparability.
If you accept that the scale and zero-point of accuracy for each person reflect mere “choices of unit”, then the fourth and final assumption above about a good compromise credence follows automatically—how to solve the bargaining problem shouldn’t turn on the choice of units in which we express what one of us has at stake. So the “absolute expected accuracy” of a candidate compromise credence from my perspective shouldn’t matter for the selection of a team credence. Instead, what will matter are factors with real content, such as the patterns of relative differences in expected accuracy between the available candidates, from a single perspective.
Putting this all together, Nash’s proof allows us to identify a unique solution to the bargaining problem (and it’s nontrivial that there is any solution at all: we are very close here to assumptions that lead to Arrow’s impossibility results). A team credence c which meets these conditions must maximize a certain product, where the elements multiplied together are the differences, for each of us, between the expected accuracy of c and the expected accuracy of our personal breaking point. Given that we are making the invariance assumption, and so can choose our “zero points” for epistemic utility arbitrarily and independently, it is natural to choose a representation on which we each set the epistemic utility of our personal breaking point to zero. On that representation, the team credence meeting the above constraints must maximize the product of each team member’s expected accuracy.
(How does this work? You can find the proof (for the case of a two-membered group) on p.145ff in Gaertner’s primer on social choice theory, and it’s a lovely geometrical result. I won’t go through it here, but basically, you take the image in multidimensional expected-accuracy space of your set of options, and use the invariance and contraction consistency assumptions to show that the solution to a given bargaining problem is equivalent to the solution of another one that takes a particularly neat symmetric form, and identify what the Nash maximize-the-product rule predicts the solution to be in that case. And then you can use the symmetry and Pareto assumptions to show that the prediction is correct. It’s very elegant.)
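To make the recipe concrete, here’s a brute-force sketch on the same toy one-proposition example as above (my made-up credences and breaking points, not a general implementation): the team credence is whichever candidate maximizes the product of each member’s gain in expected accuracy over their breaking point.

```python
# Toy Nash compromise: one proposition, Brier-based expected accuracy,
# illustrative breaking points, and a grid search for the candidate credence
# maximizing the product of gains over those breaking points.
import math

def expected_accuracy(evaluator, candidate):
    """Negative expected Brier score of credence `candidate` in p, by the lights of `evaluator`."""
    return -(evaluator * (candidate - 1) ** 2 + (1 - evaluator) * candidate ** 2)

agents = [0.9, 0.6, 0.3]                                    # individual credences in p
threat = [expected_accuracy(c, 1 - c) for c in agents]      # illustrative breaking points d_i

def nash_product(x):
    """Product over members of (expected accuracy of x) minus (their breaking point)."""
    gains = [expected_accuracy(a, x) - d for a, d in zip(agents, threat)]
    return math.prod(gains) if all(g > 0 for g in gains) else 0.0

grid = [i / 1000 for i in range(1001)]
team = max(grid, key=nash_product)
print(team)   # the Nash-compromise team credence for this toy group
```

Notice that the answer depends on the breaking points as well as on the credences themselves, which is the distinctive feature of the bargaining recipe as compared with straight pooling.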
So I think this all makes sense under the epistemic interpretation, and that bargaining theory is another place where decision theory, broadly construed, can bear an epistemic interpretation. I haven’t yet given you a formula for calculating the compromise credence itself—the one that satisfies Nash’s criterion. Perhaps in the next post….
Let me step back finally to make a few points about the assumptions. First, Pareto seems the only really obvious one of the constraints. Symmetry is an interesting principle, but where some of a group are experts, and others are not, perhaps it looks implausible. Maybe the solution should weight the valuations of experts (expected utility by their lights) higher. On the other hand, in a situation where information has been shared in advance, non-experts have already factored in the testimony of experts as best they can, so factoring this into the compromise as well might be double-counting.
Another point about symmetry is that (if we think of expected accuracy as a distance between credence functions) what it tells us is that in certain special circumstances, we should pick a compromise that is equidistant from all agents’ credences. But notice that picking an equidistant point may not minimize the aggregate distance to the agents’ credences. Think of a situation where we have an N+1-membered group, N of whom have the same credence, but one dissents, and which meets the symmetry condition. You might think that a good compromise should be nearer the credal state that all but one of the group already have. But no: symmetry tells us that a good compromise equalizes epistemic utility in this scenario, rather than favouring the points “nearer” to most of the group. In the symmetrical setting where every group member has either credence a or credence b, it doesn’t matter what the relative numbers favouring a vs. b are: the solution is the same. This might seem odd! But remember two things: (1) the natural alternative thought is that we should, “utilitarian style”, care about the aggregate epistemic utility. But since invariance tells us that there are no meaningful interpersonal comparisons (not even of utility differences), “aggregate epistemic utility” or “minimizing aggregate distance” isn’t well-defined in the first place. (2) the conception of a bargaining problem is one where every individual has a veto, and so can bring about the threat-point. So while N people who agree might think they “outvote” the last member, a veto from the last member destroys things for everyone, just as much as if many were disenfranchised. This might warm one up to the equalization built into symmetry. Still, in view of (1), my own thought is that symmetry is really a principle that is most plausible if you already buy into invariance, rather than something that stands alone.
Contraction consistency seems pretty plausible to me, but has been the focus of investigation in the social choice literature, so there is an interesting project out there of exploring the consequences of tweaking it under the epistemic interpretation.
Finally, what of invariance? Well, a lot of my previous posts on these matters (as with Pettigrew’s work) started from the assumption that it is false. We assumed that accuracy scores are comparable between different individuals, even if those individuals disagree on what the expected accuracy of a given credal state is. But it’s a really good question whether this was a reasonable assumption! So I think we can view the dialectic as follows: either accuracy/expected accuracy/epistemic utility is interpersonally comparable or it isn’t. If it is, then e.g. compromise by minimizing the aggregate distance between the compromise point and the individuals’ credences will be a great candidate for being the right recipe (depending on what accuracy measure you use, as Pettigrew shows, this could lead to compromise by arithmetical or geometric averaging). If epistemic utility is not interpersonally comparable, then we have a new recipe for picking a compromise credence available here, defined relative to the “threat” or “breaking point” profile among the group.
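For comparison, here is a quick sketch of the two pooling rules just mentioned, for credences in a single proposition with equal weights (my minimal rendering of the standard recipes; Pettigrew’s results concern which accuracy measures vindicate which rule):

```python
# Linear (arithmetic) vs geometric pooling of credences in one proposition,
# with equal weights.  Geometric pooling needs a renormalization step so that
# the pooled credences in p and not-p still sum to one.
import math

def linear_pool(creds):
    return sum(creds) / len(creds)

def geometric_pool(creds):
    n = len(creds)
    in_p = math.prod(creds) ** (1 / n)                     # geometric mean of credences in p
    in_not_p = math.prod(1 - c for c in creds) ** (1 / n)  # geometric mean of credences in not-p
    return in_p / (in_p + in_not_p)

print(linear_pool([0.9, 0.6, 0.3]), geometric_pool([0.9, 0.6, 0.3]))
```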
Lastly, this post has focused on accuracy and compromise credence. What about the combined credal-utility states, evaluated by distance from truth/ideal utility, that I’ve discussed in previous posts? I’d like to extend the same results to them (and note, this is not to return to the original Nash dialectic, which is about bargaining for the best “lottery over social states”, e.g. an act, in light of utilities, not bargaining over utility-involving mental states in light of their expected “correspondence with ideal utilities”). Along with getting a concrete formula for Nash-compromise credences, that’s work for another time.