
Solving the epistemic bargaining problem

In the last post, I described an epistemic bargaining problem, and applied Nash's solution to it. That gave me a characterization of a compromise “team credence”, as that credence which maximizes a certain function of the team members' epistemic utilities. This post now describes more directly what that team credence will be (for now, I only consider the special case where the team credence concerns a single proposition). Here's the TL;DR: so long as two agents are each sufficiently epistemically motivated to find a compromise, they will settle on a team credence that is the linear average of their individual credences iff they are equally motivated to find one. If there is an imbalance of motivation, the compromise will lie strictly between the linear average and the starting credence of the agent who is less motivated to find a team credence.

To recap, we had a set of team members. These team members each assign “expected epistemic utility” to possible credal states (including credal states that concern only attitudes to the single proposition p). Expected epistemic utility is assumed to be the expected accuracy of the target credal state x, and inaccuracy is assumed to be measured by the Brier score: (1-x(p))^2 if p is true, (0-x(p))^2 if p is false (to get a measure of accuracy, subtract inaccuracy from 1). For an agent whose own credal state is a, it's a familiar piece of bookwork to show that this implies the epistemic utility of a credal state x(p) is 1-(x(p)-a(p))^2—one minus the squared Euclidean distance between the agent's own a(p) and the target credence x(p). From now on, I'll drop the indexing to p, since only one proposition will be at issue throughout.
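Spelling out that bit of bookwork (a quick sketch, using the Brier score as just defined):

```latex
\mathbb{E}_a[\mathrm{inaccuracy}(x)] = a(1-x)^2 + (1-a)(0-x)^2 = (x-a)^2 + a(1-a)
```

So expected accuracy is 1-(x-a)^2-a(1-a). The extra term a(1-a) does not depend on the target credence x, so it can be absorbed into the choice of zero point for A's epistemic utility (or simply ignored): every comparison between candidate team credences below turns only on 1-(x-a)^2.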

The epistemic decision facing the several agents is this: they can form a team credence in p with a specific value, so long as all cooperate. But if any one dissents, no team credence is formed. The situation where no team credence is formed is assumed to have a certain epistemic utility, 1-\delta_A, for agent A (who has credal state a). So forming the specific team credence x will be worthwhile from A's perspective if and only if the epistemic utility (expected accuracy) of forming that team credence is greater than the default, i.e. iff 1-(x-a)^2> 1-\delta_A. Now, we may be able to find a possible credal state d_A which the agent ranks equally with the default: 1-(d_A-a)^2= 1-\delta_A. You can think of d_A as A's breaking point—the credence at which it's no longer in her epistemic interests to form a team credence, since she's indifferent between that and the default where no team credence is formed. A little rearrangement gets us to: d_A= a\pm\sqrt \delta_A. So really we have both an upper and lower breaking point, and within these bounds, a zone of acceptable compromises, within which a team credence will look good from that agent's perspective.

Possible credences will be, as is standard, within the interval [0,1]. d_A=a\pm\sqrt \delta_A may well be outside that interval. Consider, for example, the case where an agent has credence 1 or 0 in p to start with, or a situation where not forming a team credence is a true epistemic disaster, of disutility >1. It'll be formally convenient to keep talking of breaking point credences in these cases, but that'll just be a manner of speaking.

A precondition of having a bargaining problem is that there are some potential team credences x such that every team member prefers having team credence x to the scenario where no team credence is formed. That is, for all y in the group, 1-(x(p)-y(p))^2> 1-\delta_Y. That amounts to insisting that the set of open intervals (y-\sqrt \delta_Y,y+\sqrt\delta_Y) have a non-empty intersection, or equivalently, where y runs over the team members, \max_y (y-\sqrt \delta_Y)<\min_y (y+\sqrt \delta_Y).
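Here's a minimal computational sketch of that existence check (the credences and deltas are made-up illustrative numbers, not figures from the post): it builds each member's zone of acceptable compromises (y-\sqrt\delta_Y, y+\sqrt\delta_Y), clipped to [0,1], and tests whether the zones have a non-empty intersection.

```python
import math

def compromise_zone(credence, delta):
    """Open interval of team credences this agent prefers to no deal, clipped to [0, 1]."""
    r = math.sqrt(delta)
    return max(0.0, credence - r), min(1.0, credence + r)

def shared_zone(members):
    """members: list of (credence, delta) pairs. Returns the shared zone, or None if empty."""
    zones = [compromise_zone(c, d) for c, d in members]
    lo = max(z[0] for z in zones)
    hi = min(z[1] for z in zones)
    return (lo, hi) if lo < hi else None

# Made-up example: extremal credences, with failure fairly costly for both agents.
print(shared_zone([(1.0, 0.6), (0.0, 0.6)]))   # non-empty, since sqrt(0.6) + sqrt(0.6) > 1
print(shared_zone([(1.0, 0.2), (0.0, 0.2)]))   # None: the two zones don't meet
```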

That's a lot of maths, but concretely you can think about it like this: it'll be harder to strike a compromise on p the more distant the team members' credences are from each other. If, however, each feels great urgency about finding a team credence, that will widen the zone of compromise from their perspective. So even if A starts with credence 1 in p, and B starts with credence 0 in p, then they will each view some compromises between the two of them as acceptable if A's lower breaking point is higher than B's upper breaking point. That relates to the utility/disutility of failure: if failure for A is worse than the epistemic cost of having credence 0.5 in p, and similarly for B, then credence 0.5 is a potential compromise for the two of them.

So these are the conditions for there to be a bargaining problem in the first place. If they are met, everyone wants a deal. The question that remains is which among this set they should pick. As described in the last post, if we endorse Nash's four conditions we have an answer: the team credence will be the x that maximizes the following quantity: \Pi_y ((1-(x-y)^2)-(1-\delta_Y)). Simplifying a touch, this becomes \Pi_y (\delta_Y-(x-y)^2).
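To make the target concrete, here is a small sketch (again with made-up numbers) that maximizes \Pi_y (\delta_Y-(x-y)^2) by brute-force grid search over the shared zone of compromise:

```python
import math

def nash_product(x, members):
    """Product over members of (delta_Y - (x - y)^2), with y the member's credence."""
    prod = 1.0
    for y, delta in members:
        prod *= delta - (x - y) ** 2
    return prod

def nash_compromise(members, steps=100_000):
    """Grid-search the Nash product over the intersection of the zones of compromise."""
    lo = max(y - math.sqrt(d) for y, d in members)
    hi = min(y + math.sqrt(d) for y, d in members)
    lo, hi = max(lo, 0.0), min(hi, 1.0)
    if lo >= hi:
        return None  # no non-trivial bargaining problem
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(grid, key=lambda x: nash_product(x, members))

# Made-up examples with credences 1 and 0: equal deltas land on the linear average,
# and lowering B's delta (making failure less bad for B) drags the compromise towards B.
print(nash_compromise([(1.0, 1.0), (0.0, 1.0)]))   # ~0.5
print(nash_compromise([(1.0, 1.0), (0.0, 0.5)]))   # below 0.5
```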

Before moving on, I do want to make one observation that will be important later: the Nash compromise team credence is somewhere in [a,b]. This is pretty intuitively obvious, but to argue for it formally: suppose that we have N agents with various starting credences, whose span is the interval [a,b]. If the upper endpoint b is not within the intersection of the zones of compromise, then nothing above b is within this intersection either (since those constraints take the form of an intersection of open balls around points no greater than b). On the other hand, suppose that b is contained within the intersection of all the agents' zones of compromise. Then we can see that the Nash product cannot take its (constrained) maximum value above b. That's because b will be nearer all the starting credences than anything above b, and so every multiplicand (and hence the product) will be greater at the endpoint than it is at any point above it (and points above b where some multiplicand has gone negative are excluded by the constraints anyway). The same goes, in reverse, for the other endpoint a.

So what does Nash's maximization condition say about specific cases? One complicating factor is that this is a constrained maximization problem, over the set (\max_y (y-\sqrt \delta_Y),\min_y (y+\sqrt \delta_Y)). So the curve defined by Nash's product alone doesn't give us all the information we need to pick the constrained maximum. I'll come back to this at the end, but for now, I'll ignore the issue and concentrate on finding a local maximum of the Nash curve.

Let's get going with the local maximization problem then, for the two-agent case. We want to find a turning point of the Nash curve ((x-a)^2-\delta_A)((x-b)^2-\delta_B). To see what's going on with this quartic polynomial, consider a special simple case with a at 1 and b at 0, and both deltas at zero. That gives (x-1)^2x^2, which I've sketched for you (as with the other images that follow) using WolframAlpha:

This polynomial has roots at 0 and 1—and every candidate team credence of course has to be within this zone. And so you can see immediately that the leading candidate to be the compromise credence will be the local maximum of this curve. To find the value of the maximum, we remember secondary school maths and set the derivative of the curve equal to zero.

Note we can factorize the derivative as 2x(x-1)(2x-1). The cubic has three roots: 0 and 1 (those are the minimum points of the original quartic curve) and 0.5.
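If you'd rather have a computer do the secondary school maths, a couple of lines of sympy will check the factorization and its roots (a sketch, nothing more):

```python
import sympy as sp

x = sp.symbols('x')
quartic = (x - 1)**2 * x**2
derivative = sp.factor(sp.diff(quartic, x))
print(derivative)               # 2*x*(x - 1)*(2*x - 1), up to ordering of the factors
print(sp.solve(derivative, x))  # roots 0, 1/2 and 1 (order may vary)
```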

This looks promising! Interpreting this in the epistemic bargaining way, we have started with two agents with extremal credences 1 and 0, and found that we have a local maximum for the Nash curve at their linear average. Further, this is representative of a general pattern. If you start from (x-a)^2(x-b)^2, with 0<b<a<1, then you get curves that look like distorted versions of the above, but with roots of the original curve at a and b, and a local maximum at \frac{a+b}{2}—the linear average of the starting credences.

Things look even better when we recall that the maximum value of the Nash curve *for values of x that meet the relevant constraints* was going to have to be within the interval spanned by the starting credences. For that tells us we can ignore all of the curve except the bit between 0 and 1 in the extremal case (but we knew that anyway!) and between a and b in the general case. The arms of the original curve that shoot off to infinity can be ignored, therefore—and that's one big step towards arguing that the local maximum (the turning point we've just calculated in this special case) is the point satisfying Nash's conditions.

Now, this is all very well, except for the annoying fact that the simple case in question was one where \delta_A=\delta_B=0, which translates to both agents having a null zone of compromise (in terms of breaking points: each agent's breaking point credence just is the credence they already have). There's no non-trivial bargaining problem at all here! So the fact that there's a local maximum of the curve at the linear average doesn't tell us anything of interest. Bother.

But the general case can be understood in relation to this one. Again starting from extremal credences for simplicity (A having credence 1 in p, B having credence 0 in p), the non-trivial bargaining problems will take the form ((x-1)^2-\delta_A)(x^2-\delta_B). Multiplying this out we have: (x-1)^2x^2-\delta_B(x-1)^2-\delta_A x^2+\delta_A\delta_B. Differentiating this and setting the result to zero we have \frac{d}{dx}((x-1)^2x^2)-2\delta_B(x-1)-2\delta_A x=0. This is equivalent to finding the intersection of the cubic sketched above and the linear curve 2(\delta_B+\delta_A)x-2\delta_B. To illustrate, here's a sketch of what happens when both parameters are set to 1. The intersection, and so the local maximum of the original curve, is the average of the two credences, at 0.5:

Below is the case where \delta_A=1 but \delta_B=0.5. Note that the intersection is now below the linear average of the two starting credences (that makes sense: the parameters tell us that A, with full credence, is more distant from her breaking point than B, who has zero credence—or equivalently, failing to agree a team credence is in relative terms better for B than for A. So A is in the weaker bargaining position and the solution is more in line with B’s credence):

And below is what happens if we reverse the parameters, with the intersection point, as expected, nearer to the agent with less to lose, in this case, A (who had full credence):
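If you want to reproduce these intersections numerically rather than by curve sketching, here is a minimal sketch using numpy's root-finder; the parameter values are the three just discussed, and we keep only the real roots that land in [0,1]:

```python
import numpy as np

def intersection(delta_A, delta_B):
    """Real roots in [0, 1] of d/dx[(x-1)^2 x^2] = 2(delta_A + delta_B)x - 2 delta_B."""
    # d/dx[(x-1)^2 x^2] = 4x^3 - 6x^2 + 2x; move the linear curve across and find roots.
    coeffs = [4.0, -6.0, 2.0 - 2.0 * (delta_A + delta_B), 2.0 * delta_B]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0)

print(intersection(1.0, 1.0))   # ~[0.5]: equal deltas give the linear average
print(intersection(1.0, 0.5))   # a single root below 0.5: B has the stronger position
print(intersection(0.5, 1.0))   # a single root above 0.5: A has the stronger position
```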

What of the general two-agent case, where the equation for which we're finding the local maximum is ((x-a)^2-\delta_A)((x-b)^2-\delta_B)? This time I'll run through it algebraically. Assume without loss of generality a>b. From a note earlier, we know the compromise credence is to be found in the interval [b,a]. So under what conditions is it in the top half, (\frac{a+b}{2}, a]? In the bottom half, [b,\frac{a+b}{2})? And at the midpoint, \frac{a+b}{2}?

First, multiply out: (x-a)^2(x-b)^2-\delta_B(x-a)^2-\delta_A (x-b)^2+\delta_A\delta_B. Second, differentiate the quartic and set the result to zero. This gives us \frac{d}{dx}((x-a)^2(x-b)^2)-2\delta_B(x-a)-2\delta_A(x-b)=0. This is equivalent to finding the intersection of the cubic \frac{d}{dx}((x-a)^2(x-b)^2) and the linear curve 2(\delta_B+\delta_A)x-2(\delta_A b+\delta_B a). The former, recall, is like a squished version of the earlier cubic, with roots at b, \frac{a+b}{2}, and a—the middle root corresponding to the local maximum.

Now, you can eyeball the curve sketches above to convince yourself of the following biconditional: the intersection of the cubic and the linear curve will be at an x-value within (\frac{a+b}{2}, a] iff the linear curve intersects the x-axis within the same interval. Inverting the linear equation, we find that its intersection with the x-axis will be \frac{\delta_A b+\delta_B a}{\delta_A+\delta_B}. That will be within the interval only if \frac{\delta_A b+\delta_B a}{\delta_A+\delta_B}>\frac{a+b}{2}, which is to say: 2(\delta_A b+\delta_B a) >(a+b)(\delta_A+\delta_B). Multiply out, simplify while remembering we were assuming a>b, and you will find this is equivalent to: \delta_B>\delta_A. (Interpretation: the epistemic utility of no team credence is higher for A (at 1-\delta_A) than for B (at 1-\delta_B).)

By similar manipulations, we can show that the intersection of the two curves lies within [b,\frac{a+b}{2}) only if \delta_A>\delta_B. And the two curves intersect at \frac{a+b}{2} iff \delta_B=\delta_A. And given the curves intersect somewhere in the interval [b,a], and the three conditions are mutually exclusive, we can now strengthen these two conditionals to biconditionals.
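A quick sympy check of the algebra behind those biconditionals (a sketch; it just confirms that 2(\delta_A b+\delta_B a)-(a+b)(\delta_A+\delta_B) factorizes as (a-b)(\delta_B-\delta_A), so that, given a>b, the x-intercept exceeds the midpoint exactly when \delta_B>\delta_A):

```python
import sympy as sp

a, b, dA, dB = sp.symbols('a b delta_A delta_B', positive=True)
# The quantity whose sign settles which side of the midpoint the x-intercept falls on.
gap_numerator = sp.expand(2 * (dA * b + dB * a) - (a + b) * (dA + dB))
print(sp.factor(gap_numerator))   # (a - b)*(delta_B - delta_A), up to how sympy arranges signs
```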

To summarize: Maximizing the Nash curve over [a,b] to pick a compromise team-credence gives the linear average when the epistemic utility of failing to reach a team-credence is the same for both parties. When there is an imbalance in the epistemic utilities of failure, if we pick the team credence in the same way, we’ll get a result that is nearer the starting credence of the agent with less to lose from failure.

All this comes with the caveat mentioned earlier: I've been talking about how to find the maximum of the Nash curve over the whole of [a,b]. We need to also remember that we were to find the maximum over a constrained set of credences, and this might be a proper subinterval of [a,b]. At the limit, as we saw earlier, there may be no credences meeting the constraints at all. So it's not guaranteed (yet) that the Nash compromise in all (two person, one proposition) cases satisfies the description given above. But it will meet that condition if the zones of compromise are big enough: big enough that [a,b] is contained within them.

That's enough for today! The next set of questions for this project I hope are pretty clear: Is there more to say about the case when the constraints define a proper subinterval of [a,b]? How does this generalize to the N-person case? How does this generalize to a multiple proposition case? How does it generalize to scoring rules other than the Brier score?

P.S. Thanks to Seamus Bradley and Nick Owen for discussion of this. As they noted, you can use computer assistance to find exact roots for the cubic, and so the turning point of the quartic. Unfortunately, those exact roots look horrific, which pushed me towards the qualitative results reported above. For the sake of interest, here is how to generate the horror, with C and D being the delta terms for a and b respectively:
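(A minimal sympy sketch; the symbol names are mine, and the printed stationary points come out in full Cardano form, which is where the horror lives.)

```python
import sympy as sp

x, a, b, C, D = sp.symbols('x a b C D', real=True)
nash_curve = ((x - a)**2 - C) * ((x - b)**2 - D)
stationary_points = sp.solve(sp.diff(nash_curve, x), x)   # exact roots of the cubic
for root in stationary_points:
    sp.pprint(root)
```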

P.P.S. Some further notes about the next set of questions.

(i) On the issue of when we have a well defined bargaining problem. For the two-person case, the following holds: there is a non-trivial bargaining problem when \sqrt \delta_A+\sqrt \delta_B>b-a. In the special case where \delta_A=\delta_B, that means \delta_A= \delta_B>(b-a)^2/4. The compromise zone is maximal, i.e. the whole of [a,b], iff \sqrt\delta_A,\sqrt\delta_B\geq (b-a).

The following graphical characterization was illuminating to me. In general, the quartic (\delta_A-(x-a)^2)(\delta_B-(x-b)^2) has a “W” shaped curve. For very negative values of x, both (\delta_A-(x-a)^2) and (\delta_B-(x-b)^2) are large and negative, and so their product is large and positive. For very positive values of x, both are again large and negative, so their product is again large and positive. If all roots are real, then moving from left to right, as x approaches a we get an interval where (\delta_A-(x-a)^2) is positive and the other negative, then an interval where both are positive, and then a period when only (\delta_B-(x-b)^2) is positive, before both turn negative. Now note that the middle interval exactly corresponds to the values for which both agents have positive utility difference, and so is exactly the zone of compromise. So another characterization of the compromise zone is the area between the middle two roots of the quartic (if those roots are imaginary, there is no bargaining problem). This is important, because it illuminates why finding the local maximum is the right method—it's because the constraints are that we only maximize in that specific interval between the middle two roots, and the maximum subject to that constraint is exactly the local maximum.

(ii) For the 3-person case, we can consider constraints and maxima. For the former, it's a necessary condition that the zones of the most confident and least confident individuals overlap, and so if we designate those a and b, we again need the following for there to be a non-trivial zone of compromise: \sqrt \delta_A+\sqrt \delta_B>b-a. In addition, however, we need the zone around c to overlap this, which is a complex three-case condition involving c and \delta_C. If c is at the midpoint of a and b then any nonzero \delta_C will do. Likewise, it is necessary for the compromise zone to be maximal that \sqrt \delta_A,\sqrt \delta_B\geq (b-a), and then there is a more complex condition involving c and \delta_C. \sqrt\delta_C\geq (b-a) suffices, but e.g. if c is the midpoint then a smaller \delta_C will do. Something similar happens with the general N-person case—that the zones of compromise of the extremes overlap is a necessary condition, and then there's an array of more complex conditions for those whose credence lies between the two extremes.

The N-person case involves finding a maximal point of a polynomial of degree 2N. The new challenge here is that there are multiple local maxima—which can all be between 0 and 1, in principle. The generalization of the point made earlier is now crucial to understanding what is happening. Suppose all roots are real. As we scan from left to right, we start from a point where all the multiplicands (the agents' epistemic utility differences) are negative, and gradually hit points where more and more agents have a positive epistemic utility difference (each crossing toggling the sign of the product, i.e. of the curve), until eventually all multiplicands are positive. Then, as we continue to scan to the right, more and more turn negative again. The key observation is that the zone of compromise for all agents is the set of points at which all agents have positive utility difference, which is the middle hump of the serpent's back. The conditions for this existing, and for it being maximal, i.e. covering [a,b], are given above. But a crucial observation is that the maximization problem is now well defined: we need to find the local maximum of this middle hump.

What are the compromise credences in these cases? Well, here's one special case that is as you'd expect: if c is at the average of a and b, and the threatened losses are the same for a and b, then the compromise team credence will be c. If c is nearer to a than b, the credence is nearer to a than b, and vice versa. If the threatened loss is bigger for a than b, then the compromise is nearer b (if c is at the average). How these trade off, and the N-person generalization, will require more work. There are some attractive initial hypotheses that fail. For example, in the two person case, as reported above, when the potential losses are equal, the compromise is the average. But the natural generalization of this already fails in the three person case: when two agents have credence 1 and a third has credence 0, with all relevant deltas set to 1, the predicted compromise credence is 0.602, less than the arithmetical mean of 0.666…
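That 0.602 figure can be reproduced with a quick numerical check (a sketch; the shared zone of compromise in this example is (0,1), so we just grid-search the three-factor Nash product there):

```python
import numpy as np

# Two agents at credence 1, one at credence 0, all deltas equal to 1.
xs = np.linspace(0.0, 1.0, 200_001)              # shared zone of compromise is (0, 1)
nash = (1 - (xs - 1)**2)**2 * (1 - xs**2)        # product of the three utility differences
print(xs[nash.argmax()])                          # ~0.602, below the arithmetical mean of 2/3
```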

(iii) For the M-proposition case: we consider a credence function c over M propositions to be a point in M-dimensional space, and the Brier score maps that credence, given a truth value assignment, to an epistemic value; the expected inaccuracy of c relative to credence b is then the squared Euclidean distance between b and c. So finding a compromise team credence function becomes a matter of maximizing the surface defined by the Nash product in that M-dimensional space. Now, curve sketching in WolframAlpha raised my confidence that you solve this maximization problem by solving the maximization problems for each of its one dimensional projections (evidence: when the threat points are the same for each person, then the solution is the linear average, just as we found above). But I can't right now give a general argument for this. I presume a bit of knowledge about differential geometry should make this conjecture pretty easy to support or refute.
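Here is the kind of numerical experiment I have in mind, as a sketch (two propositions, two agents, made-up credences, equal deltas): grid-search the Nash product over the unit square and compare the maximizer with the coordinate-wise linear average.

```python
import numpy as np

# Two agents' credences over two propositions (made-up numbers), equal deltas.
a = np.array([0.9, 0.2])
b = np.array([0.3, 0.6])
delta_A = delta_B = 1.0

g = np.linspace(0.0, 1.0, 401)
X, Y = np.meshgrid(g, g, indexing="ij")
sq_dist_a = (X - a[0])**2 + (Y - a[1])**2
sq_dist_b = (X - b[0])**2 + (Y - b[1])**2
nash = (delta_A - sq_dist_a) * (delta_B - sq_dist_b)
nash[(delta_A - sq_dist_a <= 0) | (delta_B - sq_dist_b <= 0)] = -np.inf  # stay inside both zones

i, j = np.unravel_index(nash.argmax(), nash.shape)
print(g[i], g[j])   # lands at (or next to) the coordinate-wise linear average (0.6, 0.4)
```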

(iv) I haven’t thought about other measures of accuracy yet.

Bargaining to group consensus

I’m continuing my side-interest in thinking about reinterpretations of the social choice literature. Today I want to talk about applying another part of this to the question of how a group of people can agree on a collective set of opinions.

The background here: I’ll take it that each member of the group has degrees of belief over a set of propositions. And I’ll adopt an accuracy-first framework according to which there is a way of evaluating an arbitrary set of credences by measuring, to put it roughly, how far those degrees of belief are from the actual truth values. To be concrete (though it won’t matter for this post), I’ll use the Brier score, and assume the distance from the truth of a belief state b is given by the sum (over propositions p) of the square of the differences between the degree of belief in p (a real number in [0,1]) and the truth value (0 if false, 1 if true). As is familiar from that literature, we can then start thinking of accuracy as a kind of epistemic value, and then each person’s credences—which assign probabilities to each world—allow us to calculate the expected accuracy of any other belief state, from their perspective. (This construction makes sense and is of interest whether we think of the epistemic value modelled by the Brier score as objective or an aspect of personal preferences).

One fact about the Brier score (in common with the vast majority of scoring rules that are discussed in the accuracy literature) is that it's “proper”. This technical property means that for any agent whose credences are probabilistic, the credence that maximizes expected accuracy, from their perspective, is the credence that they themselves possess. On the other hand, they can rank others' credences as better or worse. If a group fully discloses its credences, each member will expect the most accurate credence to be the one that they themselves already have, but they may expect, for example, Ann's credences to be more accurate than Bob's.
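To fix ideas, here is a minimal sketch of that setup (the function names are mine): it computes Brier inaccuracy, computes the expected accuracy of one credence from the perspective of another, proposition by proposition, and checks numerically that your own credence maximizes expected accuracy by your own lights.

```python
import numpy as np

def brier_inaccuracy(credence, truth_values):
    """Brier inaccuracy of a credence vector given a 0/1 truth-value assignment."""
    credence, truth_values = np.asarray(credence), np.asarray(truth_values)
    return np.sum((credence - truth_values) ** 2)

def expected_accuracy(target, own):
    """Expected accuracy of `target` from the perspective of credences `own`,
    taking the expectation proposition by proposition and using 1 minus inaccuracy."""
    target, own = np.asarray(target), np.asarray(own)
    expected_inaccuracy = np.sum(own * (1 - target) ** 2 + (1 - own) * target ** 2)
    return 1 - expected_inaccuracy

# Inaccuracy of a two-proposition credence at a world where p1 is true and p2 is false.
print(brier_inaccuracy([0.7, 0.2], [1, 0]))   # 0.09 + 0.04 = 0.13

# Propriety, checked numerically for a single proposition: my own credence does best by my lights.
own = np.array([0.7])
grid = np.linspace(0.0, 1.0, 1001)
best = max(grid, key=lambda x: expected_accuracy(np.array([x]), own))
print(best)   # 0.7
```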

Once we're thinking about accuracy in groups, we can get to work. For example, Kevin Zollman has some very interesting work constructing epistemic versions of prisoner's dilemmas and other game-theoretic strategic problems by starting with the kind of setup just sketched, and then considering situations where agents altruistically care not just about the accuracy of their own beliefs, but about the accuracy of other group members'. And in previous posts, I've discussed Richard Pettigrew's work that grounds particular ways of “credence pooling”, i.e. picking a single compromise credal state, based on minimizing aggregate inaccuracy.

But today, I want to do something a bit different. Like Pettigrew, I’m going to think about a situation where the task of the group is to pick a single compromise credal state–a compromise or “team credence”. Like Zollman, I’m going to think about this through the lens of game theory. But for today I’ll be thinking about the relevance of results from game theory/social choice theory I haven’t seen explored in this connection: Nash’s theory of bargaining.

Here's the setup. We have our group of agents, and they need to choose a team credence for various practical purposes (maybe they're a group of scientists who need to agree on what experiments to do next, and who are looking for a consensus on what they have learned on relevant matters so far, on the basis of which to evaluate the options. Or maybe they're a committee facing some practical decisions about how to allocate roles next year, and they need to resolve disagreements on relevant matters ahead of time, to feed into decision making). Now, any probability function could in principle be adopted as the team credence (we'll assume). And of course they could fail to reach a consensus. Now, some possible credences are worse than giving up on consensus altogether—a team credence with high credence in wrongheaded or racist propositions is definitely worse than just splitting and going separate ways. But we'll assume that each group member i can pick a credence c_i such that they'd be indifferent between having that as the team credence, and giving up altogether. In accordance with accuracy-first methodology, we'll assume that credences are better and worse by the lights of an agent exactly in proportion to how accurate the agent expects that credence to be. The expected accuracy of c_i by i's lights is thus a measure of i's “breaking point”—a candidate team credence that i expects to be less accurate than that is one i would rather give up on than agree to. Finally, we'll assume that there is a set of credences S which are above everyone's breaking point—everyone will think that it's better to let some member of S stand for the team than to give up altogether. We assume this set is convex and compact.

The choice of a team credence now fits the template of a bargaining problem. There is a “threat point” d which measures the (here, epistemic) utility of failing to reach agreement. And there are a range of possible compromises, Pareto-superior to the profile of breaking points, with the different parties to the bargaining problem having in principle very different views about which compromise to go for. (Notice that in this case all parties to the bargaining problem agree on the fundamental value—they want accuracy maximized. But their different starting credences map candidate team credences to very different expected accuracies, and this leads to divergent evaluation of the options.) Crucially, we are assuming that in this bargaining situation the agents stay steadfast–they do not compromise their own credence in light of learning about the views of other team members. Rather, they agree to disagree on an interpersonal level, but look for a team-level compromise.

Our problem now is to characterize what a good compromise would be. And this is where adapting Nash’s work on practical bargaining problems might help. I will write his assumptions informally and adapted to the epistemic scenario.

First, a “Pareto” assumption: x in S is a good compromise (a solution to the bargaining problem) only if there's no y in S such that everyone expects y to be more accurate than x.

Second, “contraction consistency”: if you have a bargaining problem (d,T) which differs from (d,S) only by eliminating some candidates for team credence, then if x is a good compromise in S and x is within T, x is a good compromise within T. Eliminating some alternatives that are not selected doesn't change what a good team credence is, unless it eliminates that credence itself!

A third assumption concerns symmetric bargaining situations specifically. Let S* be the set of expected accuracy profiles generated by S, i.e. the set of n-tuples whose ith element is the expected accuracy of a candidate team credence by the ith person's lights. A symmetric bargaining situation is a very special one where the set of candidates as a whole looks the same from everyone's perspective—S* is invariant under permutations of the group members (and the same goes for the threat point d). This third assumption says that in this special symmetrical case, the epistemic utility for each person of a good compromise will be the same. No asymmetry out without asymmetry in!

The final assumption is an interesting one. It says, essentially, that the character of the bargaining solution cannot depend on certain aspects of individuals' evaluations of the candidates. Formally, it is this: if the ith agent evaluates credences not by accuracy, but by accuracy*, where accuracy* is a positive affine transformation of accuracy (e.g. takes the form a.accuracy(c)+b, a>0), then the identity of the bargaining solution is essentially unchanged. Rather than the original profile of epistemic utilities associated with each potential team credence, the profile of the solution will now have a different number in the ith spot–the image under the affine transformation of what was there originally. The underlying team credence that is the solution remains the same (that's the real content of the assumption), but its evaluation by the ith member, as you'd expect, is transformed with the move from accuracy to accuracy*.

There's a metaphysical assumption about accuracy that would entail this. It is that epistemic utility or (expected) accuracy itself is a measure which (for each person) is invariant under positive scaling and addition of constants, i.e. affine transformation. On this conception, there is no good sense to be made of questions like “is the accuracy of this credal state greater than zero?”, though there is decent sense to be made of questions like “does this credal state have accuracy greater than the least possible accuracy value?”. It allows us to ask and answer questions like: is the difference in accuracy between credal states a and b greater than that between c and d? And crucially: is the accuracy (by i's lights) of credal state a greater than that of b at every world? But also crucially, on the reading in play here, there would not be any good sense of interpersonal accuracy comparisons. There is no good question about whether I rank a credal state c as more accurate than you do.

This is a little strange, perhaps. The absence of meaningful interpersonal comparisons is almost unintelligible if you think of accuracy as some objective feature of the relation between credences and truth values, a value that is the same for all people. But suppose accuracy (relative to a person) is an aspect of the way that the person values the true beliefs of others. Then each of us might value accuracy, but have idiosyncratic tradeoffs between accuracy and other matters. I, a scholar, care about accuracy much more than mere practical benefits. You, a knave, weight practical benefits more heavily. That gives one a sense about why interpersonal comparisons are not automatic (e.g. we should not simply assume that the epistemic value of having credence 1 in a truth is the same for you as me, even if for both of us it is maximal so far as accuracy goes). It is orthodoxy, in some circles, that ordinary utilities do not allow meaningful interpersonal comparisons—the thinking being that there is no basis for eliciting such comparisons in the choices, or for settling a scale or zero point. Once we get out of the mindset on which comparisons are easy and automatic, then it seems to me that there’s no obvious reason to insist on interpersonal epistemic utility/accuracy comparabilities.

If you accept that the scale and zero-point of accuracy for each person reflect mere “choices of unit”, then the fourth and final assumption above about a good compromise credence follows automatically—how to solve the bargaining problem shouldn't turn on choices of units in which we express what one of us has at stake. So the “absolute expected accuracy” of a candidate compromise credence from my perspective shouldn't matter for the selection of a team credence. Instead, factors with real content will matter, which are things such as: the patterns in relative differences in expected accuracy between the available candidates, from a single perspective.

Putting this all together, Nash's proof allows us to identify a unique solution to the bargaining problem (and it's nontrivial that there is any solution: we are very close here to assumptions that lead to Arrow's impossibility results). A team credence which meets these conditions must maximize a certain product, where the elements multiplied together are the differences, for each of us, between the expected accuracy of c and the expected accuracy of our personal breaking point. Given we are making the invariance assumption, and so can choose our “zero points” for epistemic utility arbitrarily and independently, it is natural to choose a representation on which we each set the epistemic utility of our personal breaking point to zero. On that representation, the team credence meeting the above constraints must maximize the product of each team member's expected accuracy.
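In symbols, writing EA_i(c) for the expected accuracy of candidate team credence c by the ith member's lights, and c_i for that member's breaking-point credence from above (a restatement of what was just said, in this post's notation):

```latex
c^{*} \;=\; \arg\max_{c \in S} \; \prod_{i=1}^{n} \bigl( EA_i(c) - EA_i(c_i) \bigr)
```

With each member's breaking-point utility normalized to zero, this is just: maximize \prod_i EA_i(c) over S.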

(How does this work? You can find the proof (for the case of a two-membered group) on p.145ff of Gaertner's primer on social choice theory, and it's a lovely geometrical result. I won't go through it here, but basically, you take the image in multidimensional expected-accuracy space of your set of options, and use the invariance and contraction consistency assumptions to show that the solution to a given bargaining problem is equivalent to the solution of another one that takes a particularly neat symmetric form (and pick out from it what the Nash maximize-the-product rule predicts to be the solution in that case). And then you can use the symmetry and Pareto assumptions to show that the prediction is correct. It's very elegant.)

So I think this all makes sense under the epistemic interpretation, and that bargaining theory is another place where decision theory, broadly construed, can bear an epistemic interpretation. I haven’t given you yet a formula for calculating the compromise credence itself—the one that satisfies Nash’s criterion. Perhaps in the next post….

Let me step back finally to make a few points about the assumptions. First, Pareto seems the only really obvious one of the constraints. Symmetry is an interesting principle, but where some of the group are experts, and others are not, then perhaps it looks implausible. Maybe the solution should weight the valuations of experts (expected utility by their lights) higher. On the other hand, in a situation where information has been shared in advance, non-experts have already factored in the testimony of experts as best they can, so factoring this into the compromise as well might be double-counting.

Another point about symmetry is that (if we think of expected accuracy as a distance between credence functions) what it tells us is that in certain special circumstances, we should pick a compromise that is equidistant between all agents' credences. But notice that picking an equidistant point may not minimize the aggregate distance to the group members' credences. Think of a situation where we have an N+1 membered group, N of whom have the same credence, but one dissents, and which meets the symmetry condition. You might think that a good compromise should be nearer the credal state that all but one of the group already have. But no: symmetry tells us that a good compromise equalizes epistemic utility in this scenario, rather than favouring the points “nearer” to most of the group. In the symmetrical setting where every group member has either credence a or credence b, it doesn't matter what the relative numbers favouring a vs. b are: the solution is the same. This might seem odd! But remember two things: (1) the natural alternative thought is that we should, utilitarian-style, care about the aggregate epistemic utility. But since invariance tells us that there are no meaningful interpersonal comparisons (not even of utility differences), “aggregate epistemic utility” or “minimizing aggregate distance” isn't well-defined in the first place. (2) the conception of a bargaining problem is one where every individual has a veto, and so can bring about the threat-point. So while the N people who agree might think they “outvote” the last member, a veto from the last member destroys things for everyone, just as much as if many were disenfranchised. This might warm one up to the equalization in symmetry. Still, in view of (1), my own thought is that symmetry is really a principle that is most plausible if you already buy into invariance, rather than something that stands alone.

Contraction consistency seems pretty plausible to me, but has been the focus of investigation in the social choice literature, so there is an interesting project out there of exploring the consequences of tweaking it under the epistemic interpretation.

Finally, what of invariance? Well, a lot of my previous posts on these matters (as with Pettigrew's work) started from the assumption that this is false. There, we assume that accuracy scores are comparable between different individuals, even if they disagree on what the expected accuracy of a given credal state is. But it's a really good question whether this was a reasonable assumption! So I think we can view the dialectic as follows: either accuracy/expected accuracy/epistemic utility is interpersonally comparable or it isn't. If it is, then e.g. compromise by minimizing the aggregate distance between the compromise point and the individual credences will be a great candidate for being the right recipe (depending on what accuracy measure you use, as Pettigrew shows, this could lead to compromise by arithmetical or geometric averaging). If epistemic utility is not interpersonally comparable, then we have a new recipe for picking a compromise credence available here, defined relative to the “threat” or “breaking point” profile among the group.

Lastly, this post has focused on accuracy and compromise credence. What about the combined credal-utility states, evaluated by distance from truth/ideal utility, that I've discussed in previous posts? I'd like to extend the same results to them (and note, this is not to return to the original Nash dialectic, which is about bargaining for the best “lottery over social states”, e.g. acts, in light of utilities, not bargaining for utility-involving mental states in light of their expected “correspondence with ideal utilities”). Along with getting a concrete formula for Nash-compromise credences, that's work for another time.