Maximizing similarity and charity: redux

This is a quick post (because it’s the last beautiful day of the year). But in the last post, I was excited by the thought that a principle of epistemic charity that told you to maximize self-similarity in interpretation would correspond to a principle of metaphysical charity in which the correct belief/desire interpretation of an individual maximized knowledge, morality, and other ideal characteristics.

That seemed nice, because similarity-maximization seemed easier to defend as a reliable practical interpretative principle than maximizing morality/knowledge directly. Similarity-maximization seems to presuppose only that interpreter and interpretee are (with high enough objective probability) cut from the same cloth. A practical knowledge/morality maximization version of charity, on the other hand, looks like it has to get into far more contentious background issues.

But I think this line of thought has a big problem. It's based on the thought that the facts about belief and desire are those that the ideal interpreter would attribute. If the ideal interpreter is an omniscient saint (and let's grant that this is built into the way we understand the idealization) then similarity-maximization will make the ideal interpreter choose theories of any target that make them as close to an omniscient saint as possible—i.e. maximize knowledge and morality.

Alright. But the thing is that similarity maximization as practiced by ordinary human beings is reliable, if it is, because (with high enough probability) we resemble each other in our flaws as well as our perfections. My maximization of Sally’s psychological similarity to myself may produce warranted beliefs because I’m a decent sample of human psychology. But a hypothetical omniscient saint is not even hypothetically a decent sample of human psychology. The ideal interpreter shouldn’t be maximizing Sally’s psychological similarity to themself, but rather her similarity to some representative individual (like me).

Now, you might still get an interesting principle of metaphysical charity out of similarity-maximization, even if you have to make it agent-relative by having the ideal interpreter maximize similarity to x, for some concrete individual x (if you like, this ideal interpreter is x's ideal interpretive advisor). If you have this relativization built into metaphysical charity, you will have to do something about the resulting dangling parameter—maybe go for a kind of perspectival relativism about psychological facts, or try to generalize this away as a source of indeterminacy. But it's not the morality-and-knowledge maximization I originally thought resulted.

I need to think about this dialectic some more: it's a little complicated. Here's another angle on the issue. You could just stick with characterizing "ideal interpreter" as I originally did, as omniscient saints going through the same de se process as we ourselves do in interpreting others, and stipulate that belief/desire facts are what those particular ideal interpreters say they are. A question, if we do this, is whether this would undercut a practice of flesh and blood human beings (FAB) interpreting others by maximizing similarity to themselves. Suppose FAB recognizes two candidate interpretations of a target—and similarity-to-FAB ranks interpretation A over B, whereas similarity-to-an-omniscient-saint ranks B over A. In that situation, won't the stipulation about what fixes the belief/desire facts mean that FAB should go for B, rather than A? But similarity-maximization charity would require the opposite.

One issue here is whether we could ever find a case instantiating this pattern which doesn't have a pathological character. For example, if cases of this kind needed FAB to identify a specific thing that the omniscient agent knows but that they themselves do not know—then they'd be committed to the Moorean proposition "the omniscient saint knows p, but I do not know p". So perhaps there's some more room to explore whether the combination of similarity-maximization and metaphysical charity I originally put forward could be sustained as a package-deal. But for now I think the more natural pairing with similarity-maximization is the disappointingly relativistic kind of metaphysics given above.

From epistemic to metaphysical charity

I’ll start by recapping a little about epistemic charity. The picture was that we can get some knowledge of other minds from reliable criterion-based rules. We become aware of the behaviour-and-circumstances B of an agent, and form the belief that they are in S, in virtue of a B-to-S rule we have acquired through nature or nurture. But this leaves a lot of what we think we ordinarily know about other minds unexplained (mental states that aren’t plausibly associated with specific criteria). Epistemic charity is a topic-specific rule (a holistic one) which takes us from the evidence acquired e.g. through criterion-based rules like the above, to belief and desire ascriptions. The case for some topic-specific rule will have to be made by pointing to problems with topic-neutral rules that might be thought to do the job (like IBE). Once that negative case is made we can haggle about the character of the subject-specific rule in question.

If we want to make the case that belief-attributions are warranted in the Plantingan sense, the central question will be whether (in worlds like our own, in application to the usual targets, and in normal circumstances) the rule of interpreting others via the charitable instruction to “maximize rationality” is a reliable one. That’s surely a contingent matter, but it might be true. But we shouldn’t assume that just because a rule like this is reliable in application to humans, we can similarly extend it to other entities—animals and organizations and future general AI.

There’s also the option of defending epistemic charity as the way we ought to interpret others, without saying it leads to beliefs that are warranted in Plantinga’s sense. One way of doing that would be to emphasize and build on some of the pro-social aspects of charity. The idea is that we maximize our personal and collective interests by cooperating, and defaulting to charitable interpretation promotes cooperation. One could imagine charity being not very truth-conducive, and these points about its pragmatic benefits obtaining—especially if we each take advantage of others’ tendency to charitably interpret us by hiding our flaws as best we can. Now, if we let this override clear evidence of stupidity or malignity, then the beneficial pro-social effects might be outweighed by constant disappointment as people fail to meet our confident expectations. So this may work best as a tie-breaking mechanism, where we maximize individual and collective interest by being as pro-social as possible under constraints of respecting clear evidence.

I think the strongest normative defence of epistemic charity will have to mix and match a bit. It may be that some aspects of charitable interpretation (e.g. restricting the search space to “theories” of other minds of a certain style, e.g. broadly structurally rational) look like tempting targets to defend as reliable, in application to creatures like us. But as we give the principles of interpretation-selection greater and greater optimism bias, they get harder to defend as reliable, and it’s more tempting to reach for a pragmatic defence.

All this was about epistemic charity, and is discussed in the context of flesh and blood creatures forming beliefs about other minds. There’s a different context in which principles of charity get discussed, and that’s in the metaphysics of belief and desire. The job in that case is to take a certain range of ground-floor facts about how an agent is disposed to act and the perceptual information available to them (and perhaps their feelings and emotions too) and then select the most reason-responsive interpretation of all those base-level facts. The following is then proposed as a real definition of what it is for an agent to believe that p or desire that q: it is for that belief or desire to be part of the selected interpretation.

Metaphysical charity says what it is for someone to believe or desire something in the first place, doesn’t make reference to any flesh and blood interpreter, and a fortiori doesn’t have its base facts confined to those to which flesh and blood interpreters have access. But the notable thing is that (at this level of abstract definition) it looks like principles of epistemic and metaphysical charity can be paired. Epistemic charity describes, inter alia, a function from a bunch of information about acts/intentions and perceivings to overall interpretations (or sets of interpretations, or credence distributions over sets of interpretations). It looks like you can generate a paired principle of metaphysical charity out of this by applying that function to a particular rich starting set: the totality of (actual and counterfactual) base truths about the intentions/perceivings of the target. (We’ll come back to slippage between the two on the way).
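To fix ideas, here is a minimal sketch of that pairing. Everything in it (the fits and favourability parameters, the data types) is a hypothetical placeholder rather than a serious theory; the point is just that one and the same selection function gets fed different evidence bases.

    from typing import Callable

    Evidence = frozenset        # stand-in for facts about perceivings/intendings
    Interpretation = frozenset  # stand-in for a total belief/desire theory

    def charity(evidence: Evidence,
                candidates: list[Interpretation],
                fits: Callable[[Interpretation, Evidence], bool],
                favourability: Callable[[Interpretation], float]) -> Interpretation:
        """Select the most favourable candidate interpretation that fits the evidence."""
        fitting = [i for i in candidates if fits(i, evidence)]
        return max(fitting, key=favourability)

    # Epistemic charity: apply `charity` to the limited evidence a flesh and
    # blood interpreter actually has. The paired metaphysical charity: apply
    # the very same function to the totality of (actual and counterfactual)
    # base truths about the target.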

It’s no surprise, then, that advocates of metaphysical charity have often framed the theory in terms of what an “ideal interpreter” would judge. We imagine a super-human agent whose “evidence base” is the totality of base facts about our target, and ask what interpretation (or set of interpretations, or credences over sets of interpretations) they would come up with. An ideal interpreter implementing a maximize-rationality principle of epistemic charity would pick out the interpretation which maximizes rationality with respect to the total base facts, which is exactly what metaphysical charity selected as the belief-and-desire fixing theory. (What happens if the ideal interpreter would deliver a set of interpretations, rather than a single one? That’d correspond to a tweak on metaphysical charity where agreement among all selected interpretations suffices for determinate truth. What if it delivers a credence distribution over such a set? That’d correspond to a second tweak, where the degree of truth is fixed by the ideal interpreter’s credence).

You could derive metaphysical charity from epistemic charity by adding (some refinement of) an ideal-interpreter bridge principle: saying that what it is for an agent to believe that p/desire that q is for it to be the case that an ideal interpreter, with awareness of all and only a certain range of base facts, would attribute those attitudes to them. Granted this, and also the constraint that any interpreter ought to conform to epistemic charity, anything we say about epistemic charity will induce a corresponding metaphysical charity. The reverse does not hold: it is perfectly consistent to endorse metaphysical charity, but think that epistemic charity is all wrong. But with the ideal-interpreter bridge set up, whatever we say about epistemic charity will carry direct implications for the metaphysics of mental content.

Now metaphysical charity relates to the reliability of epistemic charity in one very limited respect. Given metaphysical charity, epistemic charity is bound to be reliable in one very restricted range of cases: a hypothetical case where a flesh and blood interpreter has total relevant information about the base facts, and so exactly replicates the ideal interpreter whose counterfactual judgements fix the relevant facts. Now, these cases are pure fiction: they do not arise in the actual world. And they cannot be straightforwardly used as the basis for a more general reliability principle.

Here’s a recipe, which I owe to Ed Elliott, that illustrates this. Suppose that our total information about x is Z, which leaves open two total patterns of perceivings/intendings, A and B. Ideal interpretation applied to A delivers interpretation 1; applied to B, it delivers interpretation 2. 1 is much more favourable than 2. Epistemic charity applied to limited information Z tells us to attribute 1. But there’s nothing in the ideal interpreter/metaphysical charity picture that tells us A/1 is more likely to come about than B/2.
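In the terms of the toy sketch above (again, every name here is a hypothetical stand-in), the recipe looks like this: charity applied to Z selects theory 1, but nothing stops the base facts from being B, in which case metaphysical charity delivers theory 2.

    # Continuing the toy sketch above (hypothetical stand-ins throughout).
    Z = frozenset({"z"})         # our total information about x
    A = frozenset({"z", "a"})    # one total pattern of perceivings/intendings
    B = frozenset({"z", "b"})    # the other pattern that Z leaves open

    theory1 = frozenset({"favourable interpretation"})
    theory2 = frozenset({"unfavourable interpretation"})

    def fits(theory, evidence):
        # Hypothetical fit relation: theory 1 is ruled out by the b-facts,
        # theory 2 by the a-facts; both fit the limited information Z.
        if theory == theory1:
            return "b" not in evidence
        return "a" not in evidence

    favourability = lambda t: 1.0 if t == theory1 else 0.0

    # Epistemic charity on the limited information Z attributes theory 1...
    assert charity(Z, [theory1, theory2], fits, favourability) == theory1
    # ...but if the base facts are B, metaphysical charity delivers theory 2,
    # and nothing in the picture says A is more likely than B.
    assert charity(B, [theory1, theory2], fits, favourability) == theory2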

On the other hand, consider the search-space restrictions—say to interpretations that make a creature rational, or rational-enough. If we have restricted the search space in this way for any interpreter, then we have an ex ante guarantee that whatever the ideal interpreter comes up with, it’ll be an interpretation within their search space, i.e. one that makes the target rational, or rational-enough. So constraints on the interpretive process will be self-vindicating, if we add metaphysical charity/ideal interpreter bridges to the package, though as we saw, maximizing aspects of the methodology will not be.

I think it’s very tempting for fans of epistemic charity to endorse metaphysical charity. It’s not at all clear to me whether fans of metaphysical charity should take on the burden of defending epistemic charity. If they do, then the key question will be the normative status of any maximizing principles they embrace as part of the characterization of charity.

Let me just finish by emphasizing both the flexibility and the limits to this package deal. The flexibility comes because you can understand “maximize reasonableness within search-space X” or indeed “maximize G-ness within search-space X” in all sorts of ways, and the bulk of the above discussion will go through. That means we can approach epistemic charity by fine-tuning for the maximization principle that allows us the best chance of normative success. On the other hand, there are some approaches that are very difficult to square with metaphysical charity or ideal interpreters. I mentioned in the previous post a “projection” or “maximize similarity to one’s own psychology” principle, which has considerable prima facie attraction—after all, the idea that humans have quite similar psychologies looks like a decent potential starting point. It’ll be complex translating that into a principle of metaphysical charity. What psychology would the ideal interpreter have, similarity of which must be maximized?

Well, perhaps we can make this work: perhaps the ideal interpreter, being ideal, would be omniscient and saintly? If so, perhaps this form of epistemic charity would predict a kind of knowledge-and-morality-maximization principle in the metaphysical limit. So this is a phenomenon worth noting: metaphysical knowledge-and-morality maximization could potentially be derived either from epistemic similarity-maximization or epistemic knowledge-and-morality maximization. The normative defences these epistemologies of other minds call for would be very different.

Epistemic charity as proper function

Our beliefs about the specific beliefs and desires of others are not formed directly on the basis of manifest behaviour or circumstances, simply because in general individual beliefs and desires are not paired up in a one-to-one fashion with specific behaviour/circumstances (that is what I took away from the circularity objection to behaviourism). And with Plantinga, let’s set aside the suggestion that we base such attributions on an inference by IBE. As discussed in the last post, the Plantingan complaint is that IBE is only somewhat reliable, and (on a Plantingan theory) this means it could only warrant a rather tenuous, unfirm belief that the explanation is right.

(Probably I should come back to that criticism—it seems important to Plantinga’s case that he thinks there would be close competitors to the other-minds hypothesis, if we were to construe attributions as the result of IBE. The case for the comparative lack of reliability of IBE is very much stronger when we’re considering picking one out of a bunch of close competitor theories than when, e.g., there’s one candidate explanation that stands out a mile from the field, particularly when we remember we are interested only in reliability in normal circumstances. But surely there are some scientific beliefs that we initially form tentatively by an IBE which we end up believing very firmly, when the explanation they are a part of has survived a long process of testing and confirmation. So this definitely could do with more examination, to see if Plantinga’s charge stands up. It seems to me that Wright’s notion of wide vs. narrow cognitive roles might be helpful here—the thought being that physicalistic explanatory hypotheses we might arrive at by IBE tend to have multiple manifestations and so admit of testing and confirmation in ways that are not just “more of the same”. Think: Brownian motion vs. statistical mechanical phenomena as distinct manifestations of an atomic theory of matter.)

What I’m now going to examine is a candidate solution to the second problem of other minds that can sit within a broadly Plantingan framework. Just as with criterion-based inferential rules that on the Plantingan account underpin ascriptions of pain, intentions, perceivings, and the like, the idea will be that we have special purpose belief forming mechanisms that generate (relatively firm) ascriptions of belief and desire. Unlike the IBE model, we’re not trying to subsume the belief formations within some general purpose topic-neutral belief forming mechanism, so it won’t be vulnerable in the way IBE was.

What is the special purpose belief forming mechanism? It’s a famous one: charitable interpretation. The rough idea is that you attribute the most favourable among the available overall interpretations that fits with the data you have about the person. In this case, the “data” may be all those specific criterion-based ascriptions—so stuff like what the person sees, how they are intentionally acting, what they feel, and so on. In a more full-blown version, we would have to factor in other things (e.g. the beliefs they express through language and other symbolic acts; the influence of inductive generalizations made on the basis of previous interpretations, etc).

What is it for an interpretation to be “more favourable” than another? And what is it for a belief-desire interpretation to fit with a set of perceivings, intentions, feelings etc? For concreteness, I’ll take the latter to be fleshed out in terms of rational coherence between perceptual input and belief change and means-end coherence of beliefs and desires with intentions, and the like—structural rationality constraints playing the role that in IBE, formal consistency might play. And I’ll take favourability to be cashed out as the subject being represented as favourably as is possible—believing as they ought, acting on good reasons, etc.

Now, if this is to fit within the Plantingan project, it has to be the case that there is a component of our cognitive system that goes in for charitable interpretation and issues in (relatively firm) ascriptions of mental states to others. Is that even initially plausible? We all have experience of being interpreted uncharitably, and complaining about it. We all know, if we’re honest, that we are inclined to regard some people as stupid or malign, including in cases where there’s no very good direct evidence for that.

I want to make two initial points here. The first is that we need to factor in some of the factors mentioned earlier in order to fairly evaluate the hypothesis here. Particularly relevant will be inductive generalizations from previous experience. If your experience is that everyone you’ve met from class 22B is a bully who wants to cause you pain, you might reasonably not be that charitable to the next person you meet from class 22B, even if the evidence about that person directly is thin on the ground. I’d expect the full-dress version of charity to instruct us to form the most favourable attributions consistent with those inductive generalizations we reasonably hold onto (clearly, there’ll be some nuance in spelling this out, since we will want to allow that sufficient acquaintance with a person allows us to start thinking of them as a counterexample to generalizations we have previously held). For similar reasons, an instruction to be as charitable as possible won’t tell you to assume that every stranger you meet is saintly and omniscient, and merely behaving in ways that do not manifest this out of a concern not to embarrass you (or some such reason). For starters, it’s somewhat hard to think of decent ideas about why omniscient saints would act as everyday people do (just ask those grappling with the problem of evil how easy this is), and for seconds, applied to those people with whom we have most interaction, such hypotheses wouldn’t stand much scrutiny. We have decent inductive grounds for thinking that, generically, people’s motives and information lie within the typical human band. What charity tells us to do is pick the most favourable interpretation consistent with this kind of evidence. (Notice that even if these inductive generalizations eventually take most of the strain in giving a default interpretation of another, charity is still epistemically involved insofar as (i) charity was involved in the interpretations which form the base from which the inductive generalization was formed; and (ii) we are called on-the-fly to modify our inductively-grounded attributions when someone does something that doesn’t fit with them).

Further, the hypothesis that we have a belief-attributing disposition with charity as its centrepiece is quite consistent with this being defeasible, and quite often defeated. For example, here’s one way human psychology might be. We are inclined by default to be charitable in interpreting others, but we are also set up to be sensitive to potential threats from people we don’t know. Human psychology incorporates this threat-detection system by giving us a propensity to form negative stereotypes of outgroups on the basis of beliefs about bad behaviour or attitudes of salient members of those outgroups. So when these negative stereotypes are triggered, this overrides our underlying charitable disposition with some uncharitable default assumptions encoded in the stereotype. (In Plantingan terms, negative stereotype formation would not be a part of our cognitive structure aimed at truth, but rather one aimed at pragmatic virtues, such as threat-avoidance). Only where the negative stereotypes are absent would we then expect to find the underlying signal of charitable interpretation.

So again: is it even initially plausible that we actually engage in charitable interpretation? The points above suggest we should certainly not test this against our practice in relation to members of outgroups that may be negatively stereotyped. So we might think about this in application to friends and family. As well as being in-groups rather than out-groups, these are also cases where we have a lot of direct (criterion-based) evidence about their perceivings, intendings, feelings over time, so cases where we would expect to be less reliant on inductive generalizations and the like. I think in those cases charity is at least an initially plausible candidate as a principle constraining our interpretative practice. As some independent evidence of this, we might note Sarah Stroud’s account of the normative commitments constitutive of being a friend, which includes an epistemic bias towards charitable interpretation. Now, her theory of this says that it is the special normatively significant relation of friendship that places an obligation of charity upon us, and that is not my conjecture. But insofar as she is right about the phenomenology of friendship as including an inclination to charity, then I think this supports the idea that charitable interpretation is at least one of our modes of belief attribution. It’s not the cleanest case—because the very presence of the friendship relation is a potential confound—but I think it’s enough to motivate exploring the hypothesis.

So suppose that human psychology does work roughly along the lines just sketched, with charitable ascription the default, albeit defeasible and overridable. If this is to issue in warranted ascriptions within a Plantingan epistemology, then not only does charitable interpretation have to be a properly-functioning part of our cognitive system, it has to be a part that’s aimed at truth, and which reliably issues in true beliefs. Furthermore, it’d have to issue in true beliefs very reliably, if it is, by Plantingan lights, to warrant our firm beliefs about the mental lives of others.

Both aspects might raise eyebrows. There are lots of things one could say in praise of charitable interpretation that are fundamentally pragmatic in character. Assuming the best of others is a pro-social thing to do. Everyone is the hero in their own story, and they like to learn that they are heroes in other people’s stories too. So expressing charitable interpretations of others is likely to strengthen relationships, enable cooperation, and prompt reciprocal charity. All that is good stuff! It might be built up into an ecological rationale for building charitable interpretation into one’s dealings with in-group members (more generally, positive stereotypes), just as threat-avoidance might motivate building cynical interpretation into one’s dealings with out-group members (more generally, negative stereotypes). But if we emphasize this kind of benefit of charitable interpretation, we are building a case for a belief forming mechanism that aims at sociability, not one aimed at truth. (We’re also undercutting the idea that charity is a default that is overridden by e.g. negative stereotypes: it suggests instead that different stances in interpretation are tied to different relationships.)

It’s easiest to make the case that an interpretative disposition that is charitable is aimed at truth if we can make the case that it is reliable (in normal circumstances). What do we make of that?

Again, we shouldn’t overstate what it takes for charity to be reliable. We don’t have to defend the view that it’s reliable to assume that strangers are saints, since charity doesn’t tell us to do that (it wouldn’t make it to the starting blocks of plausibility if it did). The key question will be whether charitable interpretation will be a reliable way of interpreting those with whom we have long and detailed acquaintance (so that the data that dominates is local to them, rather than inductive generalizations). The question is something like the following: are humans generally such that, among the various candidate interpretations that are structurally rationally compatible with their actions, perceptions, feelings (of the kind that friends and family would be aware of), the most favourable is the truest?

Posed that way, that’s surely a contingent issue—and something to which empirical work would be relevant. I’m not going to answer it here! But what I want to say is that if this is a reliable procedure in the constrained circumstances envisaged, then the prospects start to look good for accommodating charity within a Plantingan setup.

Now, even if charity is reliable, there remains the threat it won’t be reliable enough to vindicate the firmness of the confidence I have that family and strangers on the street believe that the sun will rise tomorrow, and so forth. (This is to avoid the analogue of the problem Plantinga poses for inference to the best explanation.) This will guide the formulation of exactly how we characterize charity—it better not just say that we endorse the most charitable interpretation that fits the relevant data, with the firmness of that belief unspecified, but must also say something about the firmness of such beliefs. For example, it could be that charity tells us to distribute our credence over interpretations in a way that respects how well they rationalize the evidence available so far. In that case, we’d predict that beliefs and desires common to almost all favourable candidates are ascribed much more firmly than beliefs and desires which are part of the very best interpretation, but not of nearby candidates. And we’d make the case that e.g. a belief that the sun will rise tomorrow is going to be part of almost all such candidates. (If we make this move, we need to allow the friend of topic-neutral IBE to make a similar one. Plantinga would presumably say that many of the candidates to be “best explanations” of data, when judged on topic neutral grounds, are essentially sceptical scenarios with respect to other minds. So I think we can see how this response could work here, but not in the topic-neutral IBE setting.)
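Here is a toy numerical sketch of that credal version (the interpretations and scores are entirely made up; the mechanics are what matter): credence is spread over candidate interpretations in proportion to how well they rationalize the evidence, and the firmness of an ascription is the total credence of the candidates that include it.

    # Toy sketch of the credal version of charity (all numbers hypothetical).
    # Each candidate interpretation: (beliefs/desires ascribed, how well it
    # rationalizes the evidence gathered so far).
    candidates = [
        ({"sun will rise tomorrow", "wants help"}, 0.9),       # the very best candidate
        ({"sun will rise tomorrow", "wants attention"}, 0.7),
        ({"sun will rise tomorrow"}, 0.4),                     # nearby, less good
    ]

    total = sum(score for _, score in candidates)
    credences = [(ascribed, score / total) for ascribed, score in candidates]

    def firmness(p):
        """Total credence of the candidate interpretations that ascribe p."""
        return sum(c for ascribed, c in credences if p in ascribed)

    print(firmness("sun will rise tomorrow"))  # 1.0: common to all candidates
    print(firmness("wants help"))              # 0.45: only on the best candidate

On this way of doing things, ascriptions shared across almost all favourable candidates come out maximally firm even though no single interpretation is certain.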

Three notes before I finish. The first is that even if charity as I characterized it (as a kind of justification-and-reason maximizing principle) isn’t vindicated as a special purpose interpretive principle, it illustrates the way that interpretive principles with very substantial content could play an epistemological role in solving the other problem of other minds. For example, a mirror-image principle would be to pick the most cynical interpretation. Among creatures who are naturally malign dissemblers, that may be reliable, and so a principle of cynicism could be vindicated on exactly parallel lines. And if in fact all humans are pretty similar in their final desires and general beliefs, then a principle of projection, where one by default assumes that other creatures have the beliefs and desires that you, the interpreter, have yourself, might be reliable in the same way. And so that too could be given a backing. (Note that this would not count as a topic-neutral inference by analogy. It would be a topic-specific inference concerned with psychological attribution alone, and so could in principle issue in much firmer beliefs than a general purpose mechanism which has to avoid false positives in other areas.)

Second, the role for charity I have set out above is very different from the way that it’s handled by e.g. Davidson and the Davidsonians (in those moments where they are using it as an epistemological principle, rather than something confined to the metaphysics of meaning). This kind of principle is contingent, and though we could insist that it is somehow built into the very concept of “belief”, that would just be to make the concept of belief somewhat parochial, in ways that Davidsonians would not like.

The third thing I want to point out is that if we think of epistemic charity as grounded in the kind of considerations given above, we should be very wary about analogical extensions of interpretative practices to creatures other than humans. For it could be that epistemic charity is reliable when restricted to people, but utterly unreliable when applied, for example, to Klingons. And if that’s so, then extending our usual interpretative practice to a “new normal” involving Klingons won’t give us warranted beliefs at all. More realistically, there’s often a temptation to extend belief and desire attributions to non-human agents such as organizations, and perhaps, increasingly, AI systems. But if reliance on charity is warranted only because of something about the nature of the original and paradigmatic targets of interpretation (humans mainly, and maybe some other naturally occurring entities such as animals and naturally formed groups) that makes it reliable, then it’ll continue to be warranted in application to these new entities only if they have a nature which also makes it reliable. It’s perfectly possible that the incentive structures of actually existing complex organizations are just not such that we should “assume the best” of them, as we perhaps should of real people. I don’t take a stand on this—but I do flag it up as something that needs separate evaluation.

Plantinga on the original problem of other minds and IBE

The other problem of other minds was the following. Grant that we have justification for ascribing various “manifested” mental states to others. Specifically, we have a story about how we are justified in ascribing at least the following: feelings like pain, emotions like joy or fear, perceivings, intendings. Many of these have intentional contents, and we suppose that our story shows how we can be justified (in the right circumstances) in ascribing states of these types for a decent range of contents, though perhaps not all. But such a story, we are assuming, is not yet an epistemic vindication of the total mental states we ascribe to others. Specifically, we ascribe to each other general beliefs about matters beyond the here and now, final desires for rather abstractly described states of affairs (though these two examples are presumably just the tip of the iceberg). So the other problem of other minds is that we need to explain how our justification for ascribing feelings, perceivings, intendings, emotions, extends to justification for all these other states.

The epistemic puzzle is characterized negatively: the states at issue are those mental states to which a solution to the original problem of other minds does not apply. And in approaching the other problem of other minds, ascriptions of mental states that are covered by whatever solution we have to the original problem of other minds will be a resource for us to wield. So before going on to the second problem, I want to fill in one solution to the first problem so we can see its scope and limits.

In Warrant and Proper Function, Plantinga addresses the epistemic problem of other minds. In the first section of chapter 4, he casts the net widely, as a problem of accounting for the “warrant” of beliefs ascribing everything from “being appeared to redly” to “believing that Moscow, Idaho, is smaller than its Russian namesake”. So the official remit covers both the original problem and the other problem of other minds, in my terms. But by the time he gets to section D, where his own view is presented, the goalposts have been shifted (mostly in the course of discussing Wittgensteinian “criteria”). By that point, the discussion is framed in terms of a pair of a mental state S and a description of associated criteria, “behaviour-and-circumstances” B, that constitute good but defeasible evidence for the mental state in question. After discussing this, Plantinga comments “So far [the Wittgensteinians] seem to be quite correct; there are criteria or something like them”. And so the question that Plantinga sets himself is to explain how an inference from B to S can leave us warranted in ascribing S, given that he has argued against backing it up with epistemologies based on analogy, abduction, or whatever the Wittgensteinians said.

Plantinga’s account is the following. First, “a human being whose appropriate faculties are functioning properly and who is aware of B will find herself making the S ascription (in the absence of defeaters)… it is part of the human design-plan to make these ascriptions in these circumstances… with very considerable firmness”. And so “if the part of the design plan governing these processes is successfully aimed at truth, then ascriptions of mental states to others will often have high warrant for us; if they are also true, they will constitute knowledge”. Here Plantinga is simply applying his distinctive “proper function” reliabilism. In short: for a belief to be warranted (=such that if its content is true, then it is known) is for it to be produced/sustained by a properly functioning part of a belief-forming system which has the aim of producing true beliefs, and which (across its designed-for circumstances) reliably succeeds in that aim.

Plantinga’s claims that our beliefs about other minds are warranted rely on various contingencies obtaining (on this he is very explicit). It will have to be that the others we encounter are on occasion in mental states like S. It will have to be that B is reliably correlated with S. It will have to be that human minds exhibit certain functions, that they are functioning properly, and that we are in the circumstances they are designed for. The teleology of the inferential disposition involved will have to be right, and the inferential disposition (and its defeating conditions) will have to be set up so as to extract reliably true belief formation out of the reliable correlations between B and S. Any of that can go wrong; but Plantinga invites us to accept that in actual, ordinary cases it is all in place.

The specific cases Plantinga discusses when defending the applicability of this proper function epistemology to the problem of other minds are those where our cognitive structure includes a defeasible inferential disposition taking us from awareness of behaviour-and-circumstances B to ascribing mental state S. That particular account has no application to any ascriptions of mental states S* that do not fit this bill: where there is no correlation with specific behaviour and circumstances B, or no inferential disposition reflecting that correlation (after all, to apply the Plantingan story, we need some “part” of our mental functioning which we can feed into the rest of the Plantingan story, e.g. evaluate whether that part of the overall system is aimed at truth). We can plausibly apply Plantinga’s account to pain-behaviour (pain); to someone tracking a red round object in their visual field (seeing a red round object); to someone whose arm goes up in a relaxed manner (raising their arm/intending to raise their arm). It applies, in other words, to the kind of “manifestable” mental states that in the last post I took to be in the scope of the original problem of other minds. But, again as mentioned there, it’s hard to fit general beliefs and final desires (not to mention long-term plans and highly specific emotions) into this model. If you tried to force them into the model, you’d have to identify specific behaviour-and-circumstantial “criteria” for attributing e.g. the belief that the sun will rise tomorrow to a person. But (setting aside linguistic behaviour, of which more in future posts) I say: there are no such criteria. Now, one might try to argue against me at this point, attempting to construct some highly conditional and complex disjunctive criteria of the circumstances in which it’d be appropriate to ascribe a belief that the sun will rise tomorrow, thinking through all the possible ways in which one might ascribe total belief-and-desire states which inter alia include this belief. But then I’ll point out that it would seem wild to assume that an inference with conditional and complex disjunctive antecedents will be in the relevant sense a “part” of our mental design. A criterial model is just the wrong model of belief and desire ascription, and I see little point in attempting to paper over that fact.

(Let me note as an aside the following: there may well be behavioural criteria which lead us to classify a person as a believer or desirer, someone who possesses general beliefs and final desires which inform her actions. That is quite different from positing behavioural criteria for specific general beliefs and final desires. It’s the latter I’m doubtful of.)

On the other hand, the Plantingan approach to the problem of other minds doesn’t have to be tied to the B-to-S inferences. Indeed, Plantinga says “Precisely how this works—just what our inborn belief-forming mechanisms here are like, precisely how they are modified by maturation and by experience and learning, precisely what role is played by nature as opposed to nurture—these matters are not (fortunately enough) the objects of this study”. So he’s clearly open to generalizing the account beyond the criterion-based inferential model.

But to leave things open at this point is more or less to simply assert that the other problem of other minds has a solution, without saying what that solution is. For example, you might think at this point that what’s going on is that we form beliefs about the manifest states of others on the basis of behavioural criteria, understood in the Plantingan way, and then engage in something like an inference to the best explanation in embedding these mentalistic “data” within a simple, strong overall theory of what the minds of others are like. One would then give a Plantingan “proper function” defence of inferring to (what is in fact) the best explanation of one’s data as a defeasible belief-forming method producing warranted beliefs. It would have to be a belief-forming method that was the proper functioning of a part of our cognitive systems, a part aimed at truth, a part that reliably secures truth in the designed-for circumstances, etc.

As it happens, Plantinga himself argues that inference to the best explanation will be unsuccessful in solving the problem of other minds. Let’s take a look at his arguments. The main claim is that “A child’s belief, with respect to his mother, that she has thoughts and feelings, is no more a scientific hypothesis, for him, than the belief that he himself has arms or legs; in each case we come to the belief in question in the basic way, not by way of a tenuous inference to the best explanation or as a sort of clever abductive conjecture. A much more plausible view is that we are constructed… in such a way that these beliefs naturally arise upon the sort of stimuli … to which a child is normally exposed.” Now, Plantinga offers to his opponents a fallback position, whereby they can claim that the child’s beliefs are warranted by the availability of an IBE inference that they do not actually perform (I guess that Plantinga himself ties questions of warrant more closely to the actual genesis of beliefs, but he’s live to the possibility that others might not do so). But he thinks this won’t work, because what we need to explain is the very strong warrant (strong enough for knowledge) that we have in ascriptions of mental states to others, and he thinks that the warrant extractable from an IBE won’t be nearly so strong. He thinks that there are “plenty of other explanatory hypotheses [i.e. other than the hypothesis that other persons have beliefs, desires, hopes, fears, etc] that are equally simple or simpler”. The example given is the explanatory hypothesis that I am the only embodied mind, and that a Cartesian demon gives me a strong inclination to believe that other bodies have minds. I think the best way of construing Plantinga’s argument here is that he’s saying that even if the Cartesian demon hypothesis is not as good as the other-minds hypothesis, if our only reason for dismissing it is the theoretical virtues of the latter hypothesis beyond simplicity and fitting-with-the-data, we’d be irresponsible unless we were live to new evidence coming in that’d turn the tables. So while we might have some kind of warrant in some kind of belief by IBE (that’s to be argued over by a comparison of the relative theoretical virtues of the explanatory hypotheses), we can already see we would be warranted only in a “tenuous” and not very firm belief that others have minds, comparable to the kind of nuanced and open-to-contrary-evidence beliefs we properly take towards scientific theories.

Let’s assume that this is a good criticism (I think it’s at least an interesting one). Does it extend to the second problem of other minds? Per Plantinga, we assume a range of criteria-based inferences to firm specific beliefs in a range of perceivings, intendings, emotions, and feelings, as well as to general classifications of others as believers and desirers. That leaves us with the challenge of spelling out how we get to specific general beliefs and final desires, and similar kinds of states. Could we see these as a kind of tenuous belief, like a scientific hypothesis? The thesis would not now be that a child would go wrong in the firmness of his beliefs that his mother has feelings, is a thinker, etc., for those are criterion-backed judgements. But he would go wrong if he were comparably firm in his ascription of general beliefs and final desires to her. I take it that while some of our (and a child’s) ascriptions of general beliefs and desires to others will be tenuously held, others, and particularly negative ascriptions, are as firm as any other. I’m as firmly convinced that my partner harbours no secret final desire to corrupt my soul, and that she believes that the sun will rise tomorrow, as I am that she is a believer at all, or someone who feels pain and emotions and sees the things around her. So I think if there’s merit to Plantinga’s criticism of the IBE model as a response to the original problem of other minds, it extends to using it as a response to the other problem of other minds.

The nice thing about appealing to a topic-neutral belief forming method like inference to the best explanation would be that we’d know exactly what we’d need to do to show that the ascriptions we arrive at are warranted, by Plantingan lights (namely, the route sketched a couple of paragraphs earlier). But the Plantingan worry about IBE is that it does not vindicate the kind of ascriptions that we in fact indulge in. This shows, I think, why Plantingans cannot ignore the problem of saying something more about the structures that underpin ascriptions of general beliefs, final desires and the like. We need there to be some part of our cognitive system issuing in the relevant mental state ascriptions which (like IBE) is reliable in normal circumstances but which (unlike IBE) reliably issues in the very ascriptions we find ourselves with. It’s not at all obvious what will fit the bill, and without that, we don’t have a general Plantingan answer to the problem of other minds.

Postscript: A question that arises, about the way I’m construing Plantinga’s criticism of IBE: suppose that we devised a method of belief formation IBE*, which is just like IBE but issues in *firmer* beliefs. What would go wrong? I think the Plantingan answer must be that IBE* isn’t a reliable enough method to count as producing warranted beliefs, if set up this way. In the intro to Warrant and Proper Function, Plantinga says: “The module of the design plan governing the production of that belief must be such that the statistical or objective probability of a belief’s being true, given that it has been produced in accord with that module in a congenial cognitive environment, is high. How high, precisely? Here we encounter vagueness again; there is no precise answer. It is part of the presumption, however, that the degree of reliability varies as a function of degree of belief. The things we are most sure of—simple logical and arithmetical truths, such beliefs as that I now have a mild ache in my knee (that indeed I have knees), obvious perceptual truths—these are the sorts of beliefs we hold most firmly, perhaps with the maximum degree of firmness, and the ones such that we associate a very high degree of reliability with the modules of the design plan governing their production”. And so the underlying criticism here of an IBE approach to the problem of other minds is that IBE isn’t reliable enough in our kind of environment to count as warranting very firm degrees of belief in anything. And so when we find very firm beliefs, we must look for some other warranting mechanism.

The other epistemic problem of other minds

The classic epistemic problem of other minds goes something like this. I encounter a person in the street, writhing on the floor, exhibiting paradigmatic pain-behaviour. Now, you might run up to help. But for me, whose mind naturally turns to higher things, it poses a question. Sure, I know that the person writhing on the ground is moving their limbs in a certain distinctive way. And I find myself forming the belief on that basis that they are in pain. But with what right do I form the belief? What justifies the leap from pain-behaviour to pain?

You might try to answer by pointing to past experience: on previous occasions when I’ve seen someone exhibiting pain-behaviour, they’ve turned out to be in pain. So I’ve got good inductive grounds for thinking that pain-behaviour signals pain. That sounds reasonable, except—what justified me on those earlier occasions in thinking the pain-behaviours were accompanied by pain? I didn’t directly feel the pain myself, after all (like I might have checked to see if smoke was generated by fire). If I’m to be justified in believing that all Fs are G on the basis of a belief that all past observed Fs were G, I’d better have been justified on those past occasions in thinking that the observed F was a G. So this line of thought just generalizes the question: how am I ever justified in moving from the direct observations (pain behaviour) to pain?

There’s one particular response to this question I’m going to mention and set aside entirely, for now. That is that I was justified in the past in thinking that someone was in pain on the basis of first person testimony—the person telling me that they are in pain. (First person testimony seems more interesting than second person testimony—someone else telling me the person is in pain—for that would just push the question back to how they knew.) If first-person testimony can (without circularity) play this kind of foundational role in grounding our knowledge of the state of mind of another, that’ll be super-significant. But a competing line of thought, which I’ll be running with for now, is that we can in principle have knowledge that people are in pain, without this being based on their use of language. This is a very natural picture. It is one on which, for example, we learn the meaning of the word “pain” by noting that it’s applied to people who, we know, are in pain.

Now comes the standard framing move of this first epistemic problem of other minds. We spot that there is one case in which we have knowledge that a thing is in pain (and that this is accompanied by pain-behaviour) where our belief that it’s in pain isn’t based on its observable pain behaviour. That happens when the thing in pain is ourselves. Our knowledge that we ourselves are in pain is introspective, rather than observational. This looks like it helps! It gives us access to a set of cases where pain is correlated with pain behaviour. Whenever, then, we are in a position to directly observe whether or not someone is in pain, we see that indeed, normally, pain behaviour is accompanied by pain.

But the framing was a trap. At this point the other-minds sceptic can point to inadequacies in generalizing from pain-behaviour/pain links in the case of a single individual, to a general correlation. Suppose you can only extract balls from a single urn. You notice all the blue balls are heavy, and all the red balls are light. Are you then justified in concluding that all blue balls in any urn are heavy? It seems not: you have no reason for thinking that you’ve taken a fair sampling of the blue balls overall; you have randomly sampled only a certain restricted population: the balls in this urn. At a minimum, we’d need to supplement your egocentric pain/pain-behaviour information with some explanation of why you should take your own case to be representative. But what would that explanation be?
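The sampling worry is easy to make vivid with a toy simulation (the populations are invented for illustration): a flawless within-urn track record gives no purchase at all on an urn you never draw from.

    import random

    # Toy illustration of the sampling worry (invented populations). In urn 1
    # every blue ball is heavy; in urn 2 none is. We can only draw from urn 1.
    urn1 = [("blue", "heavy")] * 50 + [("red", "light")] * 50
    urn2 = [("blue", "light")] * 50 + [("red", "heavy")] * 50

    sample = random.choices(urn1, k=1000)
    sampled_blues = [w for colour, w in sample if colour == "blue"]
    print(all(w == "heavy" for w in sampled_blues))  # True: perfect track record...

    every_blue = [w for colour, w in urn1 + urn2 if colour == "blue"]
    print(all(w == "heavy" for w in every_blue))     # False: ...that fails to generalize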

The challenge might be met. After all, on the traditional inductive model of justifying generalizations, we move from local patterns occurring in the region of space-time we inhabit to global generalizations even though we do not “randomly sample” what happens in space and time. Somehow, induction (or something like it) takes us beyond the interpolation of patterns holding through the population randomly sampled, to the unrestricted extrapolation of certain patterns. Whatever secret sauce makes extrapolation beyond the sampled population work in everyday inductions, maybe it is also present in the case of pain and pain behaviour, allowing extrapolation from my case to all cases. But pending some specific account of how the challenge could be met, I think it’s reasonable to look for alternatives.

So that’s the first problem of other minds. It’s a problem of how we even get started in justifiedly attributing (=having justified beliefs about) the mental states of others. And though I’ve run through this for the case of pain, the usual stock example, you could run through the same challenge for many other mental states. Here are some candidates: that x sees a rock, or x sees a red round thing, or sees that the red round thing is on the floor. That x intends to hail a taxi, or x intends to raise her arm. That x is afraid, that x is afraid of that snake. In each case, there’s a characteristic kind of behaviour or relation to the environment that we could in principle describe in non-mentalistic terms, and which would be a basis for justifiedly attributing the various feelings/perceiving/intendings/emotions to the other.

What’s the other problem of other minds then? Well, it’s the problem of how we are justified in the rest of what we believe about the minds of others. The examples I’ve mentioned are quite different from each other (as Bill Wringe recently emphasized, some of them centrally involve intentional content, which may pose particular issues), but they are well-represented by pain in the following sense: they are all states which are “specific and manifestable” in a certain sense. Pain is tied to pain-behaviour. Fear of a snake is tied to fear-behaviour targeting the snake. An intention to raise one’s arm is tied to one’s arm going up in a distinctive fashion. Seeing a red round thing is tied to having an unobstructed view of the thing (while awake etc). Those “ties” to the manifest circumstances of a person may be defeasible and contingent, but they’re clearly going to be central to the epistemological story. But there are plenty of mental states that are not like that. The two cases that occupy me the most are: general beliefs or beliefs about things beyond the here-and-now (x’s belief that she is not a brain in a vat; her belief that the sun will rise tomorrow) and final desires (a desire for security, or for justice).

There are plenty of “lines to take” on the first problem of other minds that won’t generalize to these cases. Perhaps we can make sense of simply “perceiving” what others feel, or see, or intend, when they instantiate the manifestations associated with those states (maybe I point you to a story about mirror neurons, or the like, which could give you the empirical underpinnings of such a process). Maybe, following Plantinga, we think of the belief formation involved as a defeasible but reliable form of inference—the accurate execution of a belief-forming system successfully aimed at truth, producing in this instance a belief about another’s mind triggered by seeing the manifestation. But general and relatively abstract beliefs have no direct characteristic manifestations (at least setting aside first-person testimony, as we have done), and the same goes for final desires. If an argument against characteristic manifestations is needed, I’d point to the famous circularity objections to behaviouristic analyses of individual belief and desire states. Essentially: pair a general and abstract belief up with screwy enough desires, and it fits with almost any behaviour; pair a final desire up with screwy enough beliefs, and the desire fits with almost any behaviour. So if anything is manifested in behaviour, it would seemingly have to be belief-desire states as a whole. But even then, there are many belief-desire states that would fit with any given piece of behaviour. The idea of direct manifestations in behaviour (or relations to the environment) just seems the wrong model to apply to these states.

If this is an epistemic problem of other minds, then it’s a different problem from the first. But is it a problem? Here’s what I’m imagining. Imagine that we’d solved the first epistemic problem of other minds to our own satisfaction. We have satisfied ourselves, at last, that we’re justified in believing that the person writhing on the floor is indeed in pain—and indeed, that he sees us, and is attracting attention by raising his arm, etc. All of the various manifestation-to-mental state ties discussed earlier, we’ll assume, produced justified beliefs (for specificity, suppose the Plantingan story is correct). But now, given all this as a basis, what justifies us in thinking that he wants help, that he believes that we are able to help him, and so on? Of course, we would naturally attribute all this to a person in those circumstances. We think people in pain would like help, as a general rule. But we need to spell out what justifies this second layer of description of others.

At this point, we start to parallel what went before. So: we might point to past experience with people who are in pain. In the past, people in pain wanted help, and so on… But once again, that pushes the question back to how we knew in those historical cases that help was wanted. We might have been told by others that those historical cases wanted help; but how did our informants know? Because there’s no general tie between abstract and non-immediate beliefs/desires and anything immediately manifested, we can’t credibly say we simply perceive these states of the other, nor that we defeasibly infer them from some observable basis. So the problem is: what to do?

In future posts, I want to say something about how answering this other problem of other minds might go. Essentially, I want to explore a model on which the epistemology of other minds involves contingent and topic-specific rules for belief formation about the beliefs and desires of others (“epistemic charity”), whose epistemic standing will have to be assessed and defended. Apart from other contingent/topic-specific rules, the main alternatives I’ll be considering are topic-neutral rules of belief formation (e.g. inference to the best explanation) and also, if I find something useful to say about it, an epistemology which gives language the starring role as a direct manifestation of otherwise hidden beliefs and desires. We’ll see how far I get!

Iteration vs. Entrenchment

I’m going to have one more run at a form of the Lewisian derivation that justifies the strong conclusions (e.g. that the reason for believing A would be a reason for believing each of the iterated B-claims).

I’ll be using strong-indication again, though since this is the only indication relation I’ll use in this discussion, I’ll drop the superscript disambiguation:

  • p\Rightarrow_x q =_{def} \exists rR_x(r, p)\rightarrow \forall r(R_x(r,p)\supset R_x(r,q))

Remember that R is the relation of something being sufficient reason to believe, *relative to background beliefs and epistemic standards*. Let’s introduce a new operator E_x, which will say that the embedded proposition is a background belief or epistemic standard for x—or as I’ll say for short, is entrenched for x.

We have the first three premises on a strong reading of indication again. But I’ll now change the fourth premise from an indication principle to one about E:

  1. A \supset B_u(A)
  2. A\Rightarrow_u \forall y B_y(A)
  3. A \Rightarrow_u q
  4. E_u \forall y [u\sim y]

A linked change is that we abandon ITERATION for a principle that says that propositions about what indicates what to a person are part of their epistemic standards/background beliefs:

  • ENTRENCHMENT \forall c \forall x ([A \Rightarrow_x c]\supset E_x[A\Rightarrow_x c])

The core derivation I have in mind goes like this:

  1. A\Rightarrow_u \forall y B_y A. Premise 2.
  2. E_u(A\Rightarrow_u \forall yB_y A). From 1 via ENTRENCHMENT.
  3. E_u \forall y [u\sim y]. Premise 4.
  4. E_u \forall z(A\Rightarrow_z \forall yB_y A). From 2,3 by NEWSYMMETRY+.
  5. A\Rightarrow_u\forall z B_z \forall yB_y A. From 1,4 by NEWCLOSURE+.

What then are these new principles of NEWSYMMETRY+ and NEWCLOSURE+ and how should we think about them? NEWSYMMETRY+ is another perspectival form based on the validity of strong symmetry:

  • SYMMETRY-S \forall c \forall x ([A \Rightarrow_x c]\wedge \forall y [x\sim y]\supset \forall y[A\Rightarrow_y c])

NEWSYMMETRY+ is then an instance of a principle that propositions that are entrenched for an individual are closed under valid arguments, with SYMMETRY-S providing the relevant valid argument:

  • NEWSYMMETRY+ \forall c \forall x\forall z ([E_z[A \Rightarrow_x c]]\wedge [E_z\forall y[x\sim y]]\supset [E_z \forall y[A\Rightarrow_y c]])

NEWCLOSURE+ is again based on the validity of closure for the B-operator under strong indication, which is again something that really just reduces to modus ponens for the counterfactual conditional hidden inside the indication relation:

  • CLOSURE-S \forall a,c (\forall x B_x(a)\wedge \forall x[a \Rightarrow_x c]\supset \forall x B_x(c))

But the principle we use isn’t just the idea that some operator or other is closed under valid argument. The thought is instead a principle about reason-transmission that goes as follows. Suppose two propositions entail a third, and r is sufficient reason (given one’s background beliefs and standards) to believe the first proposition. Then, if the second proposition is entrenched (part of those background beliefs and standards), r is also a sufficient reason (given one’s background beliefs and standards) to believe the third proposition. The underlying valid argument relevant to this is CLOSURE-S, which makes this, in symbols:

  • NEWCLOSURE+ \forall a,b,c\forall x ([a \Rightarrow_x \forall y B_y(b)]\wedge [E_x(\forall y[b \Rightarrow_y c])]\supset [a\Rightarrow_x \forall yB_y(c)])

NEWCLOSURE+ seems to me pretty well motivated. NEWSYMMETRY+ seems just as well motivated as anything we’ve worked with so far. ENTRENCHMENT now replaces ITERATION. Unlike ITERATION, there’s no chance of deriving it from principles about counterfactuals and the transparency of whatever B stands for. Instead, it represents its own transparency assumption: that true propositions about the epistemic standards and background beliefs of an agent are themselves part of that agent’s epistemic background. It is weaker than the transparency assumption about beliefs (or reasons to believe) used in motivating ITERATION, since it has a more restricted domain of application. It is stronger than earlier transparency assumptions insofar as it requires that the propositions to which it applies are not merely believed (or things we have reason to believe) but have the stronger status of being entrenched.

NEWCLOSURE+ is quite close in form to Cubitt and Sugden’s A6, except that their principle used (what I notate as) the B-operator throughout, where at a crucial point I have an instance of the E-operator. An advantage this gives me is that the E-operator doesn’t feature in the conclusion of the argument, so we are free to reinterpret it however we like to get the premises to come out true—trying to reinterpret B would change the meaning of the conclusions we are deriving. So, for example, I complained against their version that crucial principles seemed bad because some of your beliefs or reasons for belief might not be resilient under learning new information. But we are free simply to build into E that it applies only to propositions that are resiliently part of one’s background beliefs/standards (or maybe being resilient in that way is part of what it is to be treated as a standard/be background).

Having walked through this, let me illustrate the fuller form of the derivation, using all the premises.

  1. A\Rightarrow_u \forall y B_y A. Premise 2.
  2. A\Rightarrow_u q. Premise 3.
  3. E_u(A\Rightarrow_u q). From line 2 via ENTRENCHMENT.
  4. E_u \forall y [u\sim y]. Premise 4.
  5. E_u \forall z(A\Rightarrow_z q). From lines 3,4 by NEWSYMMETRY+.
  6. A\Rightarrow_u\forall z B_z q. From 1,5 by NEWCLOSURE+.
  7. E_u(A\Rightarrow_u\forall z B_z q). From line 6 via ENTRENCHMENT.
  8. E_u \forall y(A\Rightarrow_y \forall z B_z q). From lines 4,7 by NEWSYMMETRY+.
  9. A\Rightarrow_u\forall y B_y \forall z B_z q. From 1,8 by NEWCLOSURE+.
  10. ….

The pattern of the last few lines loops to get that A indicates each of the iterations of B-operator applied to q. And we can then appeal to Premise 1, A and CLOSURE to “detach” the consequents of lines 6,9, etc.
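To see the shape of the loop at a glance, here is a toy sketch (my own illustration, with an ad hoc ASCII rendering of the operators; none of this is part of the official argument) that prints the pre-detachment conclusions at lines 6, 9, 12, and so on:

    # Toy sketch: each pass through ENTRENCHMENT, NEWSYMMETRY+ and
    # NEWCLOSURE+ wraps the previous conclusion in one more layer of
    # "everyone has reason to believe". "A =>u X" renders A strongly
    # indicating X to u; "Az Bz(...)" renders the quantified B-operator.

    def iterate_b(prop: str, n: int) -> str:
        for _ in range(n):
            prop = f"Az Bz({prop})"
        return prop

    for n in range(1, 4):
        print(f"A =>u {iterate_b('q', n)}")
    # A =>u Az Bz(q)
    # A =>u Az Bz(Az Bz(q))
    # A =>u Az Bz(Az Bz(Az Bz(q)))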

But for our purposes here and now, the more significant things are lines 6 and 9 (and 12, 15, etc.) prior to detachment. For these tell us that a sufficient reason for believing A is itself a sufficient reason for believing each of these iterated B-propositions.

So to sum up: if we are content to work with weak indication relations, we can get away with the premises I used in other posts, including ITERATION and previous versions of SYMMETRY+ and CLOSURE+. If we want to work with strong indication, and get information about what is a reason for what, then we need to make changes, and the above is my best shot (especially in the light of the utter mess we got into in the last post!). Interestingly, while NEWSYMMETRY+ and NEWCLOSURE+ seem to me more or less as plausible as their older analogues, the replacement for ITERATION (the principle I’m here calling ENTRENCHMENT) isn’t directly comparable to the earlier principle, though it’s still broadly a principle of transparency.

There is a delicate dialectical interplay between ENTRENCHMENT and the analysis of the indication relation. The stronger and more demanding indication is, the more plausible ENTRENCHMENT becomes, since fewer instances fall under it. If we read indication as weak indication throughout, then ENTRENCHMENT would say that every counterfactual relating reasons for belief to reasons for other beliefs is part of the background beliefs/epistemic standards. That’s wildly strong! It’s pretty strong in the strong indication version too. It would become much more plausible if it were restricted to, for example, epistemic connections between propositions that are obvious to the agent.

In the settings I have considered in the previous posts, the counterfactual analysis earned its keep in part because ITERATION (which is here replaced by ENTRENCHMENT) could be treated as an iterated counterfactual. That’s no longer a consideration. The other advantage of having the counterfactual analysis is that it made CLOSURE an instance of modus ponens. But that’s not a reason for accepting the analysis of indication as a counterfactual—it’s just a reason for accepting that indication entails the counterfactual. The final reason for offering the counterfactual analysis is simply that it allows a reduction in the number of primitive notions around: in the original setting, it allows a reduction to just the B operator. That’s a consideration, but in the current context we’re having to work with E’s as well as B’s, so ideological purity is lost.

Once we need ENTRENCHMENT, it seems to me that it would be easier to defend the package presented here if we abandoned the counterfactual analysis of indication, treated indication as a primitive notion, and added as a premise the validity of the following principle, which links the now-primitive indication relation to what we were previously calling strong indication:

  • p\Rightarrow^s_x q \supset [\exists r R_x(r, p)\rightarrow \forall r(R_x(r,p)\supset R_x(r,q))]

The soundness of the overall argument now turns on whether there exists a triple of reason relation, indication relation, and entrenchment relation that makes all the premises true.

As a final note: the link between the counterfactual and primitive indication has two roles. One is simply a matter of reading off the significance of the final results. The other is to make CLOSURE valid. But it only makes CLOSURE valid if the B-operator is defined in the Lewisian way, as having-reason-to-believe. As per that earlier post, a different counterfactual, concerning commitments to believe, matters for CLOSURE in that setting. So one would add that entailment as an extra premise about the now-primitive indication relation.

Strong and weak indication relations

[warning: it’s proving hard to avoid typos in the formulas here. I’ve caught as many as I can, but please exercise charity in reading the various subscripts].

In the Lewisian setting I’ve been examining in the last series of posts, I’ve been using the following definition of indicates-to-x (I use the same notation as in previous posts, but add a w-superscript to distinguish it from an alternative I will shortly introduce):

  • p\Rightarrow^w_x q =_{def} B_x p\rightarrow B_x q

The arrow on the right is the counterfactual conditional, and the intended interpretation of the B-operator is “has a reason to believe”. This fitted Lewis’s informal gloss “if x had reason to believe p, then x would thereby have reason to believe q”, except for one thing: the word thereby. Let’s call the reading above weak indication. Weak indication, I submit, gives an interesting version of the Lewisian derivation of iterated reason-to-believe from premises that are at least plausibly true in many paradigmatic situations of common belief.

But there is a cost. Lewis’s original gloss, combined with the results he derives, entails that each group member’s reasons for believing A obtains (say: the perceptual experience they undergo) are at the same time reasons for them to believe all the higher-order iterations of reason-to-believe. That is a pretty explanatory and informative epistemology: we can point to the very things that (given the premises) justify us in believing all these apparently recherché claims. If we derive the same formal results on a weak reading of indication, we leave this open. We might suspect that the reasons for believing A are the reasons for believing this other stuff. But we haven’t yet pinned down anything that tells us this is the case.

I want to revisit this issue of the proper understanding of indication. I use R_x(r, p) to formalize the claim that r is a sufficient reason for x to believe that p (relative to x’s epistemic standards and background beliefs). With this understood, B_x(p) can be defined as \exists r R_x(r,p). Here is an alternative notion of indication—my best attempt to capture Lewis’s original gloss:

  • p\Rightarrow^s_x q =_{def} \exists rR_x(r, p)\rightarrow \forall r(R_x(r,p)\supset R_x(r,q))

In words: p strongly indicates q to x iff were x to have a sufficient reason for believing p, then all the sufficient reasons x has for believing p are sufficient reasons for x to believe q. (My thinking: in Lewis’s original the “thereby” introduces a kind of anaphoric dependence in the consequent of the conditional on the reason that is introduced by existential quantification in the antecedent. Since this sort of scoping isn’t possible given standard formation rules, what I’ve given is a fudged version of this).

Notice that the antecedent of the counterfactual here is identical to that used in the weak reading of indication. So we’re talking about the same “closest worlds where we have reason to believe p”. The differences only arise in what the consequent tells us. And it’s easy to see that, at the relevant closest worlds, the consequent of weak indication is entailed by the consequent of strong indication. So overall, strong indication entails weak indication.
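Spelled out (my own worked step, not in the original): at the relevant closest worlds, the shared antecedent plus the strong consequent yield the weak consequent by instantiation:

  • \exists r R_x(r,p),\ \forall r (R_x(r,p)\supset R_x(r,q))\ \vdash\ \exists r R_x(r,q),\ \text{i.e. } B_x(q)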

If all the premises of my Lewis-style derivation were true under the strong reading, then the strong reading of the conclusion would follow. But some of the tweaks that I introduced in fixing up the argument seem to me implausible on the strong reading—more carefully, it is implausible that they are true on this reading in all the paradigms of common knowledge. Consider, for example, the premise:

  • A\Rightarrow_x \forall y (x\sim y)

In some cases the reason one has for believing A would be reason for believing that x and y are relevantly similar (as the conclusion states). I gave an example, I think, of a situation where the relevant manifest event A reveals to us both that we are members of the same conspiratorial sect. But this is not the general case. In the general case, we have independent reasons for thinking we are similar, and all we need to secure is that learning A, or coming to have reason to believe A, wouldn’t undercut those reasons. (It was the possibility of undercutting in this way that was the source of my worry about the Cubitt-Sugden official reconstruction of Lewis, which doesn’t have the above premise, but rather the premise that x has reason to believe that x is similar to all the others.)

So now we are in a delicate situation, if we want to derive the conclusions of Lewis’s argument on a strong reading of indication. We will need to run the argument with a mix of weak and strong indication, and hope that the mixed principles that are required will turn out to be true.

Here’s how I think it goes. The first three premises are true on the strong reading, and the final premise on the weak reading.

  1. A \supset B_u(A)
  2. A\Rightarrow^s_u \forall y B_y(A)
  3. A \Rightarrow^s_u q
  4. A\Rightarrow^w_u \forall y [u\sim y]

Of the additional principles, we appeal to strong forms of symmetry and closure:

  • SYMMETRY-S \forall c \forall x ([A \Rightarrow^s_x c]\wedge \forall y [x\sim y]\supset \forall y[A\Rightarrow^s_y c])
  • CLOSURE-S \forall a,c (\forall x B_x(a)\wedge \forall x[a \Rightarrow^s_x c]\supset \forall x B_x(c))

In the case of closure, strong indication features only in the antecedent of the material conditional, so this is in fact weaker than closure on the original version I presented. These are no less plausible than the originals. As with those, the assumption is really not just that they are true—it is that they are valid (and so correspond to valid inference patterns). That is used in motivating the truth of the principles that piggyback on them, which are also used.

The “perspectival” closure principle can be used in a strong form:

  • CLOSURE+-S \forall a,b,c\forall x ([a \Rightarrow^s_x \forall y B_y(b)]\wedge [a \Rightarrow^s_x(\forall y[b \Rightarrow^s_y c])]\supset [a\Rightarrow^s_x \forall yB_y(c)])

The action, in my view, comes with the remaining principles, and in particular the “perspectival” symmetry principle. Here it is in mixed form:

  • SYMMETRY+-M \forall a \forall c \forall x\forall z ([a\Rightarrow^s_z[A \Rightarrow^s_x c]]\wedge [a \Rightarrow^w_z\forall y[x\sim y]]\supset [a\Rightarrow^s_z \forall y[A\Rightarrow^s_y c]])

The underlying thought behind these perspectival principles (as with closure) is that when you have a valid argument, then if you have reason to believe the premises (in a given counterfactual situation), you have reason to believe the conclusion. That’s sufficient for the weak reading we used in the previous posts. In a version where all the outer indication relations are strong, as with the strong CLOSURE+ above, it relies more specifically on the assumption that where r is a sufficient reason to believe each of the premises of a valid argument, it is sufficient reason to believe the conclusion.

We need a mixed version of symmetry because we only have a weak version of premise (4) to work with, and yet we want to get out a strong version of the conclusion. Justifying a mixed version of symmetry is more delicate than justifying either a purely strong or purely weak version. Abstractly, the mixed version says that if r is sufficient reason to believe one of the premises of a certain valid argument, and there is some reason or other to believe the second premise of that valid argument, then r is sufficient reason to believe the conclusion. This can’t be a correct general principle about all valid arguments. Suppose the reason to believe the second premise is s. Then why think that r alone is sufficient reason to believe the conclusion? Isn’t the most we get that r and s together are sufficient for the conclusion?

So we shouldn’t defend the mixed principle here on general grounds. Instead, the idea will have to be that, with the specific valid argument in question (an instance of symmetry), the assumption about who I’m epistemically similar to (in epistemic standards and background beliefs) itself counts as a “background belief”. If that is the case, then we can argue that the reason for believing the first premise of the valid argument (in a counterfactual situation) is indeed sufficient, relative to the background beliefs, to deliver the conclusion. One of the prerequisites of this understanding will be either that we assume that other agents will believe propositions about who they’re epistemically similar to in counterfactual situations where they have reason to believe those propositions; or else that talk of “background beliefs” is loose talk for background propositions that we have reason to believe. I think we could go either way.

In order to complete this, we will need iteration, in the following strong version:

  • ITERATION-S \forall c \forall x ([A \Rightarrow^s_x c]\supset [A \Rightarrow^s_x [A\Rightarrow^s_x c]])

I’ll come back to this.

Let me exhibit how the very core of a Lewisian argument looks in this version. I’ll compress some steps for readability:

  1. A\Rightarrow_u^s \forall y B_y A. Premise 2.
  2. A\Rightarrow^s_u(A\Rightarrow_u^s \forall yB_y A). From 1 via ITERATION-S.
  3. A\Rightarrow^w_u \forall y [u\sim y]. Premise 4.
  4. A\Rightarrow^s_u \forall z(A\Rightarrow_z^s \forall yB_y A). From 2,3 by SYMMETRY+-M.
  5. A\Rightarrow^s_u\forall z B_z \forall yB_y A. From 1,4 by CLOSURE+-S.

This style of argument—which can then be looped—is the basic core of a Lewis-style derivation. You can add in premise 3 and use CLOSURE+-S to get something similar with q as the object of the iterated B-operators, recovering the original result. And of course you can appeal to premise 1 and CLOSURE to “discharge” the antecedents of interim conclusions like 5 (this works with strong indication relations because it works for weak indication, and strong indication entails weak).

There’s an alternative way of mixing strong and weak indication relations. On this version we use a mixed form of ITERATION, the original weak SYMMETRY+, and then a mixed form of CLOSURE+:

  • ITERATION-M \forall c \forall x ([A \Rightarrow^s_x c]\supset [A \Rightarrow^w_x [A\Rightarrow^s_x c]])
  • SYMMETRY+-W \forall a \forall c \forall x\forall z ([a\Rightarrow^w_z[A \Rightarrow^s_x c]]\wedge [a \Rightarrow^w_z\forall y[x\sim y]]\supset [a\Rightarrow^w_z \forall y[A\Rightarrow^s_y c]])
  • CLOSURE+-M \forall a,b,c\forall x ([a \Rightarrow^s_x \forall y B_y(b)]\wedge [a \Rightarrow^w_x(\forall y[b \Rightarrow^s_y c])]\supset [a\Rightarrow^s_x \forall yB_y(c)])
  1. A\Rightarrow_u^s \forall y B_y A. Premise 2.
  2. A\Rightarrow^w_u(A\Rightarrow_u^s \forall yB_y A). From 1 via ITERATION-M.
  3. A\Rightarrow^w_u \forall y [u\sim y]. Premise 4.
  4. A\Rightarrow^w_u \forall z(A\Rightarrow_z^s \forall yB_y A). From 2,3 by SYMMETRY+-W.
  5. A\Rightarrow^s_u\forall z B_z \forall yB_y A. From 1,4 by CLOSURE+-M.

The main advantage of this version of the argument would be that the version of ITERATION it requires is weaker. Otherwise, we are simply moving the bump in the rug from mixed SYMMETRY+ to mixed CLOSURE+. And that seems to me a damaging shift. We use mixed SYMMETRY+ many times, but the only belief we ever have to assume is “background” in order to justify the principle is the belief that all are similar to me. In the revised form, to run the same style of defence, we would have to assume that beliefs about indication relations with more and more complex contents are backgrounded. And that simply seems less plausible. So I think we should stick with the original if we can. (On the other hand, the principle we would need here is close to the sort of “mixed” principle that Cubitt and Sugden use, and they are officially reading “indication” in a strong way. So maybe this should be acceptable.)

So what about ITERATION-S, the principle the argument now turns on? As a warm-up, let me revisit the motivation for the original, ITERATION-W, which fully spelled out would be:

  • [\exists r R_u(r, A)\rightarrow \exists r R_u(r,c)]
    \supset[\exists s R_u(s,A)\rightarrow
    \exists s R_u(s,[\exists r R_u(r, A)\rightarrow \exists r R_u(r,c)])]

Assume the first line is the case. Then we know that at the worlds relevant for evaluating the second and third lines, we have both \exists r R_u(r,c) and \exists r R_u(r, A). By an iteration principle for reason-to-believe, \exists s_1 R_u(s_1,\exists r R_u(r,c)) and \exists s_2 R_u(s_2,\exists r R_u(r, A)). And by a principle of conjoining reasons (which implicitly makes a rather strong consistency assumption about reasons for belief), \exists s R_u(s,\exists r R_u(r,A)\wedge \exists r R_u(r, c)). But a conjunction entails the corresponding counterfactual in counterfactual logics with strong centering, and so plausibly the reason to believe the conjunction is a reason to believe the counterfactual: \exists s R_u(s,\exists r R_u(r,A)\rightarrow \exists r R_u(r, c)). That is the rationale for the original iteration principle.
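For readability, here is that rationale displayed step by step (my reconstruction of the chain just described):

  1. \exists r R_u(r, A) and \exists r R_u(r,c), at the worlds relevant to the second and third lines, given the first line.
  2. \exists s_1 R_u(s_1,\exists r R_u(r,c)) and \exists s_2 R_u(s_2,\exists r R_u(r, A)), by iteration for reason-to-believe.
  3. \exists s R_u(s,\exists r R_u(r,A)\wedge \exists r R_u(r, c)), by conjoining reasons.
  4. \exists s R_u(s,\exists r R_u(r,A)\rightarrow \exists r R_u(r, c)), by strong centering plus reasons transmitting across entailment.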

Unfortunately, I don’t think there’s a similar rationale for the strong iteration principle. The main obstacle is the following: one particular sufficient reason for believing A to be the case (call it s) is unlikely to be one’s reason for believing a counterfactual generalization that covers all reasons to believe that A is the case. In the original version of iteration, this wasn’t at issue at all. But an essential part of the strategy I offered was finding a reason to believe a counterfactual by exhibiting a reason to believe the corresponding conjunction, which entails the counterfactual. When you write down what strong iteration means in detail, you see (in the third line below) that on the strong reading it is exactly this that would have to be argued for: that each particular sufficient reason for believing A is a reason to believe the counterfactual generalization. I can’t see a strategy for arguing for this, and the principle itself seems likely to be false to me, as stated.

  • [\exists r R_u(r, A)\rightarrow \forall r (R_u(r, A)\supset R_u(r,c))]
    \supset[\exists s R_u(s,A)\rightarrow
    \forall s (R_u(s,A)\supset R_u(s,[\exists r R_u(r, A)\rightarrow \forall r (R_u(r, A)\supset R_u(r,c))]))]

That’s bad news. Without this principle, the first mixed version of the argument I presented above doesn’t go through. I think there’s a much better chance of arguing for mixed iteration, which is what was needed for the second version of the argument. But that was the version that required the dodgy mixed closure principle. Perhaps we should revisit that version?

I’m closing this out with one last thought. The universal quantifier in the consequent of the indication counterfactual is the source of the trouble for strong ITERATION. But that was introduced as a kind of fudge for the anaphor in the informal description of the indication relation. One alternative is to use a definite description in the consequent of the conditional—which on Russell’s theory introduces the assumption that there is only one sufficient reason (given background knowledge and standards) for believing the propositions in question. This would give us:

  • p\Rightarrow^d_x q =_{def}
    \exists rR_x(r, p)\rightarrow \exists r(R_x(r,p)\wedge \forall s (R_x(s, p)\supset r=s) \wedge R_x(r,q))

Much of the discussion above can be rerun with this in place of strong indication. And I think the analogue of the strong ITERATION has a good chance of being argued for here, provided that we have a suitable iteration principle for reason-to-believe. For weak iteration, we needed only to assume that when there is reason to believe p, there is reason to believe that there is reason to believe p. In the rationale for the new stronger version of ITERATION that I have in mind, we will need that when s is a reason to believe that p, then s is a reason to believe that s is a reason to believe that p. Whether this will fly, however, turns both on being able to justify that strong iteration principle and on whether indication in the d-version, with its uniqueness assumption, finds application in the paradigmatic cases.

For now, my conclusion is that the complexities involved here justify the decision to run the argument in the first instance with weak indication throughout. We should only dip our toes into these murky waters if we have very good reason to do so.

Identifying the subjects of common knowledge

Suppose that it’s public information/common belief/common ground among a group G that the government has fallen (p). What does this require about what members of G know about each other?

Here are three possible situations:

  1. Each knows who each of the other group members is, attributing (de re) to each whatever beliefs (etc.) are required for it to be public information that p.
  2. Each has a conception corresponding to each member of the group, and attributes, under that conception, whatever beliefs (etc.) are required for it to be public information that p.
  3. Each has a concept of the group as a whole. Each generalizes about the members of the group, to the effect that every one of them has the beliefs (etc) required for it to be public information that p.

Standard formal models of common belief suggest a type 1 situation (though, as with all formal models, they can be reinterpreted in many ways). The models index accessibility relations by group members. One advantage of this is that once we fix which world is actual, we’re in a position to read off the model unambiguously what the beliefs of any given group member are—one looks at the set of worlds accessible according to their accessibility relation. What it takes in these models for A to believe that B believes that p is for all the A-accessible worlds to be such that all worlds B-accessible from them are ones where p is true. So also: once we as theorists have picked our person (A), it’s determined what B believes about A’s beliefs—there’s no further room in the model for qualifications or caveats about the “mode of presentation” under which B thinks of A.
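For concreteness, here is a minimal toy model of this kind (my own construction, not drawn from any particular text): one accessibility relation per group member, with “A believes that B believes p” checked in exactly the nested way just described:

    # Toy two-world model; accessibility indexed by group members.
    acc = {
        "A": {"w0": {"w0"}, "w1": {"w1"}},
        "B": {"w0": {"w0"}, "w1": {"w0"}},
    }
    p_worlds = {"w0"}  # worlds where p is true

    def believes(agent: str, prop: set, w: str) -> bool:
        # agent believes prop at w iff prop holds at every world
        # accessible for that agent from w
        return all(v in prop for v in acc[agent][w])

    # worlds where B believes p:
    b_believes_p = {w for w in acc["B"] if believes("B", p_worlds, w)}
    # A believes that B believes p at w0 iff B believes p at every
    # A-accessible world from w0:
    print(believes("A", b_believes_p, "w0"))  # True in this toy model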

Stalnaker argues persuasively that this is not general enough, pointing to cases of type 2 in our classification. There are all sorts of situations in which the mode of presentation under which a group member attributes belief to other group members is central. For example (drawing on Richard’s phone booth case), I might be talking by phone to one and the same individual whom I also see out the window, without realizing they are the same person. I might attribute one set of beliefs to that person qua person-seen, and a different set of beliefs to them qua person-heard. That’s tricky in the standard formal models, since there will be just one accessibility relation associated with the person, where we need at least two. Stalnaker proposes to handle this by indexing the accessibility relations not to an individual but to an individual concept—a function from worlds to individuals—which will draw the relevant distinctions. This comes at a cost. Fix a world as actual, and in principle one and the same individual might fall under many individual concepts at that world, and those individual concepts will determine different belief sets. So this change needs to be handled with care, and more assumptions brought in. Indeed, Stalnaker adapts the formal model in various ways (e.g. he ultimately ends up working primarily with centred worlds). These details needn’t delay us, since my concern here isn’t with the formal model directly. Rather, I want to point to the desideratum that it answers to: that we make our theory of common belief sensitive to the ways in which we think about other individual group-members. It illustrates that the move to type 2 cases is a formally (and philosophically) significant step.

The same goes for common belief of type 3, where the subjects sharing in the common belief are characterized not individually but as members of a certain group. Here is an example of a type-3 case (loosely adapted from a situation Margaret Gilbert discusses in Political Obligation). We are standing in the public square, and the candidate to be emperor appears on the dais. A roar of acclaim goes up from the crowd—including you and me. It is public information among the crowd that the emperor has been elected by acclamation. But the crowd is vast—I don’t have any de re method of identifying each crowd member, nor do I have an individualized conception of each one. This situation is challenging to model in either the standard or Stalnakerian ways. But it seems (to me) a paradigm of common belief.

Though it is challenging to model in the multi-modal logic formal setting, other parts of the standard toolkit for analyzing common belief cover it smoothly. Analyses of common belief/knowledge like Lewis’s approach from Convention (and related proposals, such as Gilbert’s) can take it in their stride. Let me present it using the assumptions that I’ve been exploring in the last few posts. I’ll make a couple of tweaks: I’ll consider instances of the assumptions as they pertain to a specific member of the crowd (you, labelled u), and I’ll make explicit the restriction to members of the crowd, C. The first four premises are then:

  1. A \supset B_u(A)
  2. A\Rightarrow_u [\forall y: Cy] B_y(A)
  3. A \Rightarrow_u q
  4. A\Rightarrow_u [\forall y: Cy](u\sim y)

For “A”, we input a neutral description of the state of affairs of the emperor receiving acclaim on the dais in full view of everyone in the crowd. q is the proposition that the emperor has been elected by acclamation. The first premise says that it’s not the case that the following holds: the emperor has received acclaim on the dais in full view of the crowd (which includes you) but you have no reason to believe this to be the case. In situations where you are moderately attentive, this will be true. The second assumption says that you would also have reason to believe that everyone in the crowd has reason to believe that the emperor has received acclaim on the dais in full view of the crowd, if you have reason to believe that the emperor has received such acclaim in the first place. That also seems correct. The third says that if you had reason to believe this situation had occurred, you would have reason to believe that the emperor had been elected by acclamation. Given modest background knowledge of the political customs of your society (and modest anti-sceptical assumptions), this will be true too. And the final assumption says that you’d have reason to believe that everyone in the crowd had relevantly similar epistemic standards and background knowledge (e.g. anti-sceptical, modestly attentive to what their ears and eyes tell them, aware of the relevant political customs), if/even if you have reason to believe that this state of affairs obtained.

All of these seem very reasonable: and notice, they are perfectly consistent with utter anonymity of the crowd. There are a couple of caveats here, about the assumption that all members of the crowd are knowledgeable or attentive in the way that the premises presuppose. I come back to that later.

Together with five other principles I set out previously (which I won’t go through here: the modifications are obvious and don’t raise new issues), these deliver the following results (adapted to the notation above):

  • A \Rightarrow_u q
  • A\Rightarrow_u [\forall y: Cy] B_y(q)
  • A\Rightarrow_u [\forall z : Cz] B_z([\forall y: Cy] B_y(q))
  • \ldots

And each of these with a couple more of the premises entails:

  • B_u q
  • B_u [\forall y : Cy] B_y(q)
  • B_u [\forall z : Cz] B_z([\forall y: Cy] B_y(q))
  • \ldots

It’s only at this last stage that we then need to generalize on the “u” position, reading the premises as holding not just for you, but schematically for all members of the crowd. We then get:

  • [\forall x : Cx] B_x q
  • [\forall x : Cx] B_x [\forall y :Cy] B_y(q)
  • [\forall x : Cx] B_x [\forall z : Cz] B_z([\forall y: Cy] B_y(q))
  • \ldots

If this last infinite list of iterated crowd-reasons-to-believe is taken to characterize common crowd-belief, then we’ve just derived this from the Lewisian assumptions. And nowhere in here is any assumption about identifying crowd members one by one. It is perfectly appropriate for situations of anonymity.

(A side point: one might explore ways of using rather odd and artificial individual concepts to apply Stalnaker’s modelling to this case. Suppose, for example, there is some arbitrary total ordering of people, R. Then there are the following individual concepts: the R-least member of the crowd, the next-to-R-least member of the crowd, etc. And if one knows that all crowd members are F, then in particular one knows that the R-least crowd member is F. So perhaps one can extend the Stalnakerian treatment to the case of anonymity through these means. However: a crucial question will be how to handle cases where we are ignorant of the size of the crowd, and so ignorant about whether “the n-th member of the crowd” fails to refer. I don’t have thoughts to offer on this puzzle right now, and it’s worth remembering that nobody’s under any obligation to extend this style of formal modelling to the case of anonymous common belief.)

Type-3 cases allow for anonymity among the subjects of common belief. But remember that it needs to be assumed that all members of the crowd are knowledgeable and attentive. In small group settings, where we can monitor the activities of each other group member, each can be sensitive to whether the others have the relevant properties. But this seems in principle impossible in situations of anonymity. On general grounds, we might expect most of the crowd members to have various characteristics, but as the numbers mount up, the idea that the characteristics are universally possessed becomes absurd. We would be epistemically irresponsible not to believe, in a large crowd, that some will be distracted (picking up the coins they just dropped and unsure what the sudden commotion was about) and some will lack the relevant knowledge (the tourist in the wrong place at the wrong time). The Lewisian conditions for common belief will fail; likewise, the first item on the infinite list characterizing common belief itself will fail—the belief that q will not be unanimous.

So we can add to earlier list a fourth kind of situation. In a type-4 situation, the crowd is not just anonymous, but also contains the distracted and ignorant. More generally: it contains unbelievers.

A first thought about accommodating type 4 situations is to weaken the quantifiers, replacing the universal “all” with “most” (or: a certain specific fraction). We would then require that the state of affairs indicates to most crowd members that the emperor was elected by acclamation; that it indicates to most that most have reason to believe that the emperor was elected by acclamation; and so on. (This is analogous to the kind of hedges that Lewis imposes on the initially unrestricted clauses characterizing convention in his book.) But the analogue of the Lewis derivation won’t go through. Here’s one crucial breaking point. One of the background principles needed in getting from Lewis’s premises to the infinite lists was the following: if all have reason to believe that A, and for all, A indicates that q, then all have reason to believe that q. Under the intended understanding of “indication”, this is underwritten by modus ponens, applied to an arbitrary member of the group in question, and then universal generalization. But if we replace the “all” by “most”, we have something invalid: if most have reason to believe that A, and for most, A indicates that q, then most have reason to believe that q. The point is that if you pool together those who don’t have reason to believe that A and those for whom A doesn’t indicate that q, you can find enough unbelievers that it’s not true that most have reason to believe that q.
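A toy head-count brings out the invalidity (the numbers are mine, purely for illustration):

    # In a crowd of 100, suppose 60 members have reason to believe A,
    # and for 60 members A indicates q. In the worst case these two
    # sets barely overlap, so the members guaranteed (by modus ponens)
    # to have reason to believe q number only 60 + 60 - 100 = 20.
    n, reason_A, A_indicates_q = 100, 60, 60
    guaranteed = max(0, reason_A + A_indicates_q - n)
    print(guaranteed)  # 20 -- far short of "most"

“Most have reason to believe A” and “for most, A indicates q” are each true here, yet “most have reason to believe q” is not secured.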

A better strategy is the analogue of one that Gilbert suggests in similar contexts (in her book Political Obligation). We run the original unrestricted analysis not for the crowd but for some subgroup of the crowd: the attentive and knowledgeable. Let’s call this the core crowd. You are a member of the core crowd, and the Lewisian premises seem correct when restricted to the core crowd (for example: the public acclaim indicates to you that all attentive and knowledgeable members of the crowd have reason to believe that the public acclaim occurred). So the derivation can run on as before, and establishes the infinite list of iterated reason-to-believe among members of the core crowd.

(Aside: Suppose we stuck with the original restriction to members of the crowd, but replaced the “all” quantifiers not with some “most” or fractional quantifier, but with a generic quantifier. The premises become something like: given A, crowd members have reason to believe A; A indicates to crowd members that crowd members have reason to believe A; A indicates to crowd members that q; crowd members have reason to believe that crowd members are epistemically similar to themselves, if/even if they have reason to believe A. These will be true if, generically, crowd members are attentive and knowledgeable in the relevant respects. Now, if the generic quantifier is aptly represented as a restricted quantifier—say, restricted to “typical” group members—then we can derive an infinite list of iterated reason-to-believe principles by the same mechanism as with any other restricted quantifier that makes the premises true. And the generic presentation makes the principles seem cognitively familiar in ways in which explicit restrictions do not. I like this version of the strategy, but whether it works turns on issues about the representation of generics that I can’t explore here.)

Once we allow arbitrary restrictions into the characterization of common belief, common belief becomes potentially pretty cheap (I think this is a point Gilbert makes—she certainly emphasizes the group-description-sensitivity of “common knowledge” on her understanding of it). For an example of cheap common belief, consider the group: those in England who have reason to believe sprouts are tasty (the English sprout-fanciers). All English sprout-fanciers have reason to believe that sprouts are tasty. That is analytically true! All English sprout-fanciers have reason to believe that all English sprout-fanciers have reason to believe that sprouts are tasty, since they have reason to believe things that are true by definition. And all English sprout-fanciers have reason to believe this last iterated belief claim, since they have reason to believe things that follow from definitions and platitudes of epistemology. And so on, all the way up the hierarchy.

So there seems to be here a cheap common belief among the English sprout-fanciers that sprouts are tasty. It’s cheap, but useless, given that I, as an English sprout-fancier, am not in a position to coordinate with another English sprout-fancier—we can meet one in any ordinary context and not have a clue that they are one of the subjects with whom this common belief is shared. (Contrast the case where the information that sprouts are tasty is public among a group of friends going out to dinner.) It seems very odd to call the information that sprouts are tasty public among the English sprout-fanciers, since all that’s required on my part to acquire all the relevant beliefs in this case is one idiosyncratic belief and a priori reflection. Publicity of identification of the subjects among whom public information is possessed seems part of what’s required for information to be public in the first place. Type 1 and type 2 common beliefs build this in. Type 3 common beliefs, if applied to groups whose membership is easy to determine on independent grounds, don’t raise many concerns about this. But once we start using artificial, unnatural restrictions under pressure from type 4 situations, the lack of any publicity constraint on identification becomes manifest, dramatized by the cases of cheap common belief.

Minimally, we need to pay attention to whether the restrictions that we put into the quantifiers that characterize type 3 or 4 common belief undermine the utility of attributing common belief among the group so conceived. But it’s hard to think of general rules here. For example, in the case characterized above of the emperor-by-acclamation, the restriction to the core crowd—the attentive and knowledgeable crowd members—seems to me harmless, illuminating and useful. On the other hand, the same restriction in the case in the next paragraph gives us common belief that, while not as cheap as the sprout case earlier, is prima facie just as useless.

Suppose that we’re in a crowd milling in the public square, and someone stands up and shouts a complex piece of academic jargon that implies (to those of us with the relevant background) that the government has fallen. This event indicates to me that the government has fallen, because I happened to be paying attention and speak academese. I know that the vast majority of the crowd either weren’t paying attention to this speech, or haven’t wasted their lives obtaining the esoteric background knowledge needed to know what it means. Still, I could artificially restrict attention to the “core” crowd, again defined as those who are attentive and knowledgeable in the right ways. But now this “core” crowd is utterly anonymous to me, lost among the rest of the crowd in the way that English sprout-fanciers are lost among the English more generally. The core crowd might be just me, or it could consist of me and one or two others. I don’t have a clue. Again: it is hardly public among the core crowd (say, three people) that they share this belief, if for all each of them knows, they might be the only one with the relevant belief. And again: this case illustrates that the same restriction that provides useful common belief in one situation gives useless common belief in another.

The way I suggest tackling this is to start with the straightforward analysis of common belief that allows for cheap common belief, but then build in suitable context-specific anti-anonymity requirements as part of an account of the conditions under which common belief is useful. In the original crowd situation, for example, it’s not just that the manifest event of loud acclaim indicated to all core crowd members that all core crowd members have reason to believe that the emperor was elected by acclamation. It’s also that it indicated to all core crowd members that most of the crowd are core crowd. That means that in the circumstances, it is public among the core crowd that they are the majority among the (easily identifiable) crowd. Even though there’s an element of anonymity, all else equal each of us can be pretty confident that a given arbitrary crowd member is a member of the core crowd, and so is a subject of the common belief. In the second scenario given in the paragraph above, where the core crowd is a vanishingly small proportion of the crowd, it will be commonly believed among the core that they are a small minority, and so, all else equal, they have no ability rationally to ascribe these beliefs to arbitrary individuals they encounter in the crowd.

We can say: a face-to-face useful common belief is one where there is a face-to-face method of categorizing the people we encounter (independently of their attitudes to the propositions in question) within a certain context as G*s, where we know that most G*s are members of the group among which the common belief prevails.

(To tie this back to the observation about generics I made earlier: if generic quantifiers allow the original derivation to go through, then there may be independent interest in generic common belief among G*s, where this only requires the generic truth that G* members believe p, believe that G* members believe p, etc. The truth of the generic then (arguably!) licenses default reasoning attributing these attitudes to an arbitrary G*. So generic common belief among a group G*, where G*-membership is face-to-face recognizable, may well be a common source of face-to-face useful common belief.)

Perhaps only face-to-face useful common beliefs are decent candidates to count as information that is “public” among a group. But face-to-face usefulness isn’t the only kind of usefulness. The last example I discuss brings out a situation in which the characterization we have of a group is purely descriptive and detached from any ability to recognize individuals within the group as such, but which is still paradigmatically a case in which common beliefs should be attributed.

Suppose that I wield one of seven rings of power, but don’t know who the other bearers are (the rings are invisible, so there’s no possibility of visual detection—and anyway, they are scattered through the general population). If I twist the ring in a particular way, and all the other ring-bearers do likewise, then the dark lord will be destroyed, provided he has just been reborn. If he has not just been reborn, or if not all of us twist the ring in the right way, everyone will suffer needlessly. Luckily, there will be signs in the sky and in the pit of our stomachs that indicate to a ring-bearer when the dark lord has been reborn. All of us want to destroy the dark lord, but avoid suffering. All of us know these rules. When the distinctive feelings and signs arise, it will be commonly believed among the ring-bearers that the dark lord has been reborn. And this then sets us up for the necessary collective action: we twist each ring together, and destroy him. This is common belief/knowledge among an anonymous group where there’s no possibility of face-to-face identification. But it’s useful common belief/knowledge, exactly because it sets us up for possible coordinated action among the group so characterized.

I don’t know whether I want to say that the common knowledge among the ring-bearers is public among them (if we did, then clearly face-to-face usefulness can’t be a criterion for publicity…). But the case illustrates that we should be interested in common beliefs in situations of extreme anonymity—after all, there’s no sense in which I have de re knowledge, even potentially, of the other ring-bearers. Nor have I any way of getting an informative characterization of larger subpopulations to which they belong, or even of raising my credence in the answers to such questions. But despite all this, it seems to be a paradigmatic case of common belief subserving coordinated action—one that any account of common belief should provide for. Many times, cooperative activity between a group of people requires that they identify each other face-to-face, but not always, and the case of the ring-bearers reminds us of this.

Stepping back, the upshot of this discussion I take to be the following:

  • We shouldn’t get too caught up in the apparent anti-anonymity restrictions in standard formal models of common belief, but we should recognize that they directly handle only a limited range of cases.
  • Standard iterated characterizations generalize to anonymous groups directly, as do Lewisian ways of deriving these iterations from manifest events.
  • We can handle worries about inattentive and unknowledgeable group members by the method of restriction (which might include, as a special case, generic common belief).
  • Some common belief will be very cheap on this approach. And cheap common belief is a very poor candidate to be “public information” in any ordinary sense.
  • We can remedy this by analyzing the usefulness of common belief (under a certain description) directly. Cheap common belief is just a “don’t care”.
  • Face-to-face usefulness is one common way in which common belief among a restricted group can be useful. This requires that it be public among the restricted group that they are a large part (e.g. a supermajority, or all typical members) of some broader easily recognizable group.
  • Face-to-face usefulness is not the only form of usefulness, as illustrated by the extreme anonymity of cases like the ringbearers.

Reinterpreting the Lewis-Cubitt-Sugden results

In the last couple of posts, I’ve been discussing Lewis’s derivation of iterated “reason to believe” q from the existence of a special kind of state of affairs A. I summarize my version of this derivation as follows, with the tilde standing for “x and y are similar in epistemic standards and background beliefs”.

We start from four premises:

  1. \forall x (A \supset B_x(A))
  2. \forall x (A\Rightarrow_x \forall yB_y(A))
  3. \forall x (A \Rightarrow_x q)
  4. \forall x (A\Rightarrow_x \forall y [x\sim y])

Five additional principles are either used, or are implicit in the motivation for principles that are used:

  • ITERATION \forall c \forall x ([A \Rightarrow_x c]\supset [A \Rightarrow_x [A\Rightarrow_x c]])
  • SYMMETRY \forall c \forall x ([A \Rightarrow_x c]\wedge \forall y[x\sim y]\supset \forall y[A\Rightarrow_y c])
  • CLOSURE \forall a,c (\forall x B_x(a)\wedge \forall x[a \Rightarrow_x c]\supset \forall x B_x(c))
  • SYMMETRY+ \forall a \forall c \forall x\forall z ([a\Rightarrow_z[A \Rightarrow_x c]]\wedge [a \Rightarrow_z\forall y[x\sim y]]\supset [a\Rightarrow_z \forall y[A\Rightarrow_y c]])
  • CLOSURE+ \forall a,b,c\forall x ([a \Rightarrow_x \forall y B_y(b)]\wedge [a \Rightarrow_x(\forall y[b \Rightarrow_y c])]\supset [a\Rightarrow_x \forall yB_y(c)])

In the last post, I gave a Lewis-Cubitt-Sugden style derivation of the following infinite series of propositions, using (2-4), SYMMETRY+, CLOSURE+, ITERATION:

  • A \Rightarrow_x q
  • A\Rightarrow_x \forall y B_y(q)
  • A\Rightarrow_x (\forall z B_z(\forall y B_y(q)))
  • \ldots

A straightforward extension of this assumes (1) and CLOSURE, obtaining the following results in situations where A is the case:

  • \forall x B_x(q)
  • \forall x B_x(\forall y B_y(q))
  • \forall x B_x(\forall z B_z(\forall y B_y(q)))
  • \ldots

The proofs are valid, so each line in these two infinite sequences holds no matter how one reinterprets the primitive symbols, so long as the premises are true under that reinterpretation.

As we’ve seen in the last couple of posts, for Lewis, “indication” was a kind of shorthand. He defined it as follows:

  • p\Rightarrow_x q =_{def} B_x(p)\rightarrow B_x(q)

where \rightarrow is the counterfactual conditional.

Now, this definition is powerful. It means that CLOSURE needn’t be assumed as a separate premise—it follows from the logic of counterfactuals. And if “reason to believe” is closed under entailment, then we also get CLOSURE+ for free. As noted in edits to the last post, it means that we can get ITERATION from the logic of counterfactuals and a transparency assumption, viz. B_x(p)\supset B_x(B_x(p)).
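Sketched out (my reconstruction, paralleling the rationale for ITERATION-W given in the discussion of strong and weak indication; it also leans on closure of B under entailment and on B agglomerating over conjunction, both extra assumptions here):

  • Assume A\Rightarrow_x c, i.e. B_x(A)\rightarrow B_x(c).
  • At the closest B_x(A)-worlds: B_x(A)\wedge B_x(c).
  • Transparency, applied twice: B_x(B_x(A)) and B_x(B_x(c)); agglomerating: B_x(B_x(A)\wedge B_x(c)).
  • Strong centering makes B_x(A)\wedge B_x(c) entail the counterfactual B_x(A)\rightarrow B_x(c), so closure under entailment gives B_x(B_x(A)\rightarrow B_x(c)).
  • Hence B_x(A)\rightarrow B_x(B_x(A)\rightarrow B_x(c)), i.e. A\Rightarrow_x [A\Rightarrow_x c].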

The counterfactual gloss was also helpful in interpreting what (4) is saying. The word “indication” might suggest that when A indicates p, A must be something that itself gives the reason to believe p. That would be a problem for (4), but the counterfactual gloss on indication removes that implication.

Where Lewis’s interpretation of the primitives is thoroughly normative, we might try running the argument in a thoroughly descriptive vein (see the Stanford Encyclopedia for discussion of an approach to Lewis’s results along these lines).

To read the current argument descriptively, we might start by reinterpreting B_x(p) as saying: x believes that p, with indication defined out of this notion counterfactually just as before. The trouble with this is that some of the premises look false, read this way. For example, CLOSURE+ asks us to consider scenarios where x’s beliefs are thus-and-such, and where the propositions x believes in that scenario entail the proposition that the consequent of CLOSURE+ tells us x believes. Unless the agent actually believes all the consequences of the things she believes, it’s not clear why we should assume the condition in the consequent of CLOSURE+ holds. Similar issues arise for SYMMETRY+ and ITERATION.

One reaction at this point is to argue for a “coarse grained” conception of belief that makes it closed under entailment. That’s a standard modelling assumption in the formal literature on this topic, and something that Lewis and Stalnaker both (to a first approximation) accept. It’s extremely controversial, however.

If we don’t like that way of going, then we need to revisit our descriptive reinterpretation of the primitives. We could define them so as to make closure under such principles automatic. So, rather than having B_x(p) say that x believes p, we might read it as saying that x is committed to believe p, where x is committed to believe something when it follows from the things they believe (in a fuller version, I’d refine this characterization to allow for circumstances in which a person’s beliefs are inconsistent without her commitments being trivial, but for now let’s idealize away that possibility and work with the simpler version). Indication becomes: were x to be committed to believe that p, then they would be committed to believe that q.
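As a toy illustration of the intended notion (my own sketch; genuine logical consequence is of course richer than a single inference rule), take commitments to be the closure of a belief set under modus ponens:

    # Toy model: a conditional is a pair (antecedent, consequent);
    # commitments = closure of the belief set under modus ponens.
    def commitments(beliefs: set) -> set:
        closed, changed = set(beliefs), True
        while changed:
            changed = False
            for f in list(closed):
                if isinstance(f, tuple):
                    ant, cons = f
                    if ant in closed and cons not in closed:
                        closed.add(cons)
                        changed = True
        return closed

    # Believing p and p -> q commits you to q, whether or not you have
    # actually drawn the inference:
    print(commitments({"p", ("p", "q")}))  # {'p', ('p', 'q'), 'q'}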

If you read through the premises under this descriptive reinterpretation, then I contend that you’ll find they’ve got as good a claim to be true as the analogous premises on the original normative interpretation.

These interpretations need not compete. Lewis’s normative interpretation of the argument may be sound, and the commitment-theoretic reinterpretation may also be sound. In paradigmatic cases where there is a basis for common knowledge in Lewis’s sense, we may have an infinite stack of commitments-to-believe, and a parallel infinite stack of reasons-to-believe.

But notice! What the first Lewis argument gives us is reason to believe that others have reason to believe such-and-such. It doesn’t tell us that we have reason to believe that others are committed to believe so-and-so. So some of the commitments that people take on in such situations (commitments about what others are committed to believe) might be unreasonable, for all these two results tell us. This will be my focus in the rest of this post, since I am particularly interested in the derivation of infinite commitment-to-believe. I think the normative question, are these commitments epistemically reasonable?, is a central one for a commitment-theoretic way of understanding what “public information” or “common belief” consists in.

Let me first explore and expose a blind alley. When Lewis himself extracts descriptive predictions about belief from his account of iterated reasons for belief in situations of common knowledge, he adds assumptions about all people being rational, i.e. believing what they have reason to believe. He further adds assumptions about our having reason to believe each other to be rational in this sense, and so on. Such principles of iterated rationality are thought by Lewis to be true only for the first few iterations. They generate, for those few iterations, the predictions that we believe that q, believe that we believe q, believe that we believe that we believe q, etc. And in parallel, we can show that we have reason to believe each of these propositions about iterated belief, so all the belief we in fact have will be justified.

But while (per Lewis) these predictions are by design supposed to run out after a few iterations, we need to show that we have reason to believe everything we are committed to believing. One might try to parallel Lewis's strategy here, adding the premise that people are committed to believing what they have reason to believe. One might hope that such bridge principles will be true "all the way up", and so allow us to derive the analogue of Lewis's result for all levels of iteration. But this is where we hit the end of this particular road. If someone (perhaps irrationally) fails to believe that the ball is red despite having reason to believe that the ball is red, the ball's being red need not follow from what they believe, and so they need not be committed to believing it. So we do not have the principles we'd need to convert Lewis's purely normative result into one that speaks to the epistemic puzzle about commitment to believe.
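To put the counterexample in the shorthand above, writing R_x(p) (my shorthand) for "x has reason to believe p":

  • R_x(\text{the ball is red}) holds, but x does not believe the ball is red, and it follows from nothing else she believes.
  • So \text{the ball is red} \notin Cn(\{q : x \text{ believes } q\}), i.e. B_x(\text{the ball is red}) fails.
  • Hence the candidate bridge principle R_x(p) \supset B_x(p) is false, and the parallel with Lewis's strategy breaks down.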

Now for a positive proposal. To address the epistemic puzzle, I propose a final reinterpretation of the primitives of Lewis’s account. This time, we split the interpretation of indication and of the B-operator. The B-operator will express commitment-to-believe, just as above. But the indicates-for-x relation does not simply express counterfactual commitment, but has in addition a normative aspect. p will indicate q, for x, iff (i) were x to be committed to believing p, then x would be committed to believing q; and (ii) if x had reason to believe p, then x would have reason to believe q.
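In the same shorthand, the split indication relation is:

  • p \Rightarrow_x q iff (B_x(p) \boxright B_x(q)) \wedge (R_x(p) \boxright R_x(q))

with B_x expressing commitment-to-believe and R_x reason-to-believe, as above.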

Before we turn to evaluating the soundness of the argument, consider what its conclusions would mean under the new mixed-split reinterpretation. First, we would have infinite iterated commitment-to-believe, just as on the pure descriptive interpretation (that's fixed by our interpretation of B). But second, for each level of iteration of mutual commitment-to-believe, we can derive that A indicates (for each x) that proposition. And indication on this reading, unlike on the pure descriptive reading, has normative implications. It says that when the group members have reason to believe that A, they will have reason to believe that all are committed to believe that all are committed… that all are committed to believe q. So on the split reading of the argument, we derive both infinite iterated commitment to believe, and also that group members have reason to believe the propositions that they are committed to believe.

An instance of indication, on this reading, is a conjunction of what indication signified on the two earlier readings of the argument. This means that, for example, ITERATION on the new split reading is good just in case ITERATION on those two earlier readings was good. It can be argued for from the logic of counterfactual conditionals so long as we have transparency both of reasons for belief and of commitment to believe. It requires no separate discussion, therefore. SYMMETRY has the same status, and CLOSURE on the split reading follows from CLOSURE on the descriptive reading alone. I contend that SYMMETRY+ and CLOSURE+ on the split reading are also perfectly acceptable. You might think there is a special issue with CLOSURE+, since it features the B-operator within an indication operator. But what really drives CLOSURE+ is not the details of the particular B-operator; it is the general principle that indication is closed under valid arguments. And that motivates CLOSURE+ on the new reading as well as both of the old ones. Of all the premises, it's only (2)-(4) that make claims stronger than those on the earlier readings. For example, (2) now says, in part, that were one to have reason to believe that A, then one would have reason to believe that everyone is committed to believe A. This is genuinely new, compared to the assumptions made in the earlier readings. But these are assumptions that are pretty plausibly met by paradigms of bases for common knowledge.
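For instance, premise (2) says that A indicates to each x that everyone is committed to believe A; on the split reading that unpacks (in the shorthand above) as:

  • (B_x(A) \boxright B_x(\forall y\, B_y(A))) \wedge (R_x(A) \boxright R_x(\forall y\, B_y(A)))

and it is the second conjunct that is the genuinely new claim.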

What I’ve argued is that if the pure-descriptive version of Lewis’s argument is sound, and the pure-normative version of Lewis’s argument is sound, then the mixed-split-interpretation version of Lewis’s argument is sound. The conclusion of the argument under this mixed reading scratches an epistemological itch that neither the pure descriptive reading nor the pure epistemological reading (even supplemented with assumptions of iterated rationality) could help with.

That matters to me, in particular, because I'm interested in iterated commitment-to-believe as an analysis of public information/common belief, and I take the epistemological challenge to be a serious one. At first, I thought that I could wheel in the Lewis-Cubitt-Sugden proof to address my concerns. But I had two worries. One was about the soundness of that proof, given its reliance on the dubious premise (A6). That worry was expressed two posts ago, and addressed in the last post. The other was the worry raised in the current post: that on the intended reading, the Lewis-Cubitt-Sugden proof really doesn't show that we have reason to believe all those propositions we are committed to, if we have common belief in the commitment-theoretic sense. But, I hope, all is now well, since the split reinterpretation of the fixed-up proof delivers everything I need: both infinite iterated commitment to believe, and the reasonableness of believing each of those propositions we are committed to believing.

An alternative derivation of common knowledge

In the last post I set out a puzzling passage from Lewis. That was the first part of his account of “common knowledge”. If we could get over the sticking point I highlighted, we’d find the rest of the argument would show us how individuals confronted with a special kind of state of affairs A—a “basis for common knowledge that Z”—would end up having reason to believe that Z, reason to believe that all others have reason to believe Z, reason to believe that all others have reason to believe that all others have reason to believe Z, and so on for ever.

My worry about Lewis in the last post was also a worry about the plausibility of a principle that Cubitt and Sugden appeal to in reconstructing his argument. What I want to do now is give a slight tweak to their premises and argument, in a way that avoids the problem I had.

Recall the idea was that we had some kind of “manifest event” A—in Lewis’s original example, a conversation where one of us promises the other they will return (Z).

The explicit premises Lewis cited are:

  1. You and I have reason to believe that A holds.
  2. A indicates to both of us that you and I have reason to believe that A holds.
  3. A indicates to both of us that you will return.

I will use the following additional premise:

  • A indicates to me that we have similar standards and background beliefs.

On Lewis’s understanding of indication, this says that if I had reason to believe that A obtained, I’d have reason to believe we are similar in the way described. It is compatible with my not having any reason to believe, antecedent to encountering A, that we are similar in this way. On the other hand, if I have antecedent and resilient reason to believe that we are similar in the relevant respects, the counterfactual will be true

That the reason to believe needs to be resilient is an important caveat. It's only when the reasons to believe we're similar are not undercut by coming to have reason to believe that A that my version of the premise will be true. So Lewis's premise can be true in some cases where mine is not.

But mine is also true in some cases where his is not, and that seems to me a particularly welcome feature, since these include cases that are paradigms of common knowledge. Assume there is a secret handshake known only to members of our secret society. The handshake indicates membership of the society, and allegiance to its defining goal: promotion of the growing of large marrows. But the secret handshake is secret, so this indication obtains only for members of the society. Once we share the handshake, we intuitively establish common knowledge that each of us intends to promote the growing of large marrows. But we lacked reason to believe that we were similar in the right way independent of the handshake itself.

Covering these extra paradigmatic cases is an attractive feature. And as I've explained, my premise also holds in the other paradigmatic cases, the cases where our belief in similarity is independent of A. So it looks to me strictly preferable to Lewis's premise.

(I should note one general worry, however. Lewis's official definition of indication wasn't just that when one had reason to believe the antecedent, one would have reason to believe the consequent. It is that one would thereby have reason to believe the consequent. You might read into that a requirement that the reason one has to believe the antecedent has to be a reason one has for believing the consequent. That would mean that in cases where coming to have reason to believe that A was irrelevant to one's reason to believe that we were similar, we would not have an indication relation. I'm proposing simply to strike out the "thereby" in Lewis's definition to avoid this complication; if that leads to trouble, at least we'll be able to understand better why he stuck it in.)

I claim that my premise allows us to argue for the following, for various relevant p:

  • If A indicates to me that p then A indicates to me that (A indicates to you that p).

The case for this is as follows. We start by appealing to the inference pattern that I labelled I in the previous post, and that Lewis officially declared his starting point:

  1. A indicates to x that p
  2. x and y share similar standards and background beliefs.
  3. Conclusion: A indicates to y that p.

I claim this supports the following derived pattern:

  1. A indicates to x that A indicates to x that p
  2. A indicates to x that x and y share similar standards and background beliefs
  3. Conclusion: A indicates to x that A indicates to y that p.

This seems good to me, in light of the transparent goodness of I.

A bit of rearrangement gives the following version:

  1. A indicates to x that x and y share similar standards and background beliefs
  2. Conclusion: if A indicates to x that A indicates to x that p, then A indicates to x that A indicates to y that p.

The premise here is my first bullet point. Given Lewis’s counterfactual gloss on indication, the conclusion is equivalent to my second bullet point, as required. To elaborate on the equivalence: “If x had reason to believe that A, then if x had reason to believe A, then…” is equivalent to “If x had reason to believe that A, then…”, just because in standard logics of counterfactuals “if were p, then if were p, then…” is generally equivalent to “if were p, then…”. In the present context, that means that “A indicates to x that A indicates to x that…” is equivalent to “A indicates to x that”.

[edit: wait… that last move doesn't quite work, does it? "A indicates that (A indicates B)" translates to: "If x had reason to believe A, then x would have reason to believe (if x had reason to believe A, then x would have reason to believe B)". It's not just the counterfactual move, because there's an extra operator running interference. Still, it's what I need for the proof….

But still, the counterfactual gloss may allow the transition I need. For consider the closest worlds where x has reason to believe that A. And let's stick in a transparency assumption: that in any situation where x has reason to believe p, x has reason to believe that x has reason to believe p. Given transparency, at these closest worlds, x has reason to believe that she has reason to believe A, i.e. reason to believe that the closest world where she has reason to believe A is the world in which she stands. But in the world in which she stands, she has reason to believe p (by the original indication), and so transparency entails she has reason to believe she has reason to believe p. So she has reason to believe the relevant counterfactual is true, in those worlds. And that means we have derived the double iteration of indication from the single iteration. Essentially, suitable instances of transparency for "reason to believe" get us analogous instances of transparency for "indication".  ]
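Laying that out step by step (a sketch, with R_x(p) as my shorthand for "x has reason to believe p" and \boxright for the counterfactual conditional; the agglomeration step is an extra assumption of mine):

  1. R_x(A) \boxright R_x(p) (the single iteration: A indicates p to x).
  2. Transparency: wherever R_x(q) holds, R_x(R_x(q)) holds too.
  3. At the closest R_x(A)-worlds: R_x(p), by 1; so R_x(R_x(p)) and R_x(R_x(A)), by 2.
  4. Granting that these reasons agglomerate and support the counterfactual as evaluated at the very world she occupies, at those worlds we get R_x(R_x(A) \boxright R_x(p)).
  5. So R_x(A) \boxright R_x(R_x(A) \boxright R_x(p)): that is, A indicates to x that A indicates p to x.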

The final thing I want to put on the table is the good inference pattern VI from the previous post. That is:

  1. A indicates that [y has reason to believe that A holds] to x.
  2. A indicates that [A indicates Z to y] to x.
  3. Conclusion: A indicates that [y has reason to believe that Z] to x.

This looked good, recall, because the embedded contents are just an instance of modus ponens when you unpack them, and it's pretty plausible that in worlds where x has reason to believe the premises of a modus ponens, x has reason to believe its conclusion, which is what the above ends up saying. (As you'll see, I'll actually use a form of this in which the embedded clauses are generalized, but I think that doesn't make a difference.)
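Unpacked in the same shorthand (again a sketch; the closure assumption in step 3 is doing the work):

  1. R_x(A) \boxright R_x(\forall y\, R_y(A)) (premise 1 of VI).
  2. R_x(A) \boxright R_x(\forall y\, [R_y(A) \boxright R_y(Z)]) (premise 2 of VI).
  3. At the closest R_x(A)-worlds, x has reason to believe both embedded claims; assuming reason-to-believe is closed under this embedded (counterfactual) modus ponens, x there has reason to believe \forall y\, R_y(Z).
  4. So R_x(A) \boxright R_x(\forall y\, R_y(Z)), which is the conclusion of VI.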

This is enough to run a variant of the Lewis argument. Let me give it to you in a formalized version. I use \Rightarrow_x for the “indicates-to-x” relation, and B_x for “x has reason to believe”.  I’ll state it not just for the two-person case, but more generally, with quantifiers x and y ranging over members of some group, and a,b,c ranging over propositions. Then we have:

  1. \forall x (A \Rightarrow_x \forall y\, B_y(A)) (the analogue of Lewis's second premise, above).
  2. \forall x (A \Rightarrow_x Z) (the analogue of Lewis's third premise, above).
  3. \forall x ([A \Rightarrow_x Z] \supset [A \Rightarrow_x (\forall y [A \Rightarrow_y Z])]) (an instance of the formalization of the bullet point I argued for above).
  4. \forall x [A \Rightarrow_x (\forall y [A \Rightarrow_y Z])] (by logic, from 2, 3).
  5. \forall x [A \Rightarrow_x (\forall y\, B_y(Z))] (by inference pattern VI, from 1, 4).

Line 5 tells us that not only does A indicate to each of us that Z (as line 2, the analogue of Lewis's third premise, assures us) but that A indicates to each of us that each has reason to believe Z. The argument now loops, by further instances of the bullet assumption and inference pattern VI, showing that A indicates to each of us that each has reason to believe that each has reason to believe that Z, and so on for arbitrary iterations of reason-to-believe.
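To make the loop explicit, here is its next turn (continuing the numbering, taking p = \forall y\, B_y(Z) in the bullet point, and renaming the bound variable inside it to avoid a clash):

  6. \forall x ([A \Rightarrow_x \forall y\, B_y(Z)] \supset [A \Rightarrow_x (\forall y [A \Rightarrow_y \forall z\, B_z(Z)])]) (another instance of the bullet point).
  7. \forall x [A \Rightarrow_x (\forall y [A \Rightarrow_y \forall z\, B_z(Z)])] (by logic, from 5, 6).
  8. \forall x [A \Rightarrow_x (\forall y\, B_y(\forall z\, B_z(Z)))] (by inference pattern VI, from 1, 7).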

As in Lewis’s original presentation, the analogue of premise 1 allows us to detach the consequent of each of these indication relations, so that in situations where we all have reason to believe that A holds, we have arbitrary iterations of reason to believe Z.

(To quickly report the process by which I was led to the above. I was playing around with versions of Cubitt and Sugden's formalization of Lewis, which as mentioned used the inference pattern that I objected to in the last post. Inference pattern VI is what looked to me the good inference pattern in the vicinity of the thing that they label A6, and the bullet-pointed principle is essentially the adjustment you have to make to another premise they attribute to Lewis (one they label C4) in order to make their proof go through with VI rather than the problematic A6. From that point, it's simply a matter of figuring out whether the needed change is a motivated or defensible one.)

So I commend the above as a decent way of fixing up an obscure corner of Lewis's argument. To loop around to the beginning, the passage I was finding obscure in Lewis had him endorsing the following argument (II):

  1. A indicates that [y has reason to believe that A holds] to x.
  2. A indicates that Z to x.
  3. x has reason to believe that x and y share standards/background information.
  4. Conclusion: A indicates that [y has reason to believe that Z] to x.

The key change is to replace II.3 with the cousin of it introduced above: that A indicates to x that x and y share standards/background information. Once we've done this, I think the inference form is indeed good. Part of the case for this is indeed the argument that Lewis cites, labelled I above. But as we've seen, there seems to be quite a lot more going on under the hood.