Our beliefs about the specific beliefs and desires of others are not formed directly on the basis of manifest behaviour or circumstances, simply because in general individual beliefs and desires are not paired up in a one-to-one fashion with specific behaviour/circumstances (that is what I took away from the circularity objection to behaviourism). And with Plantinga, let’s set aside the suggestion that we base such attributions on inference to the best explanation (IBE). As discussed in the last post, the Plantingan complaint is that IBE is only somewhat reliable, and (on a Plantingan theory) this means it could only warrant a rather tenuous, unfirm belief that the explanation is right.
(Probably I should come back to that criticism—it seems important to Plantinga’s case that he thinks there would be close competitors to the other-minds hypothesis, if we were to construe attributions as the result of IBE, since the case for the comparative lack of reliability of IBE is very much stronger when we’re picking one out of a bunch of close competitor theories than when, say, one candidate explanation stands out a mile from the field, particularly when we remember we are interested only in reliability in normal circumstances. But surely there are some scientific beliefs that we initially form tentatively by an IBE and end up believing very firmly, once the explanation they are a part of has survived a long process of testing and confirmation. So this definitely could do with more examination, to see whether Plantinga’s charge stands up. It seems to me that Wright’s notion of wide vs. narrow cognitive roles might be helpful here—the thought being that the physicalistic explanatory hypotheses we might arrive at by IBE tend to have multiple manifestations, and so admit of testing and confirmation in ways that are not just “more of the same”. Think: Brownian motion vs. statistical mechanical phenomena as distinct manifestations of an atomic theory of matter.)
What I’m now going to examine is a candidate solution to the second problem of other minds that can sit within a broadly Plantingan framework. Just as with criterion-based inferential rules that on the Plantingan account underpin ascriptions of pain, intentions, perceivings, and the like, the idea will be that we have special purpose belief forming mechanisms that generate (relatively firm) ascriptions of belief and desire. Unlike the IBE model, we’re not trying to subsume the belief formations within some general purpose topic-neutral belief forming mechanism, so it won’t be vulnerable in the way IBE was.
What is the special purpose belief forming mechanism? It’s a famous one: charitable interpretation. The rough idea is that one attributes the most favourable of the available overall interpretations that fits the data one has about that person. In this case, the “data” may be all those specific criterion-based ascriptions—so stuff like what the person sees, how they are intentionally acting, what they feel, and so on. In a more full-blown version, we would have to bring in other factors (e.g. the beliefs they express through language and other symbolic acts, the influence of inductive generalizations made on the basis of previous interpretations, etc.).
What is it for an interpretation to be “more favourable” than another? And what is it for a belief-desire interpretation to fit with a set of perceivings, intentions, feelings, etc.? For concreteness, I’ll take the latter to be fleshed out in terms of rational coherence between perceptual input and belief change, means-end coherence of beliefs and desires with intentions, and the like—structural rationality constraints playing the role that formal consistency might play in IBE. And I’ll take favourability to be cashed out as the subject being represented as favourably as possible—believing as they ought, acting on good reasons, etc.
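To make the shape of this vivid, here is a purely illustrative sketch (in Python, with toy data and stand-in functions of my own devising; nothing here is a serious psychological or formal model): filter the candidate interpretations by whether they fit the structural rationality constraints, then pick the most favourable survivor.

```python
# A purely illustrative sketch of charitable interpretation as:
# filter candidate interpretations by structural-rationality "fit" with the
# data, then pick the most favourable survivor. All data and functions here
# are toy stand-ins, not a serious model of interpretation.

# Criterion-based ascriptions about the subject (the "data").
data = {"sees": "rain", "intends": "take an umbrella"}

# Candidate overall interpretations, each scored for favourability
# (how far the subject is represented as believing as they ought,
# acting on good reasons, and so on).
candidates = [
    {"belief": "it is raining", "desire": "to stay dry", "favourability": 0.9},
    {"belief": "it is raining", "desire": "to look fashionable", "favourability": 0.4},
    {"belief": "umbrellas ward off demons", "desire": "to stay safe", "favourability": 0.1},
]

def fits(interpretation, data):
    # Stand-in for the structural rationality constraints: the attributed
    # belief should cohere with what the subject perceives and intends.
    return data["sees"] == "rain" and "raining" in interpretation["belief"]

def charitable_interpretation(candidates, data):
    # The most favourable interpretation among those that fit the data.
    admissible = [c for c in candidates if fits(c, data)]
    return max(admissible, key=lambda c: c["favourability"], default=None)

print(charitable_interpretation(candidates, data))
# -> {'belief': 'it is raining', 'desire': 'to stay dry', 'favourability': 0.9}
```

The real proposal is of course nothing like this crude; the point is only to display the two moving parts, a fit filter and a favourability ordering.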
Now, if this is to fit within the Plantingan project, it has to be the case that there is a component of our cognitive system that goes for charitable interpretation and issues in (relatively firm) ascriptions of mental states to others. Is that even initially plausible? We all have experience of being interpreted uncharitably, and complaining about it. We all know, if we’re honest, that we are inclined to regard some people as stupid or malign, including in cases where there’s no very good direct evidence for that.
I want to make two initial points here. The first is that we need to bring in some of the factors mentioned earlier in order to fairly evaluate the hypothesis. Particularly relevant will be inductive generalizations from previous experience. If your experience is that everyone you’ve met from class 22B is a bully who wants to cause you pain, you might reasonably not be that charitable to the next person you meet from class 22B, even if the direct evidence about that person is thin on the ground. I’d expect the full-dress version of charity to instruct us to form the most favourable attributions consistent with those inductive generalizations we reasonably hold onto (clearly, there’ll be some nuance in spelling this out, since we will want to allow that sufficient acquaintance with a person allows us to start thinking of them as a counterexample to generalizations we have previously held). For similar reasons, an instruction to be as charitable as possible won’t tell you to assume that every stranger you meet is saintly and omniscient, and merely behaving in ways that do not manifest this out of a concern not to embarrass you (or some such reason). For one thing, it’s rather hard to think of decent explanations of why omniscient saints would act as everyday people do (just ask those grappling with the problem of evil how easy this is); for another, applied to those people with whom we have most interaction, such hypotheses wouldn’t stand much scrutiny. We have decent inductive grounds for thinking that, generically, people’s motives and information lie within the typical human band. What charity tells us to do is pick the most favourable interpretation consistent with this kind of evidence. (Notice that even if these inductive generalizations eventually take most of the strain in giving a default interpretation of another, charity is still epistemically involved insofar as (i) charity was involved in the interpretations which form the base from which the inductive generalization was formed; and (ii) we are called on-the-fly to modify our inductively-grounded attributions when someone does something that doesn’t fit with them.)
Further, the hypothesis that we have a belief-attributing disposition with charity as its centrepiece is quite consistent with this being defeasible, and quite often defeated. For example, here’s one way human psychology might be. We are inclined by default to be charitable in interpreting others, but we are also set up to be sensitive to potential threats from people we don’t know. Human psychology incorporates this threat-detection system by giving us a propensity to form negative stereotypes of outgroups on the basis of beliefs about bad behaviour or attitudes of salient members of those outgroups. So when these negative stereotypes are triggered, they override our underlying charitable disposition with the uncharitable default assumptions encoded in the stereotype. (In Plantingan terms, negative stereotype formation would not be a part of our cognitive structure aimed at truth, but rather one aimed at pragmatic virtues, such as threat-avoidance.) Only where negative stereotypes are absent would we then expect to find the underlying signal of charitable interpretation.
So again: is it even initially plausible that we actually engage in charitable interpretation? The points above suggest we should certainly not test this against our practice in relation to members of outgroups that may be negatively stereotyped. So we might think about this in application to friends and family. As well as being in-groups rather than out-groups, these are also cases where we have a lot of direct (criterion-based) evidence about their perceivings, intendings, and feelings over time, so cases where we would expect to be less reliant on inductive generalizations and the like. I think in those cases charity is at least an initially plausible candidate as a principle constraining our interpretative practice. As some independent evidence of this, we might note Sarah Stroud’s account of the normative commitments constitutive of being a friend, which include an epistemic bias towards charitable interpretation. Now, her theory says that it is the special normatively significant relation of friendship that places an obligation of charity upon us, and that is not my conjecture. But insofar as she is right about the phenomenology of friendship as including an inclination to charity, I think this supports the idea that charitable interpretation is at least one of our modes of belief attribution. It’s not the cleanest case—because the very presence of the friendship relation is a potential confound—but I think it’s enough to motivate exploring the hypothesis.
So suppose that human psychology does work roughly along the lines just sketched, with charitable ascription the default, albeit defeasible and overridable. If this is to issue in warranted ascriptions within a Plantingan epistemology, then not only does charitable interpretation have to be a properly-functioning part of our cognitive system, but it has to be a part that’s aimed at truth, and which reliably issues in true beliefs. Furthermore, it’d have to issue in true beliefs very reliably, if it is, by Plantingan lights, to warrant our firm beliefs about the mental lives of others.
Both aspects might raise eyebrows. There are lots of things one could say in praise of charitable interpretation that are fundamentally pragmatic in character. Assuming the best of others is a pro-social thing to do. Everyone is the hero in their own story, and they like to learn that they are heroes in other people’s stories too. So expressing charitable interpretations of others is likely to strengthen relationships, enable cooperation, and prompt reciprocal charity. All that is good stuff! It might be built up into an ecological rationale for building charitable interpretation into one’s dealings with in-group members (more generally, positive stereotypes), just as threat-avoidance might motivate building cynical interpretation into one’s dealings with out-group members (more generally, negative stereotypes). But if we emphasize this kind of benefit of charitable interpretation, we are building a case for a belief forming mechanism that aims at sociability, not one aimed at truth. (We’re also undercutting the idea that charity is a default that is overridden by e.g. negative stereotypes; it suggests instead that different stances in interpretation are tied to different relationships.)
It’s easiest to make the case that a charitable interpretative disposition is aimed at truth if we can make the case that it is reliable (in normal circumstances). So what should we make of that?
Again, we shouldn’t overstate what it takes for charity to be reliable. We don’t have to defend the view that it’s reliable to assume that strangers are saints, since charity doesn’t tell us to do that (it wouldn’t make it to the starting blocks of plausibility if it did). The key question will be whether charitable interpretation is a reliable way of interpreting those with whom we have long and detailed acquaintance (so that the data that dominates is local to them, rather than inductive generalizations). The question is something like the following: are humans generally such that, among the various candidate interpretations that are structurally rationally compatible with their actions, perceptions, and feelings (of the kind that friends and family would be aware of), the most favourable is the truest?
Posed that way, that’s surely a contingent issue—and something to which empirical work would be relevant. I’m not going to answer it here! But what I want to say is that if this is a reliable procedure in the constrained circumstances envisaged, then the prospects start to look good for accommodating charity within a Plantingan setup.
Now, even if charity is reliable, there remains the threat that it won’t be reliable enough to vindicate the firmness of the confidence I have that family and strangers on the street believe that the sun will rise tomorrow, and so forth. (This threat is the analogue of the problem Plantinga poses for inference to the best explanation.) This will guide exactly how we formulate charity—it had better not just say that we endorse the most charitable interpretation that fits the relevant data, leaving the firmness of that belief unspecified, but should also say something about the firmness of such beliefs. For example, it could be that charity tells us to distribute our credence over interpretations in a way that respects how well they rationalize the evidence available so far. In that case, we’d predict that beliefs and desires common to almost all favourable candidates are ascribed much more firmly than beliefs and desires which are part of the very best interpretation but absent from nearby candidates. And we’d make the case that e.g. a belief that the sun will rise tomorrow is going to be part of almost all such candidates. (If we make this move, we need to allow the friend of topic-neutral IBE to make a similar one. Plantinga would presumably say that many of the candidates to be “best explanations” of data, when judged on topic-neutral grounds, are essentially sceptical scenarios with respect to other minds. So I think we can see how this response could work here, but not in the topic-neutral IBE setting.)
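To see how firmness could fall out of such a proposal, here is another purely schematic sketch (toy candidates and invented scores, nothing more): spread credence over candidate interpretations in proportion to how well they rationalize the evidence, and ascribe a belief with firmness equal to the total credence of the candidates that include it.

```python
# Another purely schematic sketch: spread credence over candidate
# interpretations in proportion to how well they rationalize the evidence,
# and ascribe a belief with firmness equal to the total credence of the
# candidates that include it. Scores and candidates are invented.

candidates = [
    # (beliefs the interpretation attributes, rationalization score)
    ({"the sun will rise tomorrow", "the train leaves at 9"}, 10.0),
    ({"the sun will rise tomorrow", "the train leaves at 10"}, 8.0),
    ({"the sun will rise tomorrow"}, 6.0),
    ({"the sun will not rise tomorrow"}, 0.5),  # a remote, barely-rationalizing rival
]

total = sum(score for _, score in candidates)
credences = [(beliefs, score / total) for beliefs, score in candidates]

def firmness(belief):
    # Total credence of the candidate interpretations that ascribe this belief.
    return sum(cred for beliefs, cred in credences if belief in beliefs)

print(round(firmness("the sun will rise tomorrow"), 2))  # ~0.98: common to nearly all candidates
print(round(firmness("the train leaves at 9"), 2))       # ~0.41: only on the very best candidate
```

On toy numbers like these, the near-unanimous belief is ascribed far more firmly than the one carried only by the single best interpretation, which is the pattern the proposal needs.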
Three notes before I finish. The first is that even if charity as I have characterized it (as a kind of justification-and-reason maximizing principle) isn’t vindicated as a special purpose interpretive principle, it illustrates the way that interpretive principles with very substantial content could play an epistemological role in solving the other problem of other minds. For example, a mirror-image principle would be to pick the most cynical interpretation. Among creatures who are naturally malign dissemblers, that may be reliable, and so a principle of cynicism could be vindicated along exactly parallel lines. And if in fact all humans are pretty similar in their final desires and general beliefs, then a principle of projection, where one by default assumes that other creatures have the beliefs and desires that you, the interpreter, have yourself, might be reliable in the same way. And so that too could be given a backing. (Note that this would not count as a topic-neutral inference by analogy. It would be a topic-specific inference concerned with psychological attribution alone, and so could in principle issue in much firmer beliefs than a general purpose mechanism which has to avoid false positives in other areas.)
Second, the role for charity I have set out above is very different from the way that it’s handled by e.g. Davidson and the Davidsonians (in those moments where they are using it as an epistemological principle, rather than something confined to the metaphysics of meaning). This kind of principle is contingent, and though we could insist that it is somehow built into the very concept of “belief”, that would just be to make the concept of belief somewhat parochial, in ways that Davidsonians would not like.
The third thing I want to point out is that if we think of epistemic charity as grounded in the kind of considerations given above, we should be very wary about analogical extensions of interpretative practices to creatures other than humans. For it could be that epistemic charity is reliable when restricted to people, but utterly unreliable when applied, for example, to Klingons. And if that’s so, then extending our usual interpretative practice to a “new normal” involving Klingons won’t give us warranted beliefs at all. More realistically, there’s often a temptation to extend belief and desire attributions to non-human agents such as organizations, and perhaps, increasingly, AI systems. But if the reliance on charity is warranted only because of something about the nature of the original and paradigmatic targets of interpretation (humans mainly, and maybe some other naturally occurring entities such as animals and naturally formed groups) that makes it reliable, then it’ll continue to be warranted in application to these new entities only if they have a nature which also makes it reliable. It’s perfectly possible that the incentive structures of actually existing complex organizations are just not such that we should “assume the best” of them, as we perhaps should of real people. I don’t take a stand on this—but I do flag it up as something that needs separate evaluation.