
A simple formal model of how charity might resolve underdetermination

To a first approximation, decision theoretic representation theorems take a bunch of information about the (coherent) choices of an agent x, and spit out probability-utility pairs that (structurally) rationalize each of those choices. Call the set of such pairs the agential candidates for x’s psychology.

Problems arise if there are too many agential candidates for x’s psychology—if we cannot, for example, rule out hypotheses where x believes that the world beyond her immediate vicinity is all void, and where her basic desires solely concern the distribution of properties in that immediate bubble. And I’ve argued in other work that we do get bubble-and-void problems like these.

I also argued in that work that you could resolve some of the resulting underdetermination by appealing to substantive, rather than structural, rationality. In particular, I said we make a person more substantively rational by representing her IBE inferences as inferences to genuinely good explanations (like the continued existence of things when they leave her immediate vicinity) rather than to some odd bubble-and-void surrogate.

So can we get a simple model for this? One option is the following. Suppose there are some “ideal priors” Pr_i that encode all the good forms of inference to the best explanation. And suppose we’re given total information about the total evidence E available to x (just as we were given total information about her choice-dispositions). Then we can construct an ideal posterior probability, Pr_i(\cdot|E), which encodes the ideal doxastic attitudes to have in x’s evidential situation. Now, we can’t simply assume that x is epistemically ideal–there’s no guarantee that any probability-utility pair among the agential candidates for x’s psychology has a first element matching Pr_i(\cdot|E). But if we spot ourselves a metric of closeness between probability functions, we can consider the following way of narrowing down the choice-theoretic indeterminacy: the evidential-and-agential candidates for x’s psychology will be those agential candidates whose first component is maximally close to the probability function Pr_i(\cdot|E).
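To fix ideas, here is a minimal computational sketch of that selection rule. Everything in it is illustrative: the three-world space, the particular “ideal prior”, the candidate psychologies, and the use of total variation distance as a stand-in for the much richer closeness metric discussed below are all my own placeholder choices, not part of the proposal itself.

```python
# Toy sketch: narrow the agential candidates for x's psychology to those
# whose probability component is maximally close to the ideal posterior
# Pr_i(.|E). All names and numbers are invented for illustration.

WORLDS = ["w1", "w2", "w3"]

# Placeholder "ideal prior" Pr_i over a toy space of worlds.
ideal_prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

def conditionalize(prior, evidence_worlds):
    """Ideal posterior Pr_i(.|E): renormalize the prior over the worlds
    compatible with the total evidence E."""
    z = sum(prior[w] for w in evidence_worlds)
    return {w: (prior[w] / z if w in evidence_worlds else 0.0) for w in prior}

def tv_distance(p, q):
    """Stand-in closeness metric between probability functions:
    total variation distance (the real metric would be richer)."""
    return 0.5 * sum(abs(p[w] - q[w]) for w in WORLDS)

def evidential_and_agential_candidates(agential_candidates, prior, evidence):
    """Keep those (probability, utility) pairs whose first component is
    maximally close to the ideal posterior."""
    posterior = conditionalize(prior, evidence)
    scored = [(tv_distance(prob, posterior), (prob, util))
              for prob, util in agential_candidates]
    best = min(d for d, _ in scored)
    return [cand for d, cand in scored if d == best]

# Two agential candidates: a sensible one, and a bubble-and-void-ish one
# that dumps all credence on w3. Given evidence {w1, w2}, only the first
# survives the narrowing.
candidates = [
    ({"w1": 0.6, "w2": 0.4, "w3": 0.0}, {"w1": 1.0, "w2": 0.0, "w3": 0.0}),
    ({"w1": 0.0, "w2": 0.0, "w3": 1.0}, {"w1": 0.0, "w2": 0.0, "w3": 5.0}),
]
print(evidential_and_agential_candidates(candidates, ideal_prior, {"w1", "w2"}))
```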

(One warning about the closeness metric we need—I think you’d get the wrong results if this were simply a matter of measuring the point-wise similarity of attitudes. Roughly—if you can trace the doxastic differences between two belief states to a single goof that one agent made and the other didn’t, those states can count as similar even if there are lots of resulting divergences. And a belief state which diverges in many different unrelated ways—but where the resulting differences are less far-reaching—should in the relevant sense be less similar to either of the originals than they are to each other. A candidate example: the mashed-up state which agrees with both where they agree, and then, where they diverge, agrees with one or the other at random. So a great deal is packed into this rich closeness ordering. But also: I take it to be a familiar enough notion that it is okay to use in these contexts.)

So, in any case, that’s my simple model of how evidential charity can combine with decision-theoretic representation to narrow down the underdetermination—with the appeals to substantive rationality packed into the assumption of ideal priors, and the use of the closeness metric being another significant theoretical commitment.

I think we might want to add some further complexity, since it looks like we’ve been appealing to substantive rationality only as it applies to the epistemic side of the coin, and one might equally want to appeal to constraints of substantive rationality on utilities. So along with the ideal priors you might posit ideal “final values” (say, functions from properties of worlds to numbers, which we’d then aggregate—e.g. sum—to determine the ideal utilities to assign to a world). By pairing that with the ideal posterior probability we get an ideal probability-utility pair, relative to the agent’s evidence (I’m assuming that evidence doesn’t impact the agent’s final values—if it does in a systematic way, then that can be built into this model). Now, given an overall measure of closeness between arbitrary probability-utility pairs (rather than simply between probability functions) we can replicate the earlier proposal in a more general form: the evidential-and-agential candidates for x’s psychology will be those agential candidates which are maximally close to the pair (Pr_i(\cdot|E), U_i).
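Continuing the toy sketch (and reusing WORLDS, conditionalize and tv_distance from above), here is one hypothetical way to implement the generalized proposal. The particular final values, the summation rule, and the weighted blending of probability-distance with utility-distance are all placeholder assumptions of mine.

```python
# Toy "ideal final values": numbers attached to world-properties, aggregated
# (here: summed) to give the ideal utility U_i of a world. All invented.
ideal_final_values = {"red": 2.0, "round": 1.0}
world_properties = {"w1": ["red"], "w2": ["red", "round"], "w3": []}

def ideal_utility(world):
    """Sum the ideal final values of a world's properties."""
    return sum(ideal_final_values[p] for p in world_properties[world])

def pair_distance(cand, ideal_pair, alpha=0.5):
    """Placeholder closeness between whole psychologies: a weighted blend
    of probability distance and (averaged) utility distance."""
    (p, u), (p_i, u_i) = cand, ideal_pair
    prob_d = tv_distance(p, p_i)
    util_d = sum(abs(u[w] - u_i[w]) for w in WORLDS) / len(WORLDS)
    return alpha * prob_d + (1 - alpha) * util_d

def select_by_pair_closeness(agential_candidates, prior, evidence):
    """Keep the agential candidates maximally close, as whole
    (probability, utility) pairs, to (Pr_i(.|E), U_i)."""
    ideal_pair = (conditionalize(prior, evidence),
                  {w: ideal_utility(w) for w in WORLDS})
    scored = [(pair_distance(c, ideal_pair), c) for c in agential_candidates]
    best = min(d for d, _ in scored)
    return [c for d, c in scored if d == best]
```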

(As before, this measure of closeness between psychologies will have to do a lot of work. In this case, it’ll have to accommodate rationally permissible idiosyncratic variation in utilities. Alternatively—and this is possible either for the ideal priors or the ideal final values/utilities—we could start from a set of ideal priors and ideal final values, and do something a bit more complex with the selection mechanism—e.g. pick out the member(s) of the set of ideal psychologies and the set of agential candidate psychologies which are closest to one another, attribute the latter to the agent as their actual psychology, and the former as the proper idealization of their psychology. This allows different agents to be associated systematically with different ideal psychologies.)
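The two-set version of the selection mechanism is easy to render in the same toy style. A sketch, assuming some distance function over psychologies such as the pair_distance above:

```python
from itertools import product

def closest_cross_pair(agential_candidates, ideal_psychologies, distance):
    """Pick the member of each set closest to some member of the other:
    the first element is attributed to the agent as their actual psychology,
    the second serves as the proper idealization of that psychology."""
    attributed, idealization = min(
        product(agential_candidates, ideal_psychologies),
        key=lambda pair: distance(pair[0], pair[1]))
    return attributed, idealization
```

Since pair_distance takes a candidate and an ideal pair in that order, it can be passed in directly as the distance argument.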

This is a description of interpretation-selection that relies heavily on substantive rationality. It is an implementation of the idea that when interpreting others we maximize how favourable a psychology we give them—this maximizing thought is witnessed in the story above by the role played by closeness to an ideal psychology.

I also talked in previous posts about a different kind of interpretation-selection. This is interpretation-selection that maximizes, not objective favourability, but similarity to the psychology of the interpreter themself. We can use a variant of the simple model to articulate this. Rather than starting with ideal priors, we let the subscript “i” above indicate that we are working with the priors of the flesh and blood interpreter. We start with this prior, and feed it x’s evidence, in order to get a posterior probability tailored to x’s evidential situation (though processed in the way the interpreter would do it). Likewise, rather than working with ideal final values, we start from the final values of the flesh and blood interpreter (if they regard some of their values as idiosyncratic, perhaps this characterizes a space of interpreter-sanctioned final values—that’s formally like allowing a set of ideal final values in the earlier implementation). From that point on, however, interpretation selection is exactly as before. The selected interpretation of x is the one among the agential candidates to be her psychology that is closest to the interpreter’s psychology as adjusted and tailored to x’s evidential situation. This is exactly the same story as before, except with the interpreter’s psychology playing the role of the ideal.

Neither of these is yet in a form in which it could be a principle of charity implementable by flesh and blood agents themselves (that is, neither is yet a principle of epistemic charity). They presuppose, in particular, that one has total access to x’s choice-dispositions, and to her total evidence. In general, one will have only partial information at best about each. One way to start to turn this into a simple model of epistemic charity would be to think of there being a set of possible choice-dispositions that, for all we flesh-and-blood interpreters know, could be the choice-dispositions of x. Likewise for her possible evidential states. But relative to each complete choice-dispositions-and-evidence pair characterizing our target x, either one of the stories above could be run, picking out a selected interpretation for x in that epistemic possibility (if there’s a credal weighting given to each choice-evidence pair, the interpretation inherits that credal weighting).
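Here is a sketch of that last move, under the assumption (purely for illustration) that run_selection implements one of the two selection stories above and returns some hashable representative of the selected interpretation:

```python
from collections import defaultdict

def epistemic_charity(weighted_possibilities, run_selection):
    """weighted_possibilities: iterable of ((choices, evidence), credence)
    pairs, one for each epistemically possible way x's choice-dispositions
    and total evidence might be. Each possibility is fed through the
    selection story, and the selected interpretation inherits its credence."""
    dist = defaultdict(float)
    for (choices, evidence), credence in weighted_possibilities:
        interpretation = run_selection(choices, evidence)
        dist[interpretation] += credence
    return dict(dist)
```

The same shape handles the quadruple version discussed below: just let the possibilities range over (choices, evidence, ideal priors, ideal values) and pass all four into the selection story.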

In order for a flesh and blood interpreter—even one with insane computational powers—to implement the above, they would need to have knowledge of the starting psychologies on the basis of which the underdetermination is to be resolved (also the ability to reliably judge closeness). If the starting psychology is the interpreter’s own psychology, as on the second, similarity-maximizing reading of the story, then what we need is massive amounts of introspection. If the starting point is an ideal psychology, however, then in order for the recipe to be usable by a flesh and blood interpreter with limited information, they would need to be aware of what the ideal was—what the ideal priors are, and what the ideal final values are. If part of the point is to model interpretation by agents who are flawed in the sense of having non-ideal priors and final values (somewhat epistemically biased, somewhat immoral agents) then this is an interesting but problematic thing to credit them with. If they are aware of the right priors, what excuse do they have for the wrong ones? If they know the right final values, why aren’t they valuing things that way?

An account—even an account with this level of abstraction built in—should, I think, allow for uncertainty and false belief about what the ideal priors and final values are, among the flesh and blood agents who are deploying epistemic charity. So as well as giving our interpreter a set of epistemic possibilities for x’s evidence and choices, we will add in a set of epistemic possibilities for what the ideal priors and values in fact are. But the story is just the same: for any quadruple of x’s evidence, x’s choices, the ideal priors and the ideal values, we run the story as given to select an interpretation. And credence distributions on the interpreter’s part across these quadruples will be inherited as a credence distribution across the interpretations.

With that as our model of epistemic charity, we can then identify two ways of understanding how an “ideal” interpreter would interpret x, within the similarity-maximization story.

The first idealized similarity-maximization model says that the ideal interpreter knows the total facts about an interpreter y’s psychology, and also has total information about x’s evidence and choices. You feed all that information into the story as given, and you get one kind of result for what the ideal interpretation of x is (one that is relative to y, and in particular to y’s priors and values).

The second idealized similarity-maximization model says that the ideal interpreter knows the total facts about her own psychology, as well as total information about x’s evidence and choices. The ideal interpreter is assumed to have the ideal priors and values, and so maximizing similarity to that psychology just is maximizing closeness to the ideal. So if we feed all this information into the story as given, we get a characterization of the ideal interpretation of x that is essentially the same as the favourability-maximization model that I started with.

Ok, so this isn’t yet to argue for any of these models as the best way to go. But if the models are good models of the ways that charity would work, then they might help to fix ideas and explore the relationships among them.

Maximizing similarity and charity: redux

This is a quick post (because it’s the last beautiful day of the year). But in the last post, I was excited by the thought that a principle of epistemic charity that told you to maximize self-similarity in interpretation would correspond to a principle of metaphysical charity in which the correct belief/desire interpretation of an individual maximized knowledge, morality, and other ideal characteristics.

That seemed nice, because similarity-maximization seemed easier to defend as a reliable practical interpretative principle than maximizing morality/knowledge directly. The similarity-maximization seems to presuppose only that interpreter and interpretee are (with high enough objective probability) cut from the same cloth. A practical knowledge/morality maximization version of charity, on the other hand, looks like it has to get into far more contentious background issues.

But I think this line of thought has a big problem. It’s based on the thought that the facts about belief and desire are those that the ideal interpreter would attribute. If the ideal interpreter is an omniscient saint (and let’s grant that this is built into the way we understand the idealization) then similarity-maximization will make the ideal interpreter choose theories of any target that make them as close to an omniscient saint as possible—i.e. maximize knowledge and morality.

Alright. But the thing is that similarity maximization as practiced by ordinary human beings is reliable, if it is, because (with high enough probability) we resemble each other in our flaws as well as our perfections. My maximization of Sally’s psychological similarity to myself may produce warranted beliefs because I’m a decent sample of human psychology. But a hypothetical omniscient saint is not even hypothetically a decent sample of human psychology. The ideal interpreter shouldn’t be maximizing Sally’s psychological similarity to themself, but rather her similarity to some representative individual (like me).

Now, you might still get an interesting principle of metaphysical charity out of similarity-maximization, even if you have to make it agent-relative by having the ideal interpreter maximize similarity to x, for some concrete individual x (if you like, this ideal interpreter is x’s ideal interpretive advisor). If you have this relativization built into metaphysical charity, you will have to do something about the resulting dangling parameter—maybe go for a kind of perspectival relativism about psychological facts, or try to generalize this away as a source of indeterminacy. But it’s not the morality-and-knowledge maximization I originally thought resulted.

I need to think about this dialectic some more: it’s a little complicated. Here’s another angle to approach the issue. You could just stick with characterizing “ideal interpreter” as I originally did, as omniscient saints going through the same de se process as we ourselves do in interpreting others, and stipulate that belief/desire facts are what those particular ideal interpreters say they are. A question, if we do this, is whether this would undercut a practice of flesh and blood human beings (FAB) interpreting others by maximizing similarity to themselves. Suppose FAB recognizes two candidate interpretations of a target—and similarity-to-FAB ranks interpretation A over B, whereas similarity-to-an-omniscient-saint ranks B over A. In that situation, won’t the stipulation about what fixes the belief/desire facts mean that FAB should go for B, rather than A? But similarity-maximization charity would require the opposite.

One issue here is whether we could ever find a case instantiating this pattern which doesn’t have a pathological character. For example, if cases of this kind needed FAB to identify a specific thing that the omniscient agent knows, that they do not know—then they’d be committed to the Moorean proposition “the omniscient saint knows p, but I do not know p”. So perhaps there’s some more room to explore whether the combination of similarity-maximization and metaphysical charity I originally put forward could be sustained as a package-deal. But for now I think the more natural pairing with similarity-maximization is the disappointingly relativistic kind of metaphysics given above.

From epistemic to metaphysical charity

I’ll start by recapping a little about epistemic charity. The picture was that we can get some knowledge of other minds from reliable criterion-based rules. We become aware of the behaviour-and-circumstances B of an agent, and form the belief that they are in S, in virtue of a B-to-S rule we have acquired through nature or nurture. But this leaves a lot of what we think we ordinarily know about other minds unexplained (mental states that aren’t plausibly associated with specific criteria). Epistemic charity is a topic-specific rule (a holistic one) which takes us from the evidence acquired e.g. through criterion-based rules like the above, to belief and desire ascriptions. The case for some topic-specific rule will have to be made by pointing to problems with topic-neutral rules that might be thought to do the job (like IBE). Once that negative case is made we can haggle about the character of the subject-specific rule in question.

If we want to make the case that belief-attributions are warranted in the Plantingan sense, the central question will be whether (in worlds like our own, in application to the usual targets, and in normal circumstances) the rule of interpreting others via the charitable instruction to “maximize rationality” is a reliable one. That’s surely a contingent matter, but it might be true. But we shouldn’t assume that, just because a rule like this is reliable in application to humans, we can similarly extend it to other entities—animals and organizations and future general AI.

There’s also the option of defending epistemic charity as the way we ought to interpret others, without saying it leads to beliefs that are warranted in Plantinga’s sense. One way of doing that would be to emphasize and build on some of the pro-social aspects of charity. The idea is that we maximize our personal and collective interests by cooperating, and defaulting to charitable interpretation promotes cooperation. One could imagine charity being not very truth-conducive, and these points about its pragmatic benefits still obtaining—especially if we each take advantage of others’ tendency to charitably interpret us by hiding our flaws as best we can. Now, if we let this override clear evidence of stupidity or malignity, then the beneficial pro-social effects might be outweighed by constant disappointment as people fail to meet our confident expectations. So this may work best as a tie-breaking mechanism, where we maximize individual and collective interest by being as pro-social as possible under constraints of respecting clear evidence.

I think the strongest normative defence of epistemic charity will have to mix and match a bit. It may be that some aspects of charitable interpretation (e.g. restricting the search space to “theories” of other minds of a certain style, e.g. broadly structurally rational ones) look like tempting targets to defend as reliable, in application to creatures like us. But as we give the principles of interpretation-selection greater and greater optimism bias, they get harder to defend as reliable, and it’s more tempting to reach for a pragmatic defence.

All this was about epistemic charity, and is discussed in the context of flesh and blood creatures forming beliefs about other minds. There’s a different context in which principles of charity get discussed, and that’s in the metaphysics of belief and desire. The job in that case is to take a certain range of ground-floor facts about how an agent is disposed to act and the perceptual information available to them (and perhaps their feelings and emotions too), and then to select the most reason-responsive interpretation of all those base-level facts. The following is then proposed as a real definition of what it is for an agent to believe that p or desire that q: it is for that belief or desire to be part of the selected interpretation.

Metaphysical charity says what it is for someone to believe or desire something in the first place, doesn’t make reference to any flesh and blood interpreter, and a fortiori doesn’t have its base facts confined to those to which flesh and blood interpreters have access. But the notable thing is that (at this level of abstract definition) it looks like principles of epistemic and metaphysical charity can be paired. Epistemic charity describes, inter alia, a function from a bunch of information about acts/intentions and perceivings to overall interpretations (or sets of interpretations, or credence distributions over sets of interpretations). It looks like you can generate a paired principle of metaphysical charity out of this by applying that function to a particular rich starting set: the totality of (actual and counterfactual) base truths about the intentions/perceivings of the target. (We’ll come back to slippage between the two on the way).

It’s no surprise, then, that advocates of metaphysical charity have often framed the theory in terms of what an “ideal interpreter” would judge. We imagine a super-human agent whose “evidence base” is the totality of base facts about our target, and ask what interpretation (or set of interpretations, or credences over sets of interpretations) they would come up with. An ideal interpreter implementing a maximize-rationality principle of epistemic charity would pick out the interpretation which maximizes rationality with respect to the total base facts, which is exactly what metaphysical charity selected as the belief-and-desire fixing theory. (What happens if the ideal interpreter would deliver a set of interpretations, rather than a single one? That’d correspond to a tweak on metaphysical charity where agreement among all selected interpretations suffices for determinate truth. What if it delivers a credence distribution over such a set? That’d correspond to a second tweak, where the degree of truth is fixed by the ideal interpreter’s credence.)

You could derive metaphysical charity from epistemic charity by adding (some refinement of) an ideal-interpreter bridge principle: saying that what it is for an agent to believe that p/desire that q is for it to be the case that an ideal interpreter, with awareness of all and only a certain range of base facts, would attribute those attitudes to them. Granted this, and also the constraint that any interpreter ought to conform to epistemic charity, anything we say about epistemic charity will induce a corresponding metaphysical charity. The reverse does not hold. It is perfectly consistent to endorse metaphysical charity, but think that epistemic charity is all wrong. But with this ideal-interpreter bridge set up, whatever we say about epistemic charity will carry direct implications for the metaphysics of mental content.

Now metaphysical charity relates to the reliability of epistemic charity in one very limited respect. Given metaphysical charity, epistemic charity is bound to be reliable in one very restricted range of cases: a hypothetical case where a flesh and blood interpreter has total relevant information about the base facts, and so exactly replicates the ideal interpreter, counterfactuals about whom fix the relevant facts. Now, these cases are pure fiction–they do not arise in the actual world. And they cannot be straightforwardly used as the basis for a more general reliability principle.

Here’s a recipe that illustrates this, that I owe to Ed Elliott. Suppose that our total information about x is Z, which leaves open the two total patterns of perceivings/intendings A and B. Ideal interpretation applied to A delivers interpretation 1, the same applied to B delivers interpretation 2. 1 is much more favourable than 2. Epistemic charity applied to limited information Z tells us to attribute 1. But there’s nothing in the ideal interpreter/metaphysical charity picture that tells us A/1 is more likely to come about than B/2.
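A toy simulation makes the point vivid. Everything here is invented: the two patterns, the interpretations, and the parameter governing how often the favourable pattern actually obtains.

```python
import random

# Ideal interpretation (metaphysical charity) maps total patterns of
# perceivings/intendings to interpretations: A -> 1, B -> 2.
ideal_interpretation = {"A": "1", "B": "2"}

def charitable_attribution_given_Z():
    """Epistemic charity applied to limited information Z: attribute the
    more favourable of the interpretations Z leaves open, namely 1."""
    return "1"

def reliability(prob_A, trials=10_000):
    """How often the charitable attribution matches the fact of the matter,
    when pattern A obtains with probability prob_A."""
    hits = 0
    for _ in range(trials):
        pattern = "A" if random.random() < prob_A else "B"
        truth = ideal_interpretation[pattern]
        hits += (charitable_attribution_given_Z() == truth)
    return hits / trials

# reliability(0.9) is high, reliability(0.2) is low: nothing in the
# metaphysical-charity package fixes which regime we are in.
```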

On the other hand, consider the search-space restrictions—say to interpretations that make a creature rational, or rational-enough. If we have restricted the search space in this way for any interpreter, then we have an ex ante guarantee that whatever the ideal interpreter comes up with, it’ll be an interpretation within their search space, i.e. one that makes the target rational, or rational-enough. So constraints on the interpretive process will be self-vindicating, if we add metaphysical charity/ideal interpreter bridges to the package, though as we saw, maximizing aspects of the methodology will not be.

I think it’s very tempting for fans of epistemic charity to endorse metaphysical charity. It’s not at all clear to me whether fans of metaphysical charity should take on the burden of defending epistemic charity. If they do, then the key question will be the normative status of any maximizing principles they embrace as part of the characterization of charity.

Let me just finish by emphasizing both the flexibility and the limits of this package deal. The flexibility comes because you can understand “maximize reasonableness within search-space X” or indeed “maximize G-ness within search-space X” in all sorts of ways, and the bulk of the above discussion will go through. That means we can approach epistemic charity by fine-tuning for the maximization principle that allows us the best chance of normative success. On the other hand, there are some approaches that are very difficult to square with metaphysical charity or ideal interpreters. I mentioned in the previous post a “projection” or “maximize similarity to one’s own psychology” principle, which has considerable prima facie attraction—after all, the idea that humans have quite similar psychologies looks like a decent potential starting point. It’ll be complex translating that into a principle of metaphysical charity. What psychology would the ideal interpreter have, similarity to which is to be maximized?

Well, perhaps we can make this work: perhaps the ideal interpreter, being ideal, would be omniscient and saintly? If so, perhaps this form of epistemic charity would predict a kind of knowledge-and-morality-maximization principle in the metaphysical limit. So this is a phenomenon worth noting: metaphysical knowledge-and-morality maximization could potentially be derived either from epistemic similarity-maximization or from epistemic knowledge-and-morality maximization. The normative defences these epistemologies of other minds call for would be very different.

Epistemic charity as proper function

Our beliefs about the specific beliefs and desires of others are not formed directly on the basis of manifest behaviour or circumstances, simply because in general individual beliefs and desires are not paired up in a one-to-one fashion with specific behaviour/circumstances (that is what I took away from the circularity objection to behaviourism). And with Plantinga, let’s set aside the suggestion we base such attributions in an inference by IBE. As discussed in the last post, the Plantingan complaint is that IBE is only somewhat reliable, and (on a Plantingan theory) this means it could only warrant a rather tenuous, unfirm belief that the explanation is right.

(Probably I should come back to that criticism—it seems important to Plantinga’s case that he thinks there would be close competitors to the other-minds hypothesis, if we were to construe attributions as the result of IBE, since the case for the comparative lack of reliability of IBE is very much stronger when we’re considering picking one out of a bunch of close competitor theories than when e.g. there’s one candidate explanation that stands out a mile from the field, particularly when we remember we are interested only in reliability in normal circumstances. But surely there are some scientific beliefs that we initially form tentatively by an IBE which we end up believing very firmly, when the explanation they are a part of has survived a long process of testing and confirmation. So this definitely could do with more examination, to see if Plantinga’s charge stands up. It seems to me that Wright’s notion of wide vs. narrow cognitive roles might be helpful here—the thought being that the physicalistic explanatory hypotheses we might arrive at by IBE tend to have multiple manifestations and so admit of testing and confirmation in ways that are not just “more of the same” (think: Brownian motion vs statistical mechanical phenomena as distinct manifestations of an atomic theory of matter).)

What I’m now going to examine is a candidate solution to the second problem of other minds that can sit within a broadly Plantingan framework. Just as with criterion-based inferential rules that on the Plantingan account underpin ascriptions of pain, intentions, perceivings, and the like, the idea will be that we have special purpose belief forming mechanisms that generate (relatively firm) ascriptions of belief and desire. Unlike the IBE model, we’re not trying to subsume the belief formations within some general purpose topic-neutral belief forming mechanism, so it won’t be vulnerable in the way IBE was.

What is the special purpose belief forming mechanism? It’s a famous one: charitable interpretation. The rough idea is that one attributes the most favourable among the available overall interpretations that fit with the data you have about that person. In this case, the “data” may be all those specific criterion-based ascriptions—so stuff like what the person sees, how they are intentionally acting, what they feel, and so on. In a more full-blown version, we would have to factor in further considerations (e.g. the beliefs they express through language and other symbolic acts; the influence of inductive generalizations made on the basis of previous interpretations, etc).

What is it for an interpretation to be “more favourable” than another? And what is it for a belief-desire interpretation to fit with a set of perceivings, intentions, feelings etc? For concreteness, I’ll take the latter to be fleshed out in terms of rational coherence between perceptual input and belief change and means-end coherence of beliefs and desires with intentions, and the like—structural rationality constraints playing the role that in IBE, formal consistency might play. And I’ll take favourability to be cashed out as the subject being represented as favourably as is possible—believing as they ought, acting on good reasons, etc.

Now, if this is to fit within the Plantingan project, it has to be the case that there is a component of our cognitive system that goes for charitable interpretation and issues in (relatively firm) ascriptions of mental states to others. Is that even initially plausible? We all have experience of being interpreted uncharitably, and complaining about it. We all know, if we’re honest, that we are inclined to regard some people as stupid or malign, including in cases where there’s no very good direct evidence for that.

I want to make two initial points here. The first is that we need to factor in some of the factors mentioned earlier in order to fairly evaluate the hypothesis here. Particularly relevant will be inductive generalizations from previous experience. If your experience is that everyone you’ve met from class 22B is a bully who wants to cause you pain, you might reasonably not be that charitable to the next person you meet from class 22B, even if the evidence about that person directly is thin on the ground. I’d expect the full-dress version of charity to instruct us to form the most favourable attributions consistent with those inductive generalizations we reasonably hold onto (clearly, there’ll be some nuance in spelling this out, since we will want to allow that sufficient acquaintance with a person allows us to start thinking of them as a counterexample to generalizations we have previously held). For similar reasons, an instruction to be as charitable as possible won’t tell you to assume that every stranger you meet is saintly and omniscient, and merely behaving in ways that do not manifest this out of a concern not to embarrass you (or some such reason). For starters, it’s somewhat hard to think of decent ideas why omniscient saints would act as everyday people do (just ask those grappling with the problem of evil how easy this is), and for seconds, applied to those people with whom we have most interaction, such hypotheses wouldn’t stand much scrutiny. We have decent inductive grounds for thinking that, generically, people’s motives and information lie within the typical human band. What charity tells us to do is pick the most favourable interpretation consistent with this kind of evidence. (Notice that even if these inductive generalizations eventually take most of the strain in giving a default interpretation of another, charity is still epistemically involved insofar as (i) charity was involved in the interpretations which form the base from which the inductive generalization was formed; and (ii) we are called on-the-fly to modify our inductively-grounded attributions when someone does something that doesn’t fit with them.)

Further, the hypothesis that we have a belief-attributing disposition with charity as its centrepiece is quite consistent with this being defeasible, and quite often defeated. For example, here’s one way human psychology might be. We are inclined by default to be charitable in interpreting others, but we are also set up to be sensitive to potential threats from people we don’t know. Human psychology incorporates this threat-detection system by giving us a propensity to form negative stereotypes of outgroups on the basis of beliefs about bad behaviour or attitudes of salient members of those outgroups. So when these negative stereotypes are triggered, this overrides our underlying charitable disposition with some uncharitable default assumptions encoded in the stereotype. (In Plantingan terms, negative stereotype formation would not be a part of our cognitive structure aimed at truth, but rather one aimed at pragmatic virtues, such as threat-avoidance.) Only where the negative stereotypes are absent would we then expect to find the underlying signal of charitable interpretation.

So again: is it even initially plausible that we actually engage in charitable interpretation? The points above suggest we should certainly not test this against our practice in relation to members of outgroups that may be negatively stereotyped. So we might think about this in application to friends and family. As well as being in-groups rather than out-groups, these are also cases where we have a lot of direct (criterion-based) evidence about their perceivings, intendings, and feelings over time, so cases where we would expect to be less reliant on inductive generalizations and the like. I think in those cases charity is at least an initially plausible candidate as a principle constraining our interpretative practice. As some independent evidence of this, we might note Sarah Stroud’s account of the normative commitments constitutive of being a friend, which includes an epistemic bias towards charitable interpretation. Now, her theory of this says that it is the special normatively significant relation of friendship that places an obligation of charity upon us, and that is not my conjecture. But insofar as she is right about the phenomenology of friendship as including an inclination to charity, then I think this supports the idea that charitable interpretation is at least one of our modes of belief attribution. It’s not the cleanest case—because the very presence of the friendship relation is a potential confound—but I think it’s enough to motivate exploring the hypothesis.

So suppose that human psychology does work roughly along the lines just sketched, with charitable ascription the default, albeit defeasible and overridable. If this is to issue in warranted ascriptions within a Plantingan epistemology, then not only does charitable interpretation have to be a properly-functioning part of our cognitive system, but it would have to be a part that’s aimed at truth, and which reliably issues in true beliefs. Furthermore, it’d have to very reliably issue in true beliefs, if it is, by Plantingan lights, to warrant our firm beliefs about the mental lives of others.

Both aspects might raise eyebrows. There are lots of things one could say in praise of charitable interpretation that are fundamentally pragmatic in character. Assuming the best of others is a pro-social thing to do. Everyone is the hero in their own story, and they like to learn that they are heroes in other people’s stories too. So expressing charitable interpretations of others is likely to strengthen relationships, enable cooperation, and prompt reciprocal charity. All that is good stuff! It might be built up into an ecological rationale for building charitable interpretation into one’s dealings with in-group members (more generally, positive stereotypes), just as threat-avoidance might motivate building cynical interpretation into one’s dealings with out-group members (more generally, negative stereotypes). But if we emphasize this kind of benefit of charitable interpretation, we are building a case for a belief forming mechanism that aims at sociability, not one aimed at truth. (We’re also undercutting the idea that charity is a default that is overridden by e.g. negative stereotypes–it suggests instead that different stances in interpretation are tied to different relationships.)

It’s easiest to make the case that an interpretative disposition that is charitable is aimed at truth if we can make the case that it is reliable (in normal circumstances). What do we make of that?

Again, we shouldn’t overstate what it takes for charity to be reliable. We don’t have to defend the view that it’s reliable to assume that strangers are saints, since charity doesn’t tell us to do that (it wouldn’t make it to the starting blocks of plausibility if it did). The key question will be whether charitable interpretation will be a reliable way of interpreting those with whom we have long and detailed acquaintance (so that the data that dominates is local to them, rather than inductive generalizations). The question is something like the following: are humans generally such that, among the various candidate interpretations that are structurally rationally compatible with their actions, perceptions, and feelings (of the kind that friends and family would be aware of), the most favourable is the truest?

Posed that way, that’s surely a contingent issue—and something to which empirical work would be relevant. I’m not going to answer it here! But what I want to say is that if this is a reliable procedure in the constrained circumstances envisaged, then the prospects start to look good for accommodating charity within a Plantingan setup.

Now, even if charity is reliable, there remains the threat that it won’t be reliable enough to vindicate the firmness of the confidence I have that family and strangers on the street believe that the sun will rise tomorrow, and so forth. (This is to avoid the analogue of the problem Plantinga poses for inference to the best explanation.) This will guide the formulation of exactly how we characterize charity—it better not just say that we endorse the most charitable interpretation that fits the relevant data, with the firmness of that belief unspecified, but must also say something about the firmness of such beliefs. For example, it could be that charity tells us to distribute our credence over interpretations in a way that respects how well they rationalize the evidence available so far. In that case, we’d predict that beliefs and desires common to almost all favourable candidates are ascribed much more firmly than beliefs and desires which are part of the very best interpretation, but not of nearby candidates. And we’d make the case that e.g. a belief that the sun will rise tomorrow is going to be part of almost all such candidates. (If we make this move, we need to allow the friend of topic-neutral IBE to make a similar one. Plantinga would presumably say that many of the candidates to be “best explanations” of data, when judged on topic-neutral grounds, are essentially sceptical scenarios with respect to other minds. So I think we can see how this response could work here, but not in the topic-neutral IBE setting.)
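As a sketch of how that credence-spreading proposal might work, under invented candidates and rationalization scores:

```python
def credences_from_scores(scores):
    """Spread credence over candidate interpretations in proportion to how
    well each rationalizes the evidence so far (scores are placeholders)."""
    total = sum(scores.values())
    return {interp: s / total for interp, s in scores.items()}

def firmness(ascription, candidates, credences):
    """Firmness of a particular ascription: total credence of the candidate
    interpretations that include it."""
    return sum(credences[i] for i, attitudes in candidates.items()
               if ascription in attitudes)

# Invented candidate interpretations, each a set of attitude labels.
candidates = {
    "best":   {"sun-will-rise", "likes-jazz"},
    "nearby": {"sun-will-rise", "dislikes-jazz"},
    "remote": {"sun-will-rise"},
}
scores = {"best": 5.0, "nearby": 4.0, "remote": 1.0}

creds = credences_from_scores(scores)
print(firmness("sun-will-rise", candidates, creds))  # 1.0: common to all
print(firmness("likes-jazz", candidates, creds))     # 0.5: best candidate only
```

Ascriptions common to almost all favourable candidates come out much firmer than ones peculiar to the very best interpretation, just as the proposal predicts.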

Three notes before I finish. The first is that even if charity as I categorized it (as a kind of justification-and-reason maximizing principle) isn’t vindicated as a special purpose interpretive principle, it illustrates the way that interpretive principles with very substantial content could play an epistemological role in solving the other problem of other minds. For example, a mirror-image principle would be to pick the most cynical interpretation. Among creatures who are naturally malign dissemblers, that may be reliable, and so a principle of cynicism could be vindicated on exactly parallel lines. And if in fact all humans are pretty similar in their final desires and general beliefs, then a principle of projection, where one by default assumes that other creatures have the beliefs and desires that you, the interpreter, have yourself, might be reliable in the same way. And so that too could be given a backing. (Note that this would not count as a topic-neutral inference by analogy. It would be a topic-specific inference concerned with psychological attribution alone, and so could in principle issue in much firmer beliefs than a general purpose mechanism which has to avoid false positives in other areas.)

Second, the role for charity I have set out above is very different from the way that it’s handled by e.g. Davidson and the Davidsonians (in those moments where they are using it as a epistemological principle, rather than something confined to the metaphysics of meaning). This kind of principle is contingent, and though we could insist that it is somehow built into the very concept of “belief”, that would just be to make the concept of belief somewhat parochial, in ways that Davidsonians would not like.

The third thing I want to point out is that if we think of epistemic charity as grounded in the kind of considerations given above, we should be very wary about analogical extensions of interpretative practices to creatures other than humans. For it could be that epistemic charity is reliable when restricted to people, but utterly unreliable when applied—for example—to Klingons. And if that’s so, then extending our usual interpretative practice to a “new normal” involving Klingons won’t give us warranted beliefs at all. More realistically, there’s often a temptation to extend belief and desire attributions to non-human agents such as organizations, and perhaps, increasingly, AI systems. But if the reliance on charity is warranted only because of something about the nature of the original and paradigmatic targets of interpretation (humans mainly, and maybe some other naturally occurring entities such as animals and naturally formed groups) that makes it reliable, then it’ll continue to be warranted in application to these new entities only if they too have a nature which makes it reliable. It’s perfectly possible that the incentive structures of actually existing complex organizations are just not such that we should “assume the best” of them, as we perhaps should of real people. I don’t take a stand on this—but I do flag it up as something that needs separate evaluation.

Plantinga on the original problem of other minds and IBE

The other problem of other minds was the following. Grant that we have justification for ascribing various “manifested” mental states to others. Specifically, we have a story about how we are justified in ascribing at least the following: feelings like pain, emotions like joy or fear, perceivings, intendings. Many of these have intentional contents, and we suppose that our story shows how we can be justified (in the right circumstances) in ascribing states of these types for a decent range of contents, though perhaps not all. But such a story, we are assuming, is not yet an epistemic vindication of the total mental states we ascribe to others. Specifically, we ascribe to each other general beliefs about matters beyond the here and now, final desires for rather abstractly described states of affairs (though these two examples are presumably just the tip of the iceberg). So the other problem of other minds is that we need to explain how our justification for ascribing feelings, perceivings, intendings, emotions, extends to justification for all these other states.

The epistemic puzzle is characterized negatively: the states at issue are those to which a solution to the original problem of other minds does not apply. And in approaching the other problem of other minds, ascriptions of mental states that are covered by whatever solution we have to the original problem will be a resource for us to wield. So before going on to the second problem, I want to fill in one solution to the first problem so we can see its scope and limits.

In Warrant and Proper Function, Plantinga addresses the epistemic problem of other minds. In the first section of chapter 4, he casts the net widely, as a problem of accounting for the “warrant” of beliefs ascribing everything from “being appeared to redly” to “believing that Moscow, Idaho, is smaller than its Russian namesake”. So the official remit covers both the original problem and the other problem of other minds, in my terms. But by the time he gets to section D, where his own view is presented, the goalposts have been shifted (mostly in the course of discussing Wittgensteinian “criteria”). By that point, the discussion is framed in terms of a pairing of a mental state S with a description of associated criteria, “behaviour-and-circumstances” B, that constitute good but defeasible evidence for the mental state in question. After discussing this, Plantinga comments “So far [the Wittgensteinians] seem to be quite correct; there are criteria or something like them”. And so the question that Plantinga sets himself is to explain how an inference from B to S can leave us warranted in ascribing S, given that he has argued against backing it up with epistemologies based on analogy, abduction, or whatever the Wittgensteinians said.

Plantinga’s account is the following. First, “a human being whose appropriate faculties are functioning properly and who is aware of B will find herself making the S ascription (in the absence of defeaters)… it is part of the human design-plan to make these ascriptions in these circumstances… with very considerable firmness”. And so “if the part of the design plan governing these processes is successfully aimed at truth, then ascriptions of mental states to others will often have high warrant for us; if they are also true, they will constitute knowledge”. Here Plantinga is simply applying his distinctive “proper function” reliabilism. In short: for a belief to be warranted (=such that if its content is true, then it is known) is for it to be produced/sustained by a properly functioning part of a belief-forming system which has the aim of producing true beliefs, and which (across its designed-for circumstances) reliably succeeds in that aim.

Plantinga’s claims that our beliefs about other minds are warranted rely on various contingencies obtaining (on this he is very explicit). It will have to be that the others we encounter are on occasion in mental states like S. It will have to be that B is reliably correlated with S. It will have to be that human minds exhibit certain functions, that they are functioning properly, and that we are in the circumstances they are designed for. The teleology of the inferential disposition involved will have to be right, and the inferential disposition (and its defeating conditions) will have to be set up so as to extract reliably true belief formation out of the reliable correlations between B and S. Any of that can go wrong; but Plantinga invites us to accept that in actual, ordinary cases it is all in place.

The specific cases Plantinga discusses when defending the applicability of this proper function epistemology to the problem of other minds are those where our cognitive structure includes a defeasible inferential disposition taking us from awareness of behaviour-and-circumstances B to ascribing mental state S. That particular account has no application to any ascriptions of mental states S* that do not fit this bill: where there is no correlation with specific behaviour and circumstances B, or no inferential disposition reflecting that correlation (after all, to apply the Plantingan story, we need some “part” of our mental functioning which we can feed into the rest of the Plantingan story, e.g. to evaluate whether that part of the overall system is aimed at truth). We can plausibly apply Plantinga’s account to pain-behaviour (pain); to someone tracking a red round object in their visual field (seeing a red round object); to someone whose arm goes up in a relaxed manner (raising their arm/intending to raise their arm). It applies, in other words, to the kind of “manifestable” mental states that in the last post I took to fall outside the scope of the other problem of other minds. But, again as mentioned there, it’s hard to fit general beliefs and final desires (not to mention long-term plans and highly specific emotions) into this model. If you tried to force them into the model, you’d have to identify specific behaviour-and-circumstantial “criteria” for attributing e.g. the belief that the sun will rise tomorrow to a person. But (setting aside linguistic behaviour, of which more in future posts) I say: there are no such criteria. Now, one might try to argue against me at this point, attempting to construct some highly conditional and complex disjunctive criteria of the circumstances in which it’d be appropriate to ascribe a belief that the sun will rise tomorrow, thinking through all the possible ways in which one might ascribe total belief-and-desire states which inter alia include this belief. But then I’ll point out that it would seem wild to assume that an inference with conditional and complex disjunctive antecedents will be in the relevant sense a “part” of our mental design. A criterial model is just the wrong model of belief and desire ascription, and I see little point in attempting to paper over that fact.

(Let me note as an aside the following: there may well be behavioural criteria which lead us to classify a person as a believer or desirer, someone who possesses general beliefs and final desires which inform her actions. That is quite different from positing behavioural criteria for specific general beliefs and final desires. It’s the latter I’m doubtful of.)

On the other hand, the Plantingan approach to the problem of other minds doesn’t have to be tied to the B-to-S inferences. Indeed, Plantinga says “Precisely how this works—just what our inborn belief-forming mechanisms here are like, precisely how they are modified by maturation and by experience and learning, precisely what role is played by nature as opposed to nurture–these matters are not (fortunately enough) the objects of this study”. So he’s clearly open to generalizing the account beyond the criterion-based inferential model.

But to leave things open at this point is more or less to simply assert that the other problem of other minds has a solution, without saying what that solution is. For example, you might think at this point that what’s going on is that we form beliefs about the manifest states of others on the basis of behavioural criteria, understood in the Plantingan way, and then engage in something like an inference to the best explanation in embedding these mentalistic “data” within a simple, strong overall theory of what the minds of others are like. One would then give a Plantingan “proper function” defence of inferring to (what is in fact) the best explanation of one’s data as a defeasible belief-forming method producing warranted beliefs. It would have to be a belief-forming method that was the proper functioning of a part of our cognitive systems, a part aimed at truth, a part that reliably secures truth in the designed-for circumstances, etc.

As it happens, Plantinga himself argues that inference to the best explanation will be unsuccessful in solving the problem of other minds. Let’s take a look at his reasons. The main claim is that “A child’s belief, with respect to his mother, that she has thoughts and feelings, is no more a scientific hypothesis, for him, than the belief that he himself has arms or legs; in each case we come to the belief in question in the basic way, not by way of a tenuous inference to the best explanation or as a sort of clever abductive conjecture. A much more plausible view is that we are constructed… in such a way that these beliefs naturally arise upon the sort of stimuli … to which a child is normally exposed.” Now, Plantinga offers to his opponents a fallback position, whereby they can claim that the child’s beliefs are warranted by the availability of an IBE inference that they do not actually perform (I guess that Plantinga himself ties questions of warrant more closely to the actual genesis of beliefs, but he’s live to the possibility that others might not do so). But he thinks this won’t work, because what we need to explain is the very strong warrant (strong enough for knowledge) that we have in ascriptions of mental states to others, and he thinks that the warrant extractable from an IBE won’t be nearly so strong. He thinks that there are “plenty of other explanatory hypotheses [i.e. other than the hypothesis that other persons have beliefs, desires, hopes, fears, etc] that are equally simple or simpler”. The example given is the explanatory hypothesis that I am the only embodied mind, and that a Cartesian demon gives me a strong inclination to believe that other bodies have minds. I think the best way of construing Plantinga’s argument here is that he’s saying that even if the Cartesian demon hypothesis is not as good as the other-minds hypothesis, if our only reason for dismissing it is the theoretical virtues of the latter hypothesis beyond simplicity and fitting-with-the-data, we’d be irresponsible unless we were live to new evidence coming in that’d turn the tables. So while we might have some kind of warrant in some kind of belief by IBE (that’s to be argued over by a comparison of the relative theoretical virtues of the explanatory hypotheses), we can already see we would be warranted only in a “tenuous” and not very firm belief that others have minds, comparable to the kind of nuanced and open-to-contrary-evidence beliefs we properly take towards scientific theories.

Let’s assume that this is a good criticism (I think it’s at least an interesting one). Does it extend to the second problem of other minds? Per Plantinga, we assume a range of criteria-based inferences to firm specific beliefs in a range of perceivings, intendings, emotions, and feelings, as well as to general classifications of them as believers and desirers. That leaves us with the challenge of spelling out how we get to specific general beliefs and final desires, and similar kinds of states. Could we see these as a kind of tenuous belief, like a scientific hypothesis? The thesis would not now be that a child would go wrong in the firmness of his beliefs that his mother has feelings, is a thinker, etc–for those are criterion-backed judgements. But he would go wrong if he were comparably firm in his ascription of general beliefs and final desires to her. I take it that while some of our (and a child’s) ascriptions of general beliefs and desires to others will be tenuously held, others, and particularly negative ascriptions, are as firm as any other. I’m as firmly convinced that my partner harbours no secret final desire to corrupt my soul, and that she believes that the sun will rise tomorrow, as I am that she is a believer at all, or someone who feels pain, emotions, and sees the things around her. So I think if there’s merit to Plantinga’s criticism of the IBE model as a response to the original problem of other minds, it extends to using it as a response to the other problem of other minds.

The nice thing about appealing to a topic-neutral belief forming method like inference to the best explanation would be that we’d know exactly what we’d need to do to show that the ascriptions we arrive at are warranted, by Plantingan lights (the route sketched a couple of paras earlier). But the Plantingan worry about IBE is that it does not vindicate the kind of ascriptions that we in fact indulge in. This shows, I think, why Plantingans cannot ignore the problem of saying something more about the structures that underpin ascriptions of general beliefs, final desires and the like. There needs to be some part of our cognitive system issuing in the relevant mental-state ascriptions which (like IBE) is reliable in normal circumstances but which (unlike IBE) reliably issues in the very ascriptions we find ourselves with. It’s not at all obvious what will fit the bill, and without that, we don’t have a general Plantingan answer to the problem of other minds.

Postscript: A question arises about the way I’m construing Plantinga’s criticism of IBE: suppose that we devised a method of belief formation IBE*, which is just like IBE but issues in *firmer* beliefs. What would go wrong? I think the Plantingan answer must be that IBE*, set up this way, isn’t a reliable enough method to count as producing warranted beliefs. In the introduction to Warrant and Proper Function, Plantinga says: “The module of the design plan governing the production of that belief must be such that the statistical or objective probability of a belief’s being true, given that it has been produced in accord with that module in a congenial cognitive environment, is high. How high, precisely? Here we encounter vagueness again; there is no precise answer. It is part of the presumption, however, that the degree of reliability varies as a function of degree of belief. The things we are most sure of—simple logical and arithmetical truths, such beliefs as that I now have a mild ache in my knee (that indeed I have knees), obvious perceptual truths—these are the sorts of beliefs we hold most firmly, perhaps with the maximum degree of firmness, and the ones such that we associate a very high degree of reliability with the modules of the design plan governing their production”. And so the underlying criticism here of an IBE approach to the problem of other minds is that IBE isn’t reliable enough in our kind of environment to count as warranting very firm degrees of belief in anything. And so when we find very firm beliefs, we must look for some other warranting mechanism.
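One way to regiment the quoted condition (this is my gloss, not Plantinga’s own formalism; the symbols d(B), R(M) and \theta are introduced just for illustration): where d(B) is the firmness with which a belief B is held, and R(M) is the objective reliability of the module M that produced B in a congenial environment, warrant requires R(M) \geq \theta(d(B)), for some increasing function \theta. If the reliability of IBE in our environment falls below \theta(d) for high values of d, then IBE can warrant tenuous beliefs but not maximally firm ones, which is exactly the shape of the objection above.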

The other epistemic problem of other minds

The classic epistemic problem of other minds goes something like this. I encounter a person in the street, writhing on the floor, exhibiting paradigmatic pain-behaviour. Now, you might run up to help. But for me, whose mind naturally turns to higher things, it poses a question. Sure, I know that the person writhing on the ground is moving their limbs in a certain distinctive way. And I find myself forming the belief on that basis that they are in pain. But with what right do I form the belief? What justifies the leap from pain-behaviour to pain?

You might try to answer by pointing to past experience: on previous occasions when I’ve seen someone exhibiting pain-behaviour, they’ve turned out to be in pain. So I’ve got good inductive grounds for thinking that pain-behaviour signals pain. That sounds reasonable, except—what justified me on those earlier occasions in thinking the pain-behaviours were accompanied by pain? I didn’t directly feel the pain myself, after all (in the way I might have checked to see whether smoke was generated by fire). If I’m to be justified in believing that all Fs are G on the basis of a belief that all past observed Fs were G, I had better have been justified on those past occasions in thinking that each observed F was a G. So this line of thought just generalizes the question: how am I ever justified in moving from the direct observations (pain-behaviour) to pain?

There’s one particular response to this question that I’m going to mention and set aside entirely, for now: that I was justified in the past in thinking that someone was in pain on the basis of first-person testimony—the person telling me that they are in pain. (First-person testimony seems more interesting than second-person testimony—someone else telling me the person is in pain—since that would just push the question back to how they knew.) If first-person testimony can (without circularity) play this kind of foundational role in grounding our knowledge of the state of mind of another, that’ll be super-significant. But a competing line of thought, which I’ll be running with for now, is that we can in principle have knowledge that people are in pain without this being based on their use of language. This is a very natural picture. It is one on which, for example, we learn the meaning of the word “pain” by noting that it’s applied to people who, we know, are in pain.

Now comes the standard framing move of this first epistemic problem of other minds. We spot that there is one case in which we have knowledge that a thing is in pain (and that this is accompanied by pain-behaviour) where our belief that it’s in pain isn’t based on its observable pain-behaviour. That happens when the thing in pain is ourselves. Our knowledge that we ourselves are in pain is introspective, rather than observational. This looks like it helps! It gives us access to a set of cases where pain is correlated with pain-behaviour. Whenever we are in a position to directly observe whether or not someone is in pain, then, we see that indeed, normally, pain-behaviour is accompanied by pain.

But the framing was a trap. At this point the other-minds sceptic can point to inadequacies in generalizing from pain-behaviour/pain links in the case of a single individual to a general correlation. Suppose you can only extract balls from a single urn. You notice all the blue balls are heavy, and all the red balls are light. Are you then justified in concluding that all blue balls in any urn are heavy? It seems not: you have no reason to think you’ve taken a fair sample of blue balls overall, since you have randomly sampled only a restricted population, the balls in this urn. At a minimum, we’d need to supplement your egocentric pain/pain-behaviour information with some explanation of why you should take your own case to be representative. But what would that explanation be?
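To make the sceptic’s point vivid, here’s a minimal Bayesian sketch (my formalization; the hypotheses H_1, H_2 and data D are introduced just for illustration). Let H_1 say that all blue balls in your urn are heavy, let H_2 say that all blue balls in some unsampled urn are heavy, and let D be your data: n blue balls drawn from your urn, all heavy. Conditionalizing will typically give Pr(H_1|D) > Pr(H_1). But if your priors treat the urns as independent, so that Pr(H_2 \wedge D) = Pr(H_2)Pr(D), then Pr(H_2|D) = Pr(H_2): the data leave the cross-urn generalization exactly where it started. D bears on H_2 only if something in the priors already links the urns (a common manufacturer for the balls, say). The analogous question is what, in our priors, could link my own pain/pain-behaviour correlation to everybody else’s.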

The challenge might be met. After all, on the traditional inductive model of justifying generalizations, we move from local patterns occurring in the region of space-time we inhabit to global generalizations, even though we do not “randomly sample” what happens in space and time. Somehow, induction (or something like it) takes us beyond interpolating patterns that hold throughout the population randomly sampled, to the unrestricted extrapolation of certain patterns. Whatever secret sauce makes extrapolation beyond the sampled population work in everyday inductions, maybe it is also present in the case of pain and pain-behaviour, allowing extrapolation from my case to all cases. But pending some specific account of how the challenge could be met, I think it’s reasonable to look for alternatives.

So that’s the first problem of other minds. It’s a problem of how we even get started in justifiedly attributing (=having justified beliefs about) the mental states of others. And though I’ve run through this for the case of pain, the usual stock example, you could run the same challenge for many other mental states. Here are some candidates: that x sees a rock, or x sees a red round thing, or sees that the red round thing is on the floor. That x intends to hail a taxi, or x intends to raise her arm. That x is afraid, or that x is afraid of that snake. In each case, there’s a characteristic kind of behaviour or relation to the environment that we could in principle describe in non-mentalistic terms, and which would be a basis for justifiedly attributing the various feelings/perceivings/intendings/emotions to the other.

What’s the other problem of other minds, then? Well, it’s the problem of how we are justified in the rest of what we believe about the minds of others. The examples I’ve mentioned are quite different from each other (as Bill Wringe recently emphasized, some of them centrally involve intentional content, which may pose particular issues), but they are well-represented by pain in the following sense: they are all states which are “specific and manifestable” in a certain sense. Pain is tied to pain-behaviour. Fear of a snake is tied to fear-behaviour targeting the snake. An intention to raise one’s arm is tied to one’s arm going up in a distinctive fashion. Seeing a red round thing is tied to having an unobstructed view of the thing (while awake, etc.). Those “ties” to the manifest circumstances of a person may be defeasible and contingent, but they’re clearly going to be central to the epistemological story. But there are plenty of mental states that are not like that. The two cases that occupy me most are: general beliefs, or beliefs about things beyond the here-and-now (x’s belief that she is not a brain in a vat; her belief that the sun will rise tomorrow), and final desires (a desire for security, or for justice).

There are plenty of “lines to take” on the first problem of other minds that won’t generalize to these cases. Perhaps we can make sense of simply “perceiving” what others feel, or see, or intend, when they instantiate the manifestations associated with those states (maybe I point you to a story about mirror neurons, or the like, which could give you the empirical underpinnings of such a process). Maybe, following Plantinga, we think of the belief formation involved as a defeasible but reliable form of inference: the accurate execution of a belief-forming system successfully aimed at truth, producing in this instance a belief about another’s mind triggered by seeing the manifestation. But general and relatively abstract beliefs have no direct characteristic manifestations (at least setting aside first-person testimony, as we have done), and the same goes for final desires. If an argument against characteristic manifestations is needed, I’d point to the famous circularity objections to behaviouristic analyses of individual belief and desire states. Essentially: pair a general and abstract belief up with screwy enough desires, and it fits with almost any behaviour; pair a final desire up with screwy enough beliefs, and the desire fits with almost any behaviour. So if anything is manifested in behaviour, it would seemingly have to be belief-desire states as a whole. But even then, there are many belief-desire states that would fit with any given piece of behaviour. The idea of direct manifestations in behaviour (or relations to the environment) just seems the wrong model to apply to these states.
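A toy version of that circularity point, in standard decision-theoretic notation (a sketch of my own; the acts a, a' and functions P, U are introduced just for illustration). Suppose the observed behaviour is a choice of act a over act a'. Pick any probability function P you like, however screwy. Now define a utility function U on which every possible outcome of a gets value 1 and every possible outcome of a' gets value 0. Then EU(a) = \sum_w P(w)U(a,w) = 1 > 0 = EU(a'), whatever P says, so the pair (P, U) structurally rationalizes the choice of a. Any belief state at all can be paired with some desires that fit the behaviour, which is why behaviour constrains belief and desire only as a package, and only weakly even then.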

If this is an epistemic problem of other minds, then it’s a different problem from the first. But is it a problem? Here’s what I’m imagining. Suppose we’d solved the first epistemic problem of other minds to our own satisfaction. We have satisfied ourselves, at last, that we’re justified in believing that the person writhing on the floor is indeed in pain—and indeed, that he sees us, and is attracting attention by raising his arm, etc. All of the various manifestation-to-mental-state ties discussed earlier, we’ll assume, produced justified beliefs (for specificity, suppose the Plantingan story is correct). But now, given all this as a basis, what justifies us in thinking that he wants help, that he believes that we are able to help him, and so on? Of course, we would naturally attribute all this to a person in those circumstances. We think people in pain would like help, as a general rule. But we need to spell out what justifies this second layer of description of others.

At this point, we start to parallel what went before. So: we might point to past experience with people who are in pain. In the past, people in pain wanted help, and so on. But once again, that pushes the question back to how we knew, in those historical cases, that help was wanted. We might have been told by others that those historical cases wanted help; but how did our informants know? Because there’s no general tie between abstract and non-immediate beliefs/desires and anything immediately manifested, we can’t credibly say we simply perceive these states of the other, nor that we defeasibly infer them from some observable basis. So the problem is: what to do?

In future posts, I want to say something about how answering this other problem of other minds might go. Essentially, I want to explore a model on which the epistemology of other minds involves contingent and topic-specific rules for forming beliefs about the beliefs and desires of others (“epistemic charity”), whose epistemic standing will have to be assessed and defended. Besides other contingent/topic-specific rules, the main alternatives I’ll be considering are topic-neutral rules of belief formation (e.g. inference to the best explanation) and also, if I find something useful to say about it, an epistemology which gives language the starring role as a direct manifestation of otherwise hidden beliefs and desires. We’ll see how far I get!