The classic epistemic problem of other minds goes something like this. I encounter a person in the street, writhing on the ground, exhibiting paradigmatic pain-behaviour. Now, you might run up to help. But for me, whose mind naturally turns to higher things, it poses a question. Sure, I know that the person writhing on the ground is moving their limbs in a certain distinctive way. And I find myself forming the belief, on that basis, that they are in pain. But with what right do I form that belief? What justifies the leap from pain-behaviour to pain?
You might try to answer by pointing to past experience: on previous occasions when I've seen someone exhibiting pain-behaviour, they've turned out to be in pain. So I've got good inductive grounds for thinking that pain-behaviour signals pain. That sounds reasonable, except: what justified me on those earlier occasions in thinking the pain-behaviours were accompanied by pain? I didn't directly feel the pain myself, after all (in the way I might have checked whether smoke was generated by fire). If I'm to be justified in believing that all Fs are G on the basis of a belief that all past observed Fs were G, I had better have been justified on those past occasions in thinking that each observed F was a G. So this line of thought just generalizes the question: how am I ever justified in moving from what I directly observe (pain-behaviour) to pain?
There's one particular response to this question I'm going to mention and set aside entirely, for now. That is that I was justified in the past in thinking that someone was in pain on the basis of first-person testimony: the person telling me that they are in pain. (First-person testimony seems more interesting than third-person testimony, someone else telling me the person is in pain, since that would just push the question back to how they knew.) If first-person testimony can (without circularity) play this kind of foundational role in grounding our knowledge of the state of mind of another, that'll be super-significant. But a competing line of thought, which I'll be running with for now, is that we can in principle have knowledge that people are in pain without this being based on their use of language. This is a very natural picture. It is one on which, for example, we learn the meaning of the word "pain" by noting that it's applied to people who, we know, are in pain.
Now comes the standard framing move of this first epistemic problem of other minds. We spot that there is one case in which we have knowledge that a thing is in pain (and that this is accompanied by pain-behaviour) where our belief that it's in pain isn't based on its observable pain-behaviour. That happens when the thing in pain is ourselves. Our knowledge that we ourselves are in pain is introspective, rather than observational. This looks like it helps! It gives us access to a set of cases where pain is correlated with pain-behaviour. Whenever, then, we are in a position to directly observe whether or not someone is in pain, we see that indeed, normally, pain-behaviour is accompanied by pain.
But the framing was a trap. At this point the other-minds sceptic can point to inadequacies in generalizing from pain-behaviour/pain links in the case of a single individual to a general correlation. Suppose you can only extract balls from a single urn. You notice all the blue balls are heavy, and all the red balls are light. Are you then justified in concluding that all blue balls in any urn are heavy? It seems not: you have no reason to think you've taken a fair sample of blue balls overall, since you have randomly sampled only a restricted population, namely the balls in this urn. At a minimum, we'd need to supplement your egocentric pain/pain-behaviour information with some explanation of why you should take your own case to be representative. But what would that explanation be?
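For the statistically minded, here's a toy simulation that makes the sampling worry concrete (the urns, rates, and numbers are invented purely for illustration, not part of the argument): even exhaustive evidence from within one urn is simply silent about the blue/heavy link in urns we never sample from.

```python
import random

# Invented setup: the "home" urn is the only one we can sample from, and
# in it every blue ball happens to be heavy. In the other (unsampled)
# urns, blue balls are heavy only half the time.

random.seed(0)

def make_urn(blue_heavy_rate, n=100):
    """Build an urn: a list of (colour, weight) pairs."""
    urn = []
    for _ in range(n):
        colour = random.choice(["blue", "red"])
        if colour == "blue":
            weight = "heavy" if random.random() < blue_heavy_rate else "light"
        else:
            weight = "light"
        urn.append((colour, weight))
    return urn

home_urn = make_urn(blue_heavy_rate=1.0)                        # our urn
other_urns = [make_urn(blue_heavy_rate=0.5) for _ in range(9)]  # unsampled

# Random sampling *within* the home urn confirms the local generalization...
sample = random.sample(home_urn, 30)
print(all(w == "heavy" for c, w in sample if c == "blue"))  # True

# ...but is silent about the wider population of urns.
all_blue = [(c, w) for urn in other_urns for (c, w) in urn if c == "blue"]
heavy_frac = sum(w == "heavy" for c, w in all_blue) / len(all_blue)
print(round(heavy_frac, 2))  # close to 0.5, nowhere near 1.0
```

The within-urn sample can be as random and as large as you like; the trouble is that the sampling procedure can't reach the population the generalization is about. That, the sceptic says, is exactly my position with respect to other minds.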
The challenge might be met. After all, on the traditional inductive model of justifying generalizations, we move from local patterns occurring in the region of space-time we inhabit to global generalizations, even though we do not "randomly sample" what happens across space and time. Somehow, induction (or something like it) takes us beyond interpolating patterns within the population we have actually sampled to the unrestricted extrapolation of certain patterns. Whatever secret sauce makes extrapolation beyond the sampled population work in everyday inductions, maybe it is also present in the case of pain and pain-behaviour, allowing extrapolation from my case to all cases. But pending some specific account of how the challenge could be met, I think it's reasonable to look for alternatives.
So that's the first problem of other minds. It's a problem of how we even get started in justifiedly attributing (=having justified beliefs about) the mental states of others. And though I've run through this for the case of pain, the usual stock example, you could run through the same challenge for many other mental states. Here are some candidates: that x sees a rock, or sees a red round thing, or sees that the red round thing is on the floor. That x intends to hail a taxi, or intends to raise her arm. That x is afraid, or is afraid of that snake. In each case, there's a characteristic kind of behaviour or relation to the environment that we could in principle describe in non-mentalistic terms, and which would be a basis for justifiedly attributing the various feelings/perceivings/intendings/emotions to the other.
What's the other problem of other minds, then? Well, it's the problem of how we are justified in the rest of what we believe about the minds of others. The examples I've mentioned are quite different from each other (as Bill Wringe recently emphasized, some of them centrally involve intentional content, which may pose particular issues), but they are well-represented by pain in the following sense: they are all states which are "specific and manifestable" in a certain sense. Pain is tied to pain-behaviour. Fear of a snake is tied to fear-behaviour targeting the snake. An intention to raise one's arm is tied to one's arm going up in a distinctive fashion. Seeing a red round thing is tied to having an unobstructed view of the thing (while awake, etc.). Those "ties" to the manifest circumstances of a person may be defeasible and contingent, but they're clearly going to be central to the epistemological story. But there are plenty of mental states that are not like that. The two cases that occupy me the most are general beliefs, or beliefs about things beyond the here-and-now (x's belief that she is not a brain in a vat; her belief that the sun will rise tomorrow), and final desires (a desire for security, or for justice).
There are plenty of "lines to take" on the first problem of other minds that won't generalize to these cases. Perhaps we can make sense of simply "perceiving" what others feel, or see, or intend, when they instantiate the manifestations associated with those states (maybe I point you to a story about mirror neurons, or the like, which could give you the empirical underpinnings of such a process). Maybe, following Plantinga, we think of the belief formation involved as a defeasible but reliable form of inference: the accurate execution of a belief-forming system successfully aimed at truth, producing in this instance a belief about another's mind triggered by seeing the manifestation. But general and relatively abstract beliefs have no direct characteristic manifestations (at least setting aside first-person testimony, as we have done), and the same goes for final desires. If an argument against characteristic manifestations is needed, I'd point to the famous circularity objections to behaviouristic analyses of individual belief and desire states. Essentially: pair a general and abstract belief with screwy enough desires, and it fits with almost any behaviour; pair a final desire with screwy enough beliefs, and the desire fits with almost any behaviour. So if anything is manifested in behaviour, it would seemingly have to be belief-desire states as a whole. But even then, there are many belief-desire states that would fit with any given piece of behaviour. The idea of direct manifestations in behaviour (or relations to the environment) just seems the wrong model to apply to these states.
If this is an epistemic problem of other minds, then it's a different problem from the first. But is it a problem? Here's what I'm imagining. Imagine that we'd solved the first epistemic problem of other minds to our own satisfaction. We have satisfied ourselves, at last, that we're justified in believing that the person writhing on the ground is indeed in pain, and indeed that he sees us, and is attracting attention by raising his arm, and so on. All of the various manifestation-to-mental-state ties discussed earlier, we'll assume, produced justified beliefs (for specificity, suppose the Plantingian story is correct). But now, given all this as a basis, what justifies us in thinking that he wants help, that he believes that we are able to help him, and so on? Of course, we would naturally attribute all this to a person in those circumstances. We think people in pain would like help, as a general rule. But we need to spell out what justifies this second layer of description of others.
At this point, we start to parallel what went before. So: we might point to past experience with people who are in pain. In the past, people in pain wanted help, and so on. But once again, that pushes the question back to how we knew, in those historical cases, that help was wanted. We might have been told by others that the people in those historical cases wanted help; but how did our informants know? Because there's no general tie between abstract and non-immediate beliefs/desires and anything immediately manifested, we can't credibly say we simply perceive these states of the other, nor that we defeasibly infer them from some observable basis. So the problem is: what to do?
In future posts, I want to say something about how answering this other problem of other minds might go. Essentially, I want to explore a model on which the epistemology of other minds involves contingent and topic-specific rules for belief formation about the beliefs and desires of others ("epistemic charity"), whose epistemic standing will have to be assessed and defended. Besides other contingent/topic-specific rules, the main alternatives I'll be considering are topic-neutral rules of belief formation (e.g. inference to the best explanation) and also, if I find something useful to say about it, an epistemology which gives language the starring role as a direct manifestation of otherwise hidden beliefs and desires. We'll see how far I get!