The other problem of other minds, recall, was the following. Grant that we have justification for ascribing various “manifested” mental states to others. Specifically, we have a story about how we are justified in ascribing at least the following: feelings like pain, emotions like joy or fear, perceivings, intendings. Many of these have intentional contents, and we suppose that our story shows how we can be justified (in the right circumstances) in ascribing states of these types for a decent range of contents, though perhaps not all. But such a story, we are assuming, is not yet an epistemic vindication of the total mental states we ascribe to others. In particular, we ascribe to each other general beliefs about matters beyond the here and now, and final desires for rather abstractly described states of affairs (these two examples are presumably just the tip of the iceberg). So the other problem of other minds is to explain how our justification for ascribing feelings, perceivings, intendings, and emotions extends to justification for all these other states.
These further states are characterized negatively: they are the mental states to which a solution to the original problem of other minds does not apply. And in approaching the other problem of other minds, ascriptions of mental states that are covered by whatever solution we have to the original problem will be a resource for us to wield. So before going on to the second problem, I want to fill in one solution to the first problem, so that we can see its scope and limits.
In Warrant and Proper Function, Plantinga addresses the epistemic problem of other minds. In the first section of chapter 4, he casts the net widely, as a problem of accounting for the “warrant” of beliefs ascribing everything from “being appeared to redly” to “believing that Moscow, Idaho, is smaller than its Russian namesake”. So the official remit covers both the original problem and the other problem of other minds, in my terms. But by the time he gets to section D, where his own view is presented, the goalposts have been shifted (mostly in the course of discussing Wittgensteinian “criteria”). By that point, the problem is posed in terms of a pairing of a mental state S with a description of associated criteria: “behaviour-and-circumstances” B that constitute good but defeasible evidence for the mental state in question. After discussing this, Plantinga comments: “So far [the Wittgensteinians] seem to be quite correct; there are criteria or something like them”. And so the question Plantinga sets himself is to explain how an inference from B to S can leave us warranted in ascribing S, given that he has argued against backing it up with epistemologies based on analogy, abduction, or whatever the Wittgensteinians said.
Plantinga’s account is the following. First, “a human being whose appropriate faculties are functioning properly and who is aware of B will find herself making the S ascription (in the absence of defeaters)… it is part of the human design-plan to make these ascriptions in these circumstances… with very considerable firmness”. And so “if the part of the design plan governing these processes is successfully aimed at truth, then ascriptions of mental states to others will often have high warrant for us; if they are also true, they will constitute knowledge”. Here Plantinga is simply applying his distinctive “proper function” reliabilism. In short: for a belief to be warranted (=such that if its content is true, then it is known) is for it to be produced/sustained by a properly functioning part of a belief-forming system which has the aim of producing true beliefs, and which (across its designed-for circumstances) reliably succeeds in that aim.
Plantinga’s claims that our beliefs about other minds are warranted rely on various contingencies obtaining (on this he is very explicit). It will have to be that the others we encounter are on occasion in mental states like S. It will have to be that B is reliably correlated with S. It will have to be that human minds exhibit certain functions, that they are functioning properly, and that we are in the circumstances they are designed for. The teleology of the inferential disposition involved will have to be right, and the inferential disposition (and its defeating conditions) will have to be set up so as to extract reliably true beliefs from the reliable correlations between B and S. Any of that can go wrong; but Plantinga invites us to accept that in actual, ordinary cases it is all in place.
The specific cases Plantinga discusses when defending the applicability of this proper-function epistemology to the problem of other minds are those where our cognitive structure includes a defeasible inferential disposition taking us from awareness of behaviour-and-circumstances B to ascribing mental state S. That particular account has no application to ascriptions of mental states S* that do not fit this bill: where there is no correlation with specific behaviour and circumstances B, or no inferential disposition reflecting that correlation (after all, to apply the Plantingan story, we need some “part” of our mental functioning which we can feed into the rest of that story, e.g. so as to evaluate whether that part of the overall system is aimed at truth). We can plausibly apply Plantinga’s account to pain-behaviour (pain); to someone tracking a red round object in their visual field (seeing a red round object); to someone whose arm goes up in a relaxed manner (raising their arm/intending to raise their arm). It applies, in other words, to the kind of “manifestable” mental states that in the last post I took to be outside the scope of the other problem of other minds. But, again as mentioned there, it’s hard to fit general beliefs and final desires (not to mention long-term plans and highly specific emotions) into this model. If you tried to force them into the model, you’d have to identify specific behaviour-and-circumstantial “criteria” for attributing e.g. the belief that the sun will rise tomorrow to a person. But (setting aside linguistic behaviour, of which more in future posts) I say: there are no such criteria.
Now, one might try to argue against me at this point by attempting to construct some highly conditional and complex disjunctive criteria specifying the circumstances in which it’d be appropriate to ascribe a belief that the sun will rise tomorrow, thinking through all the possible ways in which one might ascribe total belief-and-desire states which inter alia include this belief. But then I’ll point out that it would seem wild to assume that an inference with such conditional and complex disjunctive antecedents would be, in the relevant sense, a “part” of our mental design. A criterial model is just the wrong model of belief and desire ascription, and I see little point in attempting to paper over that fact.
(Let me note as an aside the following: there may well be behavioural criteria which lead us to classify a person as a believer or desirer, someone who possesses general beliefs and final desires that inform her actions. That is quite different from positing behavioural criteria for specific general beliefs and final desires. It’s the latter I’m doubtful of.)
On the other hand, the Plantingan approach to the problem of other minds doesn’t have to be tied to the B-to-S inferences. Indeed, Plantinga says: “Precisely how this works—just what our inborn belief-forming mechanisms here are like, precisely how they are modified by maturation and by experience and learning, precisely what role is played by nature as opposed to nurture—these matters are not (fortunately enough) the objects of this study”. So he’s clearly open to generalizing the account beyond the criterion-based inferential model.
But to leave things open at this point is more or less to simply assert that the other problem of other minds has a solution, without saying what that solution is. For example, you might think at this point that what’s going on is that we form beliefs about the manifest states of others on the basis of behavioural criteria, understood in the Plantingan way, and then engage in something like an inference to the best explanation in embedding these mentalistic “data” within a simple, strong overall theory of what the minds of others are like. One would then give a Plantingan “proper function” defence of inferring to (what is in fact) the best explanation of one’s data as a defeasible belief-forming method producing warranted beliefs. It would have to be a belief-forming method that was the proper functioning of a part of our cognitive systems, a part aimed at truth, a part that reliably secures truth in the designed-for circumstances, and so on.
As it happens, Plantinga himself argues that inference to the best explanation will be unsuccessful in solving the problem of other minds. Let’s take a look at his reasons. The main claim is that “A child’s belief, with respect to his mother, that she has thoughts and feelings, is no more a scientific hypothesis, for him, than the belief that he himself has arms or legs; in each case we come to the belief in question in the basic way, not by way of a tenuous inference to the best explanation or as a sort of clever abductive conjecture. A much more plausible view is that we are constructed… in such a way that these beliefs naturally arise upon the sort of stimuli … to which a child is normally exposed.” Now, Plantinga offers to his opponents a fallback position, whereby they can claim that the child’s beliefs are warranted by the availability of an IBE inference that they do not actually perform (I guess that Plantinga himself ties questions of warrant more closely to the actual genesis of beliefs, but he’s live to the possibility that others might not do so). But he thinks this won’t work, because what we need to explain is the very strong warrant (strong enough for knowledge) that we have in ascriptions of mental states to others, and he thinks that the warrant extractable from an IBE won’t be nearly so strong. He thinks that there are “plenty of other explanatory hypotheses [i.e. other than the hypothesis that other persons have beliefs, desires, hopes, fears, etc] that are equally simple or simpler”. The example given is the explanatory hypothesis that I am the only embodied mind, and that a Cartesian demon gives me a strong inclination to believe that other bodies have minds.
I think the best way of construing Plantinga’s argument here is that he’s saying that even if the Cartesian demon hypothesis is not as good as the other-minds hypothesis, if our only reason for dismissing it is the theoretical virtues of the latter hypothesis beyond simplicity and fitting-with-the-data, we’d be irresponsible unless we were live to new evidence coming in that’d turn the tables. So while we might have some kind of warrant in some kind of belief by IBE (that’s to be argued over by a comparison of the relative theoretical virtues of the explanatory hypotheses), we can already see we would be warranted only in “tenuous” and not very firm belief that others have minds, comparable to the kind of nuanced and open-to-contrary-evidence attitudes we properly take towards scientific theories.
Let’s assume that this is a good criticism (I think it’s at least an interesting one). Does it extend to the second problem of other minds? Per Plantinga, we assume a range of criteria-based inferences to firm, specific beliefs about others’ perceivings, intendings, emotions, and feelings, as well as to general classifications of them as believers and desirers. That leaves us with the challenge of spelling out how we get to specific general beliefs, final desires, and similar kinds of states. Could we see these as a kind of tenuous belief, like a scientific hypothesis? The thesis would not now be that a child would go wrong in the firmness of his beliefs that his mother has feelings, is a thinker, etc.–for those are criterion-backed judgements. But he would go wrong if he were comparably firm in his ascription of general beliefs and final desires to her. I take it that while some of our (and a child’s) ascriptions of general beliefs and desires to others will be tenuously held, others, and particularly negative ascriptions, are as firm as any beliefs we have. I’m as firmly convinced that my partner harbours no secret final desire to corrupt my soul, and that she believes that the sun will rise tomorrow, as I am of her being a believer at all, or someone who feels pain and emotions and sees the things around her. So I think that if there’s merit to Plantinga’s criticism of the IBE model as a response to the original problem of other minds, it extends to using it as a response to the other problem of other minds.
The nice thing about appealing to a topic-neutral belief-forming method like inference to the best explanation would be that we’d know exactly what we’d need to do to show that the ascriptions we arrive at are warranted, by Plantingan lights (namely, the route sketched a couple of paragraphs earlier). But the Plantingan worry about IBE is that it does not vindicate the kind of ascriptions that we in fact indulge in. This shows, I think, why Plantingans cannot ignore the problem of saying something more about the structures that underpin ascriptions of general beliefs, final desires and the like. There needs to be some part of our cognitive system issuing in the relevant mental-state ascriptions which (like IBE) is reliable in normal circumstances but which (unlike IBE) reliably issues in the very ascriptions we find ourselves with. It’s not at all obvious what will fit the bill, and without that, we don’t have a general Plantingan answer to the problem of other minds.
Postscript: A question arises about the way I’m construing Plantinga’s criticism of IBE: suppose that we devised a method of belief formation IBE*, which is just like IBE but issues in *firmer* beliefs. What would go wrong? I think the Plantingan answer must be that IBE*, if set up this way, isn’t a reliable enough method to count as producing warranted beliefs. In the introduction to Warrant and Proper Function, Plantinga says: “The module of the design plan governing the production of that belief must be such that the statistical or objective probability of a belief’s being true, given that it has been produced in accord with that module in a congenial cognitive environment, is high. How high, precisely? Here we encounter vagueness again; there is no precise answer. It is part of the presumption, however, that the degree of reliability varies as a function of degree of belief. The things we are most sure of—simple logical and arithmetical truths, such beliefs as that I now have a mild ache in my knee (that indeed I have knees), obvious perceptual truths—these are the sorts of beliefs we hold most firmly, perhaps with the maximum degree of firmness, and the ones such that we associate a very high degree of reliability with the modules of the design plan governing their production”. And so the underlying criticism here of an IBE approach to the problem of other minds is that IBE isn’t reliable enough in our kind of environment to count as warranting very firm degrees of belief in anything. And so when we find very firm beliefs, we must look for some other warranting mechanism.