This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link.
In previous posts, I’ve explored what radical interpretation (plus assumptions about cognitive architecture, plus epistemological assumptions) predicts about the denotation of logical concepts, conjunction and quantification in particular. I want to now tackle a different sort of concept: concepts of material objects.
I’m not going to attempt a general characterization of singular thought, any more than I covered all the interesting logical concepts previously. I’ll again pick an illustrative case: perceptual demonstrative concepts. There is, for example, a plate on the table to my left, and I can train my attention upon it, and think
- that is round,
- there is a biscuit lying on that,
- that would survive being dropped onto carpet.
These are perceptual demonstrative thoughts, and in each the concept *that* is a token of a perceptual demonstrative concept.
Following the pattern established before, I will make some architectural assumptions about the way that perceptual demonstrative thoughts figure in our psychology, and explore what radical interpretation will predict under those assumptions. By making everything conditional in this way, I avoid taking a stand in the rich literature concerning the correct characterization of our perceptual demonstrative thought. But I will again borrow my architectural assumptions from another’s theory, so interested readers can look to them for a defence of these assumptions as true of us. In this case, the resource is Imogen Dickie’s terrific book Fixing Reference, and my strategy is to show how we can undergird her theory of reference for perceptual demonstrative concepts within radical interpretation.
The first three architectural assumptions I will make are now familiar—that belief states are structured, that facts about syntax are grounded prior to questions of content arising, and that we can pick out inferential dispositions interrelating belief-types, again prior to questions of content being determined (for uniformity, I will continue to use the Peacockian idea of inferences that are treated as primitively compelling).
It is worth pulling out from these assumptions one kind of “syntactic” fact that is load-bearing here: we will be assuming, when it comes to perceptual demonstratives, that we have access to the facts that a pair of token beliefs [d is F] and [d is G] feature the same perceptual demonstrative. Dickie assumes (in line with “mental file” assumptions about cognitive architecture) that a signature of this is that we find the inference from this pair to the generalization [something is both F and G] primitively compelling, even in the absence of side-premises. Now, prima facie, this sort of file-individuation constraint is grounded in certain facts about content, in particular, the identification of particular concepts as quantification and conjunction. Whether there’s some looming circularity here is something I’ll come back to later.
The fourth architectural assumption, as before, concerns a characteristic conceptual role that a perceptual demonstrative concept d plays. Thoughts of the form “d is F” (for fixed perceptual demonstrative d and variable F) form a unified body—in part, this unity consists in the identity-free generalizations already mentioned as a signature of token beliefs containing the same concept d in the first place. Another aspect of unity is that the architecture seeks to resolve or eliminate inconsistencies within the body of beliefs. But the key assumption is that there is a hierarchical structure in how we resolve such inconsistencies, based on the way the states were formed. Let us look at this in more detail.
When Sally is in a position to token a perceptual demonstrative concept d, there must be, says Dickie, a “perceptual link” between Sally and a certain object (say, a plate lying on the table), “a perceptual feed which carries information as to some or all of colour, size, shape, state of motion or rest, and so on across the range of observable properties”. The link involves some causal connections between subject and the scene in front of them, and also internal “feature-to-property” processing carried out subpersonally by the visual system, processing initiated by the subject’s perceptual attention. Beliefs are formed by “uptake” of the information made available through this link, and all the beliefs produced by a given concrete link to an object are fed into the same body of beliefs—that is, the resulting beliefs [d is F], [d is G], and so on feature the same perceptual demonstrative d. Further d-involving beliefs can be added to this body by various means—by inference, by testimony, etc—but those beliefs formed by uptake from the original perceptual link have a privileged role. Specifically, if there is an inconsistency between a belief [d is F] and a belief [d is not-F], where the first is formed via the perceptual link and the second is not (but instead, e.g. through testimony), then the former is retained and the latter thrown out.
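The simple override policy just described can be sketched as a toy model in code. Everything here (the `Belief` class, the `source` labels, the `resolve` function) is my own invented illustration, not anything from Dickie; it just makes the conflict-resolution rule concrete:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Belief:
    predicate: str   # e.g. "round", for a belief of the form [d is F]
    polarity: bool   # True for [d is F], False for [d is not-F]
    source: str      # "perception" (uptake from the link), "testimony", "inference", ...

def resolve(body):
    """Simple policy from the text: when [d is F] and [d is not-F] conflict,
    the belief formed by uptake from the perceptual link is retained and
    the belief formed by other means is thrown out."""
    kept = []
    for b in body:
        rivals = [c for c in body
                  if c.predicate == b.predicate and c.polarity != b.polarity]
        if any(r.source == "perception" for r in rivals) and b.source != "perception":
            continue  # overridden by a perceptually formed rival
        kept.append(b)
    return kept
```

So a body containing a perceptual belief that the plate is round and a testimonial belief that it is not round resolves in favour of the perceptual one; beliefs with no rivals pass through untouched.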
Just as before, whatever the empirical status of this cognitive architecture, we can consider possible creatures who do work this way. So what would radical interpretation say about creatures with this architecture?
Radical interpretation (again!) tells us that the correct interpretation of mental states is one that maximizes the rationality of the individual concerned. Given the assumptions about cognitive architecture, we need our interpretation of the concept d to make rational the practice of belief-management.
The following auxiliary normative assumptions will suffice to generate the prediction that d denotes whatever object O is at the end of the perceptual link associated with d:
- A substantively rational agent would be disposed to have a belief that O is F (with a perceptual-demonstrative mode of presentation), formed by uptake from a perceptual link to O, override a belief that O is not F (with the same perceptual-demonstrative mode of presentation), when that is formed by some other means.
- For no object X other than O would a substantively rational agent be disposed to have a belief that X is F, formed by uptake from a perceptual link to O, override a belief that X is not F, when that is formed by some other means.
Clearly, it will be crucial to explain why we should believe these auxiliary normative premises—and in fact, I’ll be qualifying them shortly. But before turning to that, I want to fill in (in a now-familiar way!) the remainder of the story by running through how the assumptions just articulated, together with what we already have on the table, deliver the result that the perceptual demonstrative d denotes the object at the far end of the perceptual link.
First, we have the a posteriori assumption that d plays the stated role in Sally’s cognitive architecture. Second, we have substantive radical interpretation, which tells us that the correct interpretation of Sally’s states is one that maximizes the (substantive) rationality of the agent. We again need a third, “localizing”, assumption, inferential role determinism for d, which says that the interpretation on which Sally is most rational overall is one which also makes most rational the particular belief-management tendencies just listed. Putting these three together we have the following: the correct interpretation of Sally is one that makes the belief-management tendencies associated with d most rational. The final element to add to this is the pair of normative premises introduced above, which tell us that O—the thing at the far end of the perceptual link—is the thing that (uniquely) makes those belief-management tendencies rational. We then derive that Sally’s demonstrative concept d denotes O.
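Schematically, the argument just run through has this shape (the regimentation and labels are my own, not from the original discussion):

```latex
\begin{align*}
&\text{(P1)}\ \ d \text{ plays role } R \text{ (the belief-management tendencies) in Sally's architecture}\\
&\text{(P2)}\ \ \text{the correct interpretation of Sally maximizes her substantive rationality}\\
&\text{(P3)}\ \ \text{the interpretation maximizing overall rationality also makes } R \text{ most rational}\\
&\text{(P4)}\ \ O \text{ is the unique object that, assigned to } d\text{, makes } R \text{ rational}\\
&\text{(C)}\ \ \therefore\ d \text{ denotes } O
\end{align*}
```

(P1) is the a posteriori architectural assumption, (P2) substantive radical interpretation, (P3) inferential role determinism, and (P4) the pair of normative premises.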
The pair of normative assumptions is clearly crucial to this. But why believe them? The first sounds very plausible, once we unpack it a bit. Since we’re concerned with belief formation, the substantively rational agent is the one whose beliefs are justified. And what we’re specifically concerned with is a case where an agent has testimonial reasons to believe that *that* is not F (where “that” picks out its object via a perceptual link), but then perceives that *that* is F. The compelling thought is that in such a circumstance, the testimonial reasons one had to believe that the thing in question is not F are defeated, but that one is justified in trusting and endorsing the perception, and so coming to believe that *that* is F.
In response, someone might wonder whether there might be cases where background knowledge and testimony are sufficiently weighty that one should distrust the deliverances of perception, rather than drop the other beliefs. Cases of perceptual illusion provide examples: it perceptually seems that a pair comprises lines of different lengths (the Müller-Lyer illusion is the classic case), but memory of similar lines, plus theoretical understanding of the source of the illusion, plus testimony that the lines are in fact of the same length, conspire to make endorsing the seeming irrational. In general, we can view scenes in which there are misleading cues, so that ordinary visual processing represents the objects to which we attend as having properties other than those they in fact have.
But as well as showing that sometimes we are justified in hanging on to the belief that *that* is F when the perceptual seemings are that the thing in question is not F, this sort of case also shows that *our* belief-management policies are not as simple as those in the model with which we’ve been working. We don’t always let beliefs formed via the perceptual link override those formed by other means. A better approximation would be this: by default, when we have no evidence that we are in circumstances where perception is systematically unreliable, perceptually-based belief trumps beliefs with other sources. But where we have specific reason to doubt the reliability of the perceptual link in a particular respect, we do not allow it to trump beliefs from other sources. And this more nuanced belief-management policy does seem to produce justified beliefs.
(This is no accident! Insofar as you think our actual belief-management practices involving perceptual demonstratives are pretty good at producing justified beliefs, then to the extent that we describe those practices successfully, you should end up agreeing that under the correct interpretation those practices issue in justified beliefs. Of course, simple models like the ones we are working with might fail to produce justified beliefs, but when we discover an issue, complicating the model to make it more closely approximate the actual case is a decent way to proceed.)
But what of the second assumption? That is more involved, and I defer it to a follow-up post.