This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link.
This post compares what I have said so far about the concepts of conjunction (which should be the easiest case) and unrestricted universal quantification (which is a notoriously hard case) to what Christopher Peacocke says on the same topics.
Peacocke’s account of logical concepts is something of a moving target. There is a “partial” treatment given in his 1986 book Thought. There are some important elaborations in his 1987 British Academy lecture “Understanding logical constants”. And there’s yet more on this in his 1992 book A Study of Concepts (ASOC). His thinking continued to develop after this, but one theme of his later work is the appeal to “implicit understandings” of concepts, which makes his later views less comparable to what I’m doing here. So I’ll concentrate on the remarks he makes in these three early pieces, focusing in particular on the account presented in the 1992 work, and using the earlier two works to fill in the account where the 1992 book is inexplicit.
One immediate contrast between the ideas I have been presenting and ASOC is the intended subject matter. Peacocke is assuming a Fregean account of thought, on which thoughts are structured entities whose components are Fregean senses. He calls these thought-components concepts. Each concept then determines (perhaps in context) a referent. He makes some standard Fregean assumptions about the division of labour between sense and reference. So we’d expect to see, for example, differences in the cognitive significance of thoughts explained by a difference in some component sense.
Peacocke’s project in ASOC is to give an account of what it is for a person to possess a concept. And he does this by setting out “possession conditions” for a target concept (be that conjunction or universal quantification). In the cases that concern us, these possession conditions consist simply in the subject finding relevant inferences primitively compelling (and the subject needs to find the token inferences primitively compelling because they have the right form). With this account of what it is for a subject to possess the concept conjunction, or quantification, or whatever, Peacocke then goes on to offer what he calls a “determination theory” for each individual concept, i.e. an account of what the concept refers to (or “has as its semantic value”). For conjunction he offers the following:
- The truth function that is the semantic value of conjunction is the function that makes transitions of the form mentioned in its possession conditions always truth-preserving. (p.10).
The relevant transitions are of course the familiar conjunction-introduction and conjunction elimination rules.
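To bring out how little slack the restriction to truth functions leaves, here is a minimal sketch (my own schematic gloss, not Peacocke’s notation) of how the two rules jointly pin down the classical truth table:

```latex
\begin{align*}
&\text{Let } f \text{ be any binary truth function proposed as the semantic value.}\\
&\text{Intro truth-preserving: } v(p)=\mathrm{T},\ v(q)=\mathrm{T} \;\Rightarrow\; f(v(p),v(q))=\mathrm{T},
  \text{ which forces } f(\mathrm{T},\mathrm{T})=\mathrm{T}.\\
&\text{Elim truth-preserving: } f(v(p),v(q))=\mathrm{T} \;\Rightarrow\; v(p)=\mathrm{T} \text{ and } v(q)=\mathrm{T},
  \text{ which forces } f(\mathrm{T},\mathrm{F})=f(\mathrm{F},\mathrm{T})=f(\mathrm{F},\mathrm{F})=\mathrm{F}.
\end{align*}
```

Of the sixteen binary truth functions only the classical conjunction table survives, so once the pool is restricted to truth functions the determination theory delivers a unique semantic value.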
Though each determination theory is a determination theory of one particular concept, Peacocke says we should expect such determination theories to have a general form.
- The determination theory for a given concept (together with the world in empirical cases) assigns semantic values in such a way that the belief-forming practices mentioned in the concept’s possession condition are correct. That is, in the case of belief formation, the practices result in true beliefs, and in the case of principles of inference, they result in truth-preserving inferences, when semantic values are assigned in accordance with the determination theory. (p.19)
Let me work this through for the various things that Peacocke says about quantifiers. In ASOC, the leading example is quantification restricted to the natural numbers, which he writes C. At p.7, he states that the possession conditions are that the thinker find suitable instances of the following form primitively compelling:
- from Cx: Fx derive Fn
The suitable instances are “those involving a concept n for which the content n is a natural number is uninformative”. In addition, (a) the thinker is required to find these inferences compelling because of the given form; and (b) the thinker is not required to find any other inferences essentially involving C primitively compelling. The determination theory is then
- The semantic value of this quantifier is the function from extensions of concepts (true or false of natural numbers) to truth values that makes elimination inferences of the form mentioned in the possession condition always truth-preserving.
Now, clearly validity is playing a major role in Peacocke’s determination theory for conjunction and (numerical) quantification. But notice also that in both cases the requirement to make the inferences valid selects a semantic value from a restricted pool of candidates (truth functions in the first case, quantifiers over natural numbers in the second). The general form of determination theory he gives doesn’t say anything about how we narrow down to this particular pool, and (as later authors have emphasized) this is an extremely substantive step. For example, in the case of quantification over the natural numbers, the elimination rule will be valid on any domain that includes the natural numbers, whether it also includes the integers, or is absolutely unrestricted.
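To see the underdetermination concretely (my illustration, not an example Peacocke gives): the elimination instances stay truth-preserving on any domain that merely contains the naturals.

```latex
\begin{align*}
&\text{Take any domain } D \supseteq \mathbb{N} \text{ and any natural number } n.\\
&\text{If } Cx\!:\!Fx \text{ is true on } D \text{ (every member of } D \text{ satisfies } F\text{), then in particular } n \text{ satisfies } F.\\
&\text{So the transition from } Cx\!:\!Fx \text{ to } Fn \text{ is truth-preserving whether } D=\mathbb{N},\ D=\mathbb{Z}, \text{ or } D \text{ is everything.}
\end{align*}
```

Validity of the elimination instances therefore cannot by itself select the intended domain; something else must do that work.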
This is one place where Peacocke’s 1987 work may be relevant. There, for concepts whose possession conditions are spelled out in terms of elimination rules only (/introduction rules only) he suggests that the concept denotes the weakest (/strongest) semantic value that makes the rules valid (p.161). In application to the possession conditions for numerical quantification, for example, interpreting the quantifier as ranging over a more inclusive domain than the naturals would be, intuitively, to give it a stronger interpretation than is required to make the elimination rules valid. That would allow us to drop the ad hoc restriction to quantifiers over natural numbers in the determination theory for numerical quantification, but it wouldn’t bring that determination theory into line with Peacocke’s “general formulation” of determination theories, since that doesn’t provide for constraints of strength or weakness. So this is an area where it’s a little unclear how to mesh the different eras of Peacocke.
Let me mention one last thing. In ASOC, Peacocke explicitly discusses radical interpretation a few times. He is thinking of it, though, as a rival account of concept possession, and though he allows that radical interpretation may say nothing false, he complains that it doesn’t have the right formal features to provide the kind of illumination of concept possession he is targeting. So Peacocke’s conception of radical interpretation at this point contrasts with my own, where radical interpretation is conceived as a theory of reference-fixing, not (Fregean) concept-possession.
I’ll now compare and contrast this setup to my own.
First, unlike Peacocke, I say nothing about Fregean sense. I do talk about “concepts” as components of structured mental states, but for me these are vehicles of mental content, not constituents of content (they are more like Fodorian concepts than Fregean ones, and indeed, thinking of them as words in the language of thought is one model I’ve suggested). So the whole enterprise of articulating “possession conditions” and worrying about them having the right form is just absent from my story. What I have said is consistent with the additional assertion that the “inferential roles” I have been appealing to are possession conditions in Peacocke’s sense. But I’ve left open that enacting these inferential roles might be either unnecessary or insufficient for possessing a concept of conjunction, or of numerical or unrestricted quantification.
Second, and connectedly, there may be nothing terribly natural about the inferential roles which figure in my explanations of why our concepts have the reference that they do. They may be carved out of our cognitive economy in quite artificial ways. For my purposes, I’m just saying that finding (at least!) those rules primitively compelling is sufficient (given the rest of the background) to explain why the concept denotes what it does.
Third, Peacocke’s determination theories are stated in an unqualified way—e.g. “the semantic value of C *is* that truth-function which makes the conjunction rules valid”. What you get from my framework at this point will have to be caveated with a “ceteris paribus” clause, since everything depends on the assumption that the particular patterns encapsulated in the conceptual role aren’t “overridden” by the way the concept figures elsewhere in the cognitive economy. For some reason I don’t understand, the Peacockian privileges the way that the conjunction-concept figures in belief, rather than other kinds of thought, so that if a concept figures in the conjunction-way in beliefs, and in the disjunction-way in desires, it would determinately be a concept of conjunction. My account (correctly I think) makes no strong predictions about such conflicted cases. The caveats in my theory are a feature, not a bug.
Fourth, what I say is consistent with the thesis that there is only a small range of concepts which have neatly specifiable inferential/conceptual roles of the kind I’ve been talking about. For all I’ve said, there are no such patterns to be found for many of our concepts. This relaxed attitude is possible since I already have my story about what fixes the content of thought, and so for me the Peacocke-style theorizing is not an essential part of articulating what grounds representation, but a matter of illuminating some special cases.
Fifth, Peacocke alludes to the “general form” of determination theories, which raises the question: why should determination theories have a common form? Radical interpretation answers this by starting with an account of what it takes, generally, for a thought to have content; the analogues of his “determination theories” then fall out as accounts of what that general story requires when applied to a particular case. This feels to me like a better direction of explanation than a bottom-up approach on which the convergence on a common form would appear as a cosmic coincidence.
Sixth, even when working through cases where inferential rules are central, the emphasis on their validity I think ties our hands. I mentioned above Peacocke’s 1987 constraints of strength and weakness, so it appears that more than validity is going on—and if this is not to be mere monster-barring, we need a sense of why such constraints turn up in theories of reference. Radical Interpretation is well equipped to give principled underpinnings for these extra sorts of constraints.
To illustrate. In response to my earlier post on conjunction, my colleague Jack Woods asked me what ruled out hyperintensionally deviant interpretations of the conjunction c. I asserted that no rational agent would find the conjunction rules for c primitively compelling unless they were working with conjunction. But what if the c they had in mind was something where we read “pcq” as p and q and I am here now? Or indeed, conjunction supplemented by any tautology? The answer I want to give to this sort of challenge is that such an interpretation would not make the agent’s practices justified. Let’s accept that “I am here now” is itself a priori. Still, there is a story to be told about what one has to be like to be justified in believing it (we just know that these conditions aren’t empirical ones). If we believe a thought (even one that is a priori justifiable) merely “en passant”, on irrelevant grounds, then we are not forming a justified belief. And the raw datum here is just that the inference from p, q to pcq is felt to be primitively compelling. The requirement that we make the primitively compelling rules valid isn’t fine-grained enough, since there are many valid arguments we are not justified in taking as primitive. So it’s crucial that the story really incorporates the constraint of making the practice a justified one (which is indeed something that features in Peacocke’s “general formulations” of determination theories) and doesn’t cash it in too quickly for constraints of making-the-inference-valid (as Peacocke himself does when citing particular determination theories).
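To put Woods’s challenge in schematic form (my reconstruction, taking the tautology variant as the cleanest case):

```latex
\begin{align*}
&\text{Read } p\,c\,q \text{ as } p \wedge q \wedge A, \text{ where } A \text{ is a tautology (or an a priori truth such as ``I am here now'').}\\
&\text{Intro: if } v(p)=\mathrm{T} \text{ and } v(q)=\mathrm{T} \text{ (and } A \text{ is true), then } v(p \wedge q \wedge A)=\mathrm{T}.\\
&\text{Elim: if } v(p \wedge q \wedge A)=\mathrm{T}, \text{ then } v(p)=\mathrm{T} \text{ and } v(q)=\mathrm{T}.
\end{align*}
```

On the deviant reading both rules come out truth-preserving (outright in the tautology case, and at every context of utterance in the “I am here now” case), which is exactly why bare validity cannot separate the deviant interpretation from plain conjunction: the separating work has to be done by the justificatory constraint just described.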
A second illustration will open up some further complex issues in Peacocke interpretation. My discussion of unrestricted quantification made essential appeal to epistemological claims about the conditions under which non-enthymematic universal instantiation is justified. Validity alone, I said, wouldn’t knock out skolemite interpretations of the quantifier, but considerations of maximizing justification nevertheless do. So I take this to illustrate the same moral: the importance of full justificatory structure in getting the story about reference-fixing right.
Now, Peacocke’s discussion spirals around the issue of what fixes unrestricted interpretation, without ever (in what I’ve read) really nailing it. Let me outline what I’ve found him saying on this topic.
Peacocke discusses unrestricted quantification in the 1986 book Thought. But he acknowledges there (p.36) that he has not given a full account of what determines the denotation of an unrestricted quantifier. More generally, I see no new resources in what he says there or in the later works to rule out skolemite interpretations, given only the resources of validity and completeness.
It could be that he envisages the possession conditions of unrestricted “everything” as tying the subject to every instance of the elimination scheme, for every individual concept whatsoever—even those currently beyond the subject’s conceptual repertoire and which it might be physically impossible for the subject to possess. On the (substantive!) assumption that there is an individual concept for every object whatsoever, the constraint to make all such instances valid would “peg out” the domain to be absolutely universal. There is an echo of that in the way he talks of “open-ended” inferential dispositions (pp.33-34). But insofar as Peacocke has to appeal to what we’re disposed to do in counterfactual circumstances where our conceptual repertoire is expanded, this account is vulnerable to an interpretation on which our quantifier has an always-restricted but counterfactually-varying domain, just as McGee and Lavine’s appeals to open-endedness are.
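For what it’s worth, here is the “pegging out” step made explicit, on my reconstruction and granting the substantive assumption just flagged:

```latex
\begin{align*}
&\text{Assume that for every object } o \text{ there is an individual concept } n_o, \text{ and that each instance}\\
&\quad\text{from } Cx\!:\!Fx \text{ to } Fn_o \text{ must be truth-preserving on the domain } D \text{ under every interpretation of } F.\\
&\text{Suppose some object } o \notin D, \text{ and interpret } F \text{ as true of exactly the members of } D.\\
&\text{Then } Cx\!:\!Fx \text{ is true on } D \text{ while } Fn_o \text{ is false, so that instance fails.}\\
&\text{Hence } D \text{ must contain every object: the instances peg out an absolutely unrestricted domain.}
\end{align*}
```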
In the later work ASOC (particularly in chapter 5) there is material that may speak to the issue, but ultimately not enough is said to resolve it. Let’s spot ourselves that Sally both treats particular dated transitions from the thought “Everything is physical” to the thought “Roger is physical” as primitively compelling, and treats as primitively compelling the inference-type of inferring Roger is physical from everything is physical (perhaps one or other is more basic, but it won’t matter for our purposes). Peacocke insists, further, that part of the explanation for why we find such inferences primitively compelling is that they have a certain form: the inferences just mentioned are said to have the form “Cx:Fx” to “Fa”. Peacocke’s idea is that the determination theory for “C” will (I think) insist that the semantic value assigned to C makes every (token or type) instance of this form always truth-preserving, i.e. valid.
Now, if this “form” is one shared by absolutely all instances of universal instantiation, including those involving singular concepts outside Sally’s conceptual repertoire, then you might think that Peacocke here has just pulled the rabbit out of the hat. For even in a single case, Sally is related to a particular inference form of universal instantiation (since it is part of the explanation of why she finds the token/type inference primitively compelling). And making all instances of the form valid will pin down the quantifier to be the absolutely unrestricted one—the instances “peg out” the unrestricted domain. But of course, this is only good if we can undergird the claim that the “form” which plays a causal-explanatory role in Sally’s psychology is general in this way, rather than a more restricted “form” whose instances involve only those individual concepts within Sally’s present ken. If she’s related to the latter “form”, or it’s indeterminate which “form” is playing the role in her psychology, we get no new leverage here.
In fact, Peacocke raises exactly the relevant issue at ASOC p.137—not in the case of unrestricted quantification, but in connection to the question about whether certain inferential rules for numerical quantification have a form that ranges over all singular concepts for natural numbers, or only over “surveyable” ones. Unfortunately, all he says there is that the issue is “far beyond” the scope of that book to resolve. We can at least take it that he doesn’t view the resources he’s provided in ASOC as giving an easy resolution of the sort of form-determination issue which would be utterly central to the success of this strategy.
Despite these differences, it is striking that the story that falls out of radical interpretation (modulo empirical and normative assumptions) is a recognizable variation of what Peacocke says about his parade cases, which after all were not developed with (my kind of) radical interpretation in mind. Obviously something in this ballpark has independent appeal—and so it’s a big bonus that radical interpretation can predict and explain why something like Peacockian determination theories are in force for logical concepts such as conjunction and quantification.