NoR 2.5a: Inductive concepts

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

The previous sections have had a common pattern. I’ve picked out a certain concept, laid out some assumptions about the way it is deployed, and shown how particular theses about its denotation fall out. Crucial premises in the derivation (in addition to the “cognitive architecture” assumptions) are radical interpretation and specific normative (epistemological, practical) theses. We could continue this theme for descriptive concepts—starting, perhaps, from concepts of primary and secondary observable properties presupposed in the account of perceptual demonstrative concepts.

However, in this section I’m aiming for something a little different. Rather than arguing that a specific concept with a specific conceptual role has a specific denotation, I’ll be discussing a feature that characterizes our deployment of many concepts, and showing how (given radical interpretation) this constrains what they denote. I’ll call these concepts inductive, and the assumption about cognitive architecture I’ll be making (in addition to the usual generic ones) is that we’re disposed to indulge in inductive generalizations using such concepts. Observable (primary and secondary) concepts such as green and square are within the class, as are natural kind concepts such as being an emerald, being a tree or being positively charged. The class isn’t restricted to descriptive concepts: normative concepts (immoral, just, imprudent) are deployed in induction.

But there are concepts that don’t feature in inductive generalization. Famous foils are concepts like grue (=green and first observed before 2050, or blue and not first observed before 2050), or observed by me. Since so many concepts of interest plausibly are deployed in induction, any conclusions about their denotation we can draw from that feature will have wide application.
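To make the foil vivid, here is a minimal sketch of the grue predicate as defined above (the function name, the date handling, and the way the 2050 cutoff is encoded are illustrative choices of mine, not anything in Goodman):

```python
from datetime import date

CUTOFF = date(2050, 1, 1)  # the "first observed before 2050" threshold from the text

def grue(colour: str, first_observed: date) -> bool:
    """Grue: green and first observed before 2050,
    or blue and not first observed before 2050."""
    if first_observed < CUTOFF:
        return colour == "green"
    return colour == "blue"

# Every green emerald observed before 2050 is also grue, so observations
# made so far cannot separate "all emeralds are green" from "all emeralds are grue".
assert grue("green", date(2020, 6, 1)) and grue("blue", date(2060, 6, 1))
```

The two generalizations agree on all the evidence gathered to date but diverge on unobserved cases—which is exactly why grue is the classic foil for theories of induction.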

My focus in this post is developing a positive view of the relevance of inductive generalization to fixing the denotation of general concepts.

The starting point is a particular view of inductive generalization: a view on which it is a special case of “inference to the best explanation”. For Sally, a highly reflective thinker, the formation of a justified general belief might go as follows:

  1. All observed emeralds have been green (and those observations were carried out in thus-and-such a manner).
  2. All emeralds are green best explains (1).
  3. So: All emeralds are green.

“Observed” here should be read “observed by Sally”. Premise (1) includes the note about the manner in which observations were carried out because the fact that all observed Fs are green may require a very different explanation if the observations were carried out in an unbiased and controlled sampling, from the explanation that suggests itself if the observations were conducted in the museum of green things. The grounds on which Sally may endorse (1) can be various, but in the most basic case they will be memories of individual episodes in which she has observed a green emerald, together with a failure to recollect any countervailing instance.

Premise (2) appeals to facts about best explanation. What determines whether an explanation is best will be very important, but it’ll help for the comparisons below to follow a certain tradition and assume that this is a matter of a trade-off between features like: being consistent with (1), strength (entailing as much of what (1) entails as possible), and simplicity of the hypothesis. The grounds on which Sally may endorse (2) are again various, but presumably she casts her mind over a range of salient rival hypotheses consistent with (1) and evaluates them for relative simplicity and strength, judging (3) the winner.

The assumption about cognitive architecture that we make, then, is that Sally finds the transition from (1) and (2) to (3) primitively compelling.

This is all very highly reflective. And surely we inductively generalize on the basis of experience without running through this whole story. So perhaps what goes on is something like this (this is not an inference that Sally carries out, but a description of her psychology as she forms a general belief):

  • (A1.1) Sally remembers seeing emerald 1 in circumstances C1, and it was green.
  • (A1.2) Sally remembers seeing emerald 2 in circumstances C2, and it was green.
  • …
  • (A1.n) Sally remembers seeing emerald n in circumstances Cn, and it was green.
  • (A1.n+1) Sally tries and fails to remember seeing any non-green emerald.
    On that basis:
  • (A3) Sally forms the general belief that all emeralds are green.

So far as the psychology goes, this looks much more like a classic case of “enumerative induction”. And the A1.x facts are exactly the grounds on which, on more reflective occasions, Sally might endorse the original (1). But this formulation is not the whole epistemological story, since it doesn’t capture the epistemologically significant difference between Sally’s good reasoning and Goodman’s famous variant, where grue = either green and first observed before 2050, or blue and not first observed before 2050:

  • (G1.1) Sally remembers seeing emerald 1 in circumstances C1, and it was grue.
  • (G1.2) Sally remembers seeing emerald 2 in circumstances C2, and it was grue.
  • …
  • (G1.n) Sally remembers seeing emerald n in circumstances Cn, and it was grue.
  • (G1.n+1) Sally tries and fails to remember seeing any non-grue emerald.
    On that basis:
  • (G3) Sally forms the general belief that all emeralds are grue.

The belief formed in (G3) is inconsistent with the belief formed in (A3), while (so long as the circumstances Ci entail the fact that the observation took place before 2050) the contents of the memories reported in each pair (A1.x) and (G1.x) are equivalent. Looking back to the original reflective case, what suggests itself is the following contrast:

  • A2: All emeralds are green best explains (A1.1)–(A1.n+1).
  • not-G2: All emeralds are grue does not best explain (G1.1)–(G1.n+1).

On the epistemology I consider, IBE-dogmatism, Sally is by default (i.e. in the absence of defeaters and undercutters) justified in believing a generalization such as (A3) when it is in fact the best explanation of (A1.1)–(A1.n+1). The sort of thing that would undercut this justification would be a not-obviously-worse candidate explanation becoming salient to Sally. So it’s because A2 obtains that the A-inference produces a justified belief. Because G2 does not hold, the G-inference does not.

Moving from epistemology to features of cognitive architecture, what I’ll be assuming is that Sally is default-disposed to find the inference from (A1.1)–(A1.n+1) to (A3) primitively compelling. She is similarly default-disposed to find other instances of the same form primitively compelling, so long as the concepts in the “green” and “emerald” positions are taken from a certain stock of concepts (which includes the usual observational concepts, natural kind concepts, normative concepts, etc.). Let’s call that our stock of inductive concepts.

So just as in previous cases, we have assumptions about cognitive architecture and normative theory. We turn now to draw out their significance for reference-fixing, given radical interpretation.

One thing to note immediately is that interpreting Sally’s concept “green” as picking out the property grue will make her default-disposition to induct on green unjustified, since on that interpretation, in generalizing using the concept “green”, she will be making the bad G-inference rather than the good A-inference. And that moral generalizes: for every pair of inductive concepts c, d, the best interpretation F, G will, all else equal, be one which makes all Fs are G the best explanation of the fact that all observed Fs have been G.

If there are general features common to properties that figure in best explanations, then we could conclude at this point: all else equal, inductive concepts will denote properties with those required features. What might those be?

Well, consider what makes for something being the best explanation of data. Among those rivals consistent with the data, the best explanation needs to be optimally simple and strong. All else equal, it needs to be the simplest. So here’s a feature that properties featuring in best explanations will have: all else equal, they will be no less simple than those that feature in rival candidate explanations.

Notice:

(1) I’m assuming that it makes sense to talk of a property being more or less simple, as well as of the propositions that ascribe that property.
(2) What’s important is not the absolute level of simplicity/complexity of a property, but its relative simplicity compared to rivals.

A treatment of simplicity that underwrites (1) is to be found in Lewis’s work on laws of nature. There, he suggests we treat simplicity (of an interpreted theory, which we can think of as a set of structured propositions) as a matter of what some would call its elegance: how compactly we can express the theory in language. But compactness of expression is sensitive to expressive resources, and so could vary across different languages, so to secure objectivity Lewis posited a “canonical” language in which theories are to be expressed for the purposes of measuring their compactness. Notice that this measure of simplicity applies just as much to properties as to sets of propositions. Simpler properties will be those that are more compactly definable in the canonical language. And the simplicity of an interpreted theory directly depends on the simplicity of the properties it contains—the longer it takes to express the properties, the longer it takes to express the theory that ascribes them.
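As a deliberately toy illustration of this compactness idea (the choice of primitive vocabulary and the symbol-counting metric below are my inventions for the sketch, not Lewis’s own proposal), we can count the symbols a property’s definition uses in a fixed canonical vocabulary:

```python
# Toy "canonical language": primitive predicates cost 1 symbol; each connective adds 1.
PRIMITIVES = {"green", "blue", "emerald", "observed-before-2050"}

def cost(defn) -> int:
    """Compactness of a definition: total number of symbols it uses."""
    if isinstance(defn, str):
        if defn not in PRIMITIVES:
            raise ValueError(f"not canonical: {defn!r}")
        return 1
    connective, *args = defn  # e.g. ("or", p, q)
    return 1 + sum(cost(a) for a in args)

green = "green"
grue = ("or",
        ("and", "green", "observed-before-2050"),
        ("and", "blue", ("not", "observed-before-2050")))

# Green is a primitive (cost 1); grue needs a disjunction of conjunctions (cost 8).
assert cost(green) < cost(grue)
```

On any measure of roughly this shape, green’s definition is strictly shorter than grue’s, so—all else equal—explanations framed in terms of green come out simpler.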

The upshot for us is the following: all else equal, the referent of an inductive concept will be the simplest of the candidates.  To finish off this post, here are some ways this result matters.

Consider the permuted interpretation of observational concepts introduced earlier. *Being the image under p of something green* is less compactly expressible, for any sensible choice of canonical language, than the property of *being green*. Explanations framed in terms of the former will be less simple, and so less good, than those framed in terms of the latter. This suggests a diagnosis of the challenge from permuted interpretations left open at the end of the post on demonstratives. The permuted interpretations depict the agent’s inductive dispositions as unjustified, and hence the agent as less rational overall, than the alternative interpretations do.

Consider the Kripkensteinian property of being green within region R or blue and outside region R. Again, like permuted-green and grue, this is a less simple property than green, and so interpreting an agent’s green concept as denoting it will make the agent less rational than otherwise.

NoR 2.4 supplementary: comparison to Wedgwood


As my treatment of reference-fixing for conjunction and quantification stands to Peacocke’s account, so my treatment of reference-fixing for normative concepts stands to Ralph Wedgwood’s. Here I concentrate on the view he sets out in “Conceptual Role Semantics for Moral Terms”, Philosophical Review 2001. Since Wedgwood himself builds on Peacocke’s approach, this is perhaps not too surprising.

The six differences between Peacocke’s approach and my own that I earlier highlighted are again relevant here, and I pick up on one below.  There are a couple of more specific divergences.

Wedgwood’s paper focuses primarily on giving possession conditions and a determination theory for the concept B: all-things-considered-better-to-perform. And the “possession conditions” he sets out (the assumptions about cognitive architecture, in my terminology) are not like the one I gave for the moral case, linking normative judgements to blame. Instead, for Wedgwood B has a specific role in practical reasoning—roughly, a transition from a judgement that such-and-such is better to perform than so-and-so, to a preference for such-and-such over so-and-so (a preference, for Wedgwood, is a certain kind of conditional intention—but that detail need not detain us).

Wedgwood seeks to generalize the kind of “determination theory” we’ve already seen in Peacocke. After positing that each concept is associated with a set of “basic rules”, he initially characterizes the semantic value of the concept as that which “makes best sense of the fact that these rules are the basic rules for A”. But he immediately refines this, following Peacocke in saying that this requires making the relevant rules valid and complete. In order for this refined account to apply to the kind of transitions Wedgwood is interested in, he can’t characterize validity as necessary truth-preservation, since preferences are not the sorts of things that can be true or false. Accordingly, he defines a notion of “validity” for transitions from judgements to preferences—guaranteed correctness-preservation—where an intention is correct, says Wedgwood, if it conforms to the goal of practical reasoning.

With the case now squeezed into the model of valid inference, the question is what would make the inference valid (and complete), i.e. what semantic value for the concept B would mean that a true judgement that B(x,y) would guarantee that a preference for x over y would conform to the goals of practical reason. Wedgwood contends that assigning the normative relation being better to perform uniquely fills this role.

Among the ways that my account differs from Wedgwood’s, the thing I think is most illuminating to highlight is the role that he makes validity play. I think he goes wrong, and opens himself up to criticism unnecessarily, by trying to squeeze his account into the model that Peacocke offers of the logical connectives. So really, I’m not criticizing the spirit of Wedgwood’s account. I think that using radical interpretation in the ways already illustrated, one could reach more or less the same conclusion about what the semantic value of B is, on the architectural assumptions Wedgwood makes. But I think the letter of his own account misfires in instructive ways.

If the moral you take from Peacocke is that validity is central to reference-determination, and you are interested in transitions between beliefs and other states (preferences, intentions, emotions, feelings) rather than belief-belief transitions, the central challenge that looms is to generalize the notion of validity so it has application to such states. That is Wedgwood’s strategy. And Wedgwood proposes, quite generally, that the generalized notion of validity needed is necessary correctness-preservation.

Enter Schroeter and Schroeter 2003. They ask us to consider the content “I am in pain”, and suppose—I think plausibly—that part of its conceptual role is a transition from the state of actually being in pain, to the state of believing one is in pain. Again, when it comes to reference-determination, on a validity-centric model we’ll need to posit a notion of the conditions where it is correct that one is in pain (maybe: that all things considered one deserves to be in pain?). And we will then look for a semantic value for the pain-concept P that guarantees correctness-preservation for the transition. But that someone deserves to be in pain doesn’t guarantee that they are in pain! Nor would pain as the semantic value make the transition-rule complete, since someone being in pain certainly doesn’t entail they deserve to be. The property of deserving to be in pain, on the other hand, would make the transition valid and complete, in the Wedgwoodian sense.

Something has obviously gone horribly wrong if we reach this point! But it’s interesting to reflect on what has happened. The point is that the notion of correctness that features in the characterization of “validity” is turning up in the validity-making content-assignment. That is something that is not provided for in the general gloss with which Wedgwood begins, viz. that the semantic value of a concept “makes best sense” of the fact that the basic rules for that concept are its basic rules. Assigning pain to the concept pain makes perfect sense of the transition mentioned, as far as I can tell. We only get the odd projection of normativity into the semantic value determined when we move to the “more precise” formulation of this in terms of validity-Wedgwood-style. That is when the normative rabbit is stuffed into this particular hat. When we’re dealing with normative concepts, that has what look to be interesting and good results, since it allows us to easily derive the assignment of normative properties to normative concepts. But—and this I take to be the Schroeters’ point—we can see that this is cheating by noting that we continue to get those results even when we turn to non-normative concepts whose conceptual roles involve more than belief-to-belief transitions.

I think this is instructive of the dangers of fetishizing validity’s role in fixing reference. Validity should never have been seen as the primary mechanism for reference-determination. It gets into the account of reference-determination for logical connectives and the like only because it is part of a wider epistemological story about which beliefs are justified. Radical interpretation, on the other hand, makes us ask the question: what assignment of semantic value would make the transition rational? A notion of “validity” will enter the picture only if we have some reason to think that validity, so understood, is part of what it is for an agent to rationally manage such transitions. There’s no obvious role for it in the case of the transition from a state of pain to a self-ascription of pain. And—say I—while we might be able to back-engineer a notion of validity for a specific sort of belief-to-preference transition, the explanatory order runs from thinking about the rationality of the transition to constructing such a notion, not vice versa. If we made this modification, and saw Wedgwood’s proposal as backed by radical interpretation rather than by specifically Peacockian theses about the general form of “determination theories”, then we could recover what’s right about his story, and evade the Schroeters’ objection to it.

NoR 2.4: Wrong.


In the subseries of posts to this point, I’ve derived local reference-fixing patterns for connectives, quantifiers, and singular concepts. In a moment, I’ll discuss certain descriptive general concepts (an example drawn from the class of primary and secondary qualities). But I’m going to start with a different general concept: the normative concept of moral wrongness. (I’m drawing heavily on material I’ve discussed in much more detail in “Normative Reference Magnets”, Philosophical Review forthcoming).

Two things guide this expositional choice. First, what I have to say about this case fits the general pattern we’ve seen, whereas the focus of my discussion of descriptive concepts will be a little different. Second, there’s an odd split in the literature on the metaphysics of representation whereby the theory of reference for normative concepts is hived off into the separate subdiscipline of metaethics, rather than being one of the parade cases that any adequate theory of representation should have in its sights from the get-go. So I want to emphasize that radical interpretation is just as well-placed to predict and explain how normative concepts get their denotations as any other, and by juxtaposing my story about normative concepts with the story about singular concepts, connectives, etc., I emphasize that no special pleading is required.

So the pattern will be as before: I’ll ask you to consider some architectural hypotheses about the patterns of deployment of this concept in our cognitive economy. With that in place, radical interpretation together with first-order normative assumptions will predict that any concept so deployed will denote moral wrongness. The discussion here will introduce two new notes. First, for the first time, practical rather than epistemic normativity will have pride of place in the explanation. And second, we will illustrate how radical interpretation can help explain central puzzles in the literature—in this case, the distinctive referential stability of wrongness.

The three generic architectural assumptions are now familiar, so I won’t repeat them. The final such assumption will again concern the particular inferential role associated with the concept wrong, w. What I’ll be assuming is that when a subject believes that x’s A-ing is w, this makes them blame x for A-ing, and when they disbelieve this, it prevents them from doing the same. The talk of “making them” or “preventing them” plays the same role as the Peacockian notion of “primitively compelling inferences” did before. Surely a cognitive architecture could be disposed to make an immediate transition from judgement to an intentional state of blame, but it is terminologically odd to call this an “inference”—so I won’t do so.

(One might wonder here if a prior fix on some other kinds of content is presupposed in the articulation of this cognitive role. I think there is. The dependence doesn’t lie in the way that “x’s A-ing” turns up as the thing to which wrongness is ascribed. Although “x’s A-ing” also turns up as the object of the blame-attitude, we could replace it in both places by a variable for some-content-or-other and run the story. But the judgement that x’s A-ing is unexcused can’t be handled in this way. Just as in the discussion of singular concepts, there is no structural concern here, since we are not at this stage in the business of attempting a reductive analysis of reference, but rather in articulating and explaining patterns of reference-fixing.)

Turning now to first-order normative assumptions, I add the following:

  • that a substantively rational agent would be such that the judgement that x’s A-ing was wrong and unexcused makes them blame x for A-ing.
  • that a substantively rational agent would be such that disbelieving that x’s A-ing was wrong prevents them blaming x for A-ing.
  • that no substantively rational agent would be such that the judgement that x’s A-ing was F and unexcused makes them blame x for A-ing, unless F entails wrongness.
  • that no substantively rational agent would be such that the judgement that x’s A-ing was F prevents them blaming x for A-ing, unless wrongness entails Fness.

These are substantive ties between moral judgments and blame attitudes. Elsewhere, I defend the tenability of these normative assumptions against a variety of challenges—for example, that they mistakenly presuppose that wrongness is a reason, or that they are counterexampled by cases of those with obnoxious moral views. I think these charges can be resisted, but they helpfully emphasize the way that this sort of story depends on contestable normative premises. This is a feature, not a bug.

The derivation of the denotation of w follows the same pattern as previously. First, we have the a posteriori assumption that w plays a distinctive cognitive role in Sally’s cognitive architecture, captured by the w-blame link. Second, we have substantive radical interpretation which tells us that the correct interpretation of w is one that maximizes (substantive) rationality of the agent. We add the “localizing” assumption, conceptual role determinism for w, which says that the interpretation on which Sally is most rational overall is one on which the rules just given for w are rational. Putting these three together we have the following: the correct interpretation of Sally is one that makes the conceptual role associated with w most rational. Dropping in the normative premises just set out, we can derive that it is moral wrongness that makes that conceptual role most rational, and hence, it is moral wrongness that is the denotation of w.

To go back to the aspects of this story that I emphasized at the beginning, the conceptual role for w that I cited is not a link between judgements, or between evidence and judgement, as in the previous cases we have looked at. Rather, it is a link between judgements and emotional attitudes. So the kind of normative premise that becomes relevant is not an epistemological one—it is a thesis in practical normativity about what ought to prompt a specific emotive response. That brings out the significance of conceptual role determinism in these derivations—why shouldn’t patterns of w-belief formation be as significant here as they were in the case of concepts of quantification? The answer is that such patterns are potentially significant, but we expect a well-run cognitive architecture to hold these aspects in sync. In the other work mentioned, I consider specific cases where an architecture “hardwires” specific w-belief-formation methods in addition to the patterns given above, and claim it as a mark in favour of the radical interpretation framework that it does not continue to predict that w denotes wrongness in cases where the cognitive architecture has extra elements that produce such overall tension.

The framework also shows the power of radical interpretation to explain long-standing puzzles. One of these is that agents can disagree with one another across vast differences in their first-order theories of what features constitute moral wrongness—the so-called “moral twin earth” phenomenon. A convinced Kantian and a convinced Utilitarian are not speaking past one another—one or other or both has an incorrect theory of morality. That apparently means that they must be thinking (sometimes false) thoughts about the same subject matter. Massive and systematic moral error is possible. This requires explanation, since there are plenty of cases—particularly for descriptive concepts—where concepts embedded in such utterly different theories would be properly interpreted as differing in meaning. Radical interpretation predicts that so long as both agents implement the mentioned conceptual role, then ceteris paribus, they will pick out the same property. The conceptual role, since it concerns a link to emotion, not an embedding within other beliefs, allows for great differences in what beliefs the agents have.

Now, there are some limits, according to this framework—if interpreting one or other of the disputants’ w as picking out moral wrongness would be to attribute irrationality, not just falsity, in their “moral” beliefs, then this is the kind of tension that calls into question conceptual role determinism. A Kantian constructivist who is convinced that utilitarian views are deeply irrational might accept the framework I have been laying out, and draw the conclusion that utilitarians after all are not even talking about morality—or at least that it is indeterminate whether they are. But these are special exceptions to the rule of stability (so even from that perspective the framework would still explain how divergent but broadly Kantian theorists could dispute about a common subject matter).

NoR 2.3c: That, redux.


Let me recap the take-home messages of the last two posts.

  1. If Dickie is right about the architecture of demonstrative thought we can derive (something like) the result that the demonstrative “that” refers to an object at the far end of the perceptual link associated with the demonstrative concept.
  2. The proximal reason why the demonstrative will denote that object is that it uniquely makes the resulting belief-management practices associated with the demonstrative justified (here Dickie would agree).
  3. The underlying reason why justification-maximization has this role, in my account, is radical interpretation as a general story about reference-fixing.
  4. As with earlier such derivations, my account is caveated—we are assuming that to make the subject overall most rational involves, inter alia, making the particular structures associated with demonstratives most justified.
  5. Though radical interpretation at the global level is intended as a reductive story about content, there is no obligation to provide a local story about demonstratives in particular that is “reductive” in form, and we have not done so here. Rather, we have concentrated on how the patterns in the way reference is fixed that other theorists have identified can be predicted and explained by radical interpretation.
  6. I’ve discussed some worries one might have about the extent to which the “bare” demonstrative architecture really locks on to a determinate reference. In particular, I pressed some concerns about “unnatural” objects that overlap in various ways with the “natural unity” at the end of a perceptual link. But, I argued, if bare demonstratives do turn out to be indeterminate in reference between this localized range of referents, that wouldn’t be either intuitively repugnant or theoretically damaging, since the bare demonstrative (now perhaps recast as a plural) can still play an anchoring role.

I want to finish by considering point 6 one last time. I said in the previous post that I wouldn’t be distraught if bare demonstratives turned out to be plural, or somewhat indeterminate in reference. But perhaps others would be distraught. So I want to survey the options, and whether securing determinacy for bare demonstratives would motivate a shift away from radical interpretation.

Here is one proposal: that among a range of candidate interpretations of Sally scoring equally well on “charity” (i.e. making her substantively rational) the correct one is that which assigns the most natural referents to concepts, overall. Woody the tree is a “real object” or a “material substance”, a natural unity that contrasts with Woody’s outer shell or the fusion of Woody and a bug living on his surface. David Lewis is often taken to propose something similar when it comes to assigning properties as the denotation of predicates. Ted Sider has argued for a generalization of this idea to terms in other syntactic categories.

Here is another proposal: Woody is the causal source of the beliefs that are filed away with the demonstrative concept. The shell or fusion, though they massively overlap Woody, and share his macroscopically observable properties, don’t enter into such relations (we’ll assume). So if we build a causal theory of reference, or even a constraint added to radical interpretation, that demonstratives should denote the dominant causal source of the (canonical) information in the associated file, then we get the result that the demonstrative picks out Woody.

These are two ways of securing determinate reference that do not fit with my story about radical interpretation. The first could, just about, be forced into my model. If we could make the case, in general, that one person is more substantively rational than another to the extent that her beliefs are more natural, then this sort of constraint would fall out. Sider, for one, argues that theories are better the more natural they are (the more they are framed in terms of concepts that “carve nature at its joints”), and perhaps the same goes for entire psychologies. Perhaps this could even be held to be part of justification-maximization, if the most justified body of beliefs to have is the one that is best not just by being reliable, based on evidence, etc., but also by reflecting the joints of nature in Sider’s sense. It’s an intriguing idea, and would fit into the remit of “first-order normative assumptions” that can be consistently and interestingly combined with radical interpretation. But this is not something I myself want to endorse.

Sticking a causal side-constraint into radical interpretation, on the other hand, would go entirely against the spirit of the programme. I would be happy to see a causal pattern like this emerge as a prediction of radical interpretation, but that should be a consequence of the sort of derivation we've been looking at, not one of the explanatory premises. (I will be criticizing the idea of a monstrous metaphysics of representation obtained by combining side-constraints with radical interpretation later—but whether this is a feasible metaphysics or not, it is not my metaphysics.)

If these are off the table, how might determinate reference be secured?

First idea: deny the problematic objects exist. If there is no such thing as Woody’s outer shell, or the fusion of Woody and a random microscopic bug, we’re okay. That is a way to go that I anticipate some readers will already favour. But I want to keep on board those who accept a more abundant ontology, so I will set it aside and continue to look at options.

Second idea: we might argue that taking the unnatural objects as referents would not make the belief-forming practices of the bare demonstrative ones that result in justified beliefs. I already endorsed this strategy to rule out Strawsonian twins and temporal slices of our target. We could try taking it further. For example, if the microscopic bug wanders off the tree to find another home, Woody+bug will end up as a scattered object, but beliefs formed through the perceptual link will continue to attribute the property of being contained within a certain confined region. So we might start building a case that the relevant mechanisms justify beliefs about Woody's location, but not about Woody+bug's.

But since we’re dealing with unnatural objects, one might now be worried about the object which coincides with Woody+bug while the bug is upon Woody, but which coincides with Woody at other times. And we extend this over counterfactual situations too: the counterparts of Woody+bug will include the bug when its counterpart is microscopic and attached to Woody, and not include it otherwise. Perhaps more readers this time will be prepared to say that such a thing does not exist. I myself am sceptical that specified counterpart relations accurately pick out the de re modal facts about the object. But there are those whose abundant ontology includes not just unnatural objects, but objects with unnatural essences (cf. McGonigal and Hawthorne). A similar dialectic can be traced for our other candidate: Woody’s outer shell.

Third idea: What’s striking about perceptual demonstratives is their closeness to immediate interaction with the world. That insight is reflected in the central role, in Dickie’s account, of the perceptual link that structures bodies of demonstrative belief. But there’s another way in which they’re close to the world: our most basic actions change the properties of objects in our immediate environment. It’s  plausible that the intentions that guide these actions are structured by demonstrative identification of those objects we most directly manipulate in action. It’s interesting that this agential link between the states of mind and an object plays no role in Dickie’s story. If (as one would expect in our actual case) the Janus-faced role of demonstratives in perception and action cohere, and the perceptual link already suffices to fix reference, then there’s no concern: the partial story suffices for explanatory purposes. But if the partial story leaves us open to underdetermination, we might want to revisit the issue. I won’t develop this in detail, for reasons of space (and because it would be pretty speculative). But I do think that the upshot will be that the kind of objects that are of concern won’t only need to share observable properties with Woody, they’ll have to share manipulable properties with Woody: those properties that we can directly change about him. That might help us here! We might not be able to observe the region of space that Woody occupies, which is why Woody’s outer shell was still in the mix as a candidate referent. But arguably (by chopping and gouging) we can change facts about what region he occupies. I can’t see this line will help that much with the bug bug, but it can do some work for us.

Final idea, and the one I think gets maximum effect from the most minimal theoretical assumptions. Consider again the derivations we've given. Our assumption has been that the correct interpretation overall is justification-maximizing with respect to the belief-forming and management practices that Dickie picks out for demonstratives. But that doesn't entail that any interpretation which is justification-maximizing in that particular way is correct. It could be that other ways in which the demonstrative figures in our cognitive economy also matter for reference-determination, breaking the ties left by the bare demonstrative structure alone. For example, we might "presuppose" that d is a natural object in later belief formation, as when characterizing a natural kind by bare demonstrative identification of exemplars and foils: "the property shared by that and that and that but not that". Downstream belief-forming practices like this will succeed in picking out natural kinds only if the referents of the demonstratives pick out exemplars that fall under natural kinds in the first place. Justification-maximization will then favour interpreting the bare demonstrative as picking out a naturally unified item, one falling under a natural kind, over others. Note well that such practices may be attached to some demonstrative files and not others, so this might give a nuanced grip on how we sometimes secure determinate reference to natural unities, without entailing that we can only demonstratively refer to such things. And note also that this doesn't require that we "have in mind" and attach to the demonstrative some disambiguating sortal (though this is something that could happen in some cases, and would move us to a discussion of complex demonstrative thought).

The overall upshot: I wouldn't be too worried if bare demonstrative thought à la Dickie turned out to be indeterminate in various respects. I personally wouldn't be moved to introduce some kind of causal or naturalness-based constraint on interpretation just to secure determinacy. But further, I think that working within the radical interpretation framework there are many routes by which determinacy of reference to natural unities could be established. And if this were the rule (e.g. our agent had a cognitive architecture which always presupposed that bare demonstratives picked out things falling under natural kinds) then we might derive from within the system the sort of generalizations about reference-fixing that others are tempted to introduce as unexplained explainers—e.g. a referential bias towards natural referents, or towards the entity which is the causal source of the information received in perception.

Having spent this post advertising ways of securing determinacy, I want to finish by flagging again an underdetermination threat that nothing here speaks to. This is the threat posed by permuted interpretations. Take a permutation of the universe, p. Let p(a) be the image, under the permutation, of a. Let p(F) be the property that applies to something iff that thing is the image, under p, of some object that is F. Notice that necessarily, a is F iff p(a) is p(F). Original and permuted interpretation agree on the truth-conditions of every atomic thought. It turns out that they will agree on the truth-conditions of every thought.
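The step from agreement on atomic thoughts to agreement on all thoughts can be made explicit. Here is a minimal formal sketch in standard model-theoretic notation (my formalization, not part of the original argument), writing ext(F) for the set of things falling under F:

```latex
% Sketch of the permutation argument, for a bijection p on domain D.
% The permuted interpretation reassigns each term a to p(a), and each
% predicate F to p(F), where ext(p(F)) = { p(y) : y \in ext(F) }.
%
% Atomic case: since p is injective,
\[
  a \in \mathrm{ext}(F)
  \;\iff\;
  p(a) \in \{\, p(y) : y \in \mathrm{ext}(F) \,\}
  = \mathrm{ext}(p(F)),
\]
% so every atomic thought [a is F] has the same truth value on both
% interpretations. Truth-functional compounds depend only on the truth
% values of their components, and since p maps D onto D, a quantified
% thought is true on the original interpretation iff its permuted
% counterpart is true. By induction on complexity, the two
% interpretations agree on the truth-conditions of every thought.
```

This is why nothing said about demonstratives alone can eliminate the permuted rival: the disagreement only shows up in the reference assigned to subsentential components.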

On the permuted interpretation, a bare demonstrative based on a perceptual link to Woody denotes not Woody, but the image of Woody under the permutation, which may be anything (a small furry creature from Alpha Centauri, perhaps). Such permuted interpretations are not among those we’ve been considering at all in the last three posts, since we always took for granted the referential relations between general concepts and observable properties that figure in Dickie’s description of the way demonstratives operate—and permuted interpretations attribute a different reference to those concepts. I think we will get insight into what’s wrong with such permuted interpretations (why they are disfavoured by radical interpretation) not by saying more about demonstratives, but by considering the analogous questions about general concepts.

NoR 2.3b: That (and those).


In the last post, I outlined the account Dickie develops of the cognitive architecture of perceptual demonstrative thoughts. In her telling, a linked "file" or "body" of belief is built up, associated with a perceptual link to a particular object O, and the information that arrives from that perceptual link has a privileged role in the maintenance of the file of information over time (it is the body of beliefs' "proprietary" means of justification, in Dickie's phrase).

Now, from the perspective of radical interpretation, the question to be asked is: what assignment of reference to the perceptual demonstrative “that” that figures in each of these beliefs will make the subject overall most rational? And we’re assuming here that what’s at issue is finding an interpretation of the agent that’s most justification-maximizing. This is also Dickie’s view and her explicit project: her book is in large part an examination of how a “Reference and Justification” principle can explain the things that people have wanted to say (or improve those things) for a variety of kinds of singular thought, including perceptual demonstratives. Here is her formulation, lightly edited:

  • A body of ordinary [a is F] beliefs is about object O iff for all [F], if S has proprietary rationality-securing justification for the belief [a is F], this justification eliminates every rationally relevant circumstance where O is not F.

In the case of perceptual demonstratives, what this tells us is that the referent of the demonstrative concept [a] is O iff the fact that the beliefs [a is F] are canonically formed through the perceptual link justifies a belief that O is F (with a certain relevant-alternatives model of justification presupposed).

Here’s one thing I want to emphasize from the get go. The interpretation of the general concepts (which properties they denote) is presupposed in Dickie’s principle–that a concept [F] denotes property F is a fact about content that we don’t get by free just by giving the concept a suggestive label. The question Dickie’s principle addresses is this: among interpretations which hold fixed predictive content, what favours one assignment of singular content over others? That makes it different from the questions addressed in our discussion of conjunction and quantification, where our predictions were never contingent on the content of other elements of our conceptual scheme. So even if we can explain using Dickie’s principle what object a perceptual demonstrative denotes, that still won’t explain why the demonstrative denotes that object, since interpretations which assign some other singular content, but also tweak the interpretation of general content, remain uneliminated. I’ll come back to this later. But for now, I hope it’s clear that a background of radical interpretation makes excellent sense of Dickie’s project, even if it’s not the background for it she herself has in mind (she thinks one can derive Reference and Justification from first principles concerning the nature of Truth and Reference and Truth and Justification, rather than having to rely on a speculative global metaphysics of content).

The best way to cause trouble for the Reference and Justification principle is to question whether the object at the far end of the perceptual link is the unique candidate that rationalizes the file-management practices the agent has in place. Those include forming beliefs which (in ordinary circumstances) will match the observable properties possessed by the object O at the far end of the link. The hardest cases will be those where we have a deviant candidate referent O* of the demonstrative concept whose observable properties reliably match those of O, for then the body of beliefs will be reliable if interpreted as concerning O* just as much as O. (In other cases, citing the lack of reliability of the file-management practices should be sufficient to defend the needed epistemological claim.)

Let’s take Woody the tree as a plausible candidate referent of our perceptual demonstrative d. Problem cases fall into two kinds:

  • Non-natural objects closely related to Woody.
  • Natural objects not closely related to Woody, but whose properties match Woody’s.

Let me take the second of these first. There are, perhaps, many trees very like Woody. We can imagine that (by coincidence) many of the perceptual beliefs that we in fact form would be true if the demonstrative were interpreted to be about one of these other trees. But truth of beliefs is one thing; justification is another. It's pretty clear that forming beliefs about the detailed observational properties of some tree in Ireland, on the basis of a perceptual link to Woody, would be unjustified. It wouldn't even tend to be a reliable method.

But there are more recherché possibilities where such a method of belief-formation would be reliable. Suppose (riffing on a theme from Strawson) that our sector of the universe has an exact duplicate elsewhere in space-time. We may even suppose that it is a consequence of the laws of nature that the universe has some mirror symmetry in space, or translational symmetry in time. Because it is a matter of law that Woody's properties match those of his counterparts in another sector, forming beliefs about the properties of Woody's counterpart on the basis of a perceptual link to Woody is reliable. Dickie's response to such a case would be to insist that reliability in this sense isn't sufficient for justification, and such beliefs, were they formed, would be unjustified (of course, if one were justified in believing one lived in a symmetrical universe like this, justification for such beliefs may be available, but such beliefs would be inferential, not immediately formed via the perceptual link).

(Here’s my reconstruction of Dickie’s account of the case at pp. 66-68. Key to it is a model according to which a belief is justified if it is true throughout all the “rationally relevant circumstances” which are not eliminated by our evidence. Dickie argues that the rationally relevant circumstances will include those where massive reduplication occurs, and those where it doesn’t (it isn’t limited, then, to whatever the nomic possibilities happen to be, if those enforce symmetry). She further argues that our evidence—our local causal interaction with pieces of our environment—won’t discriminate between worlds which agree on how things are in our local sector. Let’s suppose that the relevant belief is that Woody is large. By the first premise, for every world in which Woody has a twin in another sector which is (a macroscopic duplicate and so) large, there is another world in which that twin is not large. By the second premise, our evidence either eliminates both or neither of such a pair of possibilities. We now argue by cases. If in at least one such case the evidence eliminates neither, then the evidence doesn’t justify the belief that Woody’s twin is large, so that would (to that extent) be an uncharitable interpretation of the demonstrative belief. If on the other hand our evidence suffices to eliminate all such pairs of worlds, that it would en passant eliminate all the world’s where Woody’s twin is large. So again, the belief that Woody’s twin is large wouldn’t be justified. Either way, the beliefs attribute by the deviant twin-ascribing interpretation would be unjustified.)

The more serious threat to uniqueness arises from non-natural objects in the vicinity of Woody. Here are some examples: the shell of material that constitutes Woody’s outer surface. Today’s time-slice of Woody. The mereological fusion of Woody with a bug sitting on a leaf that is hidden from view. Woody himself is a “naturally unified” object, a material substance. None of the candidates that I’ve mentioned can claim this dignity. But nothing in what I’ve said so far accounts for why that should matter. The outer shell of Woody has the same distribution of coloured surfaces in space as Woody, and its observable trajectory through space coincides with Woody’s. The same goes for the present time-slice, throughout the period where we’re tracking Woody. The fusion of Woody with the bug differs, but only in minor ways that we do not discriminate, given our perspective on Woody. Any of these referents would make the beliefs we form through the perceptual link true.

But a belief can be true though formed through an unreliable method, and even reliability doesn't guarantee justification. Consider Woody's time-slice as candidate referent. Suppose our perceptual demonstrative "file" is opened at 12pm on Monday, and closed (the link is broken) five minutes later. The practice of allowing information from the link to override other sources of information tracks both the properties of Woody, and of Woody's Monday time-slice, during that period. But the file-management techniques that are being deployed are not sensitive to time elapsed. Had the file remained open, the file would have received information corresponding to how Woody was on Tuesday—but Woody's Monday-slice would have no observable properties at that time. In a sense, the point here is similar to that made about unrestricted quantification. The epistemological structure doesn't build in any sensitivity to the duration over which the file has been open, and so an interpretation which makes the beliefs so formed reliable only so long as not too much time has elapsed looks like it attributes epistemically riskier beliefs than the alternative. So this particular non-natural object, I think, can be ruled out.

We might try something similar to handle the bug problem. We can't see the bug from our perspective, but the file-management structure is such that were we to move in and examine Woody with care, we'd form beliefs about the location of the thing which would track Woody's properties, and not those of the fusion of Woody and the bug.

But not all candidate referents can be ruled out in this fashion. The outer shell of Woody is indiscriminable from Woody, observationally speaking. (Or at least, I assume so. Dickie at one point suggests that our visual systems attribute solidity to the objects we see, and that an outer shell would not have this property. I find that surprising: it doesn't seem to me that we are suffering a perceptual illusion of solidity when viewing a football or a hollow tree.) Variants of the bug case can do similar work. Some bugs that are definitely not part of Woody are too small for us ever to observe unaided (through our standard perceptual links), and so prima facie the file-management structures would work very nicely to provide justified beliefs about Woody+bug and other "microscopic variants" of Woody. The account, if it aims to secure determinate reference to Woody, suffers from a bug bug. We should distinguish the bug bug from what Dickie calls the intentional problem of the many. The latter case concerns "atomic trees", i.e. fusions of atoms in Woody's vicinity, massively mereologically overlapping Woody himself. Such cases are constructed to be ones where it's not clear that the cited objects are distinct from Woody (it's not clear that they're not all trees), whereas the bug bug focuses on something we agree is not a tree, and asks how and why and whether we lock onto the tree itself.

Now perhaps Dickie would want to push the project of securing determinate reference to the natural unities here—the tree rather than the shell or tree+bug. If she can achieve it (without adding new ad hoc principles) I can borrow her work. On another day, I might pitch in to try to help her secure this result.

But for today, I don’t think for my purposes it matters much whether we can narrow down the reference further or not. I can simply concede that the bare perceptual demonstratives are indeterminate in reference between all these natural and less natural objects located at the far end of the perceptual link, which share all the observational properties. This does mean giving up the claim that bare perceptual demonstratives by default lock on to natural unities or ordinary objects of familiar kinds. That might be a disappointment for some, but it doesn’t seem an implausible point at which to end up.

Let me be clear: what I'm proposing is that the bare architecture we've been discussing so far doesn't explain how we secure determinate reference to natural unities. I'm not saying determinate demonstrative reference to such things is impossible. The bare story, I think, is really better equipped to explain the semantic value of a slight syntactic variation on the concept we've been considering, viz. the bare demonstrative plural those, which determinately refers plurally to all the objects at the far end of the perceptual link which are macroscopic duplicates of one another (including Woody). But even if the bare perceptual demonstrative can only secure determinate plural reference, it can still be part of what explains how determinate demonstrative reference overall gets established, via a combination of descriptive and demonstrative material. For example, we might analyze the complex demonstrative "that tree" as follows:

  • The tree which is among those.

This concept determinately picks out Woody, since all the other candidates are not trees. Of course, this complex concept presupposes not just a content for a bare plural perceptual demonstrative, but also for the general concepts “tree”, “among” and the determiner “the”. But remember: Dickie’s story already presupposed fixed content for general concepts (in her case, observational properties).

More generally, we are not here trying to construct a reductive metaphysics of meaning (that was already given by the general formulation of radical interpretation) but to draw out predictions of that story. So there's no need for the stories we offer for this or that concept to fit together into some reductive hierarchy.

NoR 2.3a: That


In previous posts, I’ve explored what radical interpretation (plus assumptions about cognitive architecture, plus epistemological assumptions) predicts about the denotation of logical concepts, conjunction and quantification in particular. I want to now tackle a different sort of concept: concepts of material objects.

I’m not going to attempt a general characterization of singular thought, any more than I covered all the interesting logical concepts previously. I’ll again pick an illustrative case: perceptual demonstrative concepts. There is, for example, a plate on the table to my left, and I can train my attention upon it, and think

  • that is round
  • there is a biscuit lying on that,
  • that would survive being dropped onto carpet.

These are perceptual demonstrative thoughts, and the concept that in each is a token of a perceptual demonstrative concept.

Following the pattern established before, I will make some architectural assumptions about the way that perceptual demonstrative thoughts figure in our psychology, and explore what radical interpretation will predict under those assumptions. By making everything conditional in this way, I avoid taking a stand in the rich literature concerning the correct characterization of our perceptual demonstrative thought. But I will again borrow my architectural assumptions from another’s theory, so interested readers can look to them for a defence of these assumptions as true of us. In this case, the resource is Imogen Dickie’s terrific book Fixing Reference, and my strategy is to show how we can undergird her theory of reference for perceptual demonstrative concepts within radical interpretation.

The first three architectural assumptions I will make are now familiar—that belief states are structured, that facts about syntax are grounded prior to questions of content arising, and that we can pick out inferential dispositions interrelating belief-types, again prior to questions of content being determined (for uniformity, I will continue to use the Peacockian idea of inferences that are treated as primitively compelling).

It is worth pulling out from these assumptions one kind of "syntactic" fact that is load-bearing here: we will be assuming, when it comes to perceptual demonstratives, that we have access to the fact that a pair of token beliefs [d is F] and [d is G] feature the same perceptual demonstrative. Dickie assumes (in line with "mental file" assumptions about cognitive architecture) that a signature of this is that we find an inference from this pair to the generalization [something is both F and G] primitively compelling, even in the absence of side-premises. Now, prima facie, this sort of file-individuation constraint is grounded in certain facts about content, in particular, the identification of particular concepts as quantification and conjunction. Whether there's some looming circularity here is something I'll come back to later.

The fourth architectural assumption, as before, concerns a characteristic conceptual role that a perceptual demonstrative concept d plays. Thoughts of the form “d is F” (for fixed perceptual demonstrative d and variable F), form a unified body—in part, this unity consists in the identity-free generalizations already mentioned as a signature of token beliefs containing the same concept d in the first place. Another aspect of unity is that the architecture seeks to resolve or eliminate inconsistencies within the body of beliefs. But the key assumption is that there is a hierarchical structure in how we resolve such inconsistencies, based on the way the states were formed. Let us look at this in more detail.

When Sally is in a position to token a perceptual demonstrative concept d, there must be, says Dickie, a "perceptual link" between Sally and a certain object (say, a plate lying on the table), "a perceptual feed which carries information as to some or all of colour, size, shape, state of motion or rest, and so on across the range of observable properties". The link involves some causal connections between subject and the scene in front of them, and also internal "feature-to-property" processing carried out subpersonally by the visual system, processing initiated by the subject's perceptual attention. Beliefs are formed by "uptake" of the information made available through this link, and all the beliefs produced by a given concrete link to an object are fed into the same body of beliefs—that is, the resulting beliefs [d is F], [d is G], etc., feature the same perceptual demonstrative d. Further d-involving beliefs can be added to this body by various means—by inference, by testimony, etc.—but those beliefs formed by uptake from the original perceptual link have a privileged role. Specifically, if there is an inconsistency between a belief [d is F] and a belief [d is not-F], where the first is formed via the perceptual link and the second is not (but instead, e.g., through testimony), then the former is retained and the latter thrown out.

Just as before, whatever the empirical status of this cognitive architecture, we can consider possible creatures who do work this way. So what would radical interpretation say about creatures with this architecture?

Radical interpretation (again!) tells us that the correct interpretation of mental states is one that maximizes the rationality of the individual concerned. Given the assumptions about cognitive architecture, we need our interpretation of the concept d to make rational the practice of belief-management.

The following auxiliary normative assumptions will suffice to generate the prediction that d denotes whatever object O is at the end of the perceptual link associated with d:

  • A substantively rational agent would be disposed to have a belief that O is F (with a perceptual-demonstrative mode of presentation), formed by uptake from a perceptual link to O, override a belief that O is not F (with the same perceptual-demonstrative mode of presentation), when that is formed by some other means.
  • For no object X other than O would a substantively rational agent be disposed to have a belief that X is F, formed by uptake from a perceptual link to O, override a belief that X is not F, when that is formed by some other means.

 

Clearly, it will be crucial to explain why we should believe these auxiliary normative premises—and in fact, I’ll be qualifying them shortly. But before turning to that, I want to fill in (in a now-familiar way!) the remainder of the story by just running through how the assumptions just articulated, together with what we have on the table already, deliver the result that the perceptual demonstrative d denotes the object at the far end of the perceptual link.

First, we have the a posteriori assumption that d plays the stated role in Sally's cognitive architecture. Second, we have substantive radical interpretation, which tells us that the correct interpretation of d is one that maximizes the (substantive) rationality of the agent. We need again a third, "localizing", assumption, inferential role determinism for d, which says that the interpretation on which Sally is most rational overall is one which also makes most rational the particular belief-management tendencies just listed. Putting these three together we have the following: the correct interpretation of Sally is one that makes the belief-management tendencies associated with d most rational. The final element to add to this is the pair of normative premises introduced above, which tell us that O—the thing at the far end of the perceptual link—is the thing that (uniquely) makes those belief-management tendencies rational. We then derive that Sally's demonstrative concept d denotes O.

The pair of normative assumptions is clearly crucial to this. But why believe them? The first sounds very plausible, once we unpack it a bit. Since we're concerned with belief formation, the substantively rational agent is the one whose beliefs are justified. And what we're specifically concerned with is a case where an agent has testimonial reasons to believe that that is not F (where they're picking out that via a perceptual link to the thing in question), but then perceives that that is F. The compelling thought is that in such a circumstance, the testimonial reasons one had to believe that the thing in question is not F are defeated, but one is justified in trusting and endorsing the perception, and so coming to believe that that is F.

In response, someone might wonder whether there are cases where background knowledge and testimony are sufficiently weighty that one should distrust the deliverances of perception, rather than drop the other beliefs. Cases of perceptual illusion provide examples: it may perceptually seem that a pair of lines differ in length, but memory of similar lines, plus theoretical understanding of the source of the illusion, plus testimony that the lines are in fact of the same length, conspire to make endorsing the seeming irrational. In general, we can view scenes in which there are misleading cues, so that ordinary visual processing represents the objects to which we attend as having properties other than those they in fact have.

But as well as showing that sometimes we are justified in hanging on to the belief that that is F when the perceptual seemings are that the thing in question is not F, this sort of case also shows that *our* belief-management policies are not as simple as those in the model with which we've been working. We don't always let beliefs formed via the perceptual link override those formed by other means. A better approximation would be this: by default, when we have no evidence that we are in circumstances where perception is systematically unreliable, perceptually-based belief trumps beliefs with other sources. But where we have specific reason to doubt the reliability of the perceptual link in a particular respect, we do not allow it to trump beliefs from other sources. And this more nuanced belief-management policy does seem to produce justified beliefs.

(This is no accident! Insofar as you think our actual belief-management practices involving perceptual demonstratives are pretty good at producing justified beliefs, then to the extent that we describe those practices successfully, you should end up agreeing that under the correct interpretation those practices issue in justified beliefs. Of course, simple models like the ones we are working with might fail to produce justified beliefs, but when we discover an issue, complicating the model to make it more closely approximate the actual case is a decent device.)

But what of the second assumption? This is more involved, and I delay it for a follow-up post.

NoR: supplement to 2.1/2.2: Peacocke interpretation.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

This post compares what I have said so far about the concepts of conjunction (which should be the easiest case) and unrestricted universal quantification (which is a notoriously hard case) to what Christopher Peacocke says on the same topics.

Peacocke’s account of logical concepts is something of a moving target. There is a “partial” treatment given in his 1986 book Thought. There are some important elaborations in his 1987 British Academy lecture “Understanding logical constants”. And there’s yet more on this in his 1992 book A Study of Concepts (ASOC). His thinking continued to develop after this, but one theme of his later work is the appeal to “implicit understandings” of concepts, which makes him less comparable to what I’m doing here. So I’ll concentrate on the remarks he makes in these three early pieces, concentrating in particular on the account presented in the 1992 work, and using the earlier two works to fill in the account where the 1992 book is inexplicit.

One immediate contrast between the ideas I have been presenting and ASOC is the intended subject matter. Peacocke is assuming a Fregean account of thought, on which thoughts are structured entities whose components are Fregean senses. He calls these thought-components concepts. Each concept then determines (perhaps in context) a referent. He makes some standard Fregean assumptions about the division of labour between sense and reference. So we’d expect to see, for example, differences in cognitive significance of thoughts explained by difference in some component sense.

Peacocke’s project in ASOC is to give an account of what it is for a person to possess a concept. And he does this by setting out “possession conditions” for a target concept (be that conjunction or universal quantification). In the cases that concern us, these possession conditions consist simply in the subject finding relevant inferences primitively compelling (and the subject needs to find the token inferences primitively compelling because they have the right form). With this account of what it is for a subject to possess the concept conjunction, or quantification, or whatever, Peacocke then goes on to offer what he calls a “determination theory” for each individual concept, i.e. an account of what the concept refers to (or “has as its semantic value”). For conjunction he offers the following:

  • The truth function that is the semantic value of conjunction is the function that makes transitions of the form mentioned in its possession conditions always truth-preserving. (p.10).

The relevant transitions are of course the familiar conjunction-introduction and conjunction elimination rules.
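For concreteness, those rules can be set out in standard natural-deduction schema form (my notation, not Peacocke’s own layout):

```latex
% Conjunction introduction and the two elimination rules:
\[
\frac{p \quad q}{p \wedge q}\;(\wedge\mathrm{I})
\qquad
\frac{p \wedge q}{p}\;(\wedge\mathrm{E}_1)
\qquad
\frac{p \wedge q}{q}\;(\wedge\mathrm{E}_2)
\]
```

Peacocke’s determination theory then selects, from the pool of truth functions, the unique one that makes every instance of these three schemas truth-preserving.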

Though each determination theory is a determination theory of one particular concept, Peacocke says we should expect such determination theories to have a general form.

  • “The determination theory for a given concept (together with the world in empirical cases) assigns semantic values in such a way that the belief-forming practices mentioned in the concept’s possession condition are correct. That is, in the case of belief formation, the practices result in true beliefs, and in the case of principles of inference, they result in truth-preserving inferences, when semantic values are assigned in accordance with the determination theory”. (p.19)

Let me work this through for the various things that Peacocke says about quantifiers. In ASOC, the leading example is quantification restricted to the natural numbers, which he writes C. At p.7, he states that the possession conditions are that the thinker find suitable instances of the following form primitively compelling:

  • from Cx: Fx derive Fn

The suitable instances are “those involving a concept n for which the content n is a natural number is uninformative”. In addition, the thinker is (a) required to find these inferences compelling because of the given form; and (b) the thinker is not required to find any other inferences essentially involving C primitively compelling. The determination theory is then:

  • The semantic value of this quantifier is the function from extensions of concepts (true or false of natural numbers) to truth values that makes elimination inferences of the form mentioned in the possession condition always truth-preserving.

Now, clearly validity is playing a major role in Peacocke’s determination theory for conjunction and (numerical) quantification. But notice also that in both cases the requirement to make the inferences valid selects a semantic value from a restricted pool of candidates (truth functions in the first case, quantifiers over natural numbers in the second). The general form of determination theory he gives doesn’t say anything about how we narrow down to this particular pool, and (as later authors have emphasized) this is an extremely substantive step. For example, in the case of quantification over the natural numbers, the elimination rule will be valid on any domain that includes the natural numbers, whether it also includes the integers, or is absolutely unrestricted.
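The underdetermination point here is just a standard model-theoretic observation, which can be made explicit as follows (my notation):

```latex
% The possession-conditional rule: from Cx\colon Fx derive Fn,
% where n is a natural-number concept. Interpret C as the universal
% quantifier over any domain D with \mathbb{N} \subseteq D. Then
\[
(\forall x \in D)\,Fx \ \models\ Fn
\qquad \text{for each } n \in \mathbb{N},
\]
% since n \in \mathbb{N} \subseteq D. The rule is therefore valid
% whether D is \mathbb{N}, \mathbb{Z}, or absolutely everything:
% validity alone does not select among these candidate domains.
```

So the restriction to the pool of quantifiers over natural numbers is doing real work that the general form of the determination theory does not itself explain.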

This is one place where Peacocke’s 1987 work may be relevant. There, for cases of concepts whose possession conditions are spelled out in terms of elimination rules only (/introduction rules only) he suggests that the concept denotes the weakest (/strongest) semantic value that makes the rules valid (p.161). In application to the possession conditions for numerical quantification, for example, interpreting the quantifier as ranging over a more inclusive domain than the naturals would be, intuitively, to give it a stronger interpretation than is required to make the elimination rules valid. That would allow us to drop the ad hoc constraint to quantifiers over natural numbers in the determination theory for numerical quantification, but it wouldn’t bring that determination theory into line with Peacocke’s “general formulation” of determination theories, since that doesn’t provide for constraints of strength or weakness. So this is an area where it’s a little unclear how to mesh the different eras of Peacocke.

Let me mention one last thing. In ASOC, Peacocke explicitly discusses radical interpretation a few times. He is thinking of it, though, as a rival account of concept possession, and though he allows that radical interpretation may say nothing false, he complains that it doesn’t have the right formal features to provide the kind of illumination of concept possession he is targeting. So Peacocke’s conception of radical interpretation at this point contrasts with my own, where radical interpretation is conceived as a theory of reference-fixing, not (Fregean) concept-possession.

I’ll now compare and contrast this setup to my own.

First, unlike Peacocke, I say nothing about Fregean sense. I do talk about “concepts” as components of structured mental states, but for me these are vehicles of mental content, not constituents of content (they are more like Fodorian concepts than Fregean ones, and indeed, thinking of them as words in the language of thought is one model I’ve suggested). So the whole enterprise of articulating “possession conditions” and worrying about them having the right form is just absent from my story. What I have said is consistent with additional assertions that the “inferential roles” I have been appealing to are possession conditions in Peacocke’s sense. But I’ve left open that enacting these inferential roles might be either unnecessary or insufficient for possessing a concept of conjunction, of numerical or unrestricted quantification.

Second, and connectedly, there may be nothing terribly natural about the inferential roles which figure in my explanations of why our concepts have the reference that they do. They may be carved out of our cognitive economy in quite artificial ways. For my purposes, I’m just saying that finding (at least!) those rules primitively compelling is sufficient (given the rest of the background) to explain why the concept denotes what it does.

Third, Peacocke’s determination theories are stated in an unqualified way—e.g. “the semantic value of C *is* that truth-function which makes the conjunction rules valid”. What you get from my framework at this point will have to be caveated with a “ceteris paribus” clause, since everything depends on the assumption that the particular patterns encapsulated in the conceptual role aren’t “overridden” by the way the concept figures elsewhere in the cognitive economy. For some reason I don’t understand, the Peacockian privileges the way that the conjunction-concept figures in belief, rather than other kinds of thought, so that if a concept figures in the conjunction-way in beliefs, and in the disjunction-way in desires, it would determinately be a concept of conjunction. My account (correctly I think) makes no strong predictions about such conflicted cases. The caveats in my theory are a feature, not a bug.

Fourth, what I say is consistent with the thesis that there is only a small range of concepts which have neatly specifiable inferential/conceptual roles of the kind I’ve been talking about. For all I’ve said there are no such patterns to be found for many of our concepts. This relaxed attitude is possible since I already have my story about what fixes the content of thought, and so for me the Peacocke-style theorizing is not an essential part of articulating what grounds representation, but a matter of illuminating some special cases.

Fifth, Peacocke alludes to the “general form” of determination theories, which raises the question: why should determination theories have a common form? Radical interpretation answers this by starting with an account of what it takes, generally, for a thought to have content, and then the analogues we get to his “determination theories” are accounts of what that account requires when applied to a particular case. This feels to me like a better direction of explanation than a bottom-up approach on which the convergence on a common form would appear as a cosmic coincidence.

Sixth, even when working through cases where inferential rules are central, the emphasis on their validity, I think, ties our hands. I mentioned above Peacocke’s 1987 constraints of strength and weakness, so it appears that more than validity is going on—and if this is not to be mere monster-barring, we need a sense of why such constraints turn up in theories of reference. Radical Interpretation is well equipped to give principled underpinnings for these extra sorts of constraints.

To illustrate. In response to my earlier post on conjunction, my colleague Jack Woods asked me what ruled out hyperintensionally deviant interpretations of the conjunction c. I asserted that no rational agent would find the conjunction rules for c primitively compelling unless they were working with conjunction. But what if the c they had in mind was something where we read “pcq” as p and q and I am here now? Or indeed, conjunction supplemented by any tautology? The answer I want to give to this sort of challenge is that such an interpretation would not make the agent’s practices justified. Let’s accept that “I am here now” is itself a priori. Still, there is a story to be told about what one has to be like to be justified in believing it (we just know that these conditions aren’t empirical ones). If we believe a thought—even one that is a priori justifiable—merely “en passant” on irrelevant grounds, then we are not forming a justified belief. So the crucial datum is that the inference from p, q to pcq is felt to be primitively compelling. The requirement that we make the primitively compelling rules valid isn’t fine-grained enough, since there are many valid arguments we are not justified in taking as primitive. So it’s crucial that the story really incorporate the constraint of making the practice a justified one (which is indeed something that features in Peacocke’s “general formulations” of determination theories), and that we don’t cash it in too quickly for constraints of making-the-inference-valid (as Peacocke himself does when citing particular determination theories).

A second illustration will open up some further complex issues in Peacocke interpretation. My discussion of unrestricted quantification made essential appeal to epistemological claims about the conditions under which non-enthymematic universal instantiation is justified. Validity alone, I said, wouldn’t knock out skolemite interpretations of the quantifier, but considerations of maximizing justification nevertheless do. So I take this to illustrate the same moral of the importance of full justificatory structure in getting the story about reference-fixing right.

Now, Peacocke’s discussion spirals around the issue of what fixes unrestricted interpretation, without ever (in what I’ve read) really nailing it. Let me outline what I’ve found him saying on this topic.

Peacocke discusses unrestricted quantification in the 1986 book Thought. But he acknowledges there (p.36) that he has not given a full account of what determines the denotation of an unrestricted quantifier. More generally, I see no new resources in what he says there or in the later works to rule out skolemite interpretations, given only the resources of validity and completeness.

It could be that he envisages the possession conditions of unrestricted “everything” as tying the subject to every instance of the elimination scheme, for every individual concept whatsoever—even those currently beyond the subject’s conceptual repertoire and which it might be physically impossible for the subject to possess. On the (substantive!) assumption that there is an individual concept for every object whatsoever, the constraint to make all such instances valid would “peg out” the domain to be absolutely universal. There is an echo of that in the way he talks of “open ended” inferential dispositions (pp.33-34). But insofar as Peacocke has to appeal to what we’re disposed to do in counterfactual circumstances where our conceptual repertoire is expanded, this account is vulnerable to an interpretation on which our quantifier has an always-restricted but counterfactually-varying domain, just as McGee and Lavine’s appeals to open-endedness are.

In the later work ASOC (particularly in chapter 5) there is material that may speak to the issue, but ultimately not enough is said to resolve it. Let’s spot ourselves that Sally both treats particular dated transitions from the thought “Everything is physical” to the thought “Roger is physical” as primitively compelling, and treats as primitively compelling the inference-type inferring Roger is physical from everything is physical (perhaps one or other is more basic, but it won’t matter for our purposes). Peacocke insists, further, that part of the explanation for why we find such inferences primitively compelling is that they have a certain form: the inferences just mentioned are said to have the form “Cx:Fx” to “Fa”. Peacocke’s idea is (I think!) that the determination theory for “C” will insist that the semantic value assigned to C make every (token or type) instance of this form truth-preserving, i.e. valid.

Now, if this “form” is one shared by absolutely all instances of universal instantiation, including those involving singular concepts outside Sally’s conceptual repertoire, then you might think that Peacocke here has just pulled the rabbit out of the hat. For even in a single case, Sally is related to a particular inference form of universal instantiation (since it is part of the explanation of why she finds the token/type inference primitively compelling). And making all instances of the form valid will pin down the quantifier to be the absolutely unrestricted one—the instances “peg out” the unrestricted domain. But of course, this is only good if we can undergird the claim that the “form” which plays a causal-explanatory role in Sally’s psychology is general in this way, rather than a more restricted “form” whose instances involve only those individual concepts within Sally’s present ken. If she’s related to the latter “form”, or it’s indeterminate which “form” is playing the role in her psychology, we get no new leverage here.

In fact, Peacocke raises exactly the relevant issue at ASOC p.137—not in the case of unrestricted quantification, but in connection to the question about whether certain inferential rules for numerical quantification have a form that ranges over all singular concepts for natural numbers, or only over “surveyable” ones. Unfortunately, all he says there is that the issue is “far beyond” the scope of that book to resolve. We can at least take it that he doesn’t view the resources he’s provided in ASOC as giving an easy resolution of the sort of form-determination issue which would be utterly central to the success of this strategy.

Despite these differences, it is striking that the story that falls out of radical interpretation (modulo empirical and normative assumptions) is a recognizable variation of what Peacocke says about his parade cases, which after all were not developed with (my kind of) radical interpretation in mind. Obviously something in this ballpark has independent appeal—and so it’s a big bonus that radical interpretation can predict and explain why something like Peacockian determination theories are in force for logical concepts such as conjunction and quantification.

NoR 2.2: Everything


I turn now to another logical concept: generality. Specifically, I am interested in unrestricted generality.

For starters, we do seem able to express thoughts that range over everything without  restriction. Our ability to do so is, furthermore, significant to us. The philosophical thesis of physicalism is that absolutely everything is physical, not that everything-around-here is physical. Without the ability to talk about absolutely everything, such theses would be ineffable. The moral rule is that all people should be treated fairly, not that all people-who-meet-some-further-condition should be so treated. The “all” still needs to be absolutely unrestricted in force in order to capture the intended thought, and this illustrates that even explicitly restricted quantification (“all people”) presupposes that we are not missing out on absolutely anything in the underlying domain from which the restricted class is selected.

But against this, there are formal results that show there are deviant, restricted interpretations of the generality of “everything” and “all” on which our interpretee Sally doesn’t quantify over absolutely everything. It turns out that we can construct such restricted interpretations in ways that pass most tests we can muster for what makes an interpretation correct. For example, we can find a deviant interpretation that agrees with the correct, unrestricted, interpretation of Sally on the truth value of every thought she can think. And we can choose the interpretation so it agrees with the distribution of truth values over thoughts not just in the actual world, but at every possible situation. Further, such deviant interpretations diverge from the correct interpretation only on the domain of the quantifiers. So they assign the same denotation to singular concepts, predicative concepts and the propositional connectives.
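For orientation, the classical core behind such constructions is the downward Löwenheim–Skolem theorem (the variants matching truth values across all possible situations require further technical resources, but the basic shape is the same):

```latex
% Downward Löwenheim–Skolem: if a structure M for a countable
% first-order language has an infinite domain, then M has an
% elementary submodel M' with a countable domain:
\[
M' \preceq M, \qquad |M'| = \aleph_0 .
\]
% Elementarity guarantees that M and M' agree on the truth value of
% every sentence of the language, even though the domain of M' may
% be a tiny, deviant fragment of the domain of M.
```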

Because of the matching truth values/truth conditions, you won’t be able to rule out the deviant interpretation of Sally on the grounds that it makes her thoughts any less reliable than the original interpretation. And because the interpretations agree on everything except the quantifiers, side-constraints on the interpretation of singular terms and the like won’t help at all. The challenge for us is to explain how, despite the existence of such “skolemite” interpretations, Sally manages to generalize unrestrictedly, in the way that philosophy and morality presuppose she can.

Aside. I’m here skating over a number of technical issues in the construction of these deviant “skolemite” interpretations. So let me just flag, in lieu of getting into details, that it may be that the challenge is restricted to those whose conceptual repertoire is expressively limited in certain ways. While the technicalities (involving higher-order quantifiers and modal resources) are interesting in their own right, and I have things to say on the topic, I don’t find it at all credible that our ability to quantify unrestrictedly depends on our ability to think thoughts whose logical form goes beyond that of the first-order extensional calculus. So I’m happy setting these issues aside.

While I’m flagging side-notes, let me also highlight the point just made that skolemite interpretations aren’t ruled out by imposing side-constraints on the denotation of singular or predicative concepts or propositional connectives. While we’re talking about radical interpretation specifically here, notice that this feature means that the skolemite challenge is relevant to theorists who start from a quite different place, since there’s a laser-like focus on the denotation of quantifiers here—the problems won’t come out in the wash just because you have some interesting things to say about the denotation of concepts in other categories. End Aside. 

Following the pattern of the last post, I will make some architectural assumptions about the way that quantificational thoughts figure in our psychology, and explore what radical interpretation will predict under those assumptions. The first three architectural assumptions I will make are just as before—that belief states are structured, that facts about syntax are grounded prior to questions of content arising, and that we can pick out inferential dispositions interrelating belief-types, again prior to questions of content being determined. I will continue to use the Peacockian idea that certain core inferences are treated as primitively compelling.

The fourth architectural assumption is about the character of the inferential role associated with the quantifier concept everything, which I write as q. I’ll concentrate on just one of the associated rules:

  • From qx:Fx  derive  Fa

Our interpretee, Sally, endorses an instance of this for every individual concept a that she possesses. We’ll come back in a moment to the question of whether this is all the relevant facts relating Sally to tokens of this inference-type.

If the story rolled on from this point as it did for conjunction, we might expect to find radical interpretation predicting something along the same lines as the Peacockian “determination theory” for conjunction: that is, the semantic value for q will be the quantifier, whatever it is, that makes all instances of the above rule instances of a valid type. Just as before, radical interpretation will approximate an interpretative constraint of this kind. Maximizing Sally’s rationality includes maximizing her justified beliefs, which ceteris paribus includes interpreting q so as to make the inferences justification-preserving. Making them valid looks just the ticket.

If this is all we can extract from the story, we’d be in trouble. There are ever so many ways of choosing restricted quantifiers as an interpretation of q that make all instances of the above truth-preserving. The ones constructed by the Skolem procedure are among them, since the skolemite interpretation’s restricted domain includes every object for which the subject has an individual concept a.

In some special restricted cases, we have devices that allow us to construct concepts for every member of a restricted domain (as in our procedure for constructing numerals for natural numbers). But that’s not so in the general case. It’s not even so in very large restricted domains, for example for quantifiers over the real numbers, over space-time points, or sets.

Aside. One reaction in the literature has been to double down on the idea of making instances of the above scheme valid, and argue that more “instances” are relevant than one might at first think. Thus, one might argue that Sally is disposed to find compelling not just instances of the above scheme for singular concepts she currently has available, but also for potential singular terms not currently within her ken. I think, though, that this is ultimately not a productive approach. Although the general idea that Sally’s endorsement of the inference pattern above is “open-ended” when she’s using an absolutely unrestricted quantifier is a good one, I do not think that this is best factored into the theory of how denotation is fixed by an insistence that those extra token inferences be interpreted so as to be instances of a valid type. (I consider the approach at length elsewhere, but briefly, the problem is that, consistently with this constraint, one can interpret the agent as deploying a contingent and contextually flexible restricted quantifier whose domain “expands and contracts” in sync with the agent’s available singular conceptual resources. In a slogan: using counterfactual pegs to fix quantifier domains only ends up constraining the counterfactual domains.) End Aside.

Radical Interpretation can explain how finding tokens of the elimination rule above primitively compelling can fix a truly unrestricted interpretation of our quantificational concepts. It can do so by appeal to the epistemological character of even a single token instance of the inference-type.

Take an interpretation where Sally uses a quantifier tacitly restricted to some skolemite domain S, and consider the epistemological status of her inferences, so interpreted:

From Everything (psst… in S) is beautiful. Derive Toby is beautiful.

Now, Toby is among the items in the skolemite domain S, we may assume. So the inference preserves truth, on the suggested interpretation. But what’s striking is that Sally is not interpreted as having or utilizing any information either way about this fact. To state the obvious: that’s not the usual way we would think of restricted quantification as going. For example, in order to justify my belief that “Toby should be treated fairly” by inference from my (justified) belief that “All people should be treated fairly”, I surely also need to have the justified belief that Toby is a person. After all, if I wasn’t justified in thinking that Toby was a person—if my evidence was that he’s my neighbour’s cat—then the justification for the derived claim would be undercut. The same goes for ordinary, tacitly restricted quantification. From “everyone has some marking to do” (contextually restricted to faculty members) I can’t be justified in believing “Toby has some marking to do” unless I have some justification for believing that Toby is a member of faculty. What’s striking and central about Sally’s deployment of an unrestricted quantifier is that her inference is not enthymematic in this way. She doesn’t pause to check whether or not Toby has this or that feature before inferring that he’s beautiful.

In sum: Sally’s justification for the belief that everything is beautiful transfers to Toby is beautiful without mediation. The lack of mediation explains why her acceptance of the inference rule is “open ended”, as theorists like McGee and Lavine have emphasized. But what matters for grounding facts about quantifier-meaning is not the way this open-endedness manifests in the piling up of accepted instances of the inference across counterfactual scenarios, but the lack of mediation in the epistemological structure of the inference, a feature that is already present in the actual cases.

I propose the following piece of epistemology. Consider an elimination rule for a restricted quantifier—whether restricted explicitly (all people) or restricted tacitly (either by contextual mechanisms or in the way proposed by the skolemite construction). If deployments of that rule are to transfer justification, then that rule will have to include a side-premise, to the effect that the object in question has the feature that defines the restriction. This is not the case for an unrestricted quantifier.
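Schematically, the contrast this piece of epistemology draws can be put like this (my formulation of the side-premise requirement):

```latex
% Unrestricted elimination: justification transfers with no side-premise.
\[
\frac{\forall x\,Fx}{Fa}
\]
% Restricted elimination, with restriction predicate P (whether the
% restriction is explicit, contextual, or skolemite): a justified
% side-premise Pa is needed for justification to transfer.
\[
\frac{\forall x\,(Px \rightarrow Fx) \qquad Pa}{Fa}
\]
```

On the skolemite interpretation, Sally is represented as running the second schema while never possessing, let alone being justified in, the side-premise.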

This piece of epistemology then tells us why we wouldn’t be interpreting Sally as substantively rational if we interpreted her as using a skolemite quantifier—we’d be representing her as constantly engaging in inferences that involve enthymematic premises for which she has no justification.

This story gives us a satisfying resolution of long-standing skolemite puzzles about what grounds our ability to quantify unrestrictedly. Methodologically, it illustrates the virtue of thinking through what radical interpretation requires in detail—in the case of conjunction, we only needed to make the primitively compelling inferences valid in order to pin down the denotation. That is insufficient here, since making valid the elimination rule (and indeed, the analogous introduction rule) wouldn’t eliminate the deviant interpretations.

I finish by running through the derivation of Sally’s q denoting the unrestrictedly general universal quantifier. First, we have the a posteriori assumption that q plays a distinctive cognitive role in Sally’s cognitive architecture, captured by the unmediated elimination rule. Second, we have substantive radical interpretation, which tells us that the correct interpretation of q is one that maximizes the (substantive) rationality of the agent. We add the “localizing” assumption, inferential role determinism for q, which says that the interpretation on which Sally is most rational overall is one on which the particular inferential dispositions captured by the rules just given for q are rational. Putting these three together we have the following: the correct interpretation of Sally is one that makes the inferential role associated with q most rational. And now we add the conclusion of the discussion we’ve just been having: that the way to make the inferential role that consists in the elimination rule without side-premises most rational (especially to make it most justification-preserving) is to make it denote the unrestricted quantifier.

Edited 12/9/17

NoR 2.1: And


In the previous series of posts, I introduced radical interpretation as my favoured account of the “second layer” metaphysics of representation. It aims to specify the way in which the facts about what agents believe and desire are grounded, inter alia, in facts about what they experience and how they act (a story about the latter, “first layer” of representation remains on the to-do list). Radical interpretation tells us that what it is for an agent x to believe/desire that p is this: the correct interpretation of x attributes a belief/desire that p to x. It also says that the correct belief/desire interpretation of x is that which best rationalizes x’s dispositions to act in light of their experience. So a key question was how “best rationalization” is to be understood. To this point, I’ve used an underdetermination argument—the bubble puzzle—to show that we can’t read the relevant notion of rationalization in a purely structural way. And I’ve argued that an alternative understanding of rationalization—substantive rationality, roughly glossed as the agent believing and acting as they ought, given their experience/desires—can pop the bubble puzzle.

The aim here, and in the next series of posts, is to draw out much more specific consequences of radical interpretation for specific kinds of representation. Across a series of posts, we'll derive results that speak to well-known and challenging-to-explain features of representations. Among these are the referential stability of "morally wrong", and how it's even possible to express absolute, unrestricted generality. More generally, I will show how patterns in the grounding of the denotation of this or that concept—causal patterns, inferentialist patterns and the like—emerge within the radical interpretation framework. I take the posts to follow to illuminate the relation between the kind of foundational theory of representation that I am pursuing, and the more local, less-reductive projects that sometimes go under the label "theory of reference". That'll bring me into dialogue with theorists like Peacocke (on logical concepts), Wedgwood (on moral concepts) and Dickie (on singular concepts). Towards the end, as attention turns to descriptive general concepts, I'll investigate in a similar spirit the role that so-called "natural properties" can play, which brings me into contact with recent work in the Lewisian tradition by Weatherson, Schwarz and Pautz.

This is a big agenda! Any one of the following posts could generate a whole series of discussions on its own (indeed, the post about the reference-fixing of moral wrongness is going to be a short presentation of the ideas I set out and defend at length in a long, forthcoming paper). But I think there's a virtue to laying out the essential ideas as cleanly and sparsely as possible, so the common patterns can emerge, and so I'll stick to one manageably-sized post on each.

I start right now with the simplest case, but one which illustrates the moving parts at work in all that follows. This is the case of the propositional logical connective and. And here, as ever, my focus is not on the word "and" in a natural or artificial language, but on the logical concept and as it appears in thought. Let me remind readers of a familiar story about how this concept gets its meaning.

A connective-concept c is associated with the following inferential patterns:

  • from A, B derive AcB.
  • from AcB derive A.
  • from AcB derive B.

The crucial claim is that what makes it the case that the connective concept c denotes the truth-function conjunction is the fact that it is associated with the rules just mentioned.

Different versions of this idea will fill in the two steps in more detail: saying more about the nature of the "association" between the concept and the patterns expressed above with "derives", and saying more about the recipe for getting from such patterns to denotation. For example, the A Study of Concepts-era Peacocke held that each concept figured in patterns of belief formation that were "primitively compelling", and that for c, the relevant patterns of belief-formation were inferences mirroring the transitions labelled with "derive" above. He held that for c to denote the truth-function f was for f to make valid the primitively compelling inferences configuring c.

My aim in this section is to capture what’s right about this inferentialist idea within the overarching theory of radical interpretation—to show in particular that radical interpretation can predict and explain what I think is quite an attractive view of the grounds of meaning of that particular logical concept.

Radical interpretation unaided won't get us there. And so here (and in following sections) I'll be adding two kinds of auxiliary assumptions to the setup. These will comprise, first, assumptions about the specific cognitive architecture that our subject—Sally—possesses, and second, normative assumptions (epistemic or practical) involving specific kinds of content. The first and most basic architectural assumption (one crucial to securing our subject-matter) will be that Sally's thinking consists in tokening state-types which have a language-like structure, within which we find analogues to the logical connectives "and", "or", "not" etc.

[Aside: The hypothesis that there is a ‘language of thought’ would vindicate this assumption, though it’s not the only thing that would do so.]

Second, I'll be assuming that the syntactical properties of the structured states in question, and the attitude-types they token (e.g. flat-out belief, supposition, degree of belief, degree of desire), are grounded prior to and independently of the determination of the content that they are paired with. The job of the interpretation that the radical interpreter selects is purely to assign content, which it does in a "compositional" way, via assigning content to the atomic elements and specifying compositional rules.

[Aside: There is a tradition of appealing to functional role to individuate both the syntax of mentalese and its word-types. In Fodor, for example, such assumptions are preliminaries to giving a causal metasemantics to pin down the content of the attitudes. That would be a suitable backdrop for this discussion, though of course, the metasemantics I explore is a rival to Fodor's. If one thought that interpretations should do all these jobs holistically, then you can read what is to follow as a story about what, in substantive radical interpretation, favours one interpretation over others among all those agreed on syntax and attitude-type; there will then have to be further discussion about how such local rankings interact with the factors that fix the attitude-types.]

Third, I make the auxiliary assumption that we can identify, prior to content-determination, which inferential rules involving c Sally finds primitively compelling.

The fourth and final architectural assumption is that the associated entailments for the case of c turn out to be those given above, repeated here:

  • from A, B derive AcB.
  • from AcB derive A.
  • from AcB derive B.

In the case of a fictional character like Sally, we can make such assumptions true by stipulation. But to hypothesize that we are like Sally in these respects would be a theoretical posit about our cognitive architecture, not something that is a priori or analytically obvious. So to emphasize: such assumptions are not essential to radical interpretation as such. Radical interpretation will have something to say about creatures who do possess this particularly clean sort of architecture—which for all we know from the armchair, includes us. And we want it to say plausible and attractive things about creatures with such an architecture. Let’s see if it does.

Radical interpretation tells us that the correct interpretation of mental states is one that maximizes the rationality of the individual concerned—where rationality in the relevant sense means that as far as possible the subject believes as they ought to, given their evidence, and acts as they ought, given their beliefs and desires. All else equal, an interpretation will score well on that measure to the extent that it makes the most basic patterns of belief formation ones that preserve justification (we don’t want leaky pipes!). And so, given the assumptions about cognitive architecture, we need our interpretation of the concept c to make rational our practice of treating the given rules as primitively compelling (inter alia, being willing to reason in accordance with them).

We can already see that this desideratum has teeth. Interpreting c as disjunction, for example, makes a nonsense of the fact that Sally associates with it the rule that A is entailed by AcB. Having a basic disposition to infer A from A or B would be irrational! This is representative; what we need to do is add more auxiliary assumptions, this time about what a rational agent could or could not be like, in order to derive specific predictions about what c denotes.

The following auxiliary normative assumptions will suffice to generate the prediction that c denotes conjunction:

  • A substantively rational agent would be such that they find primitively compelling the inference from A and B to A, they find primitively compelling the inference from A and B to B, and they find primitively compelling the inference from A, B jointly to A and B.
  • For no content X other than conjunction would a substantively rational agent be such that they find primitively compelling the inference from AXB to A, they find primitively compelling the inference from AXB to B, and they find primitively compelling the inference from A, B jointly to AXB.

Notice that here we use, rather than mention, the concept of conjunction. These are simply a couple of claims (very plausible ones) about what substantively rational agents equipped with a certain kind of inferential cognitive architecture are like.
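The second of these premises leans on a formal uniqueness fact that can be verified mechanically: among the sixteen binary truth-functions, conjunction alone makes all three rules truth-preserving. The sketch below checks this (the encoding of truth-functions as dictionaries is my own illustrative choice); of course, validity is here standing proxy for the normative claims, which are about what rational agents are like.

```python
from itertools import product

# All 16 binary truth-functions, each a map from an (A, B) pair of
# truth values to a truth value.
pairs = list(product([True, False], repeat=2))
candidates = [dict(zip(pairs, vals))
              for vals in product([True, False], repeat=4)]

def rules_truth_preserving(f):
    """Are the three rules for c truth-preserving when c denotes f?
    Intro: from A, B derive AcB.  Elims: from AcB derive A; derive B."""
    intro = all(f[(a, b)] for a, b in pairs if a and b)
    elims = all(a and b for a, b in pairs if f[(a, b)])
    return intro and elims

validating = [f for f in candidates if rules_truth_preserving(f)]
conjunction = {(a, b): a and b for a, b in pairs}
assert validating == [conjunction]  # conjunction, and nothing else
```

The introduction rule forces f to be true at (True, True); the elimination rules force it to be false everywhere else; so exactly one of the sixteen candidates survives.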

The argument to the conclusion that Sally’s connective concept c denotes conjunction is as follows. First, we have the a posteriori assumption that c plays a distinctive cognitive role in Sally’s cognitive architecture, captured by the given rules. Second, we have substantive radical interpretation which tells us that the correct interpretation of c is one that maximizes (substantive) rationality of the agent. We now need, third, a “localizing” assumption, inferential role determinism for c, which says that the interpretation on which Sally is most rational overall is one on which the particular inferential dispositions captured by the rules just given for c are rational. Putting these three together we have the following: the correct interpretation of Sally is one that makes the inferential role associated with c most rational.

The final element to add to this is the pair of normative premises introduced above, which tell us that conjunction is the thing that (uniquely) makes those particular inferences rational. We then derive that Sally’s connective concept c denotes conjunction.

I finish by emphasizing a few things about this derivation. First, the assumptions about cognitive architecture are sufficient (given the other premises) to derive the metasemantic result that c denotes conjunction. There's no suggestion here that they're necessary in order for c to denote conjunction. Remember: the aim was to show what radical interpretation predicts for a certain possible, contingent architecture, not what is required in order to think conjunctive thoughts per se.

Second, the localizing assumption that I flagged up plays a very significant role. The most rationalizing global interpretation of an agent can in principle attribute local irrationalities—there could be other inferences Sally makes that involve c, which are irrational by the lights of the interpretation of c as conjunction. For example, the stated rules are silent about the way that conjunction figures in desires, and if it figured in desires in a way that would be best rationalized by interpreting c as disjunction, then there would be an interpretative tension, and it is not at all clear that a plausible theory would predict that c picks out conjunction. Inferential role determinism assures us that we're dealing with a case where "all else is equal", where such pressures are absent.

Third, the normative assumptions themselves, even if accepted as true, are the sort of things we would expect to be backed by more detailed first-order normative (/epistemological) theory. The need for that kind of principled backing is something that I emphasized in a previous post. Why is it rational to perform, and find primitively compelling, the inferences involving conjunction? Surely the full story has something to do with the fact that those inferences are guaranteed to be truth-preserving, since they are valid. Why is it that no other connective content will do the job? Presumably this will be defended on the grounds that conjunction uniquely has the property of making the inferences in question valid. But why is validity required to rationalize those rules? Certain of the inferences we're disposed to perform (even those that are plausibly basic) are not guaranteed to be truth-preserving, so it's not clear why validity is required for rationalizing an inferential disposition. I think the reaction to this should be to strengthen the assumed inferential role in ways such that validity plausibly is required for rationalization, e.g. by making the inference indefeasible.

NoR 1.5b: Popping the bubble puzzle.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

To recap: I distinguished two forms of radical interpretation, structural and substantive, based on a distinction between two corresponding readings of "rationalization". Structural radical interpretation has a counterexample in the bubble puzzle: obviously wrong interpretations of an agent can perfectly structurally-rationalize her actions given her course of experience. Attention turns then to substantive radical interpretation, but that's going to be no good if all we have to offer by way of fleshing out "substantive rationality" is an ad hoc laundry list of odd-seeming psychological states. But I've set out a principled framework which we can use to explain and predict which interpretations will be counted as substantively rational, and which not. In this post, I'll turn back to the bubble puzzle, and show why it isn't a problem for substantive radical interpretation.

What the construction provides us with are (structurally-rational) candidate interpretations Deviant and Paranoid. Now, this was a counterexample to structural radical interpretation, but because we deny that all structurally rational interpretations are substantively rational, they are not (at this point) a counterexample to substantive radical interpretation. However, they still serve to generate a challenge: to pin down what constraints of substantive rationality they violate.

Consider the case of a character to whom Deviant would truly apply, who really is agnostic and indifferent about matters outside her local bubble. You might think of her—at least as far as her beliefs are concerned—as someone who read too many classic Cartesian and Humean sceptical arguments and, convinced, ends up suspending judgement on the character of the wider world beyond her local surroundings. This suggests a way of identifying what’s wrong with Deviant. Take your favourite anti-sceptical story about how and why agents are justified in their (standard) beliefs about the world around them, given their evidence. Cite the anti-sceptical story to defend the view that the beliefs attributed by the interpretation Original are justified by the agent’s course of experience, but suspending belief or having paranoid beliefs is not. Then (via a restricted identification of kind 2 above, e.g. that substantive rationality of beliefs coincides with justification) we explain on this basis why Original is more substantively rational than Deviant or Paranoid (all else equal).

To illustrate, suppose the local bubble around an agent encompasses a region of space-time varying from a few metres to several miles (perhaps on occasion, when gazing at the stars, it is much more extensive). We believe that the world outside that bubble—in the region now behind the wall that blocks my view, in the years before I was born and after my death—is similar in character to the world in my bubble. That belief in the spatio-temporal uniformity of nature, across the boundaries of the bubble of my experience and action, is a presupposition of my reasons for holding more specific beliefs, e.g. that the hose I left in the yard yesterday is still there now, that there was some decent popular music recorded in the 1960s, and that pouring chemicals into the local streams will cause environmental damage that will last centuries. What justification we have for believing that nature is uniform in this way is a familiar, Humean, question, to which first-order epistemology owes us an answer. Perhaps it is this: a belief in the uniformity of nature is the best, simplest explanation of the uniformity that we do see within our local bubble. Add to this the claim that we are justified in believing the best explanation of the phenomena we observe, and that we are not so justified in believing a worse explanation (or no explanation), or in suspending judgement, and we have an epistemic principle we can wield in defence of substantive radical interpretation.

Suppose instead that the local bubble is more "Cartesian", including only the agent's pattern of sense-data and inner volitions—this on the basis that sense-data constitute the sphere of experiential evidence and volitions the sphere of basic action. In that context, we will need to look to epistemology to tell us what justifies ordinary beliefs about the material world around us (insofar as that is not an idealist construct out of sense-data and volitions). We need an answer to what justifies the agent in thinking that there is a solid material chair in which she is sitting, given that her evidence consists of a mosaic of colour patches in visual space, pressure patches in bodily space, etc. And given the starting assumption about the character of basic experiential evidence, then unless the sceptic is going to win, there must be some good answer to it. Perhaps, as Russell thought, inference to the best explanation is again the key. Or perhaps there are simply a priori justified "dogmatic" conditionals, to the effect that an experience as of a material object so-shaped justifies the belief that there is a material object so-shaped. Having got to the material contents of the local space-time region, we are at the starting blocks of the Humean puzzle just discussed, and by chaining the stories together, our first-order anti-sceptical epistemology is again the source of the detailed story about why the bubble puzzle is answered.

It is characteristic of the approach that the necessary first-order epistemology will be a matter of controversy. It is not uncontroversial that justified belief in the uniformity of nature proceeds by inference to the best explanation. It is not uncontroversial that suspending judgement on the uniformity of nature is unjustified. And even once the operative principles are accepted, there is of course a lot of work to be done in understanding the most foundational epistemic principles on which the epistemic principle operative here—inference to the best explanation—is based. Substantive radical interpretation, for better or worse, simply doesn't offer many autonomous predictions, independent of the details of first-order epistemic or practical normative theory. But the case of the bubble puzzle, since it's linked to familiar sceptical scenarios, is a special one: we are entitled to assume that some anti-sceptical story or other will be forthcoming, and however this plays out, the bubble puzzle will have an answer.

One caveat to the above. An anti-sceptical epistemology is not quite enough. We need an intolerant anti-sceptical epistemology, that is, one that doesn't simply say that it's okay (epistemically permissible) to believe in the material world around you and the uniformity of nature, given a standard course of experience, but that such beliefs are epistemically obligatory. Subjective Bayesianism, as an epistemic theory, denies this. Subjective Bayesians are not classic sceptics, and might even endorse the letter of inference to the best explanation. But for subjective Bayesians, having prior probabilities that favour explanations with such-and-such character in response to a standard course of experience is simply one rational option among many. Sure, our priors are like this, and perhaps there's a good, non-rational, evolutionary explanation of how we came to be disposed to react to evidence like this. But on this view, there'd be no normative defect in having very deviant priors that favour more complex explanations over simpler ones, or priors that favour e.g. the beliefs attributed by Paranoid over those attributed by Original. Subjective Bayesians, at least as I'm understanding them here, essentially deny that there's a category of "substantive rationality" in epistemology that can't be analyzed as the joint upshot of structural rationality constraints plus facts about the priors that are typically shared among creatures like us, but which are not normatively privileged. It is not immediate that the subjective Bayesian reduction of "substantive" rationality would reinstate the bubble puzzle, since Paranoid and Deviant attribute paranoid and deviant desires as well as beliefs, and one might combine the subjectivist epistemology with a more objectivist account of practical normativity. But I think that's a faint hope, and we should simply insist that a demanding anti-sceptical epistemology, as opposed to the permissivist one just sketched, is a presupposition of the project.

This brings us to the end of this initial presentation of my favoured approach to type-2 representation, the grounding of facts about what an agent believes and desires, and to the end of part 1 of my series of posts. As promised, we've concentrated on the selectional ideology of "rationalization". Nothing has been said about the base facts on which the metaphysics of belief/desire is grounded: experience and action. In part 3, I'll be looking in more detail at the metaphysics of these more basic representational facts. Facts about linguistic representation, or other representational artefacts, have not been mentioned. Because of this, the story that has been presented is itself an appropriate basis for going on to theorize those "third-layer" representational facts, and part 4 will cover this. Before all that, though, I want to extract more mileage out of the story of belief and desire than we have seen hitherto, and show the way that this foundational story can predict and explain, in a detailed and nuanced way, aspects of particular kinds of representational states. Part 2, then, will deploy the framework I've just set out to explain aspects of the way we think about the logically complex, the general, individual objects, and their normative and categorical features.