
NoR 3.3: Intention

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Neander gives a theory of the representational contents of sensory-perceptual systems. She is explicit that this account aims to ground “original intentionality” in contrast to “derived intentionality”, where

“derived intentionality [is defined] as intentionality derived (constitutively) from other independently existing intentionality, and original intentionality [is defined as] intentionality not (constitutively) so derived”.

Neander’s view is that original intentionality belongs at least to sensory-perceptual states “…and might only belong to them”. On the contrary, I want to argue that certain other states have almost exactly this sort of original intentionality.

I will assume that our agents’ cognitive architecture includes an intentional-motor system, which takes as input representations from a central cognitive system (intentions to do something), and outputs states to which we may have limited or no conscious access, but which control the precise behaviour needed to implement the intention. I suggest that original intentionality belongs also to these intentional-motor states, and the metaphysics of this sort of representation is again teleoinformational. Indeed, it will be a mirror-image of the story of the grounding of representation in sensory-perceptual states—the differences traceable simply to the distinct directions of fit of perception and intention.

Our starting point is thus the following:

  • An intentional-motor representation R in intentional-motor system M has [E occurs] as a candidate-content iff M has the function to produce E-type events (as such) as a result of an R-type state occurring.

This time, representation is analyzed as a matter of a production-function rather than a response-function, but this simply amounts to reversing the direction of causation that appeared in the account of perception.

We can illustrate this again with a non-biological example. Every shopper has a half-ticket-stub, and as their goods are brought up from storage, the other half of their ticket is hung up on a washing line in front of the desk. The system is functioning “as designed” when hanging up half of ticket number 150 causes the shopper with ticket number 150 to move forward and collect their goods (the causal mechanism is the shopkeeper collecting the goods, bringing them to the desk, and hanging up the matching ticket). This is a designed system where certain states (of tickets hanging on the line) have production-functions.

A perceptual state has many causal antecedents, and many of these causal antecedents are intermediaries that produce the state “by design”. Just so, an intentional state has many causal consequences, many of which it produces “by design”. An intention to hail the taxi (or even: to raise and wave one’s arm) will produce motor states controlling the fine details of the way the arm is raised and waved, as well as the bodily motion of the arm waving and finally the taxi being summoned. Again, the more proximal states produced “by design” are means to an end: producing the most distal state. To capture this, we mirror the account given in the case of perception:

  • Among candidate contents E1, E2 of an intentional-motor state R, let E1>E2 iff in M, the function to produce E1-type events as a result of an R-type state occurring is a means to the end of producing E2-type events as a result of an R-type state occurring, but not vice versa.
  • The content of R is the >-minimal candidate content (if there are many >-minimal contents, then the content of R is indeterminate between them).
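The selection rule in the two bullet points above can be sketched as a small function. This is an illustrative toy only: the candidate events and the means-to-end relation below are invented stand-ins, not the biological facts themselves.

```python
# Illustrative sketch: the content of R is the >-minimal candidate content,
# where E1 > E2 iff producing E1-type events is a means to the end of
# producing E2-type events (and not vice versa).

def determinate_content(candidates, means_to_end):
    """Return the >-minimal candidate contents.

    means_to_end(e1, e2) is True when the function to produce e1-type events
    is a means to the end of producing e2-type events.
    A candidate is >-minimal if no other candidate sits strictly below it;
    if more than one survives, content is indeterminate between them.
    """
    def strictly_above(e1, e2):
        return means_to_end(e1, e2) and not means_to_end(e2, e1)

    return [e for e in candidates
            if not any(strictly_above(e, other)
                       for other in candidates if other != e)]

# Toy stipulation: producing fine finger movements is a means to the end
# of producing cup-graspings (invented example names).
candidates = ["fine_finger_movement", "cup_grasping"]

def means_to_end(e1, e2):
    return (e1, e2) == ("fine_finger_movement", "cup_grasping")

print(determinate_content(candidates, means_to_end))  # ['cup_grasping']
```

The proximal motor event is excluded because it sits strictly above the distal grasping in the means-end ordering; the distal event alone survives as the determinate content.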

Suppose that I intend to grasp a green sphere to my right, and suppose that the vehicle of this representation is a single state of my intentional-motor system (a state whose formation will trigger a good deal of further processing before bodily motion occurs). What grounds the fact that that token state represents what it does? On this account, it is because in the evolutionary history of this biological system, states of that type produced a hand prehending and attaching itself to some green sphere to the right of the perceiver—and this feature was selected for. Though there were other causal consequences of states of that type that were also selected for, they were selected as a means to the end of producing right-green-sphere-graspings.

I will be using this teleoinformational account as my treatment of the first-layer intentionality of action. So when we see appeal, in radical interpretation, to rationalizing *dispositions to act* given the experiences undergone, the “actions” are to be cashed out in terms of teleoinformational contents.

The focus here has been the contents of certain mental states—intentions, motor states and the like. Typical actions (raising an arm, hailing a taxi, etc) in part consist of physical movements of the body, so I haven’t yet quite earned the right to get from Sally-as-a-physical-system to Sally-as-acting-in-the-world. Further, there’s nothing in the account above that guarantees that states with content grounded in this way are personal-level and rationalizable, rather than subpersonal and arational. The exact prehension of my hand as it reaches for a cup is controlled, presumably, by states of my nervous system, and these states may have a function to produce the subtle movements. But the details are not chosen by me. I don’t, for example, believe that by moving my fingers just so I will grasp the cup, and hence form a person-level intention to move my fingers like that. Rather, I intend to grasp the cup, and rely on downstream processing to take care of the fine details.

So there’s work to be done in shaping the raw material of first-layer intentionality just described into a form where it can feed into the layer-2 story about radical interpretation that I am putting forward. This may involve refining the formulation of radical interpretation in addition to focusing in on the correct contentful states. It’s open to us to question whether actions are the things that need to be rationalized, or whether that’s just a hangover from the (behaviourist?) idea that overt bodily movements form a physicalistic basis for radical interpretation. Readers will now spot, however, that these are just salient examples of the same point we saw earlier with perception and experience. In both cases, we need to show how the material grounded in the teleosemantic account of sensation/perception and intention/motor states allows us to characterize the relata of the rationalization relation at the heart of radical interpretation.

In this post and the previous, I’ve given you my story about the foundations of layer-1 intentionality, in one case directly lifted from the teleosemantic tradition; in the other, a mirror-image adaptation of it. Three items now define our agenda for the rest of this subseries.

  1. We need to explain how the raw materials are shaped into an account of the base facts for radical interpretation: the relata of substantive rationality.
  2. As flagged in the first post in this subseries, we need an account of what our interpretee’s options were, those not taken as well as those taken, since we rationalize choices or actions against a backdrop of available options.
  3. The appeal to “functions” of elements of biological systems (specifically, sensory-perceptual and intentional-motor) is a working primitive of this account. That will continue to be the case, but I want to at least briefly look at the problems that may arise, to reassure ourselves that the account won’t be dead on arrival.

NoR 3.2: Experience

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

The job of the next few posts is to fill out the details of how source-intentionality is to be grounded. And as flagged, here I will draw on Karen Neander’s recent defence of a teleosemantic account of sensory-perceptual content.

This post lays out Neander’s approach to perceptual content. As mentioned, Neander concentrates on the representational content of sensory-perceptual states—so ones that occur within a particular cognitive system. Her story comprises two steps. The first is the following:

  • A sensory-perceptual representation R in sensory-perceptual system S has [E occurs] as a candidate-content iff S has the function to produce R-type events in response to E-type events (as such).

So let’s unpack this. The key notion here is the appeal to the function of something within a certain system. It’s this appeal that makes the account part of the teleosemantic tradition. Now, there’s a lot that could be said about what grounds facts of the form “x has function y in system z”. All we need, for now, is the assumption that these are “naturalistically respectable” and grounded prior to and independently of any representational facts. So for example, a theological account of functions, whereby the function of x in z is y iff God designed x to do y in z, is out. More subtly, a stance-relative account of functions, whereby the function of x in z for a theorist t depends on t’s projects and aims, is also out. But an etiological theory of functions, whereby the function of x in z is y iff x’s in z were evolutionarily selected to do y, is an option. The details matter, of course (the details always matter) but for now, we’ll treat functions as a working primitive.

Neander’s proposal is that once we see Sally’s sensory-perceptual system as containing states with a variety of functions, it is response-functions that hold the key to analyzing perceptual content. The system is functioning “as designed” when a certain worldly event-type causes a specific state-type to be tokened within it. Consider the following non-biological example. Runners passing a checkpoint throw a tab with their number into a bucket. The system is functioning “as designed” when runner number 150 passing the checkpoint causes there to be a tab with 150 inscribed upon it in the bucket (the causal mechanism is the runner throwing a random tab from those on a loop on their belt into the bucket). Of course, things can go wrong (the runner can forget to throw the tab, they can miss the bucket, they may have been given a wrongly-inscribed tab at the start) but those would be cases of the system malfunctioning.

Designed systems, at least, can have “response-functions”. In such cases it’s very natural to think that it’s in virtue of the response-function that the contents of the bucket record or represent the runners who have passed. Neander’s contention is that biological systems with etiological functions can work analogously. Because the grounding of the relevant functions doesn’t require intentions or design but just a pattern of selection in evolutionary history, this is a way of grounding such representation in non-representational facts.

Now, one famous challenge to naturalistic theories of representation (especially perceptual representation) was to distinguish those items in the causal history of an episode of perception which figure in the content of the perception from those that do not. For example, a red cube observed from a given angle causes a certain pattern of retinal stimulation, which in turn causes a certain state R to obtain in the sensory-perceptual system. The perception has a content that concerns red cubes, not retinal stimulations. Yet it’s perfectly true that part of a well-functioning sensory-perceptual system is that it responds to retinal stimulations of a certain pattern by producing R. It’s also true that the well-functioning system produces R in response to red cubes at the given angle—indeed, within the system, the response to retinal stimulation is the means by which it responds to “distal” red cubes. But we had better not analyze perceptual content as anything to which the perceptual system has a function to produce states in response, else we’ll include proximal and distal events together. This is why the gloss above talks of “candidate contents” not “contents” simpliciter. Neander appeals to an asymmetric means-end relation in the functioning of the system to narrow things down. Here is my reconstruction of her proposal:

  • Among candidate contents E1, E2, let E1>E2 iff in S, the function to produce R-type events as a response to E1 is a means to the end of producing R-type events as a response to E2, but not vice versa.
  • The content of R is the >-minimal candidate content (if there are many >-minimal contents, then the content of R is indeterminate between them).

Suppose that I perceive a red cube to my right, and suppose that the vehicle of this representation is a single state of my sensory-perceptual system (presumably a state produced after a fair degree of processing has gone on). What grounds the fact that that token state represents what it does? On this account, it is because in the evolutionary history of this biological system, states of that type were produced in response to the presence of red cubes to the right of the perceiver, and this feature was selected for. The process by which the states were produced includes intermediary objects and properties, and the sensory-perceptual state was produced in response to those no less than the red cube  (perhaps the intermediary states include three mutually orthogonal red surfaces orientated towards the subject, a certain pattern of retinal stimulation in the subject, etc). However, the function to respond to such intermediaries is a mere means to the end of responding to the presence of *red cubes to the right*.

I will be using Neander’s theory as my account of the first-layer intentionality in perception. When we see appeal, in radical interpretation, to rationalizing dispositions to act given the *experiences* undergone, the “experiences” are to be cashed out in terms of teleoinformational contents. As I mentioned in the last post, there’s further work to be done in turning these representational raw materials into the kind of base facts that radical interpretation needs—identifying the relata of rationalization. How do we get from the content of possibly subpersonal representational states of the sensory-perceptual system, to the content of experience, and ultimately to the impact of that experience on rational belief? This will be addressed in future posts.

NoR 3.1: Source intentionality

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Back in my first post, I set out an account of the nature of representation (or at least: core kinds of representation) that broke the task into three layers, each building on the last. I then started in the middle, by setting out a story about how representational properties of beliefs and desires are grounded (layer 2 representations). That story was radical interpretation, and the key thesis was this:

* The correct interpretation of an agent x is that one which best rationalizes x’s dispositions to act in the light of the courses of experience x undergoes.

To further unpack that story, I distinguished between the base and the selectional ideology. The base consisted of what our interpretee (Sally) is disposed to do, given various courses of experiences she might undergo (other base facts, somewhat more implicit, are facts about the reidentification of Sally over worlds and time). The selectional ideology is then whatever else is needed to pin down a correct interpretation from these base facts. The key notion here is that of “best rationalization”.

At that point, I set the question of the grounding of base facts aside, since the immediate problem that we encountered (the bubble puzzle) would arise on any plausible story about these facts and their nature. What we needed to get clear about to address this problem, I argued, was the other element of the story—rationalization. The key to resolving the bubble puzzle was to set aside a traditional construal of “rationalization” as picking out only formal, structural rationality. Instead, the needed selectional ideology is “substantive” rationality, which makes a broader appeal to what particular contents Sally ought to believe, given her evidence (what she is justified in believing) and how she ought to act, given a set of beliefs and desires. We then moved to investigate the consequences of that theoretical setting, showing that radical interpretation offers quite specific predictions and explanations of the denotations of concepts of various types. Substantive rationality was again central here, since normative assumptions in epistemology or practical reason always played a key role, alongside assumptions about the internal cognitive architecture of the agents in question.

While rationalization has to this point played the starring role, it is only one half of the resources needed to get Radical Interpretation up and running. The base facts, as well as the selectional ideology, need to be in place. Indeed, Radical Interpretation can be viewed as a story about how one set of representational facts is “transformed” to bring about another. So we need those “source” representational/intentional facts to be in place so we have something to work with (I am here borrowing Adam Pautz’s nice terminology). That is why I think of Radical Interpretation as a story about a second layer of representation, built on and presupposing a more primordial kind of representation: that of perception and action.

My formulation assumes that facts about action and experience are representational facts. I think the true, layered structure of radical interpretation has been hidden from view by equivocation on this point. Both action and experience are closely related to other nonrepresentational facts—facts about motions of the body, and about patterns of sensory (e.g. retinal) stimulation. Just as there is a possible project—a cousin of my own—which reads “rationalization” as thin, structural rationalization, and seeks to develop radical interpretation on that basis, there is another possible project which seeks to develop radical interpretation with only non-representational facts about behaviour and sensation in the base. We have already seen the primary obstacle to the former approach—the bubble puzzle. This time, I’ll reverse the order, and first of all develop the positive account of first-layer intentionality which would underpin radical interpretation as I set it out, and only afterwards consider the relative attractions of an account built on the thinner, non-representational base. From now on, therefore, I will assume that to give a full account of radical interpretation, we need a prior and independent account of the first-layer, source intentionality of experience and action.

At this point, there is a fork in the road. Well, maybe more than a fork: we could head off-road in several directions, but here are what I see as the two theoretical highways.

  • We could, following Adam Pautz’s lead, pair radical interpretation with a non-reductive account of the intentionality of experience and action. More specifically, Pautz contends that we should take the intentional features of phenomenology of conscious experience as a metaphysical primitive.
  • We could preserve the original ambition to reduce representational facts to the non-representational. Having reduced belief/desire intentionality inter alia to representational facts about experience, we then stand in need of another reductive story about this “source intentionality”. This story will have to be prior to and independent of facts about belief and desire representation, so that we don’t go round in circles. And it won’t be radical interpretation, since that shot has already been fired.

My proposal is that we go for the second of these options. More specifically, I intend to build on Karen Neander’s work. This is an account of representation that sits squarely in a tradition often opposed to radical interpretation—teleosemantics. But Neander explicitly presents her theory just as an account of the intentionality of experience, and the narrowing of focus (setting aside the analysis of representational facts about belief and desire for another day) helps her defend the account against objections that bite against other views in that tradition. This looks like a match made in heaven! Neander has a story about what grounds (some) layer-1 representational facts. The Radical interpreter has a story about how layer-2 representational facts emerge from the layer-1 facts. Plug and play, and the job is done. (Well, actually, it’s not going to be that simple, as we’ll see).

First issue. One thing that stands out right from the start—and afflicts Pautz’s proposed primitivism as well as Neander’s reductionism—is that both accounts are developed as an account of the representational properties of experience. But radical interpretation, as I developed it, includes in its base the representational features of action, as well. So having a story about the representational features of perception is not enough—some extension or supplementation is called for. And, for the case of Neander’s treatment of perception, I’ll be providing the required extension in this subsequence of posts.

Second issue. Going back to the case of experience, even if we ground the representational content of some experiential states teleosemantically, it’s not automatic that those states and their contents are suited to play the role demanded by radical interpretation. For example, I see a chicken with nine spots. My visual system may represent nine spots, but I do not attend to or count the spots. I may be unsure how many spots the chicken has. In this case, some of the representational content of my visual system has not been “uptaken” by the wider cognitive system. This is a place the radical interpreter must tread with care. On the simplest Bayesian models of rationality, for example, the “evidence” we need to extract from layer-1 intentionality is something on which we update by conditionalization, and so, post-update, we are certain that the world is that way. On that model of rational update, the contents of the perceptual states are not suitable relata of the rationalization relation; they do not play that particular “evidence” role (this is even before we come to consider cases such as perceptual illusions and the like). Now, of course, the lesson to draw from this may just be that the simplest Bayesian models are wrong. Be that as it may, it illustrates that once we have layer-1 representation in place, we have further work to do to integrate it with the layer-2 story we’ve seen so far. (There are analogous issues to consider also on the action/intention side, where the output of a system of rational decision is presumably much coarser than the detailed content of motor states.)
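The point about conditionalization can be made precise with a toy calculation (the probability space and numbers below are invented for illustration): updating by conditionalization on evidence E sends every credence P(A) to P(A|E), so the post-update credence in E itself is P(E|E) = 1. The agent becomes certain of the evidence.

```python
# Toy illustration: Bayesian conditionalization forces certainty in the evidence.

def conditionalize(prior, evidence):
    """Update a discrete credence function by conditionalizing on the
    set of worlds `evidence`: P_new(w) = P(w) / P(E) for w in E, else 0."""
    p_e = sum(p for world, p in prior.items() if world in evidence)
    if p_e == 0:
        raise ValueError("Cannot conditionalize on a zero-probability event")
    return {world: (p / p_e if world in evidence else 0.0)
            for world, p in prior.items()}

# Worlds distinguish how many spots the chicken has (invented numbers).
prior = {"eight_spots": 0.25, "nine_spots": 0.5, "ten_spots": 0.25}

# Treating the perceptual content "nine spots" as evidence:
posterior = conditionalize(prior, {"nine_spots"})
print(posterior["nine_spots"])  # 1.0
```

Since I may remain unsure how many spots the chicken has, the visual system's "nine spots" content cannot play this evidence role on the simple model: conditionalizing on it would leave me certain.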

Third issue. Suppose we had a fix (somehow or other) on layer-1 facts about what an agent is experiencing, and what she is doing. Suppose we had succeeded in getting this at the right level of “grain” to mesh with belief and desire. There’s still a missing element that I will argue is critical to getting an adequate set of base facts for radical interpretation. This is to give an account of what the agent’s options were, amongst which she chose (on the basis of her evidence) to do what she did. The agent’s options, in the sense that matters for rationalization, are not simply behaviours that are physically possible for her. It is possible for me to hit the bullseye with a dart from ten metres away, but that doesn’t make it an option in the relevant sense (e.g. even if the dart hitting the bullseye would bring great benefits to me and no costs, the rational thing for me to do may be to put down the dart and walk away, for fear of the consequences of failing to hit the bullseye). Options in the relevant sense are things the agent has control over; what they can “do at will”. And this relation to what the subject wills or intends means that an adequate account of options is likely to involve representational resources. (Options don’t figure as such in the gloss on radical interpretation I gave above. By the end of this series of posts, we’ll be in a position to put forward a more refined version of the way the various base facts, including options, show up.)

Accounting for options is the challenge that I would pose to anyone wishing to claim that the base for radical interpretation consists in non-representational facts about sensation and behaviour. Options not taken have no behavioural signature. If I throw a ball at a window, then my limbs are moving in distinctive ways in relation to things in my environment. But if I have the option to throw a ball at the window and do not take it, continuing to type instead, the trajectory of my body is keyboard-orientated, and I stand in no obviously special physical relation to the ball and window. So while there are purely physical correlates to experience and action, I simply don’t know how the advocate of the more austere alternative is planning to set up their theory.

I’ll be exploring the reductive approach to source intentionality. I hope I’ll also, down the line, have the chance to compare and contrast my approach with Pautzian primitivism, but for now, some initial remarks will have to do. Even at this stage, we can see that buying into primitive representational features of conscious experience is only the start of the commitments we would need to prosecute a primitivist approach to source intentionality. Actions/intentions also demand treatment, and it’s unclear from Pautz’s published work how he would cover that case. One approach is to multiply representational primitives. Perhaps the representational properties of intentions as well as experiences are metaphysical bedrock. An alternative is to seek a reduction of all kinds of source intentionality to the intentionality of experience—for example, by appealing to our experience of our own actions. Neither route is straightforward or cost-free, and those who are tempted to follow him should bear in mind the need to provide not only for the representational properties of actions and intentions, but also for (intentional) truths about the agents’ options.

NoR 2.6: Wrapping up section 2

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

We get to know a philosophical theory by considering what it predicts. We evaluate it by figuring out whether what it predicts is plausible and provides a good explanation. Radical interpretation, taken neat or with only uncontentious assumptions, appears not to predict all that much. It can do some things. For example, the bubble interpretations of agents, on which they’re agnostic about the nature of the world outside their bubble, were a major problem for structural radical interpretation. But substantive radical interpretation, given only the minimal gloss that maximizing substantive rationality involves, ceteris paribus, maximizing epistemic justification, predicts that the bubble interpretation is incorrect. Even this result is not completely neutral: we need the first-order epistemological assumption that suspending judgement on the existence of the external world is (given our interpretee’s evidence) an unjustified attitude to adopt. But in comparison to what we’ve been looking at more recently, this is neutral on many issues of cognitive architecture and normative theory. But while ruling out one obviously incorrect interpretation is a predictive success, it’s not much to work with. A Godlike intelligence could perhaps look down at a creature’s pattern of actions and evidence, pick out the most substantively rational interpretation, and read off a list of predictions that could be subsequently checked for plausibility. But that is no comfort for limited theorists like you and me. We need to do better.

In the last subsequence of posts, I’ve shown how–in conjunction with auxiliary assumptions–radical interpretation will predict a great deal. None of the auxiliary assumptions concerns the metaphysics of representation per se. Rather, they amount to particular claims in epistemology, practical normativity, or about how our psychological processing (“cognitive architecture”) works. But add them to radical interpretation as a metaphysics of representation, and the predictions and explanations about representation start to flow. And of course, for every categorical prediction C derived on the basis of radical interpretation plus auxiliary assumptions A and N, radical interpretation on its own gives the following conditional prediction: if A and N, then C.

We are then in a position to evaluate radical interpretation on the grounds of whether these conditionals themselves are plausible or not. In principle, the result could have gone either way. It could have been, for example, that when you add together a plausible epistemology and cognitive architecture, you find radical interpretation undergenerating representational results. It could have been, for example, that within an inferentialist architecture, radical interpretation was unable to explain why a concept deployed like conjunction denotes conjunction. It could have been that the cognitive structures plausibly associated with a perceptual demonstrative leave their referent wildly indeterminate, unless we threw in more than just radical interpretation+plausible epistemology (e.g. it might really have been that *causal* or *naturalness* “saving constraints” in the theory of meaning would prove necessary). But in the test cases that I’ve been considering, this hasn’t happened. On the contrary, I’ve been able to reconstruct independently-motivated claims about patterns in what refers to what. This is promising, and allows radical interpretation to inherit predictions about concrete cases that advocates of the more local “theories of reference” built up in constructing their theories.

(Of course, it also inherits the vulnerabilities of such theories, though where the criticism is simply that the cognitive architecture is wrong, that simply shifts us to considering a different conditional prediction, and before taking radical interpretation to be refuted by a local counterexample, we might also consider whether it is faulty or incomplete normative assumptions that are really at fault).

The first moral I suggest we draw from this subsequence of posts is that the conditionals I have derived, in five different cases, constitute reasons to believe radical interpretation. Notice! The evidence I cite in its favour is that the targeted conditionals are correct. You could agree with me in this respect, even if you don’t endorse their antecedents. It’s natural for readers to want to consider whether the pattern of success is maintained for those conditionals whose antecedent assumptions about architecture and normativity they are prepared to endorse (or those they take more seriously). Having provided the model in the last few posts, I would be delighted to hear about the results of such further case studies.

In addition to the work done in providing reasons to believe radical interpretation, these case studies deliver further illumination in several respects.

A key theme in all our case studies is the relation between different literatures in the metaphysics of representation. On the one hand, there are (apparently) competing foundational theories of reference such as teleosemantics, Fodor’s causally-driven psychosemantics, and radical interpretation. On the other, there are more local literatures where “causal” and “descriptive” theories of reference are in competition, or where authors are trying to work out what the connection is between the way we use logical concepts and their denotation. Often the first literature is explicitly reductive in motivation, whereas the latter may disclaim such ambitions. Contrast, for example, the spirit of Fodor’s asymmetric dependency “causal theory of reference” with Kripke’s work that goes under that title. Kripke seeks to capture a pattern in the way reference works, but has no ambition thereby to contribute to a project that would reduce reference to something more naturalistically friendly. But that reductive project is explicitly the motivation for Fodor’s account.

My story has the two approaches naturally meshing. The more Kripkean non-reductive project systematizes patterns in facts about what our concepts denote, connecting this to other features of us. There’s no reason to expect the most interesting such patterns can be stated in a way free of representational idioms. For example, Dickie’s story about perceptual demonstratives takes for granted throughout the interpretation of the observational concepts featuring in the bodies of belief associated with a given demonstrative concept. The pattern she articulates doesn’t require supplementation with a reduction of predicate-reference to be interesting. We might use such patterns, for example, as the ingredients of an Evansian treatment of the sense of singular concepts—something I’ll be looking at later. And they may be explanatorily valuable in other ways—for example, insofar as subjects are aware of the existence of these sorts of patterns, they may use them in working out the likely content of another’s representations, and these facts may help explain how two subjects interact—e.g. I might infer that the thing to which you are perceptually linked is dangerous on the grounds of (1) hearing you express a perceptual demonstrative belief that that thing is dangerous, and (2) my knowledge of the patterns of reference involving perceptual demonstrative concepts. (Stalnaker’s work putting “metasemantics” to work in explaining communicative phenomena is the inspiration for this kind of explanatory project). But even once we see there’s a lot of reason to be interested in identifying patterns in the referents our concepts receive, we can and should wonder why those patterns emerge—what unifies and explains them. Even if we detect a “common pattern” in theories of reference (“determination theories”) as Peacocke claims to have done, we might wonder why that meta-pattern emerges.
Here, there is call for a more explicitly metaphysical, and reductively-constrained, theory that underwrites all the more local theories of reference. And it is this lacuna that, I have argued, radical interpretation fills.

This perspective on the two kinds of project and their relation leads to two subsidiary benefits.

The first subsidiary benefit is to shape the way we articulate local patterns of reference. For example, my treatment of the way the reference of “morally wrong” is fixed does not (unlike Wedgwood’s) appeal to a notion of validity applying to the transition from moral judgement to preference. The centrality of validity in Wedgwood’s account is, I argued, an overgeneralization of a pattern Wedgwood takes from Peacocke, and there’s simply no need for this if it is radical interpretation that gives the principled unification of local “determination theories”.

The second subsidiary benefit is to allow us to see better how to divide labour between foundational theory and pattern-articulation. For example, in the Lewisian tradition, “naturalness” of candidate referents has long been seen as an element in the foundational theory of representation. But we see on the present perspective how to locate it instead as a determinant of local patterns of reference for a broad class of “inductive” concepts—and we can also see how and why naturalness enters into the derivation of that pattern at a late stage (as part of a particular gloss on simplicity) so that it emerges as one of a whole family of possible patterns we get by varying epistemological factors.

Finally, one thing that we have gained through these case studies is the resources that will be needed to deal with a whole range of underdetermination/indeterminacy/inscrutability challenges. The bubble puzzle is one such challenge, particularly suited to radical interpretation in its most general form. Many others widely discussed in the literature (skolemite problems for quantification, permutation challenges for reference, Kripkensteinian and Quinean challenges to predicate-interpretation, the “problem of the many” and others) were developed with an eye to other theoretical settings, and need adapting to speak to the setting here. But ultimately, a good foundational theory needs to show where exactly the adapted versions of these famous puzzles go wrong. Quantification is a good example of the sort of satisfying resolution substantive radical interpretation promises. In my discussion of that case, I show how a plausible epistemology and plausible inferentialist architecture favour an unrestricted interpretation of our broadest quantificational concepts. While underdetermination challenges remain excellent tools for testing our overall theory, already the discussion to this point gives us everything we will need to pass the examination.

Supplement to 2.5: Schwarz on naturalness and induction.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Wolfgang Schwarz’s paper “Against Magnetism” gives a rather different perspective on the Lewisian treatment of induction than the one I have been developing. Wo notes, and I acknowledge, that Lewis himself did not buy into the architectural assumptions I have been throwing in throughout this subseries of posts. In particular, he denied that we should assume sentence-like vehicles for individual beliefs. Lewis liked an account where holistic belief-states were attributed to agents. And just to be clear: my version of radical interpretation per se is not committed to the sentence-like structure. Just like Lewis, I see it as a hypothesis about the contingent cognitive architecture that we may or may not have. I just don’t see why we shouldn’t be interested also in what the theory predicts under those hypotheses. It would be interesting, though, if Lewis himself had found a way to connect up naturalness to inductive reasoning and radical interpretation without the aid of such hypotheses. Wo sketches one way in which this might arise:

Here is the key quote from Wo’s paper:

“…rational agents should assign high prior probability to the assumption that nature is uniform. But uniform in what respect: should one believe that emeralds are uniform with respect to green or with respect to grue? Here we can appeal to natural properties: one should assign high probability to worlds that are uniform with respect to patterns in the distribution of fundamental properties. At least in worlds like ours, attributes like green (unlike grue) supervene on intrinsic physical features of their instances: perfect duplicates never differ in colour. Hence if unobserved emeralds are similar to observed ones in their fundamental physical properties, it is plausible that they will also be green and not blue. It does not matter, for this proposal, whether green, or being in a world where all emeralds are green, are themselves particularly natural.”

The general idea is that if one has high prior probability in uniformity among patterns of fundamental properties, one needn’t know in detail exactly how other, macroscopic properties relate to the fundamental properties, to know that they too will be uniform.

The question is how to leverage that insight. Schwarz emphasizes the following feature of green/emerald: that they locally supervene on intrinsic physical features of their instances. Now, that’s compatible with there being a large set of different physical descriptions, any one of which would suffice for being an emerald, with no non-disjunctive single description being both necessary and sufficient. The same goes for being green. That observation fits with Schwarz’s remark that he is not assuming that the properties themselves are particularly natural. Unfortunately the same feature means that there’s no real reason to think that the confidence in the uniformity of fundamental patterns will generate confidence in emerald/green uniformity. Suppose that being P or Q or R is necessary and sufficient for being an emerald, and each disjunct is an intrinsic physical description. And suppose A or B or C is necessary and sufficient for being green (again, with the disjuncts intrinsic physical descriptions). Now, it might be that all observed emeralds are P-emeralds and all observed emeralds are A-green, and so all observed Ps are A. With high probability, we could project this pattern in the fundamentals: all Ps are As. But that alone tells us nothing about the colour of Q-emeralds or R-emeralds. They could be A, or B, or C (i.e. green), or none of the above (i.e. not-green), without there being any lack of uniformity in patterns in the fundamentals.

But there is another element in the quote from Schwarz. He at one point adds the assumption that “unobserved emeralds are similar to observed ones in their physical properties”. This does not follow from the stated supervenience assumption: things which are P and things which are R can be quite unlike each other, physically. So this should be seen as an additional assumption concerning intra-world (lack of) variation: though emeralds (/green things) may have all sorts of different physical features in other possible worlds, within the actual world, all emeralds (/green things) share the same (non-disjunctive) physical property, E (/G).

Now, this still allows emeralds/green things to be “unnatural” by many Lewisian measures–any necessarily equivalent description might be infinitely disjunctive. But if it’s going to do work for us, we do need I think to assume there is some extensional characterization of them that is not disjunctive, or at least, is the sort of description that we can reasonably take to specify a “pattern” in the physical fundamentals, in the sense relevant to the uniformity constraint on the priors.

Let’s introduce the label “E1” for the (unknown-to-the-agent) property in fact possessed by all and only the things that are emeralds. Let “G1” play the same role for green things. And now, the crucial thing is that we have high confidence that all E1s are G1, conditional on all observed E1s being G1, as part of the general fundamental uniformity assumption. The same goes for many other analogous instances: all Eis being Gj, for arbitrary i and j.

Now, I can imagine the story running as follows. First, some information about the agent’s prior credences:

  1. For every i,j: C(all Ei are Gj|all observed Ei are Gj) is high.
  2. C([(x)(x is an emerald iff x is E1) v (x)(x is an emerald iff x is E2)…..])=1
  3. C([(x)(x is green iff x is G1) v (x)(x is green iff x is G2)…..])=1

(1) is the uniformity constraint on priors. (2) and (3) encode the assumption that the agent is certain that actual emeralds (/green things) are physically similar to each other in some respect or other. We now argue that by (2) and (3) the agent’s credal space divides into cells, according to which property necessarily covaries with being an emerald, and which with being green. Within the emerald=Ei/green=Gj cell, we can cite the appropriate instance of (1) to get that C(all emeralds are green|all observed emeralds are green&Ei=E&Gj=G) is high. When we glue these cells back together, we get that C(all emeralds are green|all observed emeralds are green) is high.
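The gluing step is just an application of the law of total probability over the cells. Here is a minimal numerical sketch in Python (the two-by-two cell structure and all the probability values are invented purely for illustration; nothing hangs on the particular numbers):

```python
# Toy model of the cell-gluing step (all numbers invented for illustration).
# A "cell" is a hypothesis about which fundamental property pair (Ei, Gj)
# necessarily covaries with being an emerald and being green.

# Credence in each cell, conditional on the evidence
# "all observed emeralds are green".
cell_probs = {("E1", "G1"): 0.4, ("E1", "G2"): 0.1,
              ("E2", "G1"): 0.2, ("E2", "G2"): 0.3}

# Constraint (1): within any cell, confidence in "all Ei are Gj" given
# "all observed Ei are Gj" is high -- say 0.95 across the board.
within_cell = 0.95

# Law of total probability: C(all emeralds are green | evidence) is the
# cell-weighted average of the within-cell confidences.
glued = sum(p * within_cell for p in cell_probs.values())

# Since the cells exhaust the credal space (their probabilities sum to 1),
# the glued confidence inherits the high within-cell level.
assert abs(glued - within_cell) < 1e-9
```

The point the sketch makes vivid is that the glued confidence can be high even though the agent has no idea which cell she is in.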

Okay, this reasoning needs studying. But if it’s what Schwarz intended, it provides a different kind of perspective on the way that inductive generalizations could relate to considerations of naturalness.

A crucial question is where the agent’s confidence in (2) and (3) comes from. In virtue of what are we so confident that actual emeralds/green things are physically unified? Is this based on evidence, or are we somehow default-justified in assuming such uniformity? Notice that these assumptions play no role in the story outlined in the last two posts. And they would be false for many things on which we are prepared to induct (as Jackson notes, being observed by Sam can be a perfectly good “projectable” property, so long as Sam isn’t the one doing the projecting). Wrongness is not an intrinsic property of acts, but we inductively generalize upon it. So we really do have a different account here (though one compatible with the same overarching theory of radical interpretation, but embedded within very different architectural and normative assumptions).

NoR 2.5b: Inductive concepts and Reference Magnetism

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

In the previous post, I argued that a default-disposition to use concept c in inductive generalization means that simpler properties are favoured as the denotation of c. This result is absolutely in keeping with the earlier examples of drawing out specific predictions from radical interpretation: it is derived from normative and architectural assumptions in just the same way.

In this post, I want to connect this idea of “simpler” properties with “natural” properties (i.e. those which are somehow “close to” the fundamental or perfectly natural features of the world). Specifically, I want to articulate the perspective this gives us on the influential claim that natural properties are reference magnets.

We get the connection between claims about simplicity in the previous post and claims about naturalness if we add one additional (unforced, and contentious) claim to the mix. The claim is explicit in Lewis’s discussion of theoretical virtues in the context of laws of nature: that the canonical language–the one that gives us a level playing field on which to measure compact expressibility of theories–is one which contains only broadly logical resources and predicates for every metaphysically fundamental property and relation. Given this, if we follow Lewis in defining a property’s degree of naturalness as the length of its shortest definition in this canonical language, then F will be simpler than G iff F is “more natural” than G (i.e. has a lower degree of naturalness). So given the full Lewisian treatment of simplicity, we derive a version of the famous Lewisian “reference magnetic” constraint on interpretation: that, all else equal, interpretations that ascribe more natural properties are to be favoured.

(Compare the discussion of naturalness and simplicity in my “Eligibility and Inscrutability” and the discussion of alternative canonical bases in my “Lewis on Reference and Eligibility”. Both those discussions take place in the context of the foundations of linguistic representation, and I was concentrating on simplicity as it applies to the theorist’s interpretation, not the subject’s inductive inferences. But the connection is the same).

Now, even if we take this final step, the naturalness constraint here is not the one that appears in most of the literature. It does not imply that there is a general bias towards interpreting a general concept as natural. In order for the above to kick in, the concept must be one of those that we are default-disposed to deploy in inductive inferences. A fortiori, there’s no immediate extension of these considerations to a “naturalness” constraint on reference-fixing for concepts in other categories (quantifiers, singular terms, connectives, etc)—even if we allow, with Sider, that the semantic values of such terms can be ranked by naturalness. If we are to secure that result, I think the way to go would be to argue that we are default-disposed to induct on complex general terms as well as simple ones. If “all”, “and”, “that” and the like figure in inductive complex general terms, then just as before pressure arises to interpret them as denoting the simplest entities in their respective category, and given the final step above, this will favour the most natural candidate referents. This is still selective: if “or”, for example, never figures in inductive complex general terms, no naturalness constraint on its denotation will arise.

I’ve encountered many who think the whole idea of natural properties being reference magnets is prima facie weird and unmotivated. But the discussion to this point should undercut that idea. Its source is exactly the same as (I have argued) that of other kinds of theories of reference–a first-order normative theory paired with assumptions about cognitive architecture. It is no ad hoc primitive piece of metaphysical prejudice, but something that falls out of the story given suitable auxiliary background.

We need not accept that background, and it is worth charting the various points at which one can get off the boat.

  1. We can resist the Lewisian package on simplicity by refusing to take that final step and identify the simplicity-measuring language with a language populated with metaphysically fundamental predicates. If we still allow there is a “canonical” simplicity-measuring language, then properties picked out by the atomic predicates in that language will be predicted to be “reference magnets” for inductive concepts, for exactly the same reasons as before.
  2. If we resist the Lewisian package by refusing to identify simplicity with elegance-in-a-canonical-language, then we won’t get a picture isomorphic to Lewis’s. But if there is still a distinctive systematic contribution that properties make to the simplicity of (interpreted) theories in which they figure, so that we can grade properties as more or less simple, then we still get a reference-magnetic thesis, but framed around simplicity (treated now as a working primitive) and detached from naturalness.
  3. If we resist the Lewisian package at its first step, by giving a different account of the theoretical virtues that doesn’t cite simplicity among them, we won’t get this. But still, a parallel discussion may be conducted. Some (e.g. Sider) express sympathy for the idea that naturalness itself is a virtue of theories. That would give a rather more direct route to the conclusion that naturalness is reference-magnetic in this context! But even if one sets aside simplicity and naturalness, if some properties are more suited to figure in explanations than others, then, all else equal, those will be favoured as referents.

As this illustrates, one has to work quite hard not to get a reference-magnetic thesis out of these considerations—though what property exactly turns out to be magnetic will be sensitive to the fine details of one’s account of the explanatory virtues.

The connection between radical interpretation, induction and Lewis’s naturalness constraint has been suggested before (Pautz 2013, Weatherson 2013), and I was led to the above story by thinking through these authors’ writings. But my account is not based around what Weatherson calls “inductive dogmatism” (nor is Weatherson’s own, in its published version). That theory cuts out the background of IBE. No connection is made, then, to considerations of simplicity or other theoretical virtues. In their place is an epistemology specifically of induction, of the shape that Goodman suggested. Here, faced with an enumerative inference, the major theoretical question is just this: which properties make an inference with the enumerative inductive form good (call those the projectable properties)? On this reading, what forges a link to Lewisian naturalness is the thesis (not to my knowledge ever endorsed by Lewis) that the projectable properties are those that are natural enough (rather than, as in Goodman himself, those that are historically entrenched in inductive practice).

What we get from this picture is the thesis that there is a cut-off (maybe a vaguely defined one) in the hierarchy of more or less natural properties, with those more natural than the threshold being suited to figure in induction, and those less natural than the threshold not. That is problematic (and particularly problematic as Lewis exegesis). Green is projectable; being positively charged and located before the midpoint of history or negatively charged and after the midpoint is not. But on various accounts (including Lewis’s) the latter will be more natural than the former.

If we think things through starting from IBE, the account we derive makes relative simplicity/naturalness the key factor. If we try to move directly to characterizing Goodman’s “projectability” in terms of a threshold of naturalness, an absolute notion of “being natural enough” plays the key role. I think the former is much nicer.


NoR 2.5a: Inductive concepts

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

The previous sections have had a common pattern. I’ve picked out a certain concept, laid out some assumptions about the way it is deployed, and shown how particular theses about its denotation fall out. Crucial premises in the derivation (in addition to the assumptions about “cognitive architecture”) are radical interpretation and specific (epistemological, practical) normative theses. We could continue this theme for descriptive concepts—starting, perhaps, from concepts of primary and secondary observable properties presupposed in the account of perceptual demonstrative concepts.

However, in this section I’m aiming for something a little different. Rather than arguing for a specific denotation for a specific concept for a specific conceptual role, I’ll be discussing a feature that characterizes our deployment of many concepts, and showing how (given radical interpretation) this constrains what they denote. I’ll call these concepts inductive, and the assumption about cognitive architecture I’ll be making (in addition to the usual generic ones) is that we’re disposed to indulge in inductive generalizations using such concepts. Observable (primary and secondary) concepts such as green and square are within the class, as are natural kind concepts such as being an emerald, being a tree or being positively charged. The class isn’t restricted to descriptive concepts: normative concepts (immoral, just, imprudent) are deployed in induction.

But there are concepts that don’t feature in inductive generalization. Famous foils are concepts like grue (=green and first observed before 2050, or blue and not first observed before 2050), or observed by me. Since so many concepts of interest plausibly are deployed in induction, any conclusions about their denotation we can draw from that feature will have wide application.

My focus in this post is developing a positive view of the relevance of inductive generalization to fixing the denotation of general concepts.

The starting point is a particular view of inductive generalization: a view on which it is a special case of “inference to the best explanation”. For Sally, a highly reflective thinker, the formation of a justified general belief might go as follows:

  1. All observed emeralds have been green (and those observations were carried out in thus-and-such a manner).
  2. All emeralds are green best explains (1).
  3. So: All emeralds are green.

“Observed” here should be read “observed by Sally”. Premise (1) includes the note about the manner in which observations were carried out because the fact that all observed Fs are green may require a very different explanation if the observations were carried out in an unbiased and controlled sampling, from the explanation that suggests itself if the observations were conducted in the museum of green things. The grounds on which Sally may endorse (1) can be various, but in the most basic case will be based in memories of individual episodes in which she has observed a green emerald and failure to recollect any countervailing instance.

Premise (2) appeals to facts about best explanation. What determines whether an explanation is best will be very important, but it’ll help for the comparisons that follow to follow a certain tradition and assume that this is a matter of a trade-off between features like: being consistent with (1), strength (entailing as much of what (1) entails as possible), and simplicity of the hypothesis. The grounds on which Sally may endorse (2) are again various, but presumably she casts her mind over a range of salient rival hypotheses consistent with (1) and evaluates them for relative simplicity and strength, judging the hypothesis in (3) the winner.

The assumption about cognitive architecture that we make, then, is that Sally finds the transition from (1) and (2) to (3) primitively compelling.

This is all very highly reflective. And surely we inductively generalize on the basis of experience without running through all this story. So perhaps what goes on is something like this (this is not an inference that Sally carries out, but a description of her psychology as she forms a general belief):

  • (A1.1) Sally remembers seeing emerald 1 in circumstances C1, and it was green.
  • (A1.2) Sally remembers seeing emerald 2 in circumstances C2, and it was green.
  • (A1.n) Sally remembers seeing emerald n in circumstances Cn, and it was green.
  • (A1.n+1) Sally tries and fails to remember seeing any non-green emerald.
    On that basis:
  • (A3) Sally forms the general belief that all emeralds are green.

So far as the psychology goes, this looks much more like a classic case of “enumerative induction”. And the A1.x facts are exactly the grounds on which, on more reflective occasions, Sally might endorse the original (1). But this formulation is not the whole epistemological story, since it doesn’t capture the epistemologically significant difference between Sally’s good reasoning and Goodman’s famous variant, where grue=either green and first observed before 2050, or blue and not first observed before 2050:

  • (G1.1) Sally remembers seeing emerald 1 in circumstances C1, and it was grue.
  • (G1.2) Sally remembers seeing emerald 2 in circumstances C2, and it was grue.
  • (G1.n) Sally remembers seeing emerald n in circumstances Cn, and it was grue.
  • (G1.n+1) Sally tries and fails to remember seeing any non-grue emerald.
    On that basis:
  • (G3) Sally forms the general belief that all emeralds are grue.

The belief formed in (G3) is inconsistent with the belief formed in (A3), while (so long as the circumstances Ci entail that the observation took place before 2050) the contents of the memories reported in each pair (A1.x) and (G1.x) are equivalent. Looking back to the original reflective case, what suggests itself is the following contrast:

  • A2: All emeralds are green best explains (A1.1)-(A1.n+1).
  • not-G2: All emeralds are grue does not best explain (G1.1)-(G1.n+1).

On the epistemology I consider, IBE-dogmatism, Sally is by default (i.e. in the absence of defeaters and undercutters) justified in believing a generalization such as (A3) when it is in fact the best explanation of (A1.1)-(A1.n+1). The sort of thing that would undercut this justification would be a not-obviously-worse candidate-explanation being salient to Sally. So it’s because A2 obtains that the A-inference produces a justified belief. Because G2 does not hold, the G-inference does not.

Moving from epistemology to features of cognitive architecture, what I’ll be assuming is that Sally is default-disposed to find the inference A1.1-A1.n+1 to A3 primitively compelling. She is similarly default-disposed to find other instances with the same form primitively compelling, so long as the concepts in the “green” and “emerald” positions are taken from a certain stock of concepts (which includes the usual observational concepts, natural kind concepts, normative concepts, etc). Let’s call that our stock of inductive concepts.

So just as in previous cases, we have assumptions about cognitive architecture and normative theory. We turn now to draw out their significance for reference-fixing, given radical interpretation.

One thing to note immediately is that interpreting Sally’s concept “green” as picking out the property grue will make her default-disposition to induct on green unjustified, since on that interpretation, in generalizing using the concept “green”, she will be making the bad G-inference rather than the good A-inference. And that moral generalizes: for every pair of inductive concepts c, d, the best interpretation F, G will, all else equal, be one which makes all Fs are G the best explanation of the fact that all observed Fs have been G.

If there are general features common to properties that figure in best explanations, then we could conclude at this point: all else equal, inductive concepts will denote properties with those required features. What might those be?

Well, consider what makes for something being the best explanation of data. Among those rivals consistent with the data, the best explanation needs to be optimally simple and strong. All else equal, it needs to be the simplest. So here’s a feature that properties featuring in best explanations will have: all else equal, they will be no less simple than those that feature in rival candidate explanations.


Two notes on this:
(1) I’m assuming that it makes sense to talk of a property being more or less simple, as well as of the propositions that ascribe that property.
(2) What’s important is not the absolute level of simplicity/complexity of a property, but its relative simplicity compared to rivals.

A treatment of simplicity that underwrites (1) is to be found in Lewis’s work on laws of nature. There, he suggests we treat simplicity (of an interpreted theory, which we can think of as a set of structured propositions) as a matter of what some would call its elegance: how compactly we can express the theory in language. But compactness of expression is sensitive to expressive resources, and so could vary across different languages; to secure objectivity, Lewis posited a “canonical” language in which theories are to be expressed for the purposes of measuring their compactness. Notice that this measure of simplicity applies just as much to properties as to sets of propositions. Simpler properties will be those that are more compactly definable in the canonical language. And the simplicity of an interpreted theory directly depends on the simplicity of the properties it contains—the longer it takes to express the properties, the longer it takes to express the theory that ascribes them.
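The description-length idea can be made vivid with a toy computation: represent definitions as expression trees over a “canonical” vocabulary, and let a property’s simplicity be the number of symbols its definition uses. Everything below (the primitives, the particular definitions of green and grue) is invented purely for illustration; this is a sketch of the measure, not Lewis’s own formalism:

```python
# Toy computation of simplicity as definition length in a canonical
# language. The primitives and the definitions of green and grue below
# are invented purely for illustration.

# Definitions as expression trees over canonical primitives.
green = ("GREEN",)  # pretend green is canonically primitive
grue = ("OR",
        ("AND", ("GREEN",), ("OBSERVED_BEFORE", 2050)),
        ("AND", ("BLUE",), ("NOT", ("OBSERVED_BEFORE", 2050))))

def definition_length(expr):
    """Number of symbols needed to write the definition out."""
    if not isinstance(expr, tuple):
        return 1
    return 1 + sum(definition_length(part) for part in expr[1:])

# Green is more compactly definable than grue, so it counts as simpler
# (i.e. has a lower degree of unnaturalness, on the Lewisian measure).
assert definition_length(green) < definition_length(grue)
```

Any theory that ascribes grue rather than green inherits the extra symbols, which is the sense in which the simplicity of a theory depends on the simplicity of the properties it contains.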

The upshot for us is the following: all else equal, the referent of an inductive concept will be the simplest of the candidates.  To finish off this post, here are some ways this result matters.

Consider the permuted interpretation of observational concepts introduced earlier. *Being the image under p of something green* is less compactly expressible, for any sensible choice of canonical language, than the property of *being green*. Explanations framed in terms of the former will be less simple, and so less good, than those framed in terms of the latter. This suggests a diagnosis of the challenge from permuted interpretations left open at the end of the post on demonstratives. The permuted interpretations depict the agent’s inductive dispositions as unjustified, and hence the agent as overall less rational, than the alternatives do.

Consider the Kripkensteinian property of being green within region R or blue and outside region R. Again, like permuted-green and grue, this is a less simple property than green, and so interpreting an agent’s green concept as denoting it will make the agent less rational than otherwise.

2.4 supplementary: comparison to Wedgwood

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

As my treatment of reference-fixing for conjunction and quantification stands to Peacocke’s account, so my treatment of reference-fixing for normative concepts stands to Ralph Wedgwood’s. Here I concentrate on the view he sets out in “Conceptual Role Semantics for Moral Terms”, Philosophical Review 2001. Since Wedgwood himself builds on Peacocke’s approach, this is perhaps not too surprising.

The six differences between Peacocke’s approach and my own that I earlier highlighted are again relevant here, and I pick up on one below.  There are a couple of more specific divergences.

Wedgwood’s paper focuses primarily on giving possession conditions and a determination theory for the concept B: all-things-considered-better-to-perform. And the “possession conditions” he sets out (the assumptions about cognitive architecture, in my terminology) are not like the ones I gave, which were appropriate to the moral case and linked normative judgements to blame. Instead, for Wedgwood B has a specific role in practical reasoning—roughly, a transition from a judgement that such-and-such is better to perform than so-and-so, to a preference for such-and-such over so-and-so (a preference, for Wedgwood, is a certain kind of conditional intention—but that detail need not detain us).

Wedgwood seeks to generalize the kind of “determination theory” we’ve already seen in Peacocke. After positing that each concept is associated with a set of “basic rules”, he initially says that the semantic value of a concept A is whatever “makes best sense of the fact that these rules are the basic rules for A”. But he immediately refines this, following Peacocke in saying that this requires making the relevant rules valid and complete. In order for this refined account to apply to the kind of transitions Wedgwood is interested in, he can’t characterize validity as necessary truth-preservation, since preferences are not the sorts of things that can be true or false. Accordingly, he defines a notion of “validity” for transitions from judgements to preferences—guaranteed correctness-preservation—where an intention is correct, says Wedgwood, if it conforms to the goal of practical reasoning.

With the case now squeezed into the model of valid inference, the question is what would make the inference valid (and complete), i.e. what semantic value for the concept B would ensure that a true judgement that B(x,y) guarantees that a preference for x over y conforms to the goals of practical reason. Wedgwood contends that the normative relation being better to perform uniquely fills this role.

Among the ways that my account differs from Wedgwood’s, the thing I think is most illuminating to highlight is the role that he makes validity play. I think he goes wrong, and opens himself up to criticism unnecessarily, by trying to squeeze his account into the model that Peacocke offers of the logical connectives. So really, I’m not criticizing the spirit of Wedgwood’s account. I think that using radical interpretation in the ways already illustrated, one could reach more or less the same conclusion about what the semantic value of B is, on the architectural assumptions Wedgwood makes. But I think the letter of his own account misfires in instructive ways.

If the moral you take from Peacocke is that validity is central to reference-determination, and you are interested in transitions between beliefs and other states (preferences, intentions, emotions, feelings) rather than belief-belief transitions, the central challenge that looms is to generalize the notion of validity so that it has application to such states. That is Wedgwood’s strategy. And Wedgwood proposes, quite generally, that the generalized notion of validity needed is necessary correctness-preservation.

Enter Schroeter and Schroeter 2003. They ask us to consider the content “I am in pain”, and suppose—I think plausibly—that part of its conceptual role is a transition from the state of actually being in pain, to the state of believing one is in pain. Again, when it comes to reference-determination, on a validity-centric model we’ll need to posit a notion of the conditions under which it is correct that one is in pain (maybe: that all things considered one deserves to be in pain?). And we will then look for a semantic value for the pain-concept P that guarantees correctness-preservation for the transition. But that someone deserves to be in pain doesn’t guarantee that they are in pain! Nor would pain as the semantic value make the transition-rule complete, since someone being in pain certainly doesn’t entail that they deserve to be. The property deserving to be in pain, on the other hand, would make the transition valid and complete, in the Wedgwoodian sense.

Something has obviously gone horribly wrong if we reach this point! But it’s interesting to reflect on what has happened. The point is that the notion of correctness that features in the characterization of “validity” is turning up in the validity-making content-assignment. That is something that is not provided for in the general gloss with which Wedgwood begins, viz that the semantic value of a concept “makes best sense” of the fact that the basic rules for that concept are its basic rules. Assigning pain to the concept pain makes perfect sense of the transition mentioned, as far as I can tell. We only get the odd projection of normativity into the semantic value determined when we move to the “more precise” formulation of this in terms of validity-Wedgwood-style. That is when the normative rabbit is stuffed into this particular hat. When we’re dealing with normative concepts, that has what look to be interesting and good results, since it allows us to easily derive the assignment of normative properties to normative concepts. But—and this I take to be the Schroeters’ point—we can see that this is cheating by noting that we continue to get those results even when we turn to non-normative concepts whose conceptual roles involve more than belief-to-belief transitions.

I think this is instructive of the dangers of fetishizing validity’s role in fixing reference. Validity should never have been seen as the primary mechanism for reference-determination. It gets into the account of reference-determination for logical connectives and the like only because it is part of a wider epistemological story about which beliefs are justified. Radical interpretation, on the other hand, makes us ask the question: what assignment of semantic value would make the transition rational? A notion of “validity” will enter the picture only if we have some reason to think that validity, so understood, is part of what it is for an agent to rationally manage such transitions. There’s no obvious role for it in the case of the transition from a state of pain to a self-ascription of pain. And—say I—while we might be able to back-engineer a notion of validity for a specific sort of belief-to-preference transition, the explanatory order runs from thinking about the rationality of the transition to constructing such a notion, not vice versa. If we make this modification, and see Wedgwood’s proposal as backed by radical interpretation rather than by specifically Peacockian theses about the general form of “determination theories”, then we can recover what’s right about his story and evade the Schroeters’ objection to it.

NoR 2.4: Wrong.


In the subseries of posts to this point, I’ve derived local reference-fixing patterns for connectives, quantifiers, and singular concepts. In a moment, I’ll discuss certain descriptive general concepts (an example drawn from the class of primary and secondary qualities). But I’m going to start with a different general concept: the normative concept of moral wrongness. (I’m drawing heavily on material I’ve discussed in much more detail in “Normative Reference Magnets”, Philosophical Review forthcoming).

Two things guide this expositional choice. First, what I have to say about this case fits the general pattern we’ve seen, whereas the focus of my discussion of descriptive concepts will be a little different. Second, there’s an odd split in the literature on the metaphysics of representation whereby the theory of reference for normative concepts is hived off into the separate subdiscipline of metaethics, rather than being one of the parade cases that any adequate theory of representation should have in its sights from the get-go. So I want to emphasize that radical interpretation is just as well-placed to predict and explain how normative concepts get their denotations as any others, and by juxtaposing my story of normative concepts with the story of singular concepts, connectives, etc., I emphasize that no special pleading is required.

So the pattern will be as before: I’ll ask you to consider some architectural hypotheses about the patterns of deployment of this concept in our cognitive economy. With that in place, radical interpretation together with first-order normative assumptions will predict that any concept so deployed will denote moral wrongness. The discussion here will introduce two new notes. First, for the first time, practical rather than epistemic normativity will have pride of place in the explanation. And second, we will illustrate how radical interpretation can help explain central puzzles in the literature—in this case, the distinctive referential stability of wrongness.

The three generic architectural assumptions are now familiar, so I won’t repeat them. The final such assumption will again concern the particular inferential role associated with the concept wrong, w. What I’ll be assuming is that when a subject believes that x’s A-ing is w and unexcused, this makes them blame x for A-ing, and when they disbelieve this, this prevents them from doing the same. The talk of “making them” or “preventing them” plays the same role as the Peacockian notion of “primitively compelling inferences” did before. Surely a cognitive architecture could be disposed to make an immediate transition from judgement to an intentional state of blame, but it is terminologically odd to call this an “inference”—so I won’t do so.

(One might wonder here if a prior fix on some other kinds of content is presupposed in the articulation of this cognitive role. I think there is. This doesn’t lie in the way that “x’s A-ing” turns up as the thing to which wrongness is ascribed: since “x’s A-ing” also turns up as the object of the blame-attitude, we could replace it in both places by a variable for some-content-or-other and run the story. But the judgement that x’s A-ing is unexcused can’t be handled in this way. Just as in the discussion of singular concepts, there is no structural concern here, since we are not at this stage in the business of attempting a reductive analysis of reference, but rather in articulating and explaining patterns of reference-fixing.)

Turning now to first-order normative assumptions, I add the following:

  • that a substantively rational agent would be such that the judgement that x’s A-ing was wrong and unexcused makes them blame x for A-ing.
  • that a substantively rational agent would be such that the judgement that x’s A-ing was not wrong prevents them blaming x for A-ing.
  • that no substantively rational agent would be such that the judgement that x’s A-ing was F and unexcused makes them blame x for A-ing, unless F entails wrongness.
  • that no substantively rational agent would be such that the judgement that x’s A-ing was F prevents them blaming x for A-ing, unless wrongness entails Fness.

These are substantive ties between moral judgments and blame attitudes. Elsewhere, I defend the tenability of these normative assumptions against a variety of challenges—for example, that they mistakenly presuppose that wrongness is a reason, or that they are counterexampled by cases of those with obnoxious moral views. I think these charges can be resisted, but they helpfully emphasize the way that this sort of story depends on contestable normative premises. This is a feature, not a bug.

The derivation of the denotation of w follows the same pattern as previously. First, we have the a posteriori assumption that w plays a distinctive cognitive role in Sally’s cognitive architecture, captured by the w-blame link. Second, we have substantive radical interpretation which tells us that the correct interpretation of w is one that maximizes (substantive) rationality of the agent. We add the “localizing” assumption, conceptual role determinism for w, which says that the interpretation on which Sally is most rational overall is one on which the rules just given for w are rational. Putting these three together we have the following: the correct interpretation of Sally is one that makes the conceptual role associated with w most rational. Dropping in the normative premises just set out, we can derive that it is moral wrongness that makes that conceptual role most rational, and hence, it is moral wrongness that is the denotation of w.
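The argument just given can be laid out as a short derivation. This is a sketch of the structure as stated in the text; the premise labels are mine.

```latex
\begin{itemize}
  \item[P1.] (A posteriori) $w$ is governed by the $w$--blame links in Sally's cognitive architecture.
  \item[P2.] (Substantive radical interpretation) The correct interpretation of Sally is one that maximizes her substantive rationality.
  \item[P3.] (Conceptual role determinism for $w$) The rationality-maximizing interpretation is one on which the rules governing $w$ are rational.
  \item[P4.] (Normative premises) Moral wrongness is the property that makes those rules most rational.
  \item[C.] Therefore, $w$ denotes moral wrongness.
\end{itemize}
```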

To go back to the aspects of this story that I emphasized at the beginning: the conceptual role for w that I cited is not a link between judgements, or between evidence and judgement, as in the previous cases we have looked at. Rather, it is a link between judgements and emotional attitudes. So the kind of normative premise that becomes relevant is not an epistemological one—it is a thesis in practical normativity about what ought to prompt a specific emotive response. That brings out the significance of the conceptual role determinism in these derivations—why shouldn’t patterns of w-belief formation be as significant here as they were in the case of concepts of quantification? The answer is that such patterns are potentially significant, but we expect a well-run cognitive architecture to hold these aspects in sync. In the other work mentioned, I consider specific cases where an architecture “hardwires” specific w-belief-formation methods in addition to the patterns given above, and claim it as a mark in favour of the radical interpretation framework that it does not continue to predict that w denotes wrongness in cases where the cognitive architecture has these extra elements that produce such overall tension.

The framework also shows the power of radical interpretation to explain long-standing puzzles. One of these is that agents can disagree with one another across vast differences in their first-order theories of what features constitute moral wrongness—the so-called “moral twin earth” phenomenon. A convinced Kantian and a convinced Utilitarian are not speaking past one another—one or other or both has an incorrect theory of morality. That apparently means that they must be thinking (sometimes false) thoughts about the same subject matter. Massive and systematic moral error is possible. This requires explanation, since there are plenty of cases—particularly for descriptive concepts—where concepts embedded in such utterly different theories would be properly interpreted as differing in meaning. Radical interpretation predicts that so long as both agents implement the mentioned conceptual role, then ceteris paribus, they will pick out the same property. The conceptual role, since it concerns a link to emotion, not an embedding within other beliefs, allows for great differences in what beliefs the agents have.

Now, there are some limits, according to this framework—if interpreting one or other of the disputants’ w as picking out moral wrongness would be to attribute irrationality, not just falsity, to their “moral” beliefs, then this is the kind of tension that calls into question conceptual role determinism. A Kantian constructivist who is convinced that utilitarian views are deeply irrational might accept the framework I have been laying out, and draw the conclusion that utilitarians are not after all even talking about morality—or at least that it is indeterminate whether they are. But these are special exceptions to the rule of stability (so even from that perspective the framework would still explain how divergent but broadly Kantian theorists could dispute about a common subject matter).

NoR 2.3c: That, redux.


Let me recap the take-home messages of the last two posts.

  1. If Dickie is right about the architecture of demonstrative thought we can derive (something like) the result that the demonstrative “that” refers to an object at the far end of the perceptual link associated with the demonstrative concept.
  2. The proximal reason why the demonstrative will denote that object is that it uniquely makes the resulting belief-management practices associated with the demonstrative justified (here Dickie would agree).
  3. The underlying reason why justification-maximization has this role, in my account, is radical interpretation as a general story about reference-fixing.
  4. As with earlier such derivations, my account is caveated—we are assuming that to make the subject overall most rational involves, inter alia, making the particular structures associated with demonstratives most justified.
  5. Though radical interpretation at the global level is intended as a reductive story about content, there is no obligation to provide a local story about demonstratives in particular that is “reductive” in form, and we have not done so here. Rather, we have concentrated on how the patterns of reference-fixing identified by other theorists can be predicted and explained by radical interpretation.
  6. I’ve discussed some worries one might have about the extent to which the “bare” demonstrative architecture really locks on to a determinate reference. In particular, I pressed some concerns about “unnatural” objects that overlap in various ways with the “natural unity” at the end of a perceptual link. But, I argued, if bare demonstratives do turn out to be indeterminate in reference between this localized range of referents, that wouldn’t be either intuitively repugnant or theoretically damaging, since the bare demonstrative (now perhaps recast as a plural) can still play an anchoring role.

I want to finish by considering point 6 one last time. I said in the previous post that I wouldn’t be distraught if bare demonstratives turned out to be plural, or somewhat indeterminate in reference. But perhaps others would be distraught. So I want to survey the options, and whether securing determinacy for bare demonstratives would motivate a shift away from radical interpretation.

Here is one proposal: that among a range of candidate interpretations of Sally scoring equally well on “charity” (i.e. making her substantively rational) the correct one is that which assigns the most natural referents to concepts, overall. Woody the tree is a “real object” or a “material substance”, a natural unity that contrasts with Woody’s outer shell or the fusion of Woody and a bug living on his surface. David Lewis is often taken to propose something similar when it comes to assigning properties as the denotation of predicates. Ted Sider has argued for a generalization of this idea to terms in other syntactic categories.

Here is another proposal: Woody is the causal source of the beliefs that are filed away with the demonstrative concept. The shell or fusion, though they massively overlap Woody, and share his macroscopically observable properties, don’t enter into such relations (we’ll assume). So if we build a causal theory of reference, or even a constraint added to radical interpretation, that demonstratives should denote the dominant causal source of the (canonical) information in the associated file, then we get the result that the demonstrative picks out Woody.

These are two ways of securing determinate reference that do not fit with my story about radical interpretation. The first could, just about, be forced into my model. If we could make the case, in general, that one person is more substantively rational than another to the extent that her beliefs are more natural, then this sort of constraint would fall out. Sider, for one, argues that theories are better the more natural they are (the more they are framed in terms of concepts that “carve nature at its joints”), and perhaps the same goes for entire psychologies. Perhaps this could even be held to be part of justification-maximization, if the most justified body of beliefs to have is the one that is best not just by being reliable, based on evidence, etc., but also reflective of the joints of nature in Sider’s sense. It’s an intriguing idea, and would fit into the remit of “first-order normative assumptions” that can be consistently and interestingly combined with radical interpretation. But this is not something I myself want to endorse.

Sticking a causal side-constraint into radical interpretation, on the other hand, would go entirely against the spirit of the programme. I would be happy to see a causal pattern like this emerge as a prediction of radical interpretation, but that should be a consequence of the sort of derivation we’ve been looking at, not one of the explanatory premises. (I will be criticizing the idea of a monstrous metaphysics of representation obtained by combining side constraints with radical interpretation later—but whether this is a feasible metaphysics or not, it is not my metaphysics.)

If these are off the table, how might determinate reference be secured?

First idea: deny the problematic objects exist. If there is no such thing as Woody’s outer shell, or the fusion of Woody and a random microscopic bug, we’re okay. That is a way to go that I anticipate some readers will already favour. But I want to keep on board those who accept a more abundant ontology, so I will set it aside and continue to look at options.

Second idea: we might argue that the unnatural objects would not make the belief-forming practices of the bare demonstrative ones that result in justified beliefs. I already endorsed this strategy to rule out Strawsonian twins and temporal slices of our target. We could try taking it further. For example, if the microscopic bug wanders off the tree to find another home, Woody+bug will end up as a scattered object, but beliefs formed through the perceptual link will continue to attribute the property of being contained within a certain confined region. So we might start building a case that the relevant mechanisms justify beliefs about Woody’s location, but not Woody+bug.

But since we’re dealing with unnatural objects, one might now be worried about the object which coincides with Woody+bug while the bug is upon Woody, but which coincides with Woody at other times. And we extend this over counterfactual situations too: the counterparts of Woody+bug will include the bug when its counterpart is microscopic and attached to Woody, and not include it otherwise. Perhaps more readers this time will be prepared to say that such a thing does not exist. I myself am sceptical that specified counterpart relations accurately pick out the de re modal facts about the object. But there are those whose abundant ontology includes not just unnatural objects, but objects with unnatural essences (cf. McGonigal and Hawthorne). A similar dialectic can be traced for our other candidate: Woody’s outer shell.

Third idea: What’s striking about perceptual demonstratives is their closeness to immediate interaction with the world. That insight is reflected in the central role, in Dickie’s account, of the perceptual link that structures bodies of demonstrative belief. But there’s another way in which they’re close to the world: our most basic actions change the properties of objects in our immediate environment. It’s plausible that the intentions that guide these actions are structured by demonstrative identification of those objects we most directly manipulate in action. It’s interesting that this agential link between states of mind and an object plays no role in Dickie’s story. If (as one would expect in our actual case) the Janus-faced roles of demonstratives in perception and action cohere, and the perceptual link already suffices to fix reference, then there’s no concern: the partial story suffices for explanatory purposes. But if the partial story leaves us open to underdetermination, we might want to revisit the issue. I won’t develop this in detail, for reasons of space (and because it would be pretty speculative). But I do think that the upshot will be that the kind of objects that are of concern won’t only need to share observable properties with Woody, they’ll have to share manipulable properties with Woody: those properties that we can directly change about him. That might help us here! We might not be able to observe the region of space that Woody occupies, which is why Woody’s outer shell was still in the mix as a candidate referent. But arguably (by chopping and gouging) we can change facts about what region he occupies. I can’t see that this line will help much with the bug bug, but it can do some work for us.

Final idea, and the one I think gets maximum effect from the most minimal theoretical assumptions. Consider again the derivations we’ve given. Our assumption has been that the correct interpretation overall is justification-maximizing with respect to the belief-forming and management practices that Dickie picks out for demonstratives. But that doesn’t entail that any interpretation which is justification-maximizing in that particular way is correct. It could be that other ways in which the demonstrative figures in our cognitive economy also matter for reference-determination, breaking the ties left by the bare demonstrative structure alone. For example, we might “presuppose” that d is a natural object in later belief formation: say, in characterizing a natural kind by bare demonstrative identification of exemplars and foils: “the property shared by that and that and that but not that”. Downstream belief-forming practices like this will succeed in picking out natural kinds only if the referents of the demonstratives pick out exemplars that fall under natural kinds in the first place. Justification-maximization will then favour interpreting the bare demonstrative as picking out a naturally unified item, insofar as it falls under a natural kind, over others. Note well that it may be that such practices are attached to some demonstrative files and not others, so this might give a nuanced grip on how we sometimes secure determinate reference to natural unities, without entailing that we can only demonstratively refer to such things. And note also that this doesn’t require that we “have in mind” and attach to the demonstrative some disambiguating sortal (though this is something that could happen in some cases, and would move us to a discussion of complex demonstrative thought).

The overall upshot: I wouldn’t be too worried if bare demonstrative thought a la Dickie turned out to be indeterminate in various respects. I personally wouldn’t be moved to introduce some kind of causal or naturalness-based constraint on interpretation just to secure determinacy. But further, I think that, working within the radical interpretation framework, there are many routes by which determinacy of reference to natural unities could be established. And if this were the rule (e.g. our agent had a cognitive architecture which always presupposed that bare demonstratives picked out things falling under natural kinds) then we might derive from within the system the sort of generalizations about reference-fixing that others are tempted to introduce as unexplained explainers—e.g. a referential bias towards natural referents, or towards the entity which is the causal source of the information received in perception.

Having spent this post advertising ways of securing determinacy, I want to finish by flagging again an underdetermination threat that nothing here speaks to. This is the threat posed by permuted interpretations. Take a permutation of the universe, p. Let p(a) be the image, under the permutation, of a. Let p(F) be the property that applies to something iff that thing is the image, under p, of some object that is F. Notice that necessarily, a is F iff p(a) is p(F). Original and permuted interpretation agree on the truth-conditions of every atomic thought. It turns out that they will agree on the truth-conditions of every thought.
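The claim that original and permuted interpretations agree on truth-conditions can be made explicit. A sketch, assuming p is a bijection on the domain of objects; the notation is mine.

```latex
% The permuted property p(F) applies to x iff x is the image under p
% of something that is F; since p is a bijection, equivalently:
\[
  x \text{ instantiates } p(F) \;\iff\; p^{-1}(x) \text{ instantiates } F.
\]
% Substituting x = p(a), and using p^{-1}(p(a)) = a, we get, for any object a:
\[
  a \text{ instantiates } F \;\iff\; p(a) \text{ instantiates } p(F).
\]
% So the atomic thought interpreted as <a, F> and its permuted counterpart
% <p(a), p(F)> are true in exactly the same circumstances. Truth-functional
% compounds and quantified thoughts inherit this agreement, since p
% permutes the entire domain.
```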

On the permuted interpretation, a bare demonstrative based on a perceptual link to Woody denotes not Woody, but the image of Woody under the permutation, which may be anything (a small furry creature from Alpha Centauri, perhaps). Such permuted interpretations are not among those we’ve been considering at all in the last three posts, since we always took for granted the referential relations between general concepts and observable properties that figure in Dickie’s description of the way demonstratives operate—and permuted interpretations attribute a different reference to those concepts. I think we will get insight into what’s wrong with such permuted interpretations (why they are disfavoured by radical interpretation) not by saying more about demonstratives, but by considering the analogous questions about general concepts.