
NoR 4.4: Beyond belief.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

In the previous three posts, I’ve given my favoured interpretationist account of linguistic content: content is fixed by the interpretation that best explains the base conventions of truthfulness and trust—the one that best manages the trade-off between fit with those conventions, (subject-sensitive) simplicity and strength.

The base facts were conventions linking utterances to belief. So as well as feeding into an account of how words get their meaning, the conventions give us an excellent handle on what it might mean to say that in asserting a sentence, we are “expressing a belief”. Notice, though, that the beliefs conventionally expressed in this way will not necessarily have the same content as that of the sentence—this is the phenomenon we covered in the post Fixing Fit.

Is the state of mind expressed by a sentence always a belief? Might moral sentences express pro- and con-attitudes, epistemic modals a state of uncertainty, and conditional sentences states of conditional belief? There is a very natural way, within this framework, to understand what such claims mean—respectively:

  • There is a conventional regularity of uttering “x ought phi” only if one’s contingency plan for x’s situation is to phi, and of coming to so plan when one hears the utterance “x ought phi”.
  • There is a conventional regularity of uttering “It might be that p” only if p is compatible with what one believes, and of adjusting one’s beliefs to make p compatible with them when one hears the utterance “It might be that p”.
  • There is a conventional regularity of uttering “if p, q” only if one believes q on the supposition that p, and of adjusting one’s beliefs to come to believe q on the supposition that p, when one hears the utterance “if p, q”.

These are not the only ways of formulating the connections in the conventional framework—for example, perhaps closer to the dialogical role of epistemic modals would be to present them as a test: the “sincerity” condition remains the same, but on the “trust” end, the convention is that the hearer checks whether p is already compatible with their beliefs, or else challenges the utterer.

Are there regularities of this kind? One might wonder whether someone uttering the words “Harry ought to phi” regularly leads their audience to plan to phi in Harry’s circumstances. Attention is naturally drawn to cases where these normative claims are contested—where someone is saying that Harry ought to change career, or do more exercise, or avoid white lies. In those cases, we don’t respond to bare assertions simply by incorporating the plans expressed into our own—we tend to ask for a bit more explanation and justification. But of course, the same could be said of contested factual claims. If someone claims that the government will fall next week, we want to know how they know before we’ll take them at their word. These are violations of the “trust” regularity for belief, unless we add the caveats mentioned in an earlier post: restricting it to situations where the speakers have no interest in deception and hearers regard speakers as relevantly authoritative. The same qualifications are, unsurprisingly, necessary in this case. And once we make that adjustment, it may well be that the cases above are indeed conventional regularities. (It’s worth remembering that one can come to plan to phi in x’s situation without changing one’s contingency plans for any situation. For example, if one hears someone say “Harry ought to change career”, one might hold fixed one’s opinion about the circumstances in which changing career is the thing to do, and simply come to believe that Harry is now in one of those circumstances. Lots of information-exchange using normative sentences can take this form.)

So there is a plausible way of extending the talk about sentences expressing beliefs to a more general account of sentence-types expressing attitudes of various kinds.

Now, this is all perfectly compatible with the following being true, at one and the same time:

  • There is a conventional regularity of uttering “x ought phi” only if one believes x ought to phi, and of coming to believe x ought to phi upon hearing “x ought phi”.
  • There is a conventional regularity of uttering “It might be that p” only if one believes it might be that p, and of coming to believe this upon hearing “it might be that p”.
  • There is a conventional regularity of uttering “if p, q” only if one believes that if p, q, and of coming to believe this upon hearing “if p, q”.

After all, I already flagged that on this account, sentences of any kind will be conventionally associated with many different beliefs. So why not beliefs and other attitudes too?

There are questions here, however, about the appropriate order of explanation, and it’s here we make contact with work on broadly expressivist treatments of language. For one might think that our layer-2 story about fixing thought content did not get us to a point where we had a grip on thoughts with modal, deontic or conditional content. If that is the case, then although the latter three conventional associations with belief will come out eventually as true commentaries on an agent, they are not something to which we had earned a right at layer 2 of our metaphysics of representation. On the other hand, we might think that from patterns of belief and desire, we will have a fix on states of belief, supposition and planning (planning states with factual content are not something that I’ve covered so far, but I’d be happy to add them as an additional element to the kind of psychology that radical interpretation will select).

That leaves us in the following position: as we approach linguistic content, for some sentences, we are not in a position to appeal to belief-centric conventions of truthfulness and trust, since the belief-states that they might express have not yet been assigned content. The semantic facts that we can ground by means of the story just given will then be restricted to a fragment of language that leaves out these problematic bits of vocabulary. So we need to go back to the grindstone.

Expressivists about deontic and epistemic modals and conditionals will tend to think that this is the situation we find ourselves in—and they have a solution to offer. Rather than building our metasemantics for the problematic terms by looking at features of the belief expressed, they propose to work directly on the other attitudes—the plans, suppositions or ignorance—that stand to these sentences just as ordinary beliefs stand to factual sentences. Let us consider how this might go.

To start with, the story I have been giving would need to be generalized. The key “datapoints” that a semantic theory had to “fit” were the range of propositions that I said were “conventionally associated” with each sentence. Those contents are associated with a sentence by being the contents of the beliefs the sentence expresses (i.e. the beliefs that figure in conventions of truthfulness and trust). We can’t just mechanically transfer this to other attitudes: for example, the content of the plan expressed by “Harry ought to change career” might be: changing career in Harry’s circumstances. But we will get entirely the wrong results if we require our semantic theory to assign to the normative sentence the factual proposition that one changes career in Harry’s circumstances. The semantics needs to fit with a planning state, not a factual belief that the plan has been executed.

Let us take a leaf out of Gibbard’s book here. Let a world-hyperplan pair be the combination of a complete description of how the world is factually, together with a function from all possible choice situations to one of the options available in that situation. To accept a world-hyperplan pair is to believe that the world fits the description given by the world component, and to plan to do x in circumstance c iff the hyperplan maps c to x. To accept a set of world-hyperplan pairs is to rule out any combination of world and hyperplan that is outside that set—this amounts, in the general case, to a set of conditional commitments to plan a certain way if the world is thus-and-such. (Okay, you might want more details here. This is not the place to defend the possibility of such a redescription: if it is not legitimate, then that’s a problem for Gibbardian expressivists independent of my kind of metasemantics.)
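The Gibbardian construction lends itself to a concrete toy model. Here is a minimal sketch in Python—the worlds, choice situations and options are entirely my own illustrative placeholders, not Gibbard’s—showing how accepting a set of world-hyperplan pairs amounts to ruling out every combination outside it:

```python
from itertools import product

# Toy ingredients (illustrative placeholders, not Gibbard's own examples):
worlds = ["w1", "w2"]                       # complete factual descriptions
situations = ["harry_career", "exercise"]   # possible choice situations
options = {"harry_career": ["change", "stay"],
           "exercise": ["more", "less"]}

# A hyperplan maps every choice situation to one of its available options.
def all_hyperplans():
    for choice in product(*(options[s] for s in situations)):
        yield dict(zip(situations, choice))

# A world-hyperplan pair: a factual description plus a hyperplan.
pairs = [(w, h) for w in worlds for h in all_hyperplans()]

def freeze(h):
    """Make a hyperplan hashable so it can live in a set."""
    return tuple(sorted(h.items()))

# Accepting a set of pairs = ruling out every combination outside it.
# E.g. the planning state expressed by "Harry ought to change career":
accepted = {(w, freeze(h)) for (w, h) in pairs
            if h["harry_career"] == "change"}
ruled_out = [(w, h) for (w, h) in pairs
             if (w, freeze(h)) not in accepted]
```

The planning state expressed by “Harry ought to change career” then corresponds to accepting exactly those pairs whose hyperplan maps Harry’s situation to changing career, whatever the world component says.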

I will assume that our metaphysics of layer-2 representations gets us to a point where we can read off what a subject’s conditional plans are, in this sense. We can then redescribe the combined belief/planning states of this agent in terms of which sets of world-hyperplan pairs they accept. And that means we will have earned the right to redescribe this:

  • There is a conventional regularity of uttering “x ought phi” only if one believes x ought to phi, and of coming to believe x ought to phi upon hearing “x ought phi”.

as follows for a suitable q (set of world-hyperplan pairs):

  • There is a conventional regularity of uttering “x ought phi” only if one accepts q, and of coming to accept q upon hearing “x ought phi”.

And once in this format, we can extract the data the semantic theory is to fit, since we now have a new, more general kind of conventionally associated content: the combined belief/planning states q. The correct compositional interpretation will then be as before: the (subject-sensitive) simplest, strongest interpretation that fits these base facts. And contemporary expressivist semantic theory is exactly a specification of functions of this kind.

The crucial technique here is the Gibbardian redescription of a subject’s psychology as the acceptance of enriched content—that’s what allows us to articulate the convention in a way that provides a target for compositional interpretations. So if we want to replicate this metasemantics for other kinds of expressive content, we need to perform the analogous redescription. It might be, for example, that to underpin epistemic modals, we need to describe an agent as accepting a set of world-information state pairs, representing combinations of factual states of the world and states of their own factual information to which they are open. To underpin conditionals, we need to credit the agent with accepting a set of world-update function pairs. And if this is to be a single story across the board, we will need to combine these and others into a highly complex summary of a possible opinionated psychology—world-hyperplan-information-update-etc.—and then, on the basis of the facts about mental content already established at layer 2 in more familiar terms, say which of these possible opinionated psychologies are ones to which the subject is open.

None of this is easy or uncontentious! The existence of the conventions that tie the target sentences to non-doxastic attitudes, the Gibbardian redescriptions, and the availability of compositional interpretations of language are all points at which one might balk. But those who favour an expressivist theory of the discourse in question are likely to be sympathetic to these kinds of claims, and my central point in this post has been to show that the convention-based metasemantics can underwrite that project just as much as it can underwrite more traditional cognitivist projects.

(I’m very grateful to discussions with Daniel Elstein and Will Gamester that have shaped my thinking about the above. They are not to be blamed.)

NoR 4.3: Subject-sensitive simplicity

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

In the previous post, I presented three underdetermination challenges. These amount to different ways of assigning the same truth-conditions to the sentences we use. Each is definitely incorrect, either because it assigns the wrong referents to words, or the wrong compositional rules to the language. But we can’t explain why it is incorrect by pointing to a lack of fit (or lack of strength) with the base facts (conventional associations between sentences and coarse-grained propositions). And for at least the last of the challenges—twisted compositional rules—we can’t explain why it is incorrect even if we suppose that our work on layer-2 intentionality earns us the right to a much more fine-grained set of facts about belief content.

My goal in this post is to review the obvious saving constraint: simplicity. I’ll recap the way that in Lewis’s hands, this turned into the idea that “natural properties” were “reference magnets” (we already saw this simplicity-naturalness connection as it arose in the context of inductive concepts). And I’ll put forward a very different treatment of simplicity, a “subject-sensitive” version on which each agent’s cognitive repertoire defines a set of contents that are reference-magnetic for their language. The effect is a new and better way to use the work already done on thought content to fix determinate linguistic content.

So first of all: simplicity. I have considered it once before. When considering concepts that were used in inductive generalizations, I sketched a way of deriving a constraint on content, via the rationality of inference to the best explanation. The best explanation, again, was determined by strength, fit with data, and simplicity. Applied to that case, the prediction was that the interpretation of the subject should be one which makes the content that they are treating as an explanatory generalization a simple hypothesis (all else equal). Overly “disjunctive”, less simple, interpretations of the concepts subjects deploy in such contexts are thus disfavoured.

Notice that in this case of belief/desire interpretation, there is no direct constraint that the interpreter’s story about “rationalization” should be simple. Rather, the constraint is that it rationalize the agent, inter alia making their beliefs justified. Simplicity entered the picture only when the subject’s cognitive architecture made simplicity epistemically relevant. By contrast, the proposal in the case of linguistic representation is that a role for simplicity falls out of the fact that we appeal to best explanation directly in the selectional ideology itself. That will mean that theses about simplicity play out here rather differently from what we saw before. In particular, simplicity in content-assignment is non-contingently, always and everywhere a determinant of content—there are no restrictions to special classes of words, as there were for inductive concepts in the earlier account.

I will be assuming that simplicity is in the first instance a property of theories, not of abstract functions. So in order to make sense of the idea of ranking compositional interpretations (abstract functions, remember) as more or less simple, we need to do some work. We look to the ways those functions can be most concisely expressed—by an axiomatic specification of the referents of lexical items, plus compositional rules. As well as some measure of the compactness of a given axiomatic specification, we will also need to make sure the specification is presented in some canonical language. In all this, I follow what I take to be Lewis’s treatment of simplicity (independently motivated, and deployed in e.g. his Humean theory of laws). And as I noted earlier, if we make one final move we can explain a famous feature of Lewis’s account of language.

That move is to identify the canonical language with “ontologese”, a language that features predicates only for metaphysically fundamental properties and relations, plus various bits of kit such as broadly logical concepts. (Lewis was never very clear about what resources were in the canonical language beyond the natural properties; the best guess is that he thought he would do enough by just listing them.) Sider, the coiner of the term “ontologese”, suggested that we get a more principled and satisfactory theory by generalizing the idea of fundamentality, so that quantifiers, connectives etc. can be fundamental or not. On Sider’s version of the view, every primitive expression in the canonical language should denote something metaphysically fundamental.

Note the following (which I first presented in my “Eligibility and Inscrutability” paper from 2007). Suppose we have a pair of compositional interpretations of L, differing only in their interpretations of a single predicate “F”. The first says it denotes P, the second says it denotes Q. And suppose that the shortest way to define P in ontologese is longer than the shortest way to define Q in ontologese. Then the second interpretation will be more compactly expressible in ontologese—simpler—than the first. If the two theories are otherwise tied (on grounds of fit, predictiveness, etc.) at the top as candidates for being the best interpretation of L, then on these grounds, the second will win. So we derive that length of definability in ontologese—what Lewis calls “relative naturalness”—of the semantic values assigned as referents to words is one of the determinants of correct interpretation. We can also see that compositional rules, no less than reference-assignments, can be evaluated for relative naturalness, and on exactly the same grounds: contribution to simplicity.
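The derivation just given can be put as a toy calculation. In the sketch below, “ontologese definitions” are simply token lists over a hypothetical canonical vocabulary, simplicity is measured by token count, and the candidate referents P and Q (and their definitions) are my own placeholders rather than anything from Lewis:

```python
# Toy measure of "relative naturalness": length (in tokens) of the
# shortest available definition in the canonical language.
def naturalness_cost(definitions):
    """definitions: candidate tokenized canonical-language definitions."""
    return min(len(d) for d in definitions)

# Hypothetical candidate referents for the predicate "F" (placeholders):
P_defs = [["charge", "and", "not", "spin"],
          ["mass", "or", "charge", "and", "not", "spin"]]  # P: longer to define
Q_defs = [["charge"]]                                      # Q: one canonical predicate

cost_P = naturalness_cost(P_defs)
cost_Q = naturalness_cost(Q_defs)
simpler = "Q" if cost_Q < cost_P else "P"
```

With fit and strength tied, the interpretation assigning the more compactly definable referent Q wins on simplicity—Lewis’s “relative naturalness” in miniature.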

Consider the Kripkenstein problem of deviant compositional rules. We have every reason to believe that the deviant rule takes longer to write out than the original—after all, the way we have of writing it out is to write down the original, add a conjunct restricting its application, and then add a further disjunction. So we have every reason to believe it’s a less natural compositional rule. So the theory that uses it is less simple. Since it has no compensating virtues over the standard interpretation, it is incorrect. Similar stories can be run for the skolemite and permuted interpretations, if those have not already been dealt with at an earlier stage of the metaphysics of representation.

I highlight again that one can accept much of this putative resolution of the underdetermination challenges without going all the way to relative naturalness. The identification of simplicity with compactness of expression in ontologese is a theory: and a very contentious one (even in application to theories in metaphysics and fundamental physics, and certainly for higher-level theories of social phenomena like language). We might short-cut all this simply by stopping with the very first claim: that simplicity partially determines best explanation. Add the assumption that the Kripkensteinian compositional rule is less simple than the “straight” alternative. If that is true (never mind what grounds that fact), our problem is over. There is work to do for those with an interest in the theory of simplicity, but the metaphysician of content can pack up and go home. The same structure applies to the permuted and skolemite interpretations—they can be ruled out if we assume they are less simple than the standard ones.

The needed assumptions about simplicity are very plausible. So there’s a good case to be made that at this level of description, the Lewisian solution just works. And if one is content to treat simplicity as another working primitive, we are done. But of course, if simplicity turned out to be some kind of hyper-subjective property linked to what each of us feels comfortable working with, then there’s a danger that linguistic content will inherit this hyper-subjectivity. And one might worry that it will be impossible to articulate simplicity as it applies to linguistic theories without appealing to facts about linguistic representation. So there’s good reason to dig a little deeper. That also has the advantage of making the account more predictive—it’s a great virtue of the full Lewisian package that we can start from facts on which we have an independent grip (what is more or less natural, in his sense) and derive consequences for linguistic representation. It would be nice to recover similar explanatory power.

One can dig deeper without going all the way to the point that Lewis reaches. Indeed, one can endorse the general identification of simplicity with compactness-of-expression-in-a-canonical-language without saying the canonical language is ontologese. Now in other work (“Lewis on reference and eligibility”, 2016), I floated the idea of “parochial” simplicity. This involves the theorist specifying—in a quite ad hoc and parochial manner—some language C that they favour for measuring simplicity. Relative to that choice of C, simplicity facts can be handled as before, and shortness of definability from C becomes a determinant of content (“reference magnetism”). Of course, if different theorists select different C, they may pick different interpretations as correct, and so in principle come up with different candidate accounts of semantic content. So this approach makes facts about linguistic content (insofar as they go beyond what we can extract from the constraint to “fit with the conventional base”), if not wildly subjective, at least parochial. I don’t find that as abhorrent as vast underdetermination of reference. Indeed, I think it’s the best version of a deflationary approach to linguistic representation. But I do not think it counts as a realist theory of linguistic content—and that is my present target.

Accordingly, I float another option. Let the canonical language be fixed not by the theorist’s choice, but by the subject’s conceptual repertoire. For this to make sense, we need to know what their conceptual repertoire is, and it needs to be in some medium in which it makes sense to carry out definitions. So here I am going back to the work we did earlier in the layer-2 metaphysics of representation, and adding the assumption that prior to public language, there is a sufficiently language-like medium for thought, whose components have fairly determinate content—the story of how they acquire that content is as given in the second subsequence of posts. I propose that we now let the simplicity of a theory (for subject x) be the compactness of its expression in x’s medium of thought. So, if x’s medium of thought is mentalese, with a certain range of basic concepts, then we can let the simplicity of a property be its minimal length of definition in mentalese, from that basic range. When it comes to language, the things that each agent can think about via an atomic concept will be reference-magnetic, for them. How this kind of subject-sensitive magnetism relates to naturalness is entirely deferred to the level of metaphysics for thought-content.

(You might worry that the interpretation will be inexpressible for theorists who lack the semantic and mathematical vocabulary involved in setting out the semantic theory. If that’s the case, then I will simply build into this account of simplicity for semantic theories that it should be judged by the subject’s conceptual repertoire supplemented with standard semantic and mathematical resources. This is analogous to Lewis’s supplementation of the natural-property predicates with other general-purpose resources, in fixing his “ontologese”.)

My proposal gives up on the idea that “simplicity” is a subject-independent theoretical virtue, and so takes seriously the common idea that what is simple for me may not be simple for you, and vice versa. But given your conceptual repertoire and mine, we will both agree that the twisted compositional interpretation is less simple than the straight one, and that the permuted and skolemite interpretations are more complex than the standard. The agreement arises only because our differing conceptual resources overlap to a considerable extent: we both have the capacity to generalize unrestrictedly, for example.

There is a wrinkle in this proposal to make simplicity subject-sensitive. We are targeting a metaphysics of public language, and a public language involves a diverse population, each with a potentially idiosyncratic conceptual repertoire. So who within this population gets to set the standards of simplicity? I propose: no one person does. Simplicity relative to the population as a whole is indeterminate, with each member of the population contributing their own precisification of the notion. Nevertheless, language-using populations will tend to overlap in conceptual resources, and so will agree on central verdicts about the relative simplicity of one theory over another—in particular, the permuted, skolemite and twisted interpretations are determinately less simple than the standard alternative.

(A good challenge to me, for enthusiasts to press: how can I say this about simplicity at the level of language, and also appeal to simplicity as a constraint on the content of inductive concepts? This is a challenge to which I hope to return.)

NoR 4.2: Fixing Fit

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Let us suppose that the base facts about whole-sentence-content (for those sentences which feature in actual patterns of use) have been established. The story about how we fix word-content was the following: that the correct compositional interpretation was whatever “best explains” the facts about whole-sentence content.

So what makes one compositional interpretation of a language better than another?  I will work with a familiar model: that betterness is determined by the tradeoff between virtues of fit, strength and simplicity. If the datapoints are facts about linguistic content (for a constrained range of sentences) then the contents assigned by the compositional theory should explain that data, which minimally requires being consistent with it. The theory should be strong, in that it should predict as much of this data as possible. But this is balanced by simplicity. As in an earlier post, the model for this is compactness of expression, when the interpretation is expressed in some canonical language.

“Fitting” the base facts about sentential content does not mean that the compositional interpretation should assign the same content to whole sentences that the base facts supply. It would be bold to bet that there is a unique content conventionally associated with each sentence, and an even bolder bet that that will turn out to be the semantic content that can be generated compositionally. No, “fit” is a looser, more interesting criterion than this. The constraint as I understand it is the following: given the content that the compositional interpretation assigns to the sentence, plus suitable auxiliary hypotheses (general information about rational agents and conventions of cooperation, special-purpose pragmatic principles), as much as possible of the range of contents that are conventionally associated with the sentence should be explicable. It’s easy to explain why there will be a regularity of truthfulness and trust connecting “Harry is a bachelor” to the content that Harry is male, on the basis of a compositional interpretation of it as having the content that Harry is a bachelor, since generally we believe the obvious logical consequences of what we believe. The reverse would not be easy. Again, general information about Gricean conventions of orderliness together with the standard compositional content will explain why “Harry had five drinks and drove home” is conventionally associated with the content, inter alia, that the drinking preceded the driving. So even if these regularities of truthfulness and trust count as conventions, the standard interpretations fit them well.

If Fit and Strength were the only determinants of “best explanation” of the language in use, then the account would be subject to counterexample. It is well known that the same sentence-level truth-conditions can be generated by a variety of lexical interpretations, some of which are obviously crazy. I introduced two in an earlier post:

  • Permuted interpretations. Where the standard interpretation has “Tibbles” referring to Tibbles, and “is sleeping” picking out the property of sleeping, the permuted interpretation says that “Tibbles” refers to the image under the permutation p of Tibbles, and “is sleeping” picks out the property of being the image under p of something which is sleeping. Systematically implemented, the permuted interpretation can be shown to produce exactly the same truth-conditions at the level of whole sentences as does the standard interpretation. But, apparently, that means that the two fit with and predict the same sentence-level facts about conventions. So we can’t explain on this basis the fact that the permuted interpretation is obviously and determinately incorrect.
  • Skolemite interpretations. Where the standard interpretation has the quantifier “everything” (in suitable contexts) ranging over absolutely everything, the skolemite interpretation takes it to quantify restrictedly, over only a countable domain (this domain may vary counterfactually, but relative to every counterfactual situation it is countable). And (with a few caveats) we can show that the skolemite interpretation and the original are truth-conditionally equivalent. But, apparently, that means that the two fit with and predict the same sentence-level facts about conventions. So we can’t explain on this basis the fact that the skolemite interpretation is obviously and determinately incorrect.
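The truth-conditional equivalence claimed for the permuted interpretation can be checked directly in a toy model. In the sketch below (the three-element domain, the permutation p and the mini-lexicon are all my own illustrative assumptions), pushing the standard referents and extensions through p leaves the truth value of every atomic and universally quantified sentence unchanged:

```python
# Toy domain and a permutation p of it (all names illustrative).
domain = ["tibbles", "harry", "mat"]
p = {"tibbles": "harry", "harry": "mat", "mat": "tibbles"}

# Standard interpretation: names -> referents, predicates -> extensions.
std_names = {"Tibbles": "tibbles", "Harry": "harry"}
std_preds = {"sleeps": {"tibbles"},
             "physical": {"tibbles", "harry", "mat"}}

# Permuted interpretation: push referents and extensions through p.
perm_names = {n: p[r] for n, r in std_names.items()}
perm_preds = {F: {p[x] for x in ext} for F, ext in std_preds.items()}

def true_atomic(names, preds, F, n):
    """Truth of the sentence 'n is F' under an interpretation."""
    return names[n] in preds[F]

def true_universal(preds, F):
    """Truth of 'everything is F' (quantifier ranging over the domain)."""
    return all(x in preds[F] for x in domain)

# Every sentence receives the same truth value under both interpretations.
agree_atomic = all(
    true_atomic(std_names, std_preds, F, n)
    == true_atomic(perm_names, perm_preds, F, n)
    for F in std_preds for n in std_names)
agree_universal = all(
    true_universal(std_preds, F) == true_universal(perm_preds, F)
    for F in std_preds)
```

Since p is a bijection on the domain, the same argument generalizes to any sentence built from names, predicates, connectives and quantifiers—which is exactly why fit with sentence-level data cannot discriminate between the two interpretations.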

We met these kinds of deviant interpretations in the context of the metaphysics of mental representation. Under the assumption that thought had a language-like structure that allows us to pose such puzzles, I argued that normative facts about the way in which we handle universal quantification in thought and inductively generalize would solve the problem.

Now Lewis denied the starting point of this earlier story. He stuck resolutely to theorizing thought content in a coarse-grained way (as a set of worlds, and later a set of centred worlds/individuals), ignoring issues of individual representational states and any compositional structure they might have. That only delayed the day of reckoning, since once he reached this point in the story—with public language and its compositional structure—he had to face the challenge head on. Further, nothing that Lewis did earlier on helps him here. Remember, the raw materials for selecting interpretations are conventions of the form: utter S only if one believes p; and come to believe p if one hears someone utter S. And for Lewis the “p” here picks up a coarse-grained truth condition. But the permuted and skolemite interpretations fit that sort of data perfectly. So underdetermination looms for him.

(There is one way in which we might try to replay those earlier thoughts. Among the conventions of truthfulness and trust will be an association between, say, “everything is physical” and Jack being physical. That is obviously not going to be the semantic content, but we need to explain why there is a convention of truthfulness and trust there, given the semantic content we assign. Here is the suggestion: a restricted interpretation, even one that de facto includes Jack in the domain D of the quantifier, won’t afford such an explanation. That’s because the audience couldn’t rationally infer, just from the information that everything in D is physical, that Jack is physical, except under the presupposition that Jack is in D (so what we should expect, under a restricted interpretation, is that we only get a conventional association with: if Jack is in D, then he’s physical). This is a natural way to try to lever the earlier account I gave into the current setting. But unfortunately, I don’t think Lewis can appeal to it. For him, the contents that everything is physical and that everything in D is physical are identical—since they’re truth-conditionally equivalent, denoting the same set of worlds. So it’s not an option for Lewis to say that believing one supports believing Jack is physical, while believing the other supports believing only the conditional—that’s a distinction we can only make if we presuppose a finer-grained individuation of the two contents.)

So Lewis needs something more than fit and strength in his account of better explanation. But so does everyone else. At first it might not seem this way. After all, if we’ve already solved these puzzles at the level of thought-content, then surely linguistic content should be able to inherit this determinate content? There are two problems with the proposal. First, it’s surprisingly tricky to get a workable story of how determinate content is inherited by language from thought. And second, there are further underdetermination challenges beyond the two just mentioned for which this strategy won’t help.

On the first point, let us assume, pace Lewis, that the structure of thought was itself language-like, and that we confronted and solved the problem of permuted and skolemite interpretations as it arose at this lower layer of representation in the way I described earlier. We will have already earned the right to ascribe to agents, for example, a truly universal belief that everything is physical (modelled perhaps by a structured proposition, rather than a Lewisian set of worlds). The "inheritance" gambit then works as follows: there will be a regularity of uttering "everything is physical" only when the utterer believes that truly universal structured content. And to the extent that this regularity is entrenched in the beliefs and preferences of the community so that it counts as a convention (plausible enough), the constraint on linguistic interpretation will not simply be that it fits/predicts data characterized by coarse-grained content, but that it fits and predicts fine-grained data. Ta-da!

But we have already seen that "fit" cannot simply be a matter of identity between the content of thought and language. And theorizing thought in a fine-grained way amplifies this. None of our assumptions entail that for each sentence in public language, there is a belief whose content exactly matches its structured content. Here's a toy example: suppose we have no single concept "bachelor" in our language, but do have the concepts "unmarried" and "adult male". Then the fine-grained belief content conventionally associated with "Harry is a bachelor" may be a conjunctive structured proposition: Harry is unmarried and Harry is an adult male. But we shouldn't require a semantic theory to compositionally assign that particular proposition to the atomic sentence in question—it may be impossible to do that. What seems attractive is to say that the assignment of the structured proposition <Harry, being a bachelor> to the sentence explains the conventional data well enough: after all, at the level of truth-conditions, it is obviously equivalent to the conventionally associated content. But certainly the structured contents ascribed by the permuted interpretation are also obviously truth-conditionally equivalent to the structured contents conventionally associated with the sentence, so fit equally well. Maybe matters are less obvious in the case of the skolemite interpretation, but it's still necessarily and a priori equivalent. Given the needed flexibility in how to measure "fit", it's far from obvious we are on solid ground in insisting that the fine-grained content of thought must be the semantic content of the sentences. (There are moves one can make, of course: arguing that an interpretation fits better the closer the content it assigns is to the conventional content. But this is not obviously independently motivated, and as we're about to see, it won't do all the work.)

But the real killer is the following challenge:

  • Compositionally twisted interpretations. O'Leary-Hawthorne once asked Lewis to say what fixed the compositional rules that must be part of any compositional interpretation. One version of this challenge is as follows: the standard compositional rules say, for example, that if the semantic value of "a" is x, and the semantic value of "F" is the function f (at each world, mapping an object to a truth value), then the semantic value of the sentence "Fa" is the function that maps a world w to the True iff at w, f maps x to the True (or, if one wants a fine-grained content, it is the structured proposition <f,x>). But a twisted rule might take the following disjunctive form: if "Fa" is a sentence that is tokened in the community, the semantic value of the whole is determined just as previously. But if "Fa" is never tokened, then its semantic value is a function that maps a world w to the False iff at w, f maps x to the True (respectively, for fine-grained content: it is the structured proposition <neg(f),x>, where neg(f) maps an object to the True at w iff f maps it to the False at w).

Now, the trouble here is that the standard and the twisted interpretation agree on all the datapoints. They can even agree on the fine-grained structured content associated with each sentence ever used. So they’ll fit and predict the sentential data equally well (remember: the language in use which provides the data is restricted to those sentences where there are actually existing regularities). The “constraining content” we inherit from the work we did on pinning down determinate thoughts is already exhausted by this stage: that at most constrains how our interpretation relates to the datapoints, but the projection beyond this to unused sentences is in the purview of the metaphysics of linguistic content alone. This “Kripkensteinian” underdetermination challenge will remain, even if we battle to relocate some of the others to the level of thought.
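The point can be made vivid with a toy model (my own construction, not from the post—names, predicates and "contents" here are invented stand-ins, with truth values doing duty for coarse-grained contents). A standard and a twisted compositional rule agree on every sentence actually tokened in the community, and so fit the conventional data equally well, yet project differently to unused sentences:

```python
# Toy semantic values: names map to objects, predicates map to sets of
# objects (a crude stand-in for "function from objects to truth values").
names = {"a": "Jack", "b": "Jill"}
predicates = {"F": {"Jack"}, "G": {"Jill"}}

# The sentences (predicate–name pairs) the community has actually used.
tokened = {("F", "a"), ("G", "b")}

def standard(pred, name):
    """Standard rule: 'Fa' is true iff the referent of 'a' falls under 'F'."""
    return names[name] in predicates[pred]

def twisted(pred, name):
    """Twisted rule: agrees with the standard rule on tokened sentences,
    but reverses the truth value on sentences never tokened."""
    if (pred, name) in tokened:
        return standard(pred, name)
    return not standard(pred, name)

# Both rules fit the datapoints (tokened sentences) perfectly...
assert all(standard(p, n) == twisted(p, n) for (p, n) in tokened)

# ...but diverge on a sentence never uttered, e.g. "Fb".
print(standard("F", "b"), twisted("F", "b"))  # → False True
```

Since the data is exhausted by the tokened sentences, nothing in fit alone favours `standard` over `twisted`: the projection to unused sentences must be settled by something else.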

Something more is required. And if what determines best explanation is fit, strength and simplicity, it looks like simplicity must do the job.

 

NoR 4.1: Fixing linguistic interpretation.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

The non-representational world contains causes and functions. From these primordial elements arises the intentionality of perception and (contrastive) intention. Perceptions and intentions in the life of a single creature then ground beliefs and desires. What mediates between perception/intention and belief/desire is rationalization. If the states that will end up as vehicles of content are patterned in interesting ways, then truths about rationality allow us to derive succinct explanations of why the elements of those states represent what they do.

In this new subsequence of posts, I'm going to extend this to a "third layer" of representation, the representational properties of artefacts. I will be concerned specifically with linguistic representation: the representational properties of words, sentences and utterances.

Just as my starting point for the metaphysics of belief and desire was David Lewis's brief remarks on (mental) radical interpretation, I will base my story on Lewis's (much more developed) theory of linguistic representation. In the case of mental representation, the metaphysical story took this form: there is a space of abstract interpretations, which map states or stages of agents to contents. The job of the metaphysician of mental representation is to give an illuminating story about which of these abstract interpretations is correct. And then we "read off" facts about what a person believes or desires, for example: a person believes that p iff they are in a state which the correct interpretation maps to the ordered pair: <belief,p>. In the case of linguistic representation, the situation is similar. There is a space of abstract interpretations (what Lewis calls "languages") which map sentences to contents. The job of the metaphysician of linguistic representation is to give an illuminating story about which of these abstract interpretations is correct.

As well as this story, however, one needs to delimit the right space of abstract interpretations. Will the vehicles of linguistic content be individual utterances, utterance-types, sentence-types or individual words? We make this choice when we stipulate the domain of the abstract interpretations that we select between. In the same way, we might target the sentences of an idiolect, or a whole public language.

Framing the problem as involving interpretations that map public language sentence-types to propositions, Lewis offered the following account of correctness.

  • The correct interpretation of a set of public sentence-types Z of population P is one that, for each sentence-type S in Z, maps S to p iff there is a conventional regularity within P for someone to utter a token of type S only if they believe p (truthfulness), and a conventional regularity for someone to come to believe that p if they hear another utter a token of type S (trust).

The basis on which an interpretation is selected consists of (i) patterns connecting acts of uttering to agents' beliefs; (ii) the entrenchment of those patterns in communal beliefs and desires—whatever is involved in making a regularity a "convention" within a given population. The appeals to conventions, sentences and populations are something to which we return in a later post. For now, they are working primitives.
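Lewis's correctness condition can be pictured as a simple check of a candidate interpretation against the conventional regularities (a toy sketch, with invented data; the "contents" here are just labels standing in for propositions):

```python
# Toy model of Lewis's sentence-level correctness condition: a candidate
# interpretation maps sentence-types to contents, and is correct iff it
# assigns to each sentence exactly the content that figures in the
# community's conventional regularities of truthfulness and trust.

# Invented data: sentence-type -> the belief p such that speakers utter S
# only if they believe p (truthfulness) and hearers come to believe p on
# hearing S (trust).
conventions = {
    "It is raining": "rain",
    "Harry is a bachelor": "harry_unmarried_adult_male",
}

def is_correct(interpretation, conventions):
    """Correct iff the interpretation matches the conventional content
    for every sentence-type in the data."""
    return all(interpretation.get(s) == p for s, p in conventions.items())

good = {"It is raining": "rain",
        "Harry is a bachelor": "harry_unmarried_adult_male"}
bad = {"It is raining": "snow",
       "Harry is a bachelor": "harry_unmarried_adult_male"}

print(is_correct(good, conventions), is_correct(bad, conventions))  # → True False
```

Note what the sketch makes explicit: the correctness condition only constrains sentence-types that actually figure in conventional regularities, which is exactly the limitation observation 4 below turns on.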

People are not always honest, nor always trusting. Some sentences ("I did not do it") may be more often used dishonestly than honestly. Audiences may rightly fail to trust certain speakers, or ask for further evidence. Some sentences might be particularly prone to provoking distrust. Following Lewis, we may appeal to the attitudes of speaker and audience to restrict the regularities we are looking for to "serious communicative situations" where the speaker has no interest in deception, and the audience takes the speaker to be an authority. I take such restrictions as read: they will not make the regularities exceptionless, but they'll ensure that, generically, they hold.

Some further observations:

  1. There are many regularities of truthfulness and trust for any given sentence. I will utter "Harry is a bachelor" only if I believe Harry is unmarried. And someone hearing me utter this will likewise form that belief. Perhaps this regularity will not count as a convention (there are many regularities involved in driving—e.g. driving on the right for the first half of each hour—which are entailed by but are not themselves conventions). But prima facie, there are a multitude of candidates for being "the" content conventionally associated with a sentence.
  2. Among these, the strongest contents conventionally associated with sentences are not necessarily what we think of as what the sentences literally "mean". "He had three drinks and drove home" implies but does not state that the drinking preceded the driving. Speakers will utter this only if they believe in the temporal ordering, and audiences will come to believe the temporal ordering on hearing it. Absent a reason for thinking that only the regularity of truthfulness and trust involving the semantic content is conventional, we should be aware that Lewis's account doesn't lock us onto the literal meaning of sentences.
  3. Indexical sentences need special attention. A utters "I am sitting" while believing that A is sitting. B utters the same while believing B is sitting. C, hearing one or the other utter that sentence, will form one or the other belief—or if the speaker is masked, he might form the descriptive belief "the utterer of that sentence is sitting". Unless the account is tweaked (e.g. so that regularities of truthfulness and trust relate sentences to functions from contexts to contents, rather than contents directly), it is not the semantic content, but the "diagonal" content, that figures in regularities of truthfulness and trust.
  4. This account is silent on the representational properties of units of language below the level of a whole sentence. It is also silent on the representational properties of sentences that are never used, and so feature in no regularities connecting them to states of belief. So the scope of this account is constrained.

So this initial convention-based account leaves us very far from a full or satisfying account of the representational properties of linguistic artefacts. But it is the first staging post towards such an account. Conventionally associated content is linguistic source intentionality—a layer of representational facts that forms the raw materials for an interpretationist story about what grounds the (literal) meaning of words and more complex expressions. This interpretationist account is not found directly in the above account of correct sentence-level interpretation (or as Lewis puts it: in the account of what makes an abstract "language" be "in use" by a population). Instead, it is found in his account of what makes something a "grammar" for such a language.

Let us frame the task anew. The interpretations in the new space map words of the target public language onto objects (their "semantic values": that is, the interpretation assigns a reference or denotation to each lexical item). Each also includes rules that assign semantic values to more complex expressions (and, as a special case, to sentences). The simplest such rules are compositional, assigning contents to wholes as a function of the contents already assigned to their parts. Nevertheless, I'll call any member of this space of abstract interpretations a "compositional interpretation".

  • The correct interpretation of a set of public words W of population P (and the semantic rules for the expressions they form) is the compositional interpretation which best explains the language in use in P.

Here “language in use in P” is just another way of saying “correct interpretation of the public sentence-types of the population”—and is to be explicated as above in terms of conventional regularities of truthfulness and trust.

This move is relevant to each of the four notes above. In the following, I will assume that the best explanation of the language-in-use is indeed compositional, and that "not" is given its usual meaning.

  1. A compositional interpretation of “Harry is a bachelor” won’t assign it the content that Harry is unmarried. For (absent a deviant treatment of “not”) that would ultimately mean it had to assign to “Harry is not a bachelor” the content Harry is not unmarried. But that content doesn’t feature in regularities of truthfulness and trust: speakers are willing to utter the sentence when they believe Harry is unmarried and female.
  2. Similar constraints favour the compositional interpretation assigning intuitively “semantic” content over enriched “pragmatic” content.
  3. A compositional interpretation can use familiar means to associate indexical sentences with the "diagonal" contents that show up in the conventions, and also to account for the way that those sentences contribute as parts of larger wholes, delivering an account which has a place both for diagonal contents and for contextually varying, semantic contents for such sentences.
  4. Even if the “language in use” just assigns content to those sentences that are actually uttered by members of the population, the compositional interpretation can in principle assign content that goes way beyond this. The correct assignment of referents to words, and compositional rules, are grounded in the properties of the finitely many sentences that are actually in use. But by recombining those words and applying the compositional rules, the compositional interpretation assigns compositional content to sentence-types that have never been uttered.

Lewis’s metaphysics of word-reference, then, has two stages: a sentence-level story that works by linking sentence-types to attitudes (belief) and having the sentences inheriting the content. And the key to understanding this is to understand what makes something a convention. The second stage uses the first like a set of datapoints, and interpolates subsentential content as whatever “best explains” that data. In the story I gave about layers 1 and 2, proper functions grounded source intentionality which grounded the intentionality of belief and desire. In parallel in this story conventions ground raw sentential content which ground word content. Just as radical interpretation appeals to the selectional-ideology of “rationalizing” interpretations of an agent, applied to a base of facts about perceptual and intententional content, Lewis’s metaphysics of word content appeals to the selectional-ideology of “best explanation”, applied to much sparser set of base facts about sentential content.

Our setup defines an agenda.

  • Say something about the selectional ideology of “best explanation”.
  • Say something about the base facts that fix sentential content: conventions of truthfulness and trust.
  • Say something about how facts about words and populations factor into this story.

With the core of this approach to language on the table, we can move to more advanced themes.

  • Varying assumptions about the content inherited from belief.
  • Varying assumptions about the attitude conventionally linked to sentences.
  • Varying assumptions about the relative priority of thought and language.

 

NoR section 3 supplemental: functions II

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

In the last post, we reviewed a striking feature of the etiological theory of functions. What functions the organs or states of a creature have depends on their evolutionary history. So accidental creations—swamppeople or Boltzmann creatures—that lack an evolutionary history would lack functions. This is very surprising. It is surprising even in the case of hearts: a perfectly functioning heart, a duplicate of your own, on this account lacks the function to pump blood. It is shocking in connection with representation, where in conjunction with the teleosemantic account of perception and intention it means that accidental creations would not perceive or intend anything. Teleosemanticists typically advise that we learn to live with this, and one can coherently add these claims to my favoured metaphysics, but I would much prefer to avoid it. So here I look into this a little further.

Teleosemanticists emphasize an important foil to the swampperson/Boltzmann creature cases, one which challenges those of us who don't want to take the hard line. Swamppeople are accidental replicas of fully-functioning systems, but we need to consider also accidental replicas of malfunctioning systems. To convey the flavour, I offer three cases involving artefacts:

First, take a clockwork watch, one of whose cogs has had its teeth broken off. The faulty cog has a specific function within the watch, because that's the role it would play if the watch were working as designed. That role is the one the cog was selected to play—though it isn't functioning that way. But take a swamp duplicate of the watch. Is the smooth metal disk inside it *supposed* to play a certain role within the watch? It's far from obvious on what grounds we would say so. Or consider a variant: a broken watch/swampwatch pair where all the cogs are smoothed and mangled, so that it is impossible to reconstruct the "intended" functioning just from the intrinsic physical/structural setup. If we think that the parts of a badly broken watch still have their original functions (albeit functions they do not discharge, due to damage) while the replica swampwatch, in the absence of the history, does not, that would demonstrate that function is not just a matter of the intrinsic physical setup.

Second, two watches which have different inner workings (hence different functions for their cogs) might both malfunction and be so distorted that the resulting broken watches are duplicates of each other. But the functions of the cogs remain distinct. So, once more, functions can't be preserved under physical duplication. This case dramatizes why we can't merely "project" functions onto accidental replicas of damaged instances: in this case, such a procedure would assign different, incompatible functions.

Third, consider cases where a damaged instance of one artefact happens to be a physical duplicate of a fully functioning artefact whose purpose is different. Again, we’re left all at sea in determining which is the “normal pattern” for something that accidentally replicates both.

Each of the above points about artefacts carries over to biological systems—in principle, a damaged version of some merely possible creature could physically duplicate actually existing creatures. And so, with accidental creations, we are again at sea in determining whether they have the functions of the former or the latter.

These cases, it seems to me, do support the claim that the functions of a malfunctioning system are an extrinsic matter.

I take it the argument at this point goes as follows: if it is a wildly extrinsic matter what the functions of system S are, when S is malfunctioning, then “having function F” is a wildly extrinsic matter in all cases. And so it is no cost to the teleosemantic account that it says the same for the case of perception.

There are two ripostes here. The first is that while these considerations may motivate the claim that the functions of perceptual/motor states are wildly extrinsic, that may just show that they are not suitable candidates for being the ground of representation, since representation [we still maintain] is not wildly extrinsic in this manner. The second riposte is that it is not true, in general, that because some instances of a property are wildly extrinsic, all are. Consider the following property: either being a sphere, or being one hundred yards away from an elephant. A sphere possesses this property intrinsically. Whether I possess it depends on my location vis-à-vis elephants—a wildly extrinsic matter. I think that functions, and representation, may pattern just like this: being intrinsically possessed in "good" cases, but extrinsically possessed in cases of malfunction. At the least, it would take further argument to show that this is not the right moral to draw from the foils.

To this point, I have been considering the etiological account of function. This is not the only game in town. Alongside the etiological (historical, selection-based) accounts of functions sit systematic-capacity accounts. In a recent development of the basic ideas of Cummins, Davies characterizes a version of this view as follows:

an item I within a system S has a function to F relative to S’s capacity to C iff there’s some analysis A such that: I can F; A appropriately and adequately accounts for S’s capacity to C in terms of the structure and interaction of lower-level components of S; A does this in part by appeal to I’s capacity to F (among the lower-level capacities); and A specifies physical mechanisms that instantiate the systematic capacities it appeals to.

Now, just at the level of form, we can see two important aspects of the systematic-capacity tradition. First, functions are relativized to specific capacities of a high-level system. And second, it's a presupposition of the account that items can discharge their functions—"broken cogs" or "malfunctioning" wings will not have their "normal" functions when the system is broken. If we were to appeal to this notion of function within the teleosemantic approach, we would have no problem with the original swampperson case, for the swampperson would instantiate the same perceptual structure we do, and so the functions of its components would be shared. But the two features just mentioned are problematic. The first appears to allow an embarrassing proliferation of functions (standard example: the capacity of the heart to produce certain sounds through a stethoscope may lead to attributing noise-making functions to contractions of the heart). I do not see this as a major problem for the teleosemanticist. After all, one can simply specify the capacities of the sensory-perceptual system or motor-intentional system in the course of the analysis—the interesting constraint here is that we be able to specify these overall capacities of perception or intention in non-representational terms. The second feature is the real problem. Part of the appeal of the teleosemantic approach was that it could account for cases of illusion and hallucination, which involve malfunctions. Some cases of illusion can indeed be covered by the story, since a type might be functioning in a certain way in a system—e.g. being produced in response to red cubes in conditions C—even if a given token is not produced in response to a red cube, when conditions are not in C. But we can also have malfunctions at the level of types. In synaesthesia produced by brain damage, red experiences may—with external conditions perfectly standard—be produced in response to square-shaped things. This systematic abnormal cause doesn't undercut the fact that the squares are seen as red. An etiological theorist can point to the fact that the relevant states have the function of being produced in response to red things, and are currently malfunctioning. The systemic theorist lacks this resource.

A systemic-capacity account of functions would be an account of function independent of representation, and so fit to take a place as part of the grounds of layer-one intentionality. It also meets our desiderata: it is not wildly extrinsic, and it can underpin learned as well as innate functions, since what matters is the way in which the system is working to discharge its capacity, not how the system came to be set up that way. But given the points just made, it may not count as realism about "proper, normal" functions, if those are understood as allowing for wholesale malfunctioning of a state-type. We shouldn't overstate this: not all cases of illusion or hallucination [misperception or intentions not realized] are wholesale malfunction. But it does seem that wholesale, type-level malfunction is involved in at least some cases of misrepresentation.

I don’t think this blows the systemic theory of functions out of the water as an underpinning for layer one intentionality. The etiological theorists, we saw in the previous post, were forced to drastic revision of intuitive verdicts over swamppeople. And if we’re in the game of overturning intuitive verdicts, the systemic theorist might simply deny that a synaesthete’s red experiences are misrepresentations in the first place. They could fill this out in a variety of ways: by saying that a synaesthete’s chromatic quale now represents a thing as having the disjunctive property of being red-or-square; or they could adopt an individual relativism about red, so that to be red for x is to be apt to produce red-qualia in x, in which case the right description is that the synaesthete’s experience accurately represents the square as being red-for-them. It’s important to the credibility of this that one grants my assertion above that mundane cases of illusion involving abnormal environments or visual cues can already be handled by the systemic function account. Once we’re into more unusual, harder cases, the revisionism looks not too costly at all.

Ultimately, then, the systemic-capacity account does hold out the prospect of meeting all my commitments and desiderata simultaneously. And remember: my purpose is not to endorse any specific account of functions, but to explore the joint tenability of those commitments and desiderata.

While we were still discussing the etiological theory of functions, I noted that the etiological theorists had a decent case that in cases of drastic enough malfunction, functional properties are extrinsically possessed—it does seem that historical facts explain why we’re tempted to still say that the function of a watch cog is to turn adjacent cogs, even when smoothed off so it is no longer fit for that purpose. I also noted that it does not follow that functional properties are extrinsically possessed in all instances. We can emphasize this point by noting the possibility of a combined account of function that draws on both theories we have been discussing, thus:

An item I has the uber-function to F (within a system S and relative to capacity C) iff either I has a systemic function to F (relative to S/C) or it has the etiological function to have that systemic function to F (relative to S/C).

Just as with the disjunctive property I discussed earlier, creatures who are fully functioning—you, me and the swampman—will possess such properties independently of historical facts about the origin of our cognitive setup. But this account, unlike the pure systemic-function theory, provides for other creatures to possess the very same property in virtue of historical provenance. On this account, for example, a synaesthete's red quale may represent the very same red property that yours and mine do, since the state was evolutionarily selected to be produced by the presence of that property. This combined account is not committed to the revisionary implications of either of its components. So this again supports my contention that the commitments and desiderata of my deployment of functions can be jointly satisfied.
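The disjunctive structure of the uber-function definition can be rendered schematically (a toy sketch with invented labels; the two disjuncts are modelled simply as sets of functions an item possesses systemically or etiologically):

```python
def has_uber_function(item, F):
    """Uber-function to F: either a systemic function to F, or an
    etiological function to have that systemic function to F."""
    return F in item["systemic"] or F in item["etiological"]

# Fully functioning creatures (you, me, swampman): the systemic disjunct
# suffices, with no appeal to history.
swampman_quale = {"systemic": {"respond_to_red"}, "etiological": set()}

# A brain-damaged synaesthete's quale: the systemic function is lost, but
# the state was selected to respond to red, so the etiological disjunct
# kicks in.
synaesthete_quale = {"systemic": set(), "etiological": {"respond_to_red"}}

print(has_uber_function(swampman_quale, "respond_to_red"),
      has_uber_function(synaesthete_quale, "respond_to_red"))  # → True True
```

The sketch makes the division of labour explicit: the first disjunct handles accidental replicas intrinsically, while the second lets historical provenance cover wholesale malfunction.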

NoR section 3 supplemental: functions I

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

The account of source intentionality I offer appeals to a notion of a state (within a system) having a function to do this or that. Perceptual states have the function to be produced in response to specific environmental conditions. Those states can occur and represent what they do when the environmental conditions are missing (as a result of perceptual illusion or deliberate retinal stimulation by an evil scientist). So clearly it's important that in the relevant sense, something can have a function it is not currently discharging: that it can malfunction when it is in the wrong conditions or when interfered with. A state can have a function to X even when it's not functioning by X-ing.

Functions like this ("normal, proper functions") can look somewhat spooky. What makes it the case that the perceptual state is for representing red cubes, even in circumstances where it's being triggered by something other than a red cube? What grounds this teleology?

Giving an answer to this question is strictly supererogatory. Relative to my aims, functions can be a working primitive. It is even consistent with everything that has been said here that with these functions we hit metaphysical bedrock—that facts about functioning are metaphysically fundamental. That would still be a reductive account of the way that representation is grounded in a fundamentally non-representational world—but a non-representational world that includes unreduced teleological facts. There are those who are interested in reductive projects because of an antecedent conviction that everything is grounded ultimately in [micro-]physics, and for them hitting rock bottom at teleological facts about macroscopic systems would count as a defeat. That is not my project, and for me a reduction that bottoms out at this level would be perfectly acceptable. Arguably, it should count as a "naturalistic" reduction of representation—since biological theorizing is prima facie shot through with appeals to function [the function of a heart being to circulate blood around the body—whether or not it is currently so functioning].

My commitments are as follows. First, I am committed to disagreeing with those who would deny the existence of normal proper functions and analyze quotidian and scientific function-talk in some other fashion—realism about proper functions. Second, on pain of circularity, the relevant functions can’t be grounded in representational facts—independence from representation.

I add a pair of desiderata. These are not commitments of the account, since a version of the project could run even if they were denied. So: third, I hold that functions can be established through learning: not all functions are fixed by innate biology. If this were denied, then I could not say what I said in previous posts about the acquisition of behavioural or perceptual skills extending the range of perceivings and intendings available to a creature. While certain discussions would need revisiting if functions couldn't be acquired in this way, the overall shape of the project would remain.

Fourth and finally, I deny that representational facts are wildly extrinsic. I say that a perfect duplicate of a perfectly functioning human would perceive, believe, desire, and act—even if that duplicate were produced not by evolution but by a random statistical-mechanical fluctuation [a “Boltzmann creature”].

The job in what follows, then, is not to give you a theory of how functions are grounded, but to examine whether the commitments and desiderata are jointly realizable.

The first account of functions we’ll examine is the etiological account of function. This is the view that proper functions [of a type of state] are grounded in historical facts about how states of that type were selected. This is Neander’s favoured approach, and in her (1991) she characterizes the view as follows:

It is the/a proper function of an item (X) of an organism (O) to do that which items of X’s type did to contribute to the inclusive fitness of O’s ancestors, and which caused the genotype, of which X is the phenotypic expression, to be selected by natural selection.

The etiological account of proper functions meets the realism and independence commitments. But it violates both my supplementary desiderata. By tying functions to natural selection, it does not underwrite functions acquired in the lifetime of a single creature, and by making functions depend on evolutionary history, it is committed to denying that the states of Boltzmann creatures [or Davidson’s “swampman”] have functions in this sense—and given the teleosemantic account of layer-one representations, that is a violation of my desideratum.

The question is then whether some adjusted or extended variant of the etiological account of functions could meet the desiderata. Neander, for one, presents the account above as a precisification, appropriate for biological entities, of the vaguer analysis of the function of something being the feature for which the thing was selected. The vaguer analysis allows other precisifications: for example, she contends it also covers the functions to do something that elements of an artefact possess, in virtue of being selected by their artificer to do that thing. On a creationist metaphysics on which God is the artificer of creatures like us, we could still offer the teleosemantic story about the content of perception, but with this alternative understanding of what the function talk amounts to. I take it that the vaguer analysis also allows for selection within the lifetime of a creature—functions resulting from learning.

While this extension of the etiological account shows a way for it to meet the learning desideratum, it is no longer clear that an extended account of function would be independent of representational notions. That’s familiar enough for artefacts and the creationist underpinning: unpacking “selection by an artificer” will appeal to the intentions and beliefs of that artificer. We can still include appeal to such functions in our account, but they can’t come in to explain layer-one intentionality as I have characterized it; rather, they must come downstream of having earned the right, at layer two, to the artificer’s representations [alternatively, the creationist might posit a new layer zero, consisting of the primitive? representational states of God].

What’s most interesting to me, however, is how this plays out with selection-by-learning. It could be that in at least some cases, this kind of selection does depend on the intentions and beliefs of the one doing the learning. Let’s suppose that is so, in the interests of exploring a “worst case scenario”. This would indeed mean that such acquired functions wouldn’t be present at layer one. The moral may be that the idea of just two stages was oversimple. Better might be a looping structure: facts about evolutionary history ground innate functions, which provide a base layer of perceptual and motor representation. These are the basis for radical interpretation to ground the beliefs and desires of the agent. These belief and desire facts ground further acquired functions, which give a supplementary layer of perceptual and motor representation. Radical interpretation is then reapplied to ground a supplementary layer of beliefs and desires. The structure loops, and so long as every relevant representational state is grounded at some iteration of the loop, this is fine.

It is much harder to see how the etiological account can meet the no-wild-extrinsicality desideratum. In the literature, etiological teleosemanticists do not even make the attempt, but argue that we should learn to live with it. If you agree with them, you can stop worrying. I still worry.

So let’s take stock. I have explored the etiological theory of functions, not because I feel the need to provide a metaphysics of functions—after all, it’s fine by me if this kind of teleology is metaphysical bedrock. Rather, I want to test whether the commitments and desiderata of my deployment of functions are jointly realizable. One account that has been given of functions is the etiological one, and I have suggested that some changes [the introduction of a looping structure] would be needed if the learning desideratum were to be met in the context of that account. And, of course, the no-wild-extrinsicality desideratum is violated.

NoR 3.5: Relata of rationalization

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

In previous posts covering sensory/perceptual states, and intentional/motor states, I’ve provided a teleosemantic story of their layer-1 representational properties. The task now is to move from this to characterizing the base facts for radical interpretation, the “courses of experience” (E) and “dispositions to act” (A) that appeared in my formulations of that account of layer-2 representation. I’m not attached to vindicating that particular wording: what we are looking for is a refined proposal that’ll do the right job. More precisely:

  1. The items we substitute for (E) and (A) really do stand in rational relations to beliefs and desires.
  2. The resources developed in the last few posts enable us to characterize the formulations substituted for (E) and (A).

A disclaimer right at the start: I am not going to discuss here the possibility that the teleosemantic contents may fail to be the right relata for rationalization because they are in some sense “nonconceptual” in contrast to the “conceptual” contents of belief and desire. The teleosemantic story determines truth-conditional content, and radical interpretation seeks to say what it takes for beliefs and desires to have similar truth-conditional content. Issues of concepts (in the relevant, Fregean, sense) are not something I’ve broached so far. I’m aiming to maintain that track record.

I will start with the appeal to “dispositions to act”. Our discussion of options in the last post has in effect already introduced the theories and themes that are required. The account put forward there was that our options are intentions: when framing a decision problem, the item we assess for expected value is the formation of an intention, and moreover an intention that has a function to bring about states of the world. Various “high level” states we might call intentions in natural language do not qualify—it’s perfectly ok to say that Sally had the intention to run all the way to the bottom of the hill before she fell over, or that Suzy intended to insult Sally. But states with those kinds of high-level content do not have a function to produce in the sense set out earlier—they can fail to be satisfied in the absence of any “malfunction”, if Sally or Suzy have false beliefs about their abilities or their target. Sally’s options, in a specific context, are all the intentions she might form in that context. The option Sally enacts is the intention she forms. On this account, what beliefs and desires rationalize is the formation of certain intentions, or better: the contrastive formation of one intention out of all the others possible for the agent.

This doesn’t quite pin down the characterization of the base facts, since there can be plenty of intentional/motor states with functions to produce states of the world which are not plausible “options” for an agent—they characterize the fine details of motor control to which the subject typically has no access. In the cases that concern us, these subpersonal states are triggered by a person-level intention, but the relation they bear to beliefs and desires is purely causal, not rational. So while this account of options tells us that they are to be found among the teleosemantically grounded states, it doesn’t yet tell us which among these states count as options. To complete the account, I suggest we appeal to a causal-role characterization: among the teleosemantically grounded intentional/motor states, options are those which trigger other intentional/motor states with functions-to-produce but which are not themselves triggered by such states (perhaps a state can sometimes be triggered by another intentional/motor state, and sometimes comes into being without such triggering: this will suffice for it to count as an option in the relevant sense).

With this final piece in place, the proposed substitution for “dispositions to act” comes into view. Our interpretee, at a particular place and time, has an array of options (possible intention-formations in the sense just defined). She forms one of the intentions in this set and not the others. The formation of this intention triggers further downstream intentional/motor states which cause and control bodily movements on the part of the agent. The belief/desire interpretation should attribute beliefs and desires to the agent at that place and time that rationalize this contrastive intention-formation. But of course, rationalizing a single intention-formation episode is not the be-all and end-all: a belief/desire interpretation of Sally (attributing to her beliefs and desires at arbitrary places and times) needs to (optimally) rationalize her contrastive intention-formation dispositions with respect to every point. If we want a more accurate labelling than “dispositions to act” we might go for: “dispositions for contrastive intention-formations”.

(Aside. The decision-theoretic setting and the appeal to beliefs and desires rationalizing options makes this all sound very internalist, and perhaps more suited to a theory built on structural rationalization rather than substantive rationalization. But there’s nothing inconsistent in using a decision-theoretic formalism for substantive rationality: the “value” functions can report not subjective degree of desire but objective value (or agent-relative value that does not match the same agent’s desires). The “probabilities” are equally open to a variety of interpretations. So the framework is extremely flexible. The appeal to a belief/desire interpretation that “rationalizes” options just expresses the presupposition that beliefs and desires are among the determinants of the probability and utility—which may be because the relevant probabilities are indeed degrees of belief, or because degrees of desire are at least a factor in determining value, or more broadly via a role for beliefs in determining what reasons you possess, or for personal projects to determine a wellspring of value that may vary psychology by psychology. Amidst all this flexibility, the very form of the calculations of expected value and the way they relate to options in Jeffrey’s formalism (and various related ones, such as causal decision theory à la Joyce) means that there is no contribution to expected value from contingencies that are inconsistent with the proposition that specifies the option. So the underlying drive to characterize options in a way that makes them certain (or better: probability 1, however that is to be interpreted) when pursued is baked into the form of the theory of rationalization, independently of interpretation. And while it’s not inevitable that we respond to that by following Hedden and identifying options with intentions, that account retains its appeal even when we move beyond the structural-rationalization interpretation of the formalism. End Aside.)
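The structural point made in the aside can be displayed schematically. This is my own notation, purely illustrative:

```latex
% Jeffrey-style expected value of an option A (schematic):
V(A) = \sum_{w} P(w \mid A)\,\mathrm{des}(w)
% Any world w inconsistent with A has P(w \mid A) = 0, and so
% contributes nothing to V(A); conditional on being pursued, the
% option itself is certain: P(A \mid A) = 1.
```

This is just the observation that conditionalizing on the option screens off all incompatible contingencies, whatever interpretation the P and des functions are given.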

Radical interpretation also requires that the attribution of belief and desires at each point mesh with one another; specifically, that the belief changes imputed be rational responses to the evidence made available by experience. This is where appeal to “course of experience” came in.

We have at our disposal teleosemantically grounded representational facts about perceptual states. Many of those states will be subpersonal intermediaries between retinal stimulation and the output of the perceptual system. In parallel with the discussion of intention above, I suggest we concentrate on perceptual states characterized by a terminal causal role: those which do not themselves trigger further perceptual processing.

There’s another parallel with the discussion of intentions. There are plenty of states that we would ascribe in natural language as seeings, hearings, and so on, which involve high-level content. We might talk of hearing the car return, or seeing that the dishwasher is finished. But clearly the content ascribed in such cases can be false even when there is no perceptual malfunction, but simply false beliefs. So even absent malfunction, such states need not be responses to worldly conditions matching their content, and that shows that they are not states whose contents are teleosemantically grounded in the sense I have outlined. So it is a commitment of this framework that the relata of evidential rationalization, in the sense in which these appear as base facts in radical interpretation, need to be low-level, not cognitively penetrated, perceptual states.

(Aside: I am not committed to denying that there are perceptual states with high-level content, any more than I am committed to denying that there are intentions (planning states) with high-level content. And I can allow that these stand in rational relations to beliefs and desires and lower-level perceivings; one might include the assignment of content to such states as an extra item in the interpretation selected by radical interpretation. But in each case, I am committed to denying that high-level states are the only things that stand in such relations to beliefs and desires—the critical thing, if the account is to be applicable without further epicycles, is that there be a layer of low-level content in perception that rationally constrains the evolution of belief, and likewise, that beliefs and desires rationally constrain a layer of low-level content. It’s worth noting, also, that the high-level/low-level boundary need not be fixed. I think it’s plausible that response-functions can be acquired. Just as we can expand the range of intentions which have functions-to-produce by internalizing and making automatic the skillful execution of complex routines, we can expand the range of perceptions that have response-functions by internalizing and making automatic the transition whereby they are triggered by more paradigmatically low-level perceptual states. The key thing, in both cases, is that the internalized routines are executed independently of what the agent believes or desires—a sufficient condition for this being the case would be the capacity to figure in perceptual illusions. End Aside.)

Suppose Sally is perceiving a yellow banana (better: is seeing that there is a yellow crescent-shaped thing in front of her). If we were to pursue the analogy to the case of intention fully, then we would suggest that the relata of rationalization, the “experiential evidence”, is not to be identified with the content of this perception:

  • there is a yellow crescent shaped thing to the front.

but instead the following:

  • I am undergoing a perception with the content: that there is a yellow crescent shaped thing to the front

This would be the analogue of saying that the primary relata of practical rationality is not the action described in the content of an intention, but the intention itself.

The proposal has some independent appeal. The fact about perception truly describes both someone who is viewing a normal banana in normal conditions, and someone who is viewing a white plastic banana under yellow lighting. It is something that could be straightforwardly taken up into the beliefs of both parties, even if they knew their respective situations. This “common factor” view of the incremental evidence experience provides across the two cases has obvious attractions in the context of radical interpretation, where the aim is to identify some “evidence” independently of layer-2 facts about belief and desire.

For contrast, consider a dogmatist view on the increment of evidence provided by experience. On this view, we are justified in updating directly on the content there is a yellow crescent shaped thing to the front absent certain defeaters and undercutters. One such defeater could be: that one believes background conditions to be abnormal. So in effect, rationality would then impose a disjunctive constraint on subjects who have an experience with the content that there is a yellow crescent shaped thing to the front. Either they come to believe that content, or they have (already?) a belief that background conditions are abnormal. This dogmatist theory of evidence is perfectly compatible with radical interpretation, and doesn’t require anything of layer-1 intentionality that we have not provided for. Nevertheless, for convenience and concreteness, I’ll work with the common-factor account.

There is a question we face in characterizing the perceptual relata of rationalization that has no obvious analogue for intention. The content of experience seems rich and analogue—I perceive a subtly varying colour profile of greens and yellows when I look at a tree. We might suppose that the content of this experience involves a particular number of perceived leaves, just as a picture may involve a particular number of painted leaves. But resource-constrained agents like you and I don’t update on all this information. I form the belief that the tree has lots of leaves, and that they range from green to yellow. But—for example—I wouldn’t take a bet at even odds that there were exactly 148 leaves on the tree, even if the totality of the facts perceptually represented by me now entails this. So the suggestion is this: the transition from terminal perceptual states to the evidence actually updated upon is lossy. And so one cannot simply characterize that incremental evidence as the totality of all the terminal perceptual states.

At this point, we are again back into questions of cognitive architecture that are ultimately empirical. It may be that there is a filtering within the perceptual system (by attention, say) which outputs some special set of perceptual states. Only the states with this distinctive causal role are passed on to central cognition (though other terminal states in the system may make a difference to perceptual phenomenology). But equally, it may be that the architecture is indeed lossy as described. There’s no a priori reason, I think, to think our perception works one way rather than the other.

The right response is the following. Epistemological theory, in the general case, should not solely specify a relation between belief change and a proposition or propositions on which one updates and which one directly incorporates into belief (as it would, for example, if we took the Bayesian theory of conditionalization to be the right format for a full theory). Instead, epistemological theory should relate a belief-change to the full content of the experience, without assuming that the full content is taken up as belief. An extra parameter is needed: the rational constraint on belief change is that one updates on those aspects of one’s experience to which one stands in the right functionally characterized “uptake” relation. In that case, if q is the full content of Sally’s experience, then the interpretation of Sally will be constrained by a complex condition: for Sally to undergo a rational belief change, there must be some p such that (i) Sally changes her beliefs by updating on p; (ii) p is entailed by q (/the fact that Sally has an experience with content q); (iii) Sally stands in the right functional relation to p—e.g. attending to the p-aspect of her experience. Element (i) could still be cashed out in a Bayesian way, if one wished. Element (ii) keeps us honest by requiring that the story doesn’t go beyond facts given in experience. Element (iii) will be tailored in different ways to different perceptual architectures.
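Schematically, the complex condition can be set out as follows (the notation is mine, purely illustrative):

```latex
% Sally's belief change from context c to c* is rational iff
% there is some proposition p such that:
%   (i)   P_{c^*}(\cdot) = P_{c}(\cdot \mid p)   % she updates on p (here cashed out Bayesianly)
%   (ii)  q \vDash p                             % p is entailed by the full experiential content q
%   (iii) \mathrm{Uptake}(\mathrm{Sally}, p)     % she stands in the functional "uptake" relation to p
```

The Uptake predicate is the placeholder for whatever functional relation (attention, say) the correct perceptual architecture supplies.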

Having provided for the full range of cases, for reasons of simplicity and concreteness, going forward I will assume that Sally’s sensory-perceptual architecture already does the work of selecting, so that element (iii) is vacuous for her.

This leaves us with the following picture. The base facts about Sally’s “dispositions to act” are facts about her (low level) intention-formations, against the background of all the other (low level) intentions she might form. The base facts about Sally’s “courses of experience” are the fact that she has an experience, the relevant part of the content of which is that q. The rational constraints include a broadly decision-theoretic constraint that beliefs and desires in circumstances c determine probabilities and values which rationalize the dispositions to form intention x (rather than w,y,z) in c; and also a broadly Bayesian constraint that Sally’s change in belief between a pair of contexts c/c* (in which she undergoes experience e) is by conditionalization on the proposition that part of the content of e is that q.

NoR 3.3: Intention

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Neander gives a theory of the representational contents of sensory-perceptual systems. She is explicit that this account is aimed to ground “original intentionality” in contrast to “derived intentionality”, where

“derived intentionality [is defined] as intentionality derived (constitutively) from other independently existing intentionality, and original intentionality [is defined as] intentionality not (constitutively) so derived”.

Neander’s view is that original intentionality belongs at least to sensory-perceptual states “…and might only belong to them”. On the contrary, I want to argue that certain other states have almost exactly this sort of original intentionality.

I will assume that our agents’ cognitive architecture includes an intentional-motor system, which takes as input representations from the central cognitive system (intentions to do something), and outputs states to which we may have limited or no conscious access, but which control the precise behaviour needed to implement the intention. I suggest that original intentionality belongs also to these intentional-motor states, and the metaphysics of this sort of representation is again teleoinformational. Indeed, it will be a mirror-image of the story of the grounding of representation in sensory-perceptual states—the differences traceable simply to the distinct directions of fit of perception and intention.

Our starting point is thus the following:

  • An intentional-motor representation R in intentional-motor system M has [E occurs] as a candidate-content iff M has the function to produce E-type events (as such) as a result of an R-type state occurring.

This time, representation is analyzed as a matter of a production-function rather than a response-function, but this simply amounts to reversing the direction of causation that appeared in the account of perception.

We can illustrate this again with a non-biological example: a collection desk at a warehouse shop. Every shopper has a half-ticket-stub, and as their goods are brought up from storage, the other half of their ticket is hung up on a washing line in front of the desk. The system is functioning “as designed” when hanging up half of ticket number 150 causes the shopper with ticket number 150 to move forward and collect their goods (the causal mechanism is the shopkeeper collecting the goods, bringing them to the desk, and hanging up the matching ticket). This is a designed system in which certain states (of tickets hanging on the line) have production-functions.

A perceptual state has many causal antecedents, and many of these causal antecedents are intermediaries that produce the state “by design”. Just so, an intentional state has many causal consequences, many of which are produced “by design”. An intention to hail a taxi (or even: to raise and wave one’s arm) will produce motor states controlling the fine details of the way the arm is raised and waved, as well as the bodily motion of the arm waving and finally the taxi being summoned. Again, the more proximal states produced “by design” are means to an end: producing the most distal state. To capture this, we mirror the account given in the case of perception:

  • Among candidate contents E1, E2 of an intentional-motor state R, let E1>E2 iff in M, the function to produce E1-type events as a result of an R-type state occurring is a means to the end of producing E2-type events as a result of an R-type state occurring, but not vice versa.
  • The content of R is the >-minimal candidate content (if there are many >-minimal contents, then the content of R is indeterminate between them).
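The selection of the >-minimal content can be given a toy illustration (this sketch is my own, not from the post; the candidate contents and the means-end relation are stipulated for the example):

```python
# Toy sketch of selecting the >-minimal candidate content(s) of an
# intentional-motor state, given an asymmetric means-end relation.

def minimal_contents(candidates, is_means_to):
    """Return the >-minimal candidates, where E1 > E2 iff producing
    E1 is a means to the end of producing E2, but not vice versa."""
    def above(e1, e2):  # E1 > E2
        return is_means_to(e1, e2) and not is_means_to(e2, e1)
    # E is >-minimal iff no candidate E2 sits strictly below it,
    # i.e. there is no E2 with E > E2.
    return [e for e in candidates
            if not any(above(e, e2) for e2 in candidates if e2 != e)]

# Stipulated toy chain: motor commands are a means to arm motion,
# which is in turn a means to the distal outcome (sphere grasped).
chain = {("motor command", "arm motion"),
         ("arm motion", "sphere grasped"),
         ("motor command", "sphere grasped")}
result = minimal_contents(
    ["motor command", "arm motion", "sphere grasped"],
    lambda a, b: (a, b) in chain)
# result == ["sphere grasped"]: the most distal candidate survives,
# matching the green-sphere example discussed in the text.
```

On these stipulations the proximal candidates are screened out and the distal outcome is the unique >-minimal content; had the chain contained two incomparable endpoints, both would be returned, modelling the indeterminacy clause.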

Suppose that I intend to grasp a green sphere to my right, and suppose that the vehicle of this representation is a single state of my intentional-motor system (a state whose formation will trigger a good deal of further processing before bodily motion occurs). What grounds the fact that that token state represents what it does? On this account, it is because in the evolutionary history of this biological system, states of that type produced a hand prehending and attaching itself to some green sphere to the right of the perceiver—and this feature was selected for. Though there were other causal consequences of states of that type that were also selected for, they were selected as a means to the end of producing right-green-sphere-graspings.

I will be using this teleoinformational account as my treatment of the first-layer intentionality of action. So when we see appeal, in radical interpretation, to rationalizing *dispositions to act* given the experiences undergone, the “actions” are to be cashed out in terms of teleoinformational contents.

The focus here has been the contents of certain mental states—intentions, motor states and the like. Typical actions (raising an arm, hailing a taxi, etc) in part consist of physical movements of the body, so I haven’t yet quite earned the right to get from Sally-as-a-physical-system to Sally-as-acting-in-the-world. Further, there’s nothing in the account above that guarantees that states with content grounded in this way are personal-level and rationalizable, rather than subpersonal and arational. The exact prehension of my hand as it reaches for a cup is controlled, presumably, by states of my nervous system, and these states may have a function to produce the subtle movements. But the details are not chosen by me. I don’t, for example, believe that by moving my fingers just so I will grasp the cup, and hence form a person-level intention to move my fingers like that. Rather, I intend to grasp the cup, and rely on downstream processing to take care of the fine details.

So there’s work to be done in shaping the raw material of first-layer intentionality just described into a form where it can feed into the layer-2 story about radical interpretation that I am putting forward. This may involve refining the formulation of radical interpretation in addition to focusing in on the correct contentful states. It’s open to us to question whether actions are the things that need to be rationalized, or whether that’s just a hangover from the (behaviourist?) idea that overt bodily movements form a physicalistic basis for radical interpretation. Readers will by now spot, however, that these are just salient examples of the same point we saw earlier with perception and experience. In both cases, we need to show how the material grounded in the teleosemantic account of sensory/perceptual and intentional/motor states allows us to characterize the relata of the rationalization relation at the heart of radical interpretation.

In this post and the previous, I’ve given you my story about the foundations of layer-1 intentionality: in one case directly lifted from the teleosemantic tradition; in the other, a mirror-image adaptation of it. Three items now define our agenda for the rest of this subseries.

  1. We need to explain how the raw materials are shaped into an account of the base facts for radical interpretation: the relata of substantive rationality.
  2. As flagged in the first post in this subseries, we need an account of what our interpretee’s options were, those not taken as well as those taken, since we rationalize choices or actions against a backdrop of available options.
  3. The appeal to “functions” of elements of biological systems (specifically, sensory-perceptual and intentional-motor) is a working primitive of this account. That will continue to be the case, but I want to at least briefly look at the problems that may arise, to reassure ourselves that the account won’t be dead on arrival.

NoR 3.2: Experience

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

The job of the next few posts is to fill out the details of how source-intentionality is to be grounded. And as flagged, here I will draw on Karen Neander’s recent defence of a teleosemantic account of sensory-perceptual content.

This post lays out Neander’s approach to perceptual content. As mentioned, Neander concentrates on the representational content of sensory-perceptual states—so ones that occur within a particular cognitive system. Her story comprises two steps. The first is the following:

  • A sensory-perceptual representation R in sensory-perceptual system S has [E occurs] as a candidate-content iff S has the function to produce R-type events in response to E-type events (as such).

So let’s unpack this. The key notion here is the appeal to the function of something within a certain system. It’s this appeal that makes the account part of the teleosemantic tradition. Now, there’s a lot that could be said about what grounds facts of the form “x has function y in system z”. All we need, for now, is the assumption that these are “naturalistically respectable” and grounded prior to and independently of any representational facts. So, for example, a theological account of functions, whereby x has function y in z iff God designed x to do y in z, is out. More subtly, a stance-relative account of functions, whereby the function of x in z, for a theorist t, depends on t’s projects and aims, is also out. But an etiological theory of functions, whereby x has function y in z iff xs in z were evolutionarily selected to do y, is an option. The details matter, of course (the details always matter) but for now, we’ll treat functions as a working primitive.

Neander’s proposal is that once we see Sally’s sensory-perceptual system as containing states with a variety of functions, it is response-functions that hold the key to analyzing perceptual content. The system is functioning “as designed” when a certain worldly event-type causes a specific state-type to be tokened within it. Consider the following non-biological example. Runners passing a checkpoint throw a tab with their number into a bucket. The system is functioning “as designed” when runner number 150 passing the checkpoint causes there to be a tab with 150 inscribed upon it in the bucket (the causal mechanism is the runner throwing a random tab from those on a loop on their belt into the bucket). Of course, things can go wrong (the runner can forget to throw the tab, they can miss the bucket, they may have been given a wrongly-inscribed tab at the start) but those would be cases of the system malfunctioning.

Designed systems, at least, can have “response-functions”. In such cases it’s very natural to think that it’s in virtue of the response-function that the contents of the bucket record or represent the runners who have passed. Neander’s contention is that biological systems with etiological functions can work analogously. Because the grounding of the relevant functions doesn’t require intentions or design but just a pattern of selection in evolutionary history, this is a way of grounding such representation in non-representational facts.

Now, one famous challenge to naturalistic theories of representation (especially perceptual representation) was to distinguish those items in the causal history of an episode of perception which figure in the content of the perception from those that do not. For example, a red cube observed from a given angle causes a certain pattern of retinal stimulation, which in turn causes a certain state R to obtain in the sensory-perceptual system. The perception has a content that concerns red cubes, not retinal stimulations. Yet it’s perfectly true that part of a well-functioning sensory-perceptual system is that it responds to retinal stimulations of a certain pattern by producing R. It’s also true that the well-functioning system produces R in response to red cubes at the given angle; indeed, within the system, the response to retinal stimulation is the means by which it responds to “distal” red cubes. But we had better not analyze the content of R as everything the perceptual system has the function to produce R in response to, else we’ll lump proximal and distal events together. This is why the gloss above talks of “candidate contents”, not “contents” simpliciter. Neander appeals to an asymmetric means-end relation in the functioning of the system to narrow things down. Here is my reconstruction of her proposal:

  • Among candidate contents E1, E2, let E1>E2 iff, in S, the function to produce R-type events as a response to E2 is a means of producing R-type events as a response to E1, but not vice versa.
  • The content of R is the >-maximal candidate content (if there are several >-maximal candidate contents, then the content of R is indeterminate between them).
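The selection rule can be sketched computationally. In the sketch below (my own toy reconstruction, not Neander’s notation), the determinate content is the candidate that sits at the end of the means-end chain, i.e. the one that is never a mere means to responding to some further candidate; if several candidates tie, the content is indeterminate between them.

```python
# Toy reconstruction of the selection rule among candidate contents.
# means_to[(a, b)] == True means: the function to respond to a-type
# events is a means to responding to b-type events (so b outranks a).

def content_of(candidates, means_to):
    """Return the unique end of the means-end chain, or, if several
    candidates are ends, the list of them (content indeterminate)."""
    ends = [c for c in candidates
            if not any(means_to.get((c, other), False)
                       for other in candidates if other != c)]
    return ends[0] if len(ends) == 1 else ends

# Illustrative candidates for the red-cube case (my labels):
candidates = ["retinal stimulation",
              "red surfaces facing subject",
              "red cube to the right"]
means_to = {
    ("retinal stimulation", "red surfaces facing subject"): True,
    ("retinal stimulation", "red cube to the right"): True,
    ("red surfaces facing subject", "red cube to the right"): True,
}
print(content_of(candidates, means_to))  # "red cube to the right"
```

Responding to the proximal items (stimulations, facing surfaces) is a means to responding to the distal red cube, so only the distal candidate survives as the determinate content.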

Suppose that I perceive a red cube to my right, and suppose that the vehicle of this representation is a single state of my sensory-perceptual system (presumably a state produced after a fair degree of processing has gone on). What grounds the fact that that token state represents what it does? On this account, it is because in the evolutionary history of this biological system, states of that type were produced in response to the presence of red cubes to the right of the perceiver, and this feature was selected for. The process by which the states were produced includes intermediary objects and properties, and the sensory-perceptual state was produced in response to those no less than the red cube (perhaps the intermediary states include three mutually orthogonal red surfaces orientated towards the subject, a certain pattern of retinal stimulation in the subject, etc). However, the function to respond to such intermediaries is a mere means to the end of responding to the presence of *red cubes to the right*.

I will be using Neander’s theory as my account of the first-layer intentionality in perception. When we see appeal, in radical interpretation, to rationalizing dispositions to act given the *experiences* undergone, the “experiences” are to be cashed out in terms of teleoinformational contents. As I mentioned in the last post, there’s further work to be done in turning these representational raw materials into the kind of base facts that radical interpretation needs—identifying the relata of rationalization. How do we get from the content of possibly subpersonal representational states of the sensory-perceptual system, to the content of experience, and ultimately to the impact of that experience on rational belief? This will be addressed in future posts.

NoR 3.1: Source intentionality

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Back in my first post, I set out an account of the nature of representation (or at least: core kinds of representation) that broke the task into three layers, each building on the one below. I then started in the middle, by setting out a story about how representational properties of beliefs and desires are grounded (layer 2 representations). That story was radical interpretation, and the key thesis was this:

* The correct interpretation of an agent x is that one which best rationalizes x’s dispositions to act in the light of the courses of experience x undergoes.

To further unpack that story, I distinguished between the base and the selectional ideology. The base consisted of what our interpretee (Sally) is disposed to do, given the various courses of experience she might undergo (a further, somewhat more implicit, set of base facts concerns the reidentification of Sally across worlds and times). The selectional ideology is then whatever else is needed to pin down a correct interpretation from these base facts. The key notion here is that of “best rationalization”.

At that point, I set the question of the grounding of base facts aside, since the immediate problem that we encountered (the bubble puzzle) would arise on any plausible story about these facts and their nature. What we needed to get clear about to address this problem, I argued, was the other element of the story—rationalization. The key to resolving the bubble puzzle was to set aside a traditional construal of “rationalization” as picking out only formal, structural rationality. Instead, the needed selectional ideology is “substantive” rationality, which makes a broader appeal to what particular contents Sally ought to believe, given her evidence (what she is justified in believing), and how she ought to act, given a set of beliefs and desires. We then moved to investigate the consequences of that theoretical setting, showing that radical interpretation offers quite specific predictions and explanations about the denotations of concepts of various types. Substantive rationality was again central here, since normative assumptions in epistemology or practical reason always played a key role, alongside assumptions about the internal cognitive architecture of the agents in question.

While rationalization has to this point played the starring role, it is only one half of the resources needed to get Radical Interpretation up and running. The base facts, as well as the selectional ideology, need to be in place. Indeed, Radical Interpretation can be viewed as a story about how one set of representational facts is “transformed” to bring about another. So we need those “source” representational/intentional facts to be in place so we have something to work with (I am here borrowing Adam Pautz’s nice terminology). That is why I think of Radical Interpretation as a story about a second layer of representation, built on and presupposing a more primordial kind of representation: that of perception and action.

My formulation assumes that facts about action and experience are representational facts. I think the true, layered structure of radical interpretation has been hidden from view by equivocation on this point. Both action and experience are closely related to other nonrepresentational facts—facts about motions of the body, and about patterns of sensory (e.g. retinal) stimulation. Just as there is a possible project—a cousin of my own—which reads “rationalization” as thin, structural rationalization, and seeks to develop radical interpretation on that basis, there is another possible project which seeks to develop radical interpretation with only non-representational facts about behaviour and sensation in the base. We have already seen the primary obstacle to the former approach—the bubble puzzle. This time, I’ll reverse the order, and first of all develop the positive account of first-layer intentionality which would underpin radical interpretation as I set it out, and only afterwards consider the relative attractions of an account built on the thinner, non-representational base. From now on, therefore, I will assume that to give a full account of radical interpretation, we need a prior and independent account of the first-layer, source intentionality of experience and action.

At this point, there is a fork in the road. Well, maybe more than a fork: we could head off-road in several directions, but here are what I see as the two theoretical highways.

  • We could, following Adam Pautz’s lead, pair radical interpretation with a non-reductive account of the intentionality of experience and action. More specifically, Pautz contends that we should take the intentional features of phenomenology of conscious experience as a metaphysical primitive.
  • We could preserve the original ambition to reduce representational facts to the non-representational. Having reduced belief/desire intentionality inter alia to representational facts about experience, we then stand in need of another reductive story about this “source intentionality”. This story will have to be prior to and independent of facts about belief and desire representation, so that we don’t go round in circles. And it won’t be radical interpretation, since that shot has already been fired.

My proposal is that we go for the second of these options. More specifically, I intend to build on Karen Neander’s work. This is an account of representation that sits squarely in a tradition often opposed to radical interpretation—teleosemantics. But Neander explicitly presents her theory just as an account of the intentionality of experience, and the narrowing of focus (setting aside the analysis of representational facts about belief and desire for another day) helps her defend the account against objections that bite against other views in that tradition. This looks like a match made in heaven! Neander has a story about what grounds (some) layer-1 representational facts. The Radical interpreter has a story about how layer-2 representational facts emerge from the layer-1 facts. Plug and play, and the job is done. (Well, actually, it’s not going to be that simple, as we’ll see).

First issue. One thing that stands out right from the start—and afflicts Pautz’s proposed primitivism as well as Neander’s reductionism—is that both accounts are developed as an account of the representational properties of experience. But radical interpretation, as I developed it, includes in its base the representational features of action, as well. So having a story about the representational features of perception is not enough—some extension or supplementation is called for. And, for the case of Neander’s treatment of perception, I’ll be providing the required extension in this subsequence of posts.

Second issue. Going back to the case of experience, even if we ground the representational content of some experiential states teleosemantically, it’s not automatic that those states and their contents are suited to play the role demanded by radical interpretation. For example, I see a chicken with nine spots. My visual system may represent nine spots, but I do not attend to or count the spots. I may be unsure how many spots the chicken has. In this case, some of the representational content of my visual system has not been taken up by the wider cognitive system. This is a place where the radical interpreter must tread with care. On the simplest Bayesian models of rationality, for example, the “evidence” we need to extract from layer-1 intentionality is something on which we update by conditionalization, and so, post-update, we are certain that the world is that way. On that model of rational update, the contents of the perceptual states are not suitable relata of the rationalization relation; they do not play that particular “evidence” role (and this is even before we consider cases such as perceptual illusion and the like). Now, of course, the lesson to draw from this may just be that the simplest Bayesian models are wrong. Be that as it may, it illustrates that once we have layer-1 representation in place, we have further work to do to integrate it with the layer-2 story we’ve seen so far. (There are analogous issues to consider also on the action/intention side, where the output of a system of rational decision is presumably much coarser than the detailed content of motor states.)
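The point about conditionalization can be made concrete. The worked example below is my own toy illustration (the numbers and world-labels are invented): updating by conditionalization on a proposition leaves the agent certain of it, which is why a perceptual content the agent remains unsure of—like the chicken’s exact number of spots—cannot straightforwardly play that “evidence” role.

```python
# Toy illustration: conditionalization makes the evidence certain.
# Prior probabilities over worlds distinguished by spot-count
# (illustrative numbers, not from the post).

def conditionalize(prior, evidence):
    """prior: dict mapping worlds to probabilities (summing to 1);
    evidence: the set of worlds compatible with the evidence.
    Returns the posterior: renormalize within the evidence set,
    zero out everything else."""
    total = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0)
            for w, p in prior.items()}

prior = {"8 spots": 0.3, "9 spots": 0.4, "10 spots": 0.3}

# If the visual system's content "nine spots" were the evidence,
# the post-update agent would be certain the chicken has nine spots:
post = conditionalize(prior, {"9 spots"})
print(post["9 spots"])  # 1.0
```

But the agent in the example is *not* certain how many spots the chicken has, so the fine-grained perceptual content is not what she conditionalized on.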

Third issue. Suppose we had a fix (somehow or other) on layer-1 facts about what an agent is experiencing, and what she is doing. Suppose we had succeeded in getting this at the right level of “grain” to mesh with belief and desire. There’s still a missing element that I will argue is critical to getting an adequate set of base facts for radical interpretation. This is to give an account of what the agent’s options were, amongst which she chose (on the basis of her evidence) to do what she did. The agent’s options, in the sense that matters for rationalization, are not simply behaviours that are physically possible for her. It is possible for me to hit the bullseye with a dart from ten metres away, but that doesn’t make it an option in the relevant sense (e.g. even if the dart hitting the bullseye would bring great benefits to me and no costs, the rational thing for me to do may be to put down the dart and walk away, for fear of the consequences of failing to hit the bullseye). Options in the relevant sense are things the agent has control over; what they can “do at will”. And this relation to what the subject wills or intends means that an adequate account of options is likely to involve representational resources. (Options don’t figure as such in the gloss on radical interpretation I gave above. By the end of this series of posts, we’ll be in a position to put forward a more refined version of the way the various base facts, including options, show up.)

Accounting for options is the challenge that I would pose to anyone wishing to claim that the base for radical interpretation consists in facts about sensation and behaviour, non-representationally described. Options not taken have no behavioural signature. If I throw a ball at a window, then my limbs are moving in distinctive ways in relation to things in my environment. But if I have an option to throw a ball at the window, and do not take that option but continue typing, the trajectory of my body is keyboard-orientated, and I stand in no obviously special physical relation to the ball and window. So while there are purely physical correlates to experience and action, I simply don’t know how the advocate of the more austere alternative is planning to set up their theory.

I’ll be exploring the reductive approach to source intentionality. I hope I’ll also, down the line, have the chance to compare and contrast my approach with Pautzian primitivism, but for now, some initial remarks will have to do. Even at this stage, we can see that buying into primitive representational features of conscious experience is only the start of the commitments we would need to prosecute a primitivist approach to source intentionality. Actions/intentions also demand treatment, and it’s unclear from Pautz’s published work how he would cover that case. One approach is to multiply representational primitives. Perhaps the representational properties of intentions as well as experiences are metaphysical bedrock. An alternative is to seek a reduction of all kinds of source intentionality to the intentionality of experience—for example, by appealing to our experience of our own actions. Neither route is straightforward or cost-free, and those who are tempted to follow him should bear in mind the need to provide not only for the representational properties of actions and intentions, but also for (intentional) truths about the agents’ options.