NoR section three supplemental: Options as abilities?

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

I earlier followed Hedden in identifying an agent’s options with a certain range of basic intentions available to them. This theoretical setting—assumed but not argued for—illustrates one way of connecting source intentionality to radical interpretation. But it is not the only way. Here I’ll sketch an alternative account.

The alternative is as follows: an agent’s options are not the intentions that they form, but their basic, specific abilities. Let me explain what one of those is.

I will understand the agent’s basic abilities as picking out those acts that are the execution of a basic intention. A basic intention, in turn, is an intention that has a function to produce a certain result. But as is now familiar, something can have the function to produce phi, and yet not produce phi. This will happen when circumstances are “abnormal”. For example, one mental state of mine has the function to produce my arm rising. If my arms are tied behind my back, then the existence of such an intention does no good: my arms stay resolutely stuck to my sides. So though I have the general ability to raise my arms, I lack the specific ability to raise my arms in these circumstances.

What then, are an agent’s basic specific abilities in circumstances c? I propose they are just those among the agent’s general abilities whose associated “normal conditions” are met in c. Unpacking this, x having a specific ability to phi will entail that x can form an intention whose function is to bring about phi, and also that circumstances are normal for this function, so that if it were formed, phi would be brought about.
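The proposal can be compressed into a schematic biconditional (the notation here is my own, not the post’s):

```latex
\mathrm{SpecAbl}(x, \varphi, c) \;\leftrightarrow\; \mathrm{GenAbl}(x, \varphi) \,\wedge\, \mathrm{Normal}_{\varphi}(c)
```

where GenAbl(x, phi) holds iff x can form an intention whose function is to bring about phi, and Normal_phi(c) says that c meets that function’s normal conditions. Jointly, these entail the counterfactual: were the intention formed in c, phi would be brought about.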

Clearly this is an account of basic specific abilities that relies very heavily on our account of the content of intentions. It is not something we could reconstruct without an account of source intentionality, though unlike the previous account, we don’t identify options with the representational states themselves.

I call all these things “basic abilities” since I want to emphasize that not every true claim of the form “Sally can phi” or “Sally is able to phi” is an option in the relevant sense. We can use these phrases—agentive modals—to report on a richer range of ‘extended’ abilities that Sally possesses, which often depend on features of her circumstances that have nothing to do with Sally or her functioning. Example: given Suzy’s maximum pace, and the maximum pace of Orcs, Suzy can outrun the Orcs chasing her. But without any malfunction on Suzy’s part, the speed of the Orcs could be different, and so an intention to outrun them doesn’t guarantee success.

It is very natural to think that there should be a connection between basic abilities and “extended abilities”—that we enact the latter by way of enacting the former. Recent work on the semantics of agentive modals by Mandelkern, Schultheis and Boylan is suggestive of such a connection. According to their story, an agent x is able to phi just in case there is some practically available act A such that were x to try to A, they would phi. Suppose we insist that practically available acts [for an agent in world w and context c] must be those that correspond to the agent’s basic abilities. We could then follow Mandelkern et al in adding further constraints that narrow down this basic repertoire to produce the contextually relevant set of available acts [the authors suggest an epistemic constraint: that an act is only available if the agent knows it is a way to phi; and also that the set of acts will contextually vary, and by default should be compatible with the agent’s prior plans]. If this were the way that the semantics and metasemantics of agentive modals work, then although our everyday ascription of abilities goes well beyond the basic set, nevertheless basic abilities remain at the core of ability-ascriptions.
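The conditional analysis, together with the proposed restriction to basic abilities, can be sketched schematically (the symbolization is mine rather than Mandelkern et al.’s own):

```latex
\mathrm{Able}_{w,c}(x, \varphi) \;\leftrightarrow\; \exists A \in \mathrm{Avail}(x, w, c) : \big(\mathrm{Try}(x, A) \;\Box\!\!\rightarrow\; \varphi\big)
```

with the proposal being that Avail(x, w, c) is drawn from x’s basic abilities in w, and then narrowed by the contextual and epistemic constraints just mentioned.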

If options were an agent’s basic, specific abilities, what would be the consequences for rational decision making?

Well, to begin with, by construction if the agent forms the intention to execute an option to phi, then the agent will phi. It would not be an option unless circumstances were normal, and intention-formation is guaranteed to bring about its content in all normal circumstances. So, in a certain sense, the possibility of “trying and failing” is excluded.

However, the agent may lack knowledge of this. She may not know, for example, whether her hands are bound. If that is the case, then she can give credence to a scenario where she forms the intention to raise her hands, and her hands do not rise. This is where the account of options-as-intentions and the account of options-as-abilities differ. On the former, in such circumstances, the relevant option to evaluate is: forming the intention to raise one’s hands. The agent will assess various possible outcomes of this, including those where her hands are bound and do not rise. On the latter, in the same circumstances, the relevant option to evaluate is: raising one’s hands. The agent will then assess what will happen conditional on enacting this, which excludes the possibility of one’s hands not going up.
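A toy expected-value calculation may make the contrast vivid. The credences and utilities below are my own illustration, not the post’s:

```python
# Toy expected-value comparison for the hands-bound example.
# The numbers here are illustrative inventions, not from the post.

p_hands_free = 0.8   # agent's credence that her hands are not bound
u_hands_rise = 10    # utility of her hands going up
u_hands_still = 0    # utility of her hands staying down

# Options-as-intentions: the option evaluated is *forming the intention*
# to raise one's hands. Its outcome is uncertain from the agent's point
# of view, so she takes an expectation over bound/unbound scenarios.
ev_intention = p_hands_free * u_hands_rise + (1 - p_hands_free) * u_hands_still

# Options-as-abilities: the option evaluated is *raising one's hands*.
# By construction, enacting an option succeeds (it would not be an option
# unless circumstances were normal), so trying-and-failing drops out of
# the evaluation.
ev_ability = u_hands_rise

print(ev_intention, ev_ability)
```

The point of the sketch is only that the two accounts hand the agent different items to evaluate, and hence different numbers, in the very same epistemic situation.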

This is a case where the agent is ignorant of her options because she does not know whether conditions are normal, and so doesn’t know whether her general ability to raise her arm translates into a specific ability in this circumstance. Another way an agent can be ignorant of what options are open to her, on the options-as-abilities account, is by lacking knowledge of her general abilities. She might never have tried a particular stressful bodily motion, or might not have attended while she tried to do so, so that when asked to reflect on whether she would succeed if she tried, she is not confident.

You might think that its treatment of ignorance of one’s abilities is a strike against this theory of options: that rational decision-making should factor in the possibility of trying-and-failing, even if in fact, in all normal circumstances including the actual one, the agent would never try-and-fail. If we were building a theory of options for structural rationality, I would have sympathy. But notice that it is really hard to construct a theory of options that excludes the possibility of ignorance of what those options are. For example, if options are intentions, one might not know which intentions it is possible for one to form. And on the version of the Hedden view that I suggested in the previous post, one might be ignorant of options because one did not know, of a given intention, whether it was appropriately basic.

At some point we will need to hold the line, I think, and insist that ignorance of options is just ignorance of what rationality recommends. For other determinants of substantive rationality, that is already familiar. X may be ignorant of what is morally or all things considered permissible for her, due to her ignorance or (justified?) false belief about moral demands. The suggestion here is that ignorance of options can work like ignorance of value.

Suppose that the agent knows that raising their arms would bring about the best consequences, but is unsure whether their arms are tied. The verdict I am suggesting is that they should raise their arms: enact the ability they do possess. Their ignorance is normatively relevant only insofar as it is a decent excuse for not doing as they ought.

A positive attraction of the view is that it enables a straightforward reading of the idea that “ought implies can”: specific abilities are exactly the things the agent can do. And it also avoids an unattractive feature of the options-as-intentions view: that in order to be rational, an agent needs to be opinionated about the likely consequences of their being in the mental state of intending. It seems to me that the natural view is that an agent could be rational by forming intentions without thinking about forming intentions.

The view just sketched is less plausible as a theory of the options relevant for structurally rational action [I’m here very grateful to a series of discussions with Gary Mullen on which the following is based—though Mullen is not to blame for any mistakes in what follows]. The problem here is that structurally rational evaluation is supposed to be a commentary on patterns in the agent’s mental states. But if options depend on the circumstances an agent finds themselves in (e.g. whether their arms are tied), two mental duplicates could differ in rationality while making the same choice. This seems odd. So I do think we need a more internalistic notion of option for structural rationality. Suppose it were this: that our options for structurally rational action are those things we think are our basic abilities. And suppose that the theory of rationality is that we should try to enact whichever one of these abilities we think would lead to the best results. Then, in the special case where we have full knowledge of our abilities, the option-sets for structurally rational decision making and substantively rational decision making will coincide. Further, there will be rational pressure on us to keep track of our abilities, in order to preserve the possibility that taking the structurally rational option is also to take the substantively rational one.

Notice that if we identified options with those abilities we believe ourselves to have, then our theory of options, which should be layer one intentionality, would depend on what we believe, a layer two fact. So it would be a considerable headache for my account if we tried to collapse the account of options to the single “internalistic” case.

I haven’t argued for the correctness of options-as-abilities, any more than I argued for the correctness of options-as-intentions. But together, they give us two very different ways that we can wield the resources of source intentionality to generate a suitable basis on which radical interpretation can be founded.

NoR section 3 supplemental: functions II


In the last post, we reviewed a striking feature of the etiological theory of functions. What functions the organs or states of a creature have depends on their evolutionary history. So accidental creations—swamppeople or Boltzmann creatures—that lack an evolutionary history would lack functions. This is very surprising. It is surprising even in the case of hearts—a perfectly functioning heart, a duplicate of your own, on this account lacks the function to pump blood. It is shocking in connection to representation, where in conjunction with a teleosemantic account of perception and intention it means that accidental creations would not perceive or intend anything. Teleosemanticists typically advise that we learn to live with this, and one can coherently add these claims to my favoured metaphysics—but I would much prefer to avoid it. So here I look into this a little further.

Teleosemanticists emphasize an important foil to swampperson/Boltzmann-creature cases, one which challenges those of us who don’t want to take the hard line. Swamppeople are accidental replicas of fully-functioning systems, but we need to consider also accidental replicas of malfunctioning systems. To convey the flavour, I offer three cases involving artefacts:

First, take a clockwork watch, one of whose cogs has had its teeth broken off. The faulty cog has a specific function within the watch, because that’s the role it would play if the watch were working as designed. That role is the one the cog was selected to play—though it isn’t functioning that way. But take a swamp duplicate of the watch. Is the smooth metal disk inside it *supposed* to play a certain role within the watch? It’s far from obvious on what grounds we would say so. Or consider a second case: a broken watch/swampwatch pair where all the cogs are smoothed and mangled, so that it is impossible to reconstruct the “intended” functioning just from the intrinsic physical/structural setup. If we think that the parts of a badly broken watch still have their original functions (albeit functions they do not discharge due to damage) while the replica swampwatch, in the absence of the history, does not, that would demonstrate that function is not just a matter of the intrinsic physical setup.

Second, two watches which have different inner workings (hence different functions for the cogs) might both malfunction and be so distorted that the resulting broken watches are duplicates of each other. But the functions of the cogs remain distinct. So, once more, functions can’t be preserved under physical duplication. This case dramatizes why we can’t merely “project” functions onto accidental replicas of damaged instances, since in this case different, incompatible functions would be assigned by such a procedure.

Third, consider cases where a damaged instance of one artefact happens to be a physical duplicate of a fully functioning artefact whose purpose is different. Again, we’re left all at sea in determining which is the “normal pattern” for something that accidentally replicates both.

Each of the above points made about artefacts carries over to biological systems—in principle, a damaged version of some merely possible creature could physically duplicate an actually existing creature. And so again, when confronted with accidental creations we’re at sea in determining whether they have the functions of the former or the latter.

These cases, it seems to me, do support the claim that the functions of a malfunctioning system are an extrinsic matter.

I take it the argument at this point goes as follows: if it is a wildly extrinsic matter what the functions of system S are, when S is malfunctioning, then “having function F” is a wildly extrinsic matter in all cases. And so it is no cost to the teleosemantic account that it says the same for the case of perception.

There are two ripostes here. The first is that while these considerations may motivate the claim that the functions of perceptual/motor states are wildly extrinsic, that may just show that they are not suitable candidates for being the ground of representation, since representation [we still maintain] is not wildly extrinsic in this manner. The second riposte is that it is not true, in general, that because some instances of a property are wildly extrinsic, all are. Consider the following property: either being a sphere, or being one hundred yards away from an elephant. A sphere possesses this property intrinsically. Whether I possess it depends on my location vis-à-vis elephants—a wildly extrinsic matter. I think that functions, and representation, may pattern just like this: being intrinsically possessed in “good” cases, but being extrinsically possessed in cases of malfunction. At the least, it would take further argument to show that this is not the right moral to draw from the foils.

To this point, I have been considering the etiological account of function. This is not the only game in town. Alongside the etiological (historical, selection-based) accounts of functions sit systematic-capacity accounts. In a recent development of the basic ideas of Cummins, Davies characterizes a version of this view as follows:

an item I within a system S has a function to F relative to S’s capacity to C iff there’s some analysis A such that: I can F; A appropriately and adequately accounts for S’s capacity to C in terms of the structure and interaction of lower-level components of S; A does this in part by appeal to I’s capacity to F (among the lower-level capacities); and A specifies physical mechanisms that instantiate the systematic capacities it appeals to.
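At the level of form, Davies’ analysis is an existential quantification over analyses; roughly (my schematization of the quoted clause, not Davies’ own notation):

```latex
\mathrm{Fn}(I, F \mid S, C) \;\leftrightarrow\; \exists A \big[ \mathrm{Can}(I, F) \,\wedge\, \mathrm{Accounts}(A, S, C) \,\wedge\, \mathrm{Appeals}(A, I, F) \,\wedge\, \mathrm{Mechanisms}(A) \big]
```

where Accounts(A, S, C) says A appropriately and adequately accounts for S’s capacity to C via lower-level components, Appeals(A, I, F) says it does so partly via I’s capacity to F, and Mechanisms(A) says A specifies instantiating physical mechanisms.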

Now, just at the level of form, we can see two important aspects of the systematic-capacity tradition. First, functions are relativized to specific capacities of a high-level system. And second, it’s a presupposition of the account that items can discharge their functions—“broken cogs” or “malfunctioning” wings will not have their “normal” functions when the system is broken. If we were to try to appeal to this notion of function within the teleosemantic approach, we would have no problem with the original swampperson case, for swampperson would instantiate the same perceptual structure we do, and so the functions of its components would be shared. But the two features just mentioned are problematic. The first appears to allow an embarrassing proliferation of functions (standard example: the capacity of the heart to produce certain sounds through a stethoscope may lead to attributing noise-making functions to contractions of the heart). I do not see this as a major problem for the teleosemanticist. After all, one can simply specify the capacities of the sensory-perceptual system or motor-intentional system in the course of the analysis—the interesting constraint here is that we be able to specify these overall capacities of perception or intention in non-representational terms. The second feature highlighted has been central, however. Part of the appeal of the teleosemantic approach was that it could account for cases of illusion and hallucination, which involve malfunctions. Some cases of illusion can be covered by the story, since a type might be functioning in a certain way in a system—e.g. being produced in response to red cubes in conditions C—even if a given token is not produced in response to a red cube, when conditions are outside C. But we can also have malfunctions at the level of types. In synaesthesia produced by brain damage, red experiences may—with external conditions perfectly standard—be produced in response to square-shaped things.
This systematic abnormal cause doesn’t undercut the fact that the squares are seen as red. An etiological theorist can point to the fact that the relevant states have the function of being produced in response to red things, and are currently malfunctioning. The systemic theorist lacks this resource.

A systemic-capacity account of functions would be an account of function independent of representation, and so fit to take a place as part of the grounds of layer-one intentionality. It also meets our desiderata: it is not wildly extrinsic, and it can underpin learned as well as innate functions, since what matters is the way in which the system is working to discharge its capacity, not how the system came to be set up that way. But given the points just made, it may not count as a realism about “proper, normal” functions, if those are understood as allowing for wholesale malfunctioning of a state-type. We shouldn’t overstate this: not all cases of illusion or hallucination [misperception or intentions not realized] are wholesale malfunction. But it does seem that wholesale, type-level malfunction is involved in at least some cases of misrepresentation.

I don’t think this blows the systemic theory of functions out of the water as an underpinning for layer one intentionality. The etiological theorists, we saw in the previous post, were forced to drastic revision of intuitive verdicts over swamppeople. And if we’re in the game of overturning intuitive verdicts, the systemic theorist might simply deny that a synaesthete’s red experiences are misrepresentations in the first place. They could fill this out in a variety of ways: by saying that a synaesthete’s chromatic quale now represents a thing as having the disjunctive property of being red-or-square; or they could adopt an individual relativism about red, so that to be red for x is to be apt to produce red-qualia in x, in which case the right description is that the synaesthete’s experience accurately represents the square as being red-for-them. It’s important to the credibility of this that one grants my assertion above that mundane cases of illusion involving abnormal environments or visual cues can already be handled by the systemic function account. Once we’re into more unusual, harder cases, the revisionism looks not too costly at all.

Ultimately, then, the systemic-capacity account does hold out the prospect of meeting all my commitments and desiderata simultaneously. And remember: my purpose is not to endorse it, or any specific account of functions, but to explore the joint tenability of those commitments and desiderata.

While we were still discussing the etiological theory of functions, I noted that the etiological theorists had a decent case that in cases of drastic enough malfunction, functional properties are extrinsically possessed—it does seem that historical facts explain why we’re tempted to still say that the function of a watch cog is to turn adjacent cogs, even when smoothed off so it is no longer fit for that purpose. I also noted that it does not follow that functional properties are extrinsically possessed in all instances. We can emphasize this point by noting the possibility of a combined account of function that draws on both theories we have been discussing, thus:

I has the uber-function to F (within a system S and relative to capacity C) iff either I has a systemic-function to F (relative to S/C) or it has the etiological-function to have that systemic-function to F (relative to S/C).
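Schematically (notation mine, not the post’s):

```latex
\mathrm{UberFn}(I, F \mid S, C) \;\leftrightarrow\; \mathrm{SysFn}(I, F \mid S, C) \,\vee\, \mathrm{EtFn}\big(I,\, \mathrm{SysFn}(I, F \mid S, C)\big)
```

where the right disjunct says I has the etiological function of having that systemic-function. The left disjunct is possessed intrinsically in good cases; the right disjunct is historical, and so extrinsic.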

Just as with the disjunctive property I discussed earlier, creatures who are fully functioning—you, me and swampman—will possess such properties independently of historical facts about the origin of our cognitive setup. But this account, unlike the pure systemic-function theory, provides for other creatures to possess the very same property in virtue of historical provenance. On this account, for example, a synaesthete’s red quale may represent the very same red property that yours and mine do, since the state was evolutionarily selected to be produced by the presence of that property. This combined account is not committed to the revisionary implications of either of its components. So this again supports my contention that the commitments and desiderata of my deployment of functions can be jointly satisfied.

NoR section 3 supplemental: functions I


The account of source intentionality I offer appeals to a notion of a state (within a system) having a function to do this or that. Perceptual states have the function to be produced in response to specific environmental conditions. Those states can occur and represent what they do when the environmental conditions are missing (as a result of perceptual illusion or deliberate retinal stimulation by an evil scientist). So clearly it’s important that in the relevant sense, something can have a function it is not currently discharging: that it can malfunction when it is in the wrong conditions or when interfered with. A state can have a function to X even when it’s not functioning by X-ing.

Functions like this (“normal, proper functions”) can look somewhat spooky. What makes it the case that the perceptual state is for representing red cubes, even in circumstances where it’s being triggered by something other than a red cube? What grounds this teleology?

Giving an answer to this question is strictly supererogatory. Relative to my aims, functions can be a working primitive. It is even consistent with everything that has been said here that with these functions we hit metaphysical bedrock—that facts about functioning are metaphysically fundamental. That would be a reductive account of the way that representation is grounded in a fundamentally non-representational world—but a non-representational world that includes unreduced teleological facts. There are those who are interested in reductive projects because of an antecedent conviction that everything is grounded ultimately in [micro-]physics, and hitting rock bottom at teleological facts about macroscopic systems would count as a defeat. That is not my project, and for me, a reduction that bottoms out at this level would be perfectly acceptable. Arguably, it should count as a “naturalistic” reduction of representation—since biological theorizing is prima facie shot through with appeals to function [the function of a heart being to circulate blood around the body—whether or not it is currently so functioning].

My commitments are as follows. First, I am committed to disagreeing with those who would deny the existence of normal proper functions and analyze quotidian and scientific function-talk in some other fashion—realism about proper functions. Second, on pain of circularity, the relevant functions can’t be grounded in representational facts—independence from representation.

I add a pair of desiderata. These are not commitments of the account, since a version of the project could run even if they are denied. So: third, I hold that functions can be established through learning: not all functions are fixed by innate biology. If this were denied, then I could not say what I said in previous posts about the acquisition of behavioural or perceptual skills extending the range of perceivings and intendings available to a creature. While certain discussions would need revisiting if functions couldn’t be acquired in this way, the overall shape of the project would remain.

Fourth and finally, I deny that representational facts are wildly extrinsic. I say that a perfect duplicate of a perfectly functioning human would perceive, believe, desire, and act—even if that duplicate was produced not by evolution but by a random statistical mechanical fluctuation [a “Boltzmann creature”].

The job in what follows, then, is not to give you a theory of how functions are grounded, but to examine whether the commitments and desiderata are jointly realizable.

The first account of functions we’ll examine is the etiological account of function. This is the view that proper functions [of a type of state] are grounded in historical facts about how states of that type were selected. This is Neander’s favoured approach, and in her (1991) she characterizes the view as follows:

It is the/a proper function of an item (X) of an organism (O) to do that which items of X’s type did to contribute to the inclusive fitness of O’s ancestors, and which caused the genotype, of which X is the phenotypic expression, to be selected by natural selection.

The etiological account of proper functions meets the realism and independence commitments. But it violates both my supplementary desiderata. By tying functions to natural selection, it does not underwrite functions acquired in the lifetime of a single creature, and by making functions depend on evolutionary history, it is committed to denying that the states of Boltzmann creatures [or Davidson’s “swampman”] have functions in this sense—and given the teleosemantic account of layer-one representations, that is a violation of my desideratum.

The question is then whether some adjusted or extended variant of the etiological account of functions could meet the desiderata. Neander, for one, presents the account above as a precisification, appropriate for biological entities, of the vaguer analysis on which the function of something is the feature for which the thing was selected. The vaguer analysis allows other precisifications: for example, she contends it also covers the functions that elements of an artefact possess to do something, in virtue of being selected by their artificer to do that thing. On a creationist metaphysics on which God is the artificer of creatures like us, we could still offer the teleosemantic story about the content of perception, but with this alternative understanding of what the function talk amounts to. I take it that the vaguer analysis also allows for selection within the lifetime of a creature—functions resulting from learning.

While this extension of the etiological account shows a way for it to meet the learning desideratum, it is no longer clear that an extended account of function would be independent from representational notions. That’s familiar enough for artefacts and the creationist underpinning: unpacking “selection by an artificer” will appeal to the intentions and beliefs of that artificer. We can still include appeal to such functions in our account, but they can’t come in to explain layer-one intentionality as I have characterized it; rather, they must come downstream of having earned the right, at layer two, to the artificer’s representations [alternatively, the creationist might posit a new layer zero, consisting of the primitive? representational states of God].

What’s most interesting to me, however, is how this plays out with selection-by-learning. It could be that in at least some cases, this kind of selection does depend on the intentions and beliefs of the one doing the learning. Let’s suppose that is so, in the interests of exploring a “worst case scenario”. This would indeed mean that such acquired functions wouldn’t be present at layer one. The moral may be that the idea of just two stages was oversimple. Better might be a looping structure: facts about evolutionary history ground innate functions, which provide a base layer of perceptual and motor representation. These are the basis for radical interpretation to ground the beliefs and desires of the agent. These belief and desire facts ground further acquired functions, which give a supplementary layer of perceptual and motor representation. Radical interpretation is then reapplied to ground a supplementary layer of beliefs and desires. The structure loops, and so long as every relevant representational state is grounded at some iteration of the loop, this is fine.

It is much harder to see how the etiological account can meet the no-wild-extrinsicality desideratum. In the literature, etiological teleosemanticists do not even make the attempt, but argue that we should learn to live with it. If you agree with them, you can stop worrying. I still worry.

So let’s take stock. I have explored the etiological theory of functions, not because I feel the need to provide a metaphysics of functions—after all, it’s fine by me if this kind of teleology is metaphysical bedrock. Rather, I want to test whether the commitments and desiderata of my deployment of functions are jointly realizable. One account that has been given of functions is the etiological one, and I have suggested that some changes [the introduction of a looping structure] would be needed if the learning desideratum were to be met in the context of that account. And, of course, the no-wild-extrinsicality desideratum is violated.

NoR 3.5: Relata of rationalization


In previous posts covering sensory/perceptual states, and intentional/motor states, I’ve provided a teleosemantic story of their layer-1 representational properties. The question now is how to move from this to characterize the base facts for radical interpretation, the “courses of experience” (E) and “dispositions to act” (A) that appeared in my formulations of that account of layer-2 representation. I’m not attached to vindicating that particular wording: what we are looking for is a refined proposal that’ll do the right job, more precisely:

  1. The items we substitute for (E) and (A) really do stand in rational relations to beliefs and desires.
  2. The resources developed in the last few posts enable us to characterize the formulations substituted for (E) and (A).

A disclaimer right at the start: I am not going to discuss here the possibility that the teleosemantic contents may fail to be the right relata for rationalization because they are in some sense “nonconceptual” in contrast to the “conceptual” contents of belief and desire. The teleosemantic story determines truth-conditional content, and radical interpretation seeks to say what it takes for beliefs and desires to have similar truth-conditional content. Issues of concepts (in the relevant, Fregean, sense) are not something I’ve broached so far. I’m aiming to maintain that track record.

I will start with the appeal to “dispositions to act”. Our discussion of options in the last post has in effect already introduced the theories and themes that are required. The account put forward there was that our options are intentions: when framing a decision problem, the item we assess for expected value is the formation of an intention, and moreover an intention that has a function to bring about states of the world. Various “high level” states we might call intentions in natural language do not qualify—it’s perfectly ok to say that Sally had the intention to run all the way to the bottom of the hill before she fell over, or that Suzy intended to insult Sally. But states with those kinds of high-level content do not have a function to produce their content in the sense set out earlier—they can fail to be satisfied in the absence of any “malfunction”, if Sally or Suzy have false beliefs about their abilities or their target. Sally’s options, in a specific context, are all the intentions she might form in that context. The option Sally enacts is the intention she forms. On this account, what beliefs and desires rationalize is the formation of certain intentions, or better: the contrastive formation of one intention out of all the others possible for the agent.

This doesn’t quite pin down the characterization of the base facts, since there can be plenty of intention/motor states with functions to produce states of the world which are not plausible “options” for an agent—since they characterize the fine details of motor control, to which the subject typically has no access. In the cases that concern us, these subpersonal states are triggered by a person-level intention, but the relation they have to beliefs and desires is purely causal, not rational. So while this account of options tells us that they are to be found among the states which are teleosemantically grounded, it doesn’t yet tell us which among these states count as options. To complete the account, I suggest we appeal to a causal-role characterization: among the intentional/motor states that are teleosemantically grounded, options are those which trigger other intentional/motor states with functions-to-produce but which are not themselves triggered by such states. (Perhaps a state can sometimes be triggered by another intentional/motor state, and sometimes comes into being without such triggering: this will suffice for it to count as an option in the relevant sense.)

With this final piece in place, the proposed substitution for “dispositions to act” comes into view. Our interpretee, at a particular place and time, has an array of options (possible intention-formations in the sense just defined). She forms one of the intentions in this set and not the others. The formation of this intention triggers further downstream intentional/motor states which cause and control bodily movements on the part of the agent. The belief/desire interpretation should attribute beliefs and desires to the agent at that place and time that rationalize this contrastive intention-formation. But of course, rationalizing a single intention-formation episode is not the be-all and end-all: a belief/desire interpretation of Sally (attributing her beliefs and desires at arbitrary places and times) needs to (optimally) rationalize her contrastive intention-formation dispositions with respect to every point. If we want a more accurate label than “dispositions to act” we might go for: “dispositions for contrastive intention-formations”.

(Aside. The decision-theoretic setting and the appeal to beliefs and desires rationalizing options make this all sound very internalist, and perhaps more suited to a theory built on structural rationalization than substantive rationalization. But there’s nothing inconsistent in using a decision-theoretic formalism for substantive rationality: the “value” functions can report not subjective degree of desire but objective value (or agent-relative value that does not match the same agent’s desires). The “probabilities” are equally open to a variety of interpretations. So the framework is extremely flexible. The appeal to a belief/desire interpretation that “rationalizes” options just expresses the presupposition that beliefs and desires are among the determinants of the probabilities and utilities—which may be because the relevant probabilities are indeed degrees of belief, or because degrees of desire are at least a factor in determining value, or more broadly via a role for beliefs in determining what reasons you possess, or for personal projects to determine a wellspring of value that may vary psychology by psychology. Amidst all this flexibility, the very form of the calculations of expected value and the way they relate to options in Jeffrey’s formalism (and various related ones, such as causal decision theory à la Joyce) means that there is no contribution to expected value from contingencies that are inconsistent with the proposition that specifies the option. So the underlying drive to characterize options in a way that makes them certain (or better: probability 1, however that is to be interpreted) when pursued is baked into the form of the theory of rationalization, independently of interpretation. And while it’s not inevitable that we respond to that by following Hedden and identifying options with intentions, that account retains its appeal even when we move beyond the structural-rationalization interpretation of the formalism. End Aside.)

Radical interpretation also requires that the attribution of belief and desires at each point mesh with one another; specifically, that the belief changes imputed be rational responses to the evidence made available by experience. This is where appeal to “course of experience” came in.

We have at our disposal teleosemantically grounded representational facts about perceptual states. Many of those states will be subpersonal intermediaries between retinal stimulation and the output of the perceptual system. In parallel with the discussion of intention above, I suggest we concentrate on perceptual states characterized by a terminal causal role: those which do not themselves trigger further perceptual processing.

There’s another parallel with the discussion of intentions. There are plenty of states that we would describe in natural language as seeings, hearings, and so on, which involve high-level content. We might talk of hearing the car return, or seeing that the dishwasher is finished. But clearly the content ascribed in such cases can be false even when there is no perceptual malfunction, but simply false belief. So even absent malfunction, such states need not be responses to worldly conditions matching their content, and that shows they are not states whose contents are teleosemantically grounded in the sense I have outlined. So it is a commitment of this framework that the relata of evidential rationalization, in the sense in which these appear as base facts in radical interpretation, need to be low-level, not cognitively-penetrated, perceptual states.

(Aside: I am not committed to denying that there are perceptual states with high-level content, any more than I am committed to denying that there are intentions (planning states) with high-level content. And I can allow that these stand in rational relations to beliefs and desires and lower-level perceivings; one might include the assignment of content to such states as an extra item in the interpretation selected by radical interpretation. But in each case, I am committed to denying that high-level states are the only things that stand in rational relations to beliefs and desires—the critical thing, if the account is to be applicable without further epicycles, is that there be a layer of low-level content in perception that rationally constrains the evolution of belief, and likewise, a layer of low-level content that beliefs and desires rationally constrain. It’s worth noting, also, that the high-level/low-level boundary need not be fixed. I think it’s plausible that response-functions can be acquired. Just as we can expand the range of intentions which have functions-to-produce by internalizing and making automatic the skillful execution of complex routines, we can expand the range of perceptions that have response-functions by internalizing and making automatic the transition whereby they are triggered by more paradigmatically low-level perceptual states. The key thing, in both cases, is that the internalized routines are executed independently of what the agent believes or desires—a sufficient condition for this being the case would be the capacity to figure in perceptual illusions. End Aside.)

Suppose Sally is perceiving a yellow banana (better: is seeing that there is a yellow crescent-shaped thing in front of her). If we were to pursue the analogy to the case of intention fully, then we would suggest that the relatum of rationalization, the “experiential evidence”, is not to be identified with the content of this perception:

  • there is a yellow crescent-shaped thing to the front.

but instead the following:

  • I am undergoing a perception with the content: that there is a yellow crescent-shaped thing to the front.

This would be the analogue of saying that the primary relatum of practical rationality is not the action described in the content of an intention, but the intention itself.

The proposal has some independent appeal. The fact about perception truly describes both someone who is viewing a normal banana in normal conditions, and someone who is viewing a white plastic banana under yellow lighting. It is something that could be straightforwardly taken up into the beliefs of both parties, even if they knew their respective situations. This “common factor” view of the incremental evidence experience provides across the two cases has obvious attractions in the context of radical interpretation, where the aim is to identify some “evidence” independently of layer-2 facts about belief and desire.

For contrast, consider a dogmatist view on the increment of evidence provided by experience. On this view, we are justified in updating directly on the content there is a yellow crescent shaped thing to the front absent certain defeaters and undercutters. One such defeater could be: that one believes background conditions to be abnormal. So in effect, rationality would then impose a disjunctive constraint on subjects who have an experience with the content that there is a yellow crescent shaped thing to the front. Either they come to believe that content, or they have (already?) a belief that background conditions are abnormal. This dogmatist theory of evidence is perfectly compatible with radical interpretation, and doesn’t require anything of layer-1 intentionality that we have not provided for. Nevertheless, for convenience and concreteness, I’ll work with the common-factor account.

There is a question we face in characterizing the perceptual relata of rationalization that has no obvious analogue in intention. The content of experience seems rich and analogue—I perceive a subtly varying colour profile of greens and yellows when I look at a tree. We might suppose that the content of this experience involves a particular number of perceived leaves, just as a picture may involve a particular number of painted leaves. But resource-constrained agents like you and I don’t update on all this information. I form the belief that the tree has lots of leaves, and that they range from green to yellow. But—for example—I wouldn’t take a bet at even odds that there were exactly 148 leaves on the tree, even if the totality of the facts perceptually represented by me now entails this. So the suggestion is this: the transition from terminal perceptual states to the evidence actually updated upon is lossy. And so one cannot simply characterize that incremental evidence as the totality of all the terminal perceptual states.

At this point, we are again back into questions of cognitive architecture that are ultimately empirical. It may be that there is a filtering within the perceptual system (by attention, say) which outputs some special set of perceptual states. Only the states with this distinctive causal role are passed on to central cognition (though other terminal states in the system may make a difference to perceptual phenomenology). But equally, it may be the architecture is indeed lossy as described. There’s no a priori reason, I think, to think our perception works one way rather than the other.

The right response is the following. Epistemological theory, in the general case, should not solely specify a relation between belief change and a proposition or propositions on which one updates and which one directly incorporates into belief (as it would, for example, if we took the Bayesian theory of conditionalization to be the right format for a full theory). Instead, epistemological theory should relate a belief-change to the full content of the experience, without assuming that the full content is taken up as belief. An extra parameter is needed: the rational constraint on belief change is that one updates on those aspects of one’s experience to which one stands in the right functionally-characterized “uptake” relation. In that case, if q is the full content of Sally’s experience, then the interpretation of Sally will be constrained by a complex condition: for Sally to undergo a rational belief change, there must be some p such that (i) Sally changes her beliefs by updating on p; (ii) p is entailed by q (/the fact that Sally has an experience with content q); (iii) Sally stands in the right functional relation to p—e.g. attending to the p-aspect of her experience. Element (i) could still be cashed out in a Bayesian way, if one wished. Element (ii) keeps us honest by requiring that the story doesn’t go beyond facts given in experience. Element (iii) will be tailored in different ways to different perceptual architectures.
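The three-part condition can be made concrete in a toy possible-worlds model. The sketch below is purely illustrative (the worlds, the propositions, and the choice of the “uptaken” p are all invented): propositions are sets of worlds, entailment is the subset relation, and element (i) is cashed out in the simple Bayesian way.

```python
# Toy possible-worlds model of the three-part constraint (invented
# illustration, not part of the official account). Propositions are
# sets of worlds; q entails p iff every q-world is a p-world; and
# updating on p is Bayesian conditionalization.

WORLDS = frozenset(range(8))

def entails(q, p):
    """Element (ii): p must not go beyond what is given in experience."""
    return q <= p

def conditionalize(prior, p):
    """Element (i): P_new(w) = P_old(w) / P_old(p) for w in p, else 0."""
    z = sum(prob for w, prob in prior.items() if w in p)
    return {w: (prob / z if w in p else 0.0) for w, prob in prior.items()}

prior = {w: 1 / len(WORLDS) for w in WORLDS}

q = frozenset({0, 1})        # the full content of Sally's experience
p = frozenset({0, 1, 2, 3})  # the attended, "uptaken" aspect of that content

# Element (iii) -- standing in the right functional "uptake" relation
# to p -- is simply stipulated here by our choice of p.
assert entails(q, p)
posterior = conditionalize(prior, p)
```

On these stipulations the posterior spreads probability 1/4 over worlds 0–3 and assigns 0 elsewhere; a lossier uptake relation would correspond to choosing a weaker (larger) p.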

Having provided for the full range of cases, for reasons of simplicity and concreteness, going forward I will assume that Sally’s sensory-perceptual architecture already does the work of selecting, so that element (iii) is vacuous for her.

This leaves us with the following picture. The base facts about Sally’s “dispositions to act” are facts about her (low level) intention-formations, against the background of all the other (low level) intentions she might form. The base facts about Sally’s “courses of experience” are the fact that she has an experience, the relevant part of the content of which is that q. The rational constraints include a broadly decision-theoretic constraint that beliefs and desires in circumstances c determine probabilities and values which rationalize the disposition to form intention x (rather than w, y, z) in c; and also a broadly Bayesian constraint that Sally’s change in belief between a pair of contexts c/c* (in which she undergoes experience e) is by conditionalization on the proposition that part of the content of e is that q.

NoR 3.4: Options

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Sally runs at top speed down a scree slope. Whether that’s a reasonable thing for her to have done depends on multiple factors. It depends on the dangers of running downhill at that speed, and the likely effects of so doing. It depends on the (dis)value of the effects and risks. But, crucially, it depends also on what options are available to her. Perhaps running downhill at top speed would bring it about that Sally wins her race, running a certain level of risk of injury. Alternatively, Sally might have another, much safer, way of winning her race, taking a traversing route that avoids the risky downhill. Against the backdrop of options-not-taken, careering downhill is unreasonable. But if there’s no such alternative, her route-choice may be perfectly reasonable.

The actions that people perform typically have a behavioural signature. Sally couldn’t run at top speed down a mountain, without her body tracing a largely descending motion downhill, nearing the limit of her physiological capacities. But the actions that people do not perform—their options not taken—leave no signature in actual behaviour.

Options, moreover, matter to the rationality of a choice. Let’s return to our Bayesian model for an illustrative example of this. The standard way of representing Sally’s decision situation is to draw up a “decision table”, which schematically looks something like this:

The “states” that head the columns are understood to represent various background conditions. The actions that head the rows are the options available to Sally. Each cell will then represent the outcome of performing the action that heads its row, in the background condition that heads its column. Sally will desire each outcome to a greater or lesser extent. She will have more or less confidence that each background state obtains. Once we “frame” a decision situation in this way, it is natural to evaluate the options Sally has, by somehow aggregating the desirabilities of the possible outcomes of so-acting. According to Jeffrey’s evidential decision theory, for rational agents the desirability of an action A will be a weighted average of the desirability of the possible outcomes O, where the weights are given by the conditional credence the agent gives to O given A. The rational action(s) are the one(s) with maximum desirability.
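For reference, that weighted average can be written out explicitly (this is the standard textbook formulation of Jeffrey’s desirability, not anything specific to this post):

```latex
% Jeffrey's desirability of an act A, where the O_i are the possible
% outcomes (a partition of the A-worlds):
\mathrm{Des}(A) \;=\; \sum_i P(O_i \mid A)\,\mathrm{Des}(O_i)
% Rationality then requires performing an act of maximal desirability
% among the available options.
```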

Whether an act actually performed is rational is a contrastive matter: an act needs to be as desirable as any other option. Add or remove options, and one can change an optimal act into a suboptimal one; depict a rational agent as irrational, or vice versa. If one were utterly unconstrained in assigning options to each decision situation, there’d be a cheap trick available that would make absolutely any set of dispositions-to-act rational, no matter what the agent’s beliefs and desires are. One simply represents the action taken as the only option for that agent at that time, ensuring, vacuously, that it maximizes desirability. Fatalism delivers cheap, disreputable rationality.

This situation matters for radical interpretation. It suggests that the base “disposition to act” facts for determining a correct interpretation are not absolute (the agent did such-and-such) but contrastive (they did such-and-such as opposed to so-and-so,…).

The task of accounting for the agent’s options within our account of source intentionality is made more complex by the fact that there’s no consensus on what those options are. The theory of rationality itself can make very strong presuppositions about this—for example, Jeffrey’s way of evaluating the desirability of an option, above, completely ignores outcomes in which the option is pursued but not realized. That matters. For example, suppose that we described Sally’s options during her race as “run to the bottom of the hill” or “take the traversing route”. The expected Jeffrey-utility of the proposition Sally runs to the bottom of the steep hill may be much better than that of the proposition Sally takes the traversing route. But that clean win only comes about because a salient risk of going for the first “option” (falling over, getting injured, and failing to get to the bottom) is ignored by Jeffrey’s calculations, which consider only the various possible ways the proposition in question might be made true. So if we’re to go Jeffrey’s way, it looks like the first-pass gloss on the options is wrong, and we need to move to some more “open” characterization of the options: starting to run down the hill, or starting the traversing route, or trying…  etc. And to emphasize: this constraint arises because of the way the decision theory is set up. Different theories of rationality might place different demands. Unless we hold the theory of rationality fixed, the “options” we need to underwrite are a moving target.
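The point about failure-worlds dropping out can be seen in a toy computation. Everything below is invented for illustration (the worlds, probabilities, and desirabilities are made up): Jeffrey-desirability averages only over the worlds where the option-proposition is true, so worlds in which Sally falls and never reaches the bottom are invisible to the evaluation of “Sally runs to the bottom”.

```python
# Toy Jeffrey-style computation (all numbers invented). Des(A) averages
# desirability over the worlds in which A is TRUE, weighted by the
# credence in each world conditional on A -- worlds where A is false
# contribute nothing.

# world: (prior probability, desirability)
worlds = {
    "starts_downhill_and_reaches_bottom": (0.5, 10),
    "starts_downhill_and_falls":          (0.3, -20),
    "takes_traversing_route_and_wins":    (0.2, 6),
}

def jeffrey_des(option_worlds):
    """Des(A) = sum over the A-worlds w of P(w | A) * des(w)."""
    p_option = sum(worlds[w][0] for w in option_worlds)
    return sum((worlds[w][0] / p_option) * worlds[w][1]
               for w in option_worlds)

# "Sally runs to the bottom" is true only in the success world, so the
# fall risk is invisible to its evaluation:
des_reaches_bottom = jeffrey_des(["starts_downhill_and_reaches_bottom"])

# The "open" option "Sally starts running downhill" is true in both
# downhill worlds, so the risk is factored in:
des_starts_downhill = jeffrey_des(["starts_downhill_and_reaches_bottom",
                                   "starts_downhill_and_falls"])

des_traverses = jeffrey_des(["takes_traversing_route_and_wins"])
```

On these made-up numbers the closed description scores 10 and beats the traversing route’s 6, but the open description scores −1.25: the apparent clean win evaporates once the failure worlds are allowed to count.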

Still, we have a lot of raw materials to play with, however this goes. We have at our disposal not just raw behaviour and the physical capacities of the agent, but also facts about what the agent intends, what it’s physically possible for them to intend, and what would result were they so to intend. And the key assumption the radical interpreter needs is that, one way or another, this material suffices to fix the agent’s options. So long as all parties agree on this, they can happily fight out among themselves the right way of filling in the details, noting ahead of time that this will be controversial, largely because it will involve taking a stand on the contentious issue of option-individuation.

Let me give one example of a theory of options from the current literature, one which is motivated quite independently of considerations in the metaphysics of interpretation. Brian Hedden, working within the decision-theoretic model of rationality, argues that any bodily-movement-implying characterization of options will suffer from problems analogous to those facing the proposal that Sally has “run to the bottom of the hill” as an option. To avoid those sorts of problems, one needs to be certain that one wouldn’t fail at enacting a given option if one went for it (otherwise one can cook up decision situations in which the consequences of “going for” that option and failing are severe enough to make “going for it” look unattractive, and evaluating it looking only at the case of success look utterly reckless). But—argues Hedden—one will never be completely certain of enacting any option which entails bodily movement, as descriptions of overt actions do. In light of this and other constraints, Hedden proposes that the real options an agent has, the ones which are rationalized by beliefs and desires, are mental states—what he calls decisions, which I will forthwith identify with the formation of intentions. So Sally’s route-choice options, strictly speaking, are to form the intention to run to the bottom of the hill, or to form the intention to take the traversing route. Forming the intention is compatible with ultimately failing to fulfill the intention—and so on the Jeffrey story, that intention-formation is evaluated in a way that factors in possibilities where Sally falls and injures herself. So risks and bodily fallibility are appropriately factored in.

But of course, the last post exactly laid down teleosemantic foundations for mental states of intending. So the Hedden proposal looks like the ideal pairing for this kind of story of layer-1 base facts (as would similar “internalist” accounts in the literature, involving “volitions” or “willings”, against a cognitive architecture where such states have the function to bring about certain states of the world).

It’s not quite as immediate as it might at first seem, however. It is one thing to identify the type of thing that an option is, and note that their representational properties have been provided for. It is another to lay down a theory that specifies what x’s options are in situation y. The options-as-intentions proposal addresses the first task, but not the second. Consider Sally again—her options are all intention-formations, but which ones? Is, for example, intending to instantly teleport to the base of the hill an option for her? On the one hand, it’s not clear what would go wrong if we did include that intention among her options—after all, Sally will rightly be very sceptical that such an intention would be fulfilled, so it’s not going to challenge running to the bottom or traversing for top spot. On the other hand, it’s not even clear what it would be for Sally to form such an intention, given her knowledge of her own limitations (in contrast, for example, to Sally forming the intention to run the final flat 100m in 9 seconds. It’s a bit questionable even there whether she can form that intention, but at least it’s clear what it would be to make an attempt). The worry is that if, to identify the relevant set of intention-formations that constitute Sally’s options, we have to appeal to broader features of her psychology (her knowledge of her abilities, her ability to fill in a plan in sufficient detail, etc.), then we’ll end up unable to characterize Sally’s options without peeking at layer-2 representational facts, which are out of bounds at this stage of the project.

Another way in which the match is not quite as perfect as it seems: the teleosemantic foundations won’t ground the content of any arbitrary mental state we might in natural language label an “intention”. For remember, the intentions that get content in that way must have the function to bring about their content, which means that in all normal scenarios where the mental state is formed, it causes the events identified by its content. That’s plausibly true for intentions to perform basic bodily movements—to make one’s arm go up, to turn one’s head, to put one foot in front of another. It is not plausibly the case for “intentions” like: to get to work by 12pm. In forming that intention, I trust that the trains are going to run approximately to schedule. But that trust could be misplaced without any “malfunction” on my part, and trains breaking down are not abnormal elements of my environment. High-level intentions like this bring about their contents only in a restricted set of normal possibilities—ones, for example, where associated beliefs are true. But that isn’t provided for by the pure teleosemantic account of intention-content I have endorsed, and the role played by belief in an intuitive characterization of the connection here suggests that it won’t be available at layer-1.

These two problems interact in interesting ways. The second point deprives us of the right to intentions that we might have thought were involved in ordinary decision situations. But it also deprives us of the unrealistic intentions that cause the first set of problems. An intention to teleport to the bottom of the hill is paradigmatically a putative “intention” that does not (in us) have the function to bring about its fulfillment.

A principled—but committal—package deal suggests itself. Options are intentions, as Hedden suggests. But more than that, they are basic intentions: exactly those intentions which, in us, have the function to bring about their fulfillment. Such intentions are plausibly going to be restricted to low-level contents, though in the process of acquiring and internalizing skills one might increase one’s repertoire. “Intending to run to the bottom of the hill” may be too high-level to really count as one of Sally’s options; but while a novice runner on the fells might have to form intentions that go foot-placement by foot-placement, for a skilled runner a single mental state might have the function (via triggering automated downstream motor states) to bring it about that she is running downhill in a particular direction, an activity that involves all sorts of components. (Whether the story makes room for this attractive extension depends on the account of functions we ultimately offer, and in particular on whether learning produces functions in the relevant sense.) With the relevant kind of intentions thus constrained and sensitive to the skills and capacities of the agent, the proposal is that an agent’s set of options comprises all the possible intentions of this kind that it is possible for the agent to form, no matter how “irrational” it would be to do so. This will still include in the option set intentions that the agent is confident she will be unable to enact. The options for an agent tied to a chair may include: forming the intention to stand up; and that agent will be very confident that if they formed that intention, they would fail to stand up, due to the ropes that bind them. Such options would be quickly discarded or ignored by any efficient decision-making architecture, but what a decision-making architecture finds efficient to evaluate is quite a different question from what the normatively relevant options are.

(None of this, by the way, deprives us of the right to describe an agent, more or less loosely, as making plans or choosing options described in higher-level terms. But legitimating that sort of talk will have to wait until we’ve earned the right to higher-level psychological descriptions. The presupposition here is just that there is a basic level of description of how an agent acts which features them rationally forming these kinds of basic intentions to guide their behaviour.)

Wrapping up: in this post I’ve emphasized the role that options play in determining whether an agent is practically rational. As noted in a previous post, if we try to base rationalization only on overt behaviour, in an attempt to avoid the layer-1 representational facts of the last two posts, then we won’t get a fix on options, which leave no behavioural signature. And if we leave options unconstrained in interpretation, assigning a fatalistic set of options will always be available to produce cheap and disreputable rationalization. But once we have earned the right to the representational facts concerning perception and intention, we can tackle the problem of characterizing options head-on. I have not argued for one theory over another here, and doing so will always involve contentious stances in practical normative theory, but I have worked through some ins and outs of one proposal, on which options are themselves intentions.

NoR 3.3: Intention

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Neander gives a theory of the representational contents of sensory-perceptual systems. She is explicit that this account is intended to ground “original intentionality” in contrast to “derived intentionality”, where

“derived intentionality [is defined] as intentionality derived (constitutively) from other independently existing intentionality, and original intentionality [is defined as] intentionality not (constitutively) so derived”.

Neander’s view is that original intentionality belongs at least to sensory-perceptual states “…and might only belong to them”. On the contrary, I want to argue that certain other states have almost exactly this sort of original intentionality.

I will assume that our agents’ cognitive architecture includes an intentional-motor system, which takes as input representations from the central cognitive system (intentions to do something), and outputs states to which we may have limited or no conscious access, but which control the precise behaviour needed to implement the intention. I suggest that original intentionality belongs also to these intentional-motor states, and that the metaphysics of this sort of representation is again teleoinformational. Indeed, it will be a mirror-image of the story of the grounding of representation in sensory-perceptual states—the differences traceable simply to the distinct directions of fit of perception and intention.

Our starting point is thus the following:

  • An intentional-motor representation R in intentional-motor system M has [E occurs] as a candidate-content iff M has the function to produce E-type events (as such) as a result of an R-type state occurring.

This time, representation is analyzed as a matter of a production-function rather than a response-function, but this simply amounts to reversing the direction of causation that appeared in the account of perception.

We can illustrate this again with a non-biological example. Consider a collection desk at a shop: every shopper has a half-ticket-stub, and as their goods are brought up from storage, the other half of their ticket is hung up on a washing line in front of the desk. The system is functioning “as designed” when hanging up half of ticket number 150 causes the shopper with ticket number 150 to move forward and collect their goods. (The causal mechanism is the shopkeeper collecting the goods, bringing them to the desk, and hanging up the matching ticket.) This is a designed system in which certain states (of tickets hanging on the line) have production-functions.

A perceptual state has many causal antecedents, and many of these are intermediaries that produce the state “by design”. Just so, an intentional state has many causal consequences, many of which it produces “by design”. An intention to hail the taxi (or even: to raise and wave one’s arm) will produce motor states controlling the fine details of the way the arm is raised and waved, as well as the bodily motion of the arm waving and finally the taxi being summoned. Again, the more proximal states produced “by design” are means to an end: producing the most distal state. To capture this, we mirror the account given in the case of perception:

  • Among candidate contents E1, E2 of an intentional-motor state R, let E1>E2 iff in M, the function to produce E1-type events as a result of an R-type state occurring is a means to the end of producing E2-type events as a result of an R-type state occurring, but not vice versa.
  • The content of R is the >-minimal candidate content (if there are many >-minimal contents, then the content of R is indeterminate between them).
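Abstracting from the philosophy for a moment, picking the >-minimal candidate content is just picking the minimal elements of a strict partial order generated by the means-end links. Here is a minimal sketch in Python, using (purely for illustration) the taxi-hailing chain of consequences from above; the candidate contents and means-end links are invented placeholders, not part of the account itself:

```python
# Toy model of selecting the >-minimal candidate content of an
# intentional-motor state. Candidates and means-end links below are
# illustrative placeholders for the taxi-hailing example.

candidates = ["motor program runs", "arm rises", "arm waves", "taxi is summoned"]

# (a, b) records: producing a-type events is a means to the end of
# producing b-type events (so a > b in the ordering of the text).
means_to = {
    ("motor program runs", "arm rises"),
    ("arm rises", "arm waves"),
    ("arm waves", "taxi is summoned"),
}

# Take the transitive closure so indirect means-end links count too.
closure = set(means_to)
changed = True
while changed:
    changed = False
    for (a, b) in list(closure):
        for (c, d) in list(closure):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True

def greater(a, b):
    """a > b iff producing a is a means to producing b, but not vice versa."""
    return (a, b) in closure and (b, a) not in closure

def minimal_contents(cands):
    """>-minimal candidates: those whose production is not a means to anything else."""
    return [b for b in cands if not any(greater(b, a) for a in cands if a != b)]

print(minimal_contents(candidates))  # -> ['taxi is summoned']
```

With a single chain of means-end links, exactly one candidate is >-minimal—the most distal one—so the content comes out determinate, as in the taxi case.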

Suppose that I intend to grasp a green sphere to my right, and suppose that the vehicle of this representation is a single state of my intentional-motor system (a state whose formation will trigger a good deal of further processing before bodily motion occurs). What grounds the fact that that token state represents what it does? On this account, it is because in the evolutionary history of this biological system, states of that type produced a hand prehending and attaching itself to some green sphere to the right of the perceiver—and this feature was selected for. Though there were other causal consequences of that state that were also selected for, they were selected as a means to the end of producing right-green-sphere-graspings.

I will be using this teleoinformational account as my treatment of the first-layer intentionality of action. So when we see appeal, in radical interpretation, to rationalizing *dispositions to act* given the experiences undergone, the “actions” are to be cashed out in terms of teleoinformational contents.

The focus here has been the contents of certain mental states—intentions, motor states and the like. Typical actions (raising an arm, hailing a taxi, etc) in part consist of physical movements of the body, so I haven’t yet quite earned the right to get from Sally-as-a-physical-system to Sally-as-acting-in-the-world. Further, there’s nothing in the account above that guarantees that states with content grounded in this way are personal-level and rationalizable, rather than subpersonal and arational. The exact prehension of my hand as it reaches for a cup is controlled, presumably, by states of my nervous system, and these states may have a function to produce the subtle movements. But the details are not chosen by me. I don’t, for example, believe that by moving my fingers just so I will grasp the cup, and hence form a person-level intention to move my fingers like that. Rather, I intend to grasp the cup, and rely on downstream processing to take care of the fine details.

So there’s work to be done in shaping the raw material of first-layer intentionality just described into a form where it can feed into the layer-2 story about radical interpretation that I am putting forward. This may involve refining the formulation of radical interpretation, in addition to focusing in on the correct contentful states. It’s open to us to question whether actions are the things that need to be rationalized, or whether that’s just a hangover from the (behaviourist?) idea that overt bodily movements form a physicalistic basis for radical interpretation. Readers will now spot, however, that these are just salient examples of the same point we saw earlier with perception and experience. In both cases, we need to show how the material grounded in the teleosemantic account of sensation/perception and intention/motor states allows us to characterize the relata of the rationalization relation at the heart of radical interpretation.

In this post and the previous, I’ve given you my story about the foundations of layer-1 intentionality, in one case directly lifted from the teleosemantic tradition; in another, a mirror-image adaptation of it. Three items now define our agenda for the rest of this subseries.

  1. We need to explain how the raw materials are shaped into an account of the base facts for radical interpretation: the relata of substantive rationality.
  2. As flagged in the first post in this subseries, we need an account of what our interpretee’s options were, those not taken as well as those taken, since we rationalize choices or actions against a backdrop of available options.
  3. The appeal to “functions” of elements of biological systems (specifically, sensory-perceptual and intentional-motor) is a working primitive of this account. That will continue to be the case, but I want to at least briefly look at the problems that may arise, to reassure ourselves that the account won’t be dead on arrival.

NoR 3.2: Experience

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

The job of the next few posts is to fill out the details of how source-intentionality is to be grounded. And as flagged, here I will draw on Karen Neander’s recent defence of a teleosemantic account of sensory-perceptual content.

This post lays out Neander’s approach to perceptual content. As mentioned, Neander concentrates on the representational content of sensory-perceptual states—so ones that occur within a particular cognitive system. Her story comprises two steps. The first is the following:

  • A sensory-perceptual representation R in sensory-perceptual system S has [E occurs] as a candidate-content iff S has the function to produce R-type events in response to E-type events (as such).

So let’s unpack this. The key notion here is the appeal to the function of something within a certain system. It’s this appeal that makes the account part of the teleosemantic tradition. Now, there’s a lot that could be said about what grounds facts of the form “x has function y in system z”. All we need, for now, is the assumption that these are “naturalistically respectable” and grounded prior to and independently of any representational facts. So for example, a theological account of functions, whereby the function of x is y in z iff God designed x to y in z, is out. More subtly, a stance-relative account of functions, whereby the function of x in z, for a theorist t, depends on t’s projects and aims, is also out. But an etiological theory of functions, whereby the function of x in z is y iff x’s in z were evolutionarily selected to do y, is an option. The details matter, of course (the details always matter), but for now, we’ll treat functions as a working primitive.

Neander’s proposal is that once we see Sally’s sensory-perceptual system as containing states with a variety of functions, it is response-functions that hold the key to analyzing perceptual content. The system is functioning “as designed” when a certain worldly event-type causes a specific state-type to be tokened within it. Consider the following non-biological example. Runners passing a checkpoint throw a tab with their number into a bucket. The system is functioning “as designed” when runner number 150 passing the checkpoint causes there to be a tab with 150 inscribed upon it in the bucket (the causal mechanism is the runner throwing a random tab from those on a loop on their belt into the bucket). Of course, things can go wrong (the runner can forget to throw the tab, they can miss the bucket, they may have been given a wrongly-inscribed tab at the start) but those would be cases of the system malfunctioning.

Designed systems, at least, can have “response-functions”. In such cases it’s very natural to think that it’s in virtue of the response-function that the contents of the bucket record or represent the runners who have passed. Neander’s contention is that biological systems with etiological functions can work analogously. Because the grounding of the relevant functions doesn’t require intentions or design but just a pattern of selection in evolutionary history, this is a way of grounding such representation in non-representational facts.

Now, one famous challenge to naturalistic theories of representation (especially perceptual representation) was to distinguish those items in the causal history of an episode of perception which figure in the content of the perception from those that do not. For example, a red cube observed from a given angle causes a certain pattern of retinal stimulation, which in turn causes a certain state R to obtain in the sensory-perceptual system. The perception has a content that concerns red cubes, not retinal stimulations. Yet it’s perfectly true that part of a well-functioning sensory-perceptual system is that it responds to retinal stimulations of a certain pattern by producing R. It’s also true that the well-functioning system produces R in response to red cubes at the given angle—indeed, within the system, the response to retinal stimulation is the means by which it responds to “distal” red cubes. But we had better not analyze perceptual content as covering everything to which the perceptual system has a function to respond, else we’ll lump proximal and distal events together. This is why the gloss above talks of “candidate contents”, not “contents” simpliciter. Neander appeals to an asymmetric means-end relation in the functioning of the system to narrow things down. Here is my reconstruction of her proposal:

  • Among candidate contents E1, E2, let E1>E2 iff in S, the function to produce R-type events as a response to E1 is a means to the end of producing R-type events as a response to E2, but not vice versa.
  • The content of R is the >-minimal candidate content (if there are many >-minimal contents, then the content of R is indeterminate between them).
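The indeterminacy clause in the second bullet can be pictured concretely: if the means-end order fails to rank two candidates against one another, both are >-minimal, and the content of R is indeterminate between them. Here is a toy sketch; the rival distal candidates are invented for illustration and are not drawn from Neander:

```python
# Toy illustration of the indeterminacy clause: two candidate contents
# that the means-end order does not rank are both >-minimal, so the
# content of R is indeterminate between them. Candidates are invented.

# (a, b) records: responding to a is a means to responding to b (a > b).
over = {
    ("retinal stimulation", "red cube to the right"),
    ("retinal stimulation", "red sphere to the right"),
    # No link ranks the two distal candidates against each other,
    # so neither outranks the other.
}

def minimal(cands, order):
    """Return candidates that are not > anything else: the >-minimal ones."""
    return sorted(b for b in cands if not any((b, a) in order for a in cands if a != b))

cands = {"retinal stimulation", "red cube to the right", "red sphere to the right"}
print(minimal(cands, over))
# Both distal candidates survive, so content is indeterminate between them.
```

The proximal candidate is screened off (responding to it is a mere means), but nothing in the order decides between the two distal candidates, which is exactly the situation the parenthetical clause addresses.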

Suppose that I perceive a red cube to my right, and suppose that the vehicle of this representation is a single state of my sensory-perceptual system (presumably a state produced after a fair degree of processing has gone on). What grounds the fact that that token state represents what it does? On this account, it is because in the evolutionary history of this biological system, states of that type were produced in response to the presence of red cubes to the right of the perceiver, and this feature was selected for. The process by which the states were produced includes intermediary objects and properties, and the sensory-perceptual state was produced in response to those no less than the red cube  (perhaps the intermediary states include three mutually orthogonal red surfaces orientated towards the subject, a certain pattern of retinal stimulation in the subject, etc). However, the function to respond to such intermediaries is a mere means to the end of responding to the presence of *red cubes to the right*.

I will be using Neander’s theory as my account of the first-layer intentionality in perception. When we see appeal, in radical interpretation, to rationalizing dispositions to act given the *experiences* undergone, the “experiences” are to be cashed out in terms of teleoinformational contents. As I mentioned in the last post, there’s further work to be done in turning these representational raw materials into the kind of base facts that radical interpretation needs—identifying the relata of rationalization. How do we get from the content of possibly subpersonal representational states of the sensory-perceptual system, to the content of experience, and ultimately to the impact of that experience on rational belief? This will be addressed in future posts.