NoR 3.3: Intention

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Neander gives a theory of the representational contents of sensory-perceptual systems. She is explicit that this account aims to ground “original intentionality” in contrast to “derived intentionality”, where

“derived intentionality [is defined] as intentionality derived (constitutively) from other independently existing intentionality, and original intentionality [is defined as] intentionality not (constitutively) so derived”.

Neander’s view is that original intentionality belongs at least to sensory-perceptual states “…and might only belong to them”. Against that latter suggestion, I want to argue that certain other states have almost exactly this sort of original intentionality.

I will assume that our agents’ cognitive architecture includes an intentional-motor system, which takes as input representations from a central cognitive system (intentions to do something), and outputs states to which we may have limited or no conscious access, but which control the precise behaviour needed to implement the intention. I suggest that original intentionality belongs also to these intentional-motor states, and that the metaphysics of this sort of representation is again teleoinformational. Indeed, it will be a mirror-image of the story of the grounding of representation in sensory-perceptual states, the differences traceable simply to the distinct directions of fit of perception and intention.

Our starting point is thus the following:

  • An intentional-motor representation R in intentional-motor system M has [E occurs] as a candidate-content iff M has the function to produce E-type events (as such) as a result of an R-type state occurring.

This time, representation is analyzed as a matter of a production-function rather than a response-function, but this simply amounts to reversing the direction of causation that appeared in the account of perception.

We can illustrate this again with a non-biological example: a collection desk at a shop. Every shopper holds a half-ticket-stub, and as their goods are brought up from storage, the other half of their ticket is hung up on a washing line in front of the desk. The system is functioning “as designed” when hanging up half of ticket number 150 causes the shopper with ticket number 150 to move forward and collect their goods. (The causal mechanism is the shopkeeper collecting the goods, bringing them to the desk, and hanging up the matching ticket.) This is a designed system in which certain states (of tickets hanging on the line) have production functions.

A perceptual state has many causal antecedents, and many of these antecedents are intermediaries that produce the state “by design”. Just so, an intentional state has many causal consequences, many of which it produces “by design”. An intention to hail a taxi (or even: to raise and wave one’s arm) will produce motor states controlling the fine details of the way the arm is raised and waved, as well as the bodily motion of the arm waving and finally the taxi being summoned. Again, the more proximal states produced “by design” are means to an end: producing the most distal state. To capture this, we mirror the account given in the case of perception:

  • Among candidate contents E1, E2 of an intentional-motor state R of system M, let E1>E2 iff in M, the function to produce E1-type events as a result of an R-type state occurring is a means to the end of producing E2-type events as a result of an R-type state occurring, but not vice versa.
  • The content of R is the >-minimal candidate content (if there are many >-minimal contents, then the content of R is indeterminate between them).
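The selection of the >-minimal candidate content can be given a toy computational sketch. This is purely illustrative: the candidate contents and the explicit means-to-end relation below are stipulated for the taxi-hailing example, not something the teleoinformational account itself supplies.

```python
# Toy model of the >-ordering over candidate contents of one
# intentional-motor state (hypothetical taxi-hailing example).
# Candidates run from proximal (motor command) to distal (taxi summoned).
candidates = {"motor command issued", "arm raised and waved", "taxi summoned"}

# (e1, e2) in means_to encodes E1 > E2: the function to produce E1-type
# events is a means to the end of producing E2-type events, not vice versa.
means_to = {
    ("motor command issued", "arm raised and waved"),
    ("motor command issued", "taxi summoned"),
    ("arm raised and waved", "taxi summoned"),
}

def minimal_contents(candidates, means_to):
    """Return the >-minimal candidates: those that are a means to no further end."""
    return {e for e in candidates
            if not any((e, e2) in means_to for e2 in candidates)}

print(minimal_contents(candidates, means_to))  # {'taxi summoned'}
```

If several candidates tie for >-minimality, the function returns them all, matching the clause above on which the content of R is then indeterminate between them.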

Suppose that I intend to grasp a green sphere to my right, and suppose that the vehicle of this representation is a single state of my intentional-motor system (a state whose formation will trigger a good deal of further processing before bodily motion occurs). What grounds the fact that that token state represents what it does? On this account, it is because, in the evolutionary history of this biological system, states of that type produced a hand prehending and attaching itself to some green sphere to the right of the perceiver, and this feature was selected for. Though there were other causal consequences of states of that type that were also selected for, they were selected as a means to the end of producing right-green-sphere-graspings.

I will be using this teleoinformational account as my treatment of the first-layer intentionality of action. So when we see appeal, in radical interpretation, to rationalizing *dispositions to act* given the experiences undergone, the “actions” are to be cashed out in terms of teleoinformational contents.

The focus here has been the contents of certain mental states—intentions, motor states and the like. Typical actions (raising an arm, hailing a taxi, etc) in part consist of physical movements of the body, so I haven’t yet quite earned the right to get from Sally-as-a-physical-system to Sally-as-acting-in-the-world. Further, there’s nothing in the account above that guarantees that states with content grounded in this way are personal-level and rationalizable, rather than subpersonal and arational. The exact prehension of my hand as it reaches for a cup is controlled, presumably, by states of my nervous system, and these states may have a function to produce the subtle movements. But the details are not chosen by me. I don’t, for example, believe that by moving my fingers just so I will grasp the cup, and hence form a person-level intention to move my fingers like that. Rather, I intend to grasp the cup, and rely on downstream processing to take care of the fine details.

So there’s work to be done in shaping the raw material of first-layer intentionality just described into a form where it can feed into the layer-2 story about radical interpretation that I am putting forward. This may involve refining the formulation of radical interpretation, as well as focusing in on the correct contentful states. It’s open to us to question whether actions are the things that need to be rationalized, or whether that’s just a hangover from the (behaviourist?) idea that overt bodily movements form a physicalistic basis for radical interpretation. Readers will by now spot, however, that these are just salient examples of the same point we saw earlier with perception and experience. In both cases, we need to show how the material grounded in the teleosemantic account of sensation/perception and intention/motor states allows us to characterize the relata of the rationalization relation at the heart of radical interpretation.

In this post and the previous one, I’ve given you my story about the foundations of layer-1 intentionality: in one case directly lifted from the teleosemantic tradition; in the other, a mirror-image adaptation of it. Three items now define our agenda for the rest of this subseries.

  1. We need to explain how the raw materials are shaped into an account of the base facts for radical interpretation: the relata of substantive rationality.
  2. As flagged in the first post in this subseries, we need an account of what our interpretee’s options were, those not taken as well as those taken, since we rationalize choices or actions against a backdrop of available options.
  3. The appeal to “functions” of elements of biological systems (specifically, sensory-perceptual and intentional-motor) is a working primitive of this account. That will continue to be the case, but I want to at least briefly look at the problems that may arise, to reassure ourselves that the account won’t be dead on arrival.