NoR 3.4: Options

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link.

Sally runs at top speed down a scree slope. Whether that’s a reasonable thing for her to have done depends on multiple factors. It depends on the dangers of running downhill at that speed, and the likely effects of so doing. It depends on the (dis)value of the effects and risks. But, crucially, it depends also on what options are available to her. Perhaps running downhill at top speed would bring it about that Sally wins her race, running a certain level of risk of injury. Alternatively, Sally might have another, much safer, way of winning her race, taking a traversing route that avoids the risky downhill. Against the backdrop of options-not-taken, careering downhill is unreasonable. But if there’s no such alternative, her route-choice may be perfectly reasonable.

The actions that people perform typically have a behavioural signature. Sally couldn’t run at top speed down a mountain without her body tracing a largely descending path, nearing the limit of her physiological capacities. But the actions that people do not perform—their options not taken—leave no signature in actual behaviour.

Options, moreover, matter to the rationality of a choice. Let’s return to our Bayesian model for an illustrative example of this. The standard way of representing Sally’s decision situation is to draw up a “decision table”, which schematically looks something like this:
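
|           | State S1    | State S2    | … |
|-----------|-------------|-------------|---|
| Action A1 | Outcome O11 | Outcome O12 | … |
| Action A2 | Outcome O21 | Outcome O22 | … |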

The “states” that head the columns are understood to represent various background conditions. The actions that head the rows are the options available to Sally. Each cell will then represent the outcome of performing the action that heads its row, in the background condition that heads its column. Sally will desire each outcome to a greater or lesser extent. She will have more or less confidence that each background state obtains. Once we “frame” a decision situation in this way, it is natural to evaluate the options Sally has by somehow aggregating the desirabilities of the possible outcomes of so acting. According to Jeffrey’s evidential decision theory, for rational agents the desirability of an action A will be a weighted average of the desirabilities of the possible outcomes O, where the weights are given by the conditional credence the agent gives to O given A. The rational action(s) are the one(s) with maximum desirability.
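
In symbols (a standard rendering of Jeffrey’s rule; the notation here is supplied by me rather than drawn from the post):

Des(A) = Σ_O Cr(O | A) · Des(O)

where Cr is the agent’s credence function and the sum ranges over the possible outcomes O.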

Whether an act actually performed is rational is a contrastive matter: the act needs to be at least as desirable as any other option. Add or remove options, and one can change an optimal act into a suboptimal one, and depict a rational agent as irrational or vice versa. If one were utterly unconstrained in assigning options to each decision situation, there would be a cheap trick available that would make absolutely any set of dispositions-to-act rational, no matter what the agent’s beliefs and desires are. One simply represents the action taken as the only option for that agent at that time, ensuring, vacuously, that it maximizes desirability. Fatalism delivers cheap, disreputable rationality.

This situation matters for radical interpretation. It suggests that the base “disposition to act” facts for determining a correct interpretation are not absolute (the agent did such-and-such) but contrastive (they did such-and-such as opposed to so-and-so,…).

The task of accounting for the agent’s options within our account of source intentionality is made more complex by the fact that there’s no consensus on what those options are. The theory of rationality itself can make very strong presuppositions about this—for example, Jeffrey’s way of evaluating the desirability of an option, above, completely ignores outcomes in which the option is pursued but not realized. That matters. Suppose that we described Sally’s options during her race as “run to the bottom of the hill” or “take the traversing route”. The expected Jeffrey-utility of the proposition Sally runs to the bottom of the steep hill may be much higher than that of the proposition Sally takes the traversing route. But that clean win only comes about because a salient risk of going for the first “option” (falling over, getting injured, and failing to get to the bottom) is ignored by Jeffrey’s calculations, which consider only the various possible ways the proposition in question might be made true. So if we’re to go Jeffrey’s way, it looks like the first-pass gloss on the options is wrong, and we need to move to some more “open” characterization of the options: starting to run down the hill, or starting the traversing route, or trying…  etc. And to emphasize: this constraint arises because of the way the decision theory is set up. Different theories of rationality might place different demands. Unless we hold the theory of rationality fixed, the “options” we need to underwrite are a moving target.
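
To make the point concrete, here is a toy calculation with invented numbers. Suppose winning by the downhill route has desirability 10, falling and getting injured has desirability −50, and the safe traverse has desirability 6. Conditional on the proposition Sally runs to the bottom of the steep hill being true, she has made it down and won, so Jeffrey assigns that “option” desirability 10, comfortably beating the traverse’s 6. But conditional on the more open proposition Sally starts running down the hill, there might be, say, a 0.8 chance she reaches the bottom and a 0.2 chance she falls, for a desirability of 0.8 × 10 + 0.2 × (−50) = −2, which now loses to the traverse. Re-describing the options reverses the verdict.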

Still, we have a lot of raw materials to play with, however this goes. We have at our disposal not just the raw behaviour and physical capacities of the agent, but also facts about what the agent intends, what it’s physically possible for them to intend, and what would result were they to so intend. And the key assumption the radical interpreter needs is that, one way or another, this material suffices to fix the agent’s options. So long as all parties agree on this, they can happily fight out among themselves the right way of filling in the details, noting ahead of time that this will be contested, largely because it involves taking a stand on the contentious issue of option-individuation.

Let me give one example of a theory of options in the current literature, one which is motivated quite independently of considerations about the metaphysics of interpretation. Brian Hedden, working within the decision-theoretic model of rationality, argues that any bodily-movement-implying characterization of options will suffer from problems analogous to those afflicting the proposal that Sally has “run to the bottom of the hill” as an option. To avoid that sort of problem, one needs to be certain that one wouldn’t fail to enact a given option if one went for it (otherwise one can cook up decision situations in which the consequences of “going for” that option and failing are severe enough to make “going for it” look unattractive, and evaluating it by looking only at the case of success look utterly reckless). But—argues Hedden—one will never be completely certain of enacting any option which entails bodily movement, as descriptions of overt actions do. In light of this and other constraints, Hedden proposes that the real options an agent has, the ones rationalized by beliefs and desires, are mental states—what he calls decisions, which I will forthwith identify with the formation of intentions. So Sally’s route-choice options, strictly speaking, are to form the intention to run to the bottom of the hill, or to form the intention to take the traversing route. Forming the intention is compatible with ultimately failing to fulfill the intention—and so on the Jeffrey story, that intention-formation is evaluated in a way that factors in possibilities where Sally falls and injures herself. So risks and bodily fallibility are appropriately accounted for.

But of course, the last post laid down exactly these teleosemantic foundations for mental states of intending. So Hedden’s proposal looks like the ideal pairing for this kind of story about layer-1 base facts (as would similar “internalist” accounts in the literature, involving “volitions” or “willings”, against a cognitive architecture where such states have the function to bring about certain states of the world).

It’s not quite as immediate as it might at first seem, however. It is one thing to identify the type of thing that an option is, and note that the representational properties of things of that type have been provided for. It is another thing to lay down a theory that specifies what x’s options are in situation y. The options-as-intentions proposal addresses the first, but not the second. Consider Sally again—her options are all intention-formations, but which ones? Is, for example, intending to instantly teleport to the base of the hill an option for her? On the one hand, it’s not clear what would go wrong if we did include that intention among her options—after all, Sally will rightly be very sceptical that such an intention would be fulfilled, so it’s not going to challenge running to the bottom or traversing for top spot. On the other hand, it’s not even clear what it would be for Sally to form such an intention, given her knowledge of her own limitations (in contrast, for example, to Sally forming the intention to run the final flat 100m in 9 seconds. It’s a bit questionable even there whether she can form that intention, but at least it’s clear what it would be to make an attempt). The worry is that if, in order to identify the relevant set of intention-formations that constitute Sally’s options, we have to appeal to broader features of her psychology (her knowledge of her abilities, her ability to fill in a plan in sufficient detail, etc.), then we’ll end up unable to characterize Sally’s options without peeking at layer-2 representational facts, which are out of bounds at this stage of the project.

Another way in which the match is not quite as perfect as it seems: the teleosemantic foundations won’t ground the content of any arbitrary mental state we might, in natural language, label an “intention”. Remember, the intentions that get content in that way must have the function to bring about their content, which means that in all normal scenarios where the mental state is formed, it causes the events identified by its content. That’s plausibly true for intentions to perform basic bodily movements—to make one’s arm go up, to turn one’s head, to put one foot in front of another. It is not plausibly the case for “intentions” like: to get to work by 12pm. In forming that intention, I trust that the trains are going to run approximately to schedule. But that trust could be misplaced without any “malfunction” on my part, and trains breaking down are not abnormal elements in my environment. High-level intentions like this bring about their contents only in a restricted set of normal possibilities—ones, for example, where associated beliefs are true. But that restriction isn’t provided for by the account of intention-content delivered by the pure teleosemantic theory I have endorsed, and the role played by belief in an intuitive characterization of the connection suggests that it won’t be available at layer-1.

These two problems interact in interesting ways. The second point deprives us of the right to intentions that we might have thought were involved in ordinary decision situations. But it also deprives us of the unrealistic intentions that cause the first set of problems. An intention to teleport to the bottom of the hill is paradigmatically a putative “intention” that does not (in us) have the function to bring about its fulfillment.

A principled—but committal—package deal suggests itself. Options are intentions, as Hedden suggests. But more than that, they are basic intentions: exactly those intentions which, in us, have the function to bring about their fulfillment. Such intentions are plausibly going to be restricted to low-level contents, though in the process of acquiring and internalizing skills one might increase one’s repertoire. “Intending to run to the bottom of the hill” may be too high-level to really count as one of Sally’s options; but while a novice runner on the fells might have to form intentions that go foot-placement by foot-placement, for a skilled runner a single mental state might have the function (via triggering automated downstream motor states) to bring it about that she is running downhill in a particular direction, an activity that involves all sorts of components. (Whether the story makes room for this attractive extension depends on the account of functions we ultimately offer, and in particular on whether learning produces functions in the relevant sense.) With the relevant kind of intentions thus constrained and sensitive to the skills and capacities of the agent, the proposal is that an agent’s set of options comprises all the intentions of this kind that it is possible for the agent to form, no matter how “irrational” it would be to do so. This will still include in the option set intentions that the agent is confident she will be unable to enact. The options for an agent tied to a chair may include forming the intention to stand up, even though that agent will be very confident that if they formed that intention, they would fail to stand up because of the ropes that bind them. Such options would be quickly discarded or ignored by any efficient decision-making architecture, but what a decision-making architecture finds efficient to evaluate is quite a different question from what the normatively relevant options are.

(None of this, by the way, deprives us of the right to describe an agent, more or less loosely, as making plans or choosing options described in higher-level terms. But legitimating that sort of talk will have to wait till we’ve earned the right to higher-level psychological descriptions. The presupposition here is just that there is a basic level of description of how an agent acts which features them rationally forming these kinds of basic intentions to guide their behaviour.)

Wrapping up: in this post I’ve emphasized the role that options play in determining whether an agent is practically rational. As noted in a previous post, if we try to base rationalization only on overt behaviour, in an attempt to avoid the layer-1 representational facts of the last two posts, then we won’t get a fix on options, which leave no behavioural signature. And if we leave options unconstrained in interpretation, assigning a fatalistic set of options will always be available to produce cheap and disreputable rationalization. But once we have earned the right to the representational facts concerning perception and intention, we can tackle the problem of characterizing options head on. I have not argued for one theory over another here, since doing so will always involve contentious stances in practical normative theory, but I have worked through some of the ins and outs of one proposal, on which options are themselves intentions.
