NoR section 3 supplemental: functions I

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

The account of source intentionality I offer appeals to a notion of a state (within a system) having a function to do this or that. Perceptual states have the function to be produced in response to specific environmental conditions. Those states can occur and represent what they do even when the environmental conditions are missing (as a result of perceptual illusion or deliberate retinal stimulation by an evil scientist). So clearly it’s important that in the relevant sense, something can have a function it is not currently discharging: that it can malfunction when it is in the wrong conditions or when interfered with. A state can have a function to X even when it’s not functioning by X-ing.

Functions like this (“normal, proper functions”) can look somewhat spooky. What makes it the case that the perceptual state is for representing red cubes, even in circumstances where it’s being triggered by something other than a red cube? What grounds this teleology?

Giving an answer to this question is strictly supererogatory. Relative to my aims, functions can be a working primitive. It is even consistent with everything that has been said here that with these functions, we hit metaphysical bedrock—that facts about functioning are metaphysically fundamental. That would be a reductive account of the way that representation is grounded in a fundamentally non-representational world—but a non-representational world that includes unreduced teleological facts. There are those who are interested in reductive projects because of an antecedent conviction that everything is grounded ultimately in [micro-]physics, and for them, hitting rock bottom at teleological facts about macroscopic systems would count as a defeat. That is not my project, and for me, a reduction that bottoms out at this level would be perfectly acceptable. Arguably, it should still count as a “naturalistic” reduction of representation—since biological theorizing prima facie is shot through with appeals to functions [the function of a heart being to circulate blood around the body—whether or not it is currently so functioning].

My commitments are as follows. First, I am committed to disagreeing with those who would deny the existence of normal proper functions and analyze quotidian and scientific function-talk in some other fashion—realism about proper functions. Second, on pain of circularity, the relevant functions can’t be grounded in representational facts—independence from representation.

I add a pair of desiderata. These are not commitments of the account, since a version of the project could run even if they are denied. So: third, I hold that functions can be established through learning: not all functions are fixed by innate biology. If this were denied, then I could not say what I said in previous posts about acquisition of behavioural or perceptual skills extending the range of perceivings and intendings available to a creature. While certain discussions would need revisiting if functions couldn’t be acquired in this way, the overall shape of the project would remain.

Fourth and finally, I deny that representational facts are wildly extrinsic. I say that a perfect duplicate of a perfectly functioning human would perceive, believe, desire, and act—even if that duplicate was produced not by evolution but by a random statistical mechanical fluctuation [a “Boltzmann creature”].

The job in what follows, then, is not to give you a theory of how functions are grounded, but to examine whether the commitments and desiderata are jointly realizable.

The first account of functions we’ll examine is the etiological account of function. This is the view that proper functions [of a type of state] are grounded in historical facts about how states of that type were selected. This is Neander’s favoured approach, and in her (1991) she characterizes the view as follows:

It is the/a proper function of an item (X) of an organism (O) to do that which items of X’s type did to contribute to the inclusive fitness of O’s ancestors, and which caused the genotype, of which X is the phenotypic expression, to be selected by natural selection.

The etiological account of proper functions meets the realism and independence commitments. But it violates both my supplementary desiderata. By tying functions to natural selection, it does not underwrite functions acquired in the lifetime of a single creature, and by making functions depend on evolutionary history, it is committed to denying that the states of Boltzmann creatures [or Davidson’s “swampman”] have functions in this sense—and given the teleosemantic account of layer-one representations, that is a violation of my desideratum.

The question is then whether some adjusted or extended variant of the etiological account of functions could meet the desiderata. Neander, for one, presents the account above as a precisification, appropriate for biological entities, of the vaguer analysis of the function of something being the feature for which the thing was selected. The vaguer analysis allows other precisifications: for example, she contends it also covers the functions to do something that elements of an artefact possess, in virtue of being selected by their artificer to do that thing. On a creationist metaphysics on which God is the artificer of creatures like us, we could still offer the teleosemantic story about the content of perception, but with this alternative understanding of what the function talk amounts to. I take it that the vaguer analysis also allows for selection within the lifetime of a creature—functions resulting from learning.

While this extension of the etiological account shows a way for it to meet the learning desideratum, it is no longer clear that such an extended account of function would be independent of representational notions. That’s familiar enough for artefacts and the creationist underpinning: unpacking “selection by an artificer” will appeal to the intentions and beliefs of that artificer. We can still include appeal to such functions in our account, but they can’t come in to explain layer-one intentionality as I have characterized it; they must come downstream of having earned the right, at layer two, to the artificer’s representations [alternatively, the creationist might posit a new layer zero, consisting of the primitive? representational states of God].

What’s most interesting to me, however, is how this plays out with selection-by-learning. It could be that in at least some cases, this kind of selection does depend on the intentions and beliefs of the one doing the learning. Let’s suppose that is so, in the interests of exploring a “worst case scenario”. This would indeed mean that such acquired functions wouldn’t be present at layer one. The moral may be that the idea of just two stages was oversimple. Better might be a looping structure: facts about evolutionary history ground innate functions, which provide a base layer of perceptual and motor representation. These are the basis for radical interpretation to ground the beliefs and desires of the agent. These belief and desire facts ground further acquired functions, which give a supplementary layer of perceptual and motor representation. Radical interpretation is then reapplied to ground a supplementary layer of beliefs and desires. The structure loops, and so long as every relevant representational state is grounded at some iteration of the loop, this is fine.

It is much harder to see how the etiological account can meet the no-wild-extrinsicality desideratum. In the literature, etiological teleosemanticists do not even make the attempt, but argue that we should learn to live with it. If you agree with them, you can stop worrying. I still worry.

So let’s take stock. I have explored the etiological theory of functions, not because I feel the need to provide a metaphysics of functions—after all, it’s fine by me if this kind of teleology is metaphysical bedrock. Rather, I want to test whether the commitments and desiderata of my deployment of functions are jointly realizable. One account that has been given of functions is the etiological one, and I have suggested that some changes [the introduction of a looping structure] would be needed if the learning desideratum were to be met in the context of that account. And, of course, the no-wild-extrinsicality desideratum is violated.
