NoR section 3 supplemental: functions II

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

In the last post, we reviewed a striking feature of the etiological theory of functions: what functions the organs or states of a creature have depends on their evolutionary history. So accidental creations (swamppeople or Boltzmann creatures) that lack an evolutionary history would lack functions. This is very surprising. It is surprising even in the case of hearts: a perfectly functioning heart, a duplicate of your own, on this account lacks the function of pumping blood. It is shocking in connection with representation, where, in conjunction with a teleosemantic account of perception and intention, it means that accidental creations would not perceive or intend anything. Teleosemanticists typically advise that we learn to live with this, and one can coherently add these claims to my favoured metaphysics, but I would much prefer to avoid doing so. So here I look into this a little further.

Teleosemanticists emphasize an important foil to the swampperson/Boltzmann creature cases, one that challenges those of us who don’t want to take the hard line. Swamppeople are accidental replicas of fully functioning systems, but we also need to consider accidental replicas of malfunctioning systems. To convey the flavour, I offer three cases involving artefacts:

First, take a clockwork watch, one of whose cogs has had its teeth broken off. The faulty cog has a specific function within the watch, because that’s the role it would play if the watch were working as designed. That role is the one the cog was selected to play, though it isn’t functioning that way. But now take a swamp duplicate of the watch. Is the smooth metal disk inside it *supposed* to play a certain role within the watch? It’s far from obvious on what grounds we would say so. Or consider a second case: a broken watch/swampwatch pair in which all the cogs are smoothed and mangled, so that it is impossible to reconstruct the “intended” functioning from the intrinsic physical/structural setup alone. If we think that the parts of a badly broken watch still have their original functions (albeit functions they do not discharge, due to damage) while the replica swampwatch, in the absence of the history, does not, that would demonstrate that function is not just a matter of the intrinsic physical setup.

Second, two watches with different inner workings (hence different functions for their cogs) might both malfunction and be so distorted that the resulting broken watches are duplicates of each other. But the functions of the cogs remain distinct. So, once more, functions can’t be preserved under physical duplication. This case dramatizes why we can’t merely “project” functions onto accidental replicas of damaged instances: here, such a procedure would assign different, incompatible functions.

Third, consider cases where a damaged instance of one artefact happens to be a physical duplicate of a fully functioning artefact whose purpose is different. Again, we’re left all at sea in determining which is the “normal pattern” for something that accidentally replicates both.

Each of the above points about artefacts carries over to biological systems: in principle, a damaged version of some merely possible creature could physically duplicate an actually existing creature. And so again, with accidental creations we’re at sea in determining whether they have the functions of the former or the latter.

These cases, it seems to me, do support the claim that the functions of a malfunctioning system are an extrinsic matter.

I take it the argument at this point goes as follows: if it is a wildly extrinsic matter what the functions of system S are, when S is malfunctioning, then “having function F” is a wildly extrinsic matter in all cases. And so it is no cost to the teleosemantic account that it says the same for the case of perception.

There are two ripostes here. The first is that while these considerations may motivate the claim that the functions of perceptual/motor states are wildly extrinsic, that may just show that they are not suitable candidates for being the ground of representation, since representation (we still maintain) is not wildly extrinsic in this manner. The second riposte is that it is not true, in general, that because some instances of a property are wildly extrinsic, all are. Consider the following property: either being a sphere, or being one hundred yards away from an elephant. A sphere possesses this property intrinsically. Whether I possess it depends on my location vis-à-vis elephants, a wildly extrinsic matter. I think that functions, and representation, may pattern just like this: being intrinsically possessed in “good” cases, but extrinsically possessed in cases of malfunction. At the least, it would take further argument to show that this is not the right moral to draw from the foils.

To this point, I have been considering the etiological account of function. This is not the only game in town. Alongside the etiological (historical, selection-based) accounts of functions sit systematic-capacity accounts. In a recent development of the basic ideas of Cummins, Davies characterizes a version of this view as follows:

an item I within a system S has a function to F relative to S’s capacity to C iff there’s some analysis A such that: I can F; A appropriately and adequately accounts for S’s capacity to C in terms of the structure and interaction of lower-level components of S; A does this in part by appeal to I’s capacity to F (among the lower-level capacities); and A specifies physical mechanisms that instantiate the systematic capacities it appeals to.

Now, just at the level of form, we can see two important aspects of the systematic-capacity tradition. First, functions are relativized to specific capacities of a high-level system. And second, it’s a presupposition of the account that items can discharge their functions: “broken cogs” or “malfunctioning” wings will not have their “normal” functions when the system is broken. If we were to appeal to this notion of function within the teleosemantic approach, we would have no problem with the original swampperson case, for the swampperson would instantiate the same perceptual structure we do, and so the functions of its components would be shared. But the two features just mentioned are problematic. The first appears to allow an embarrassing proliferation of functions (standard example: the capacity of the heart to produce certain sounds through a stethoscope may lead to attributing noise-making functions to contractions of the heart). I do not see this as a major problem for the teleosemanticist. After all, one can simply specify the capacities of the sensory-perceptual system or motor-intentional system in the course of the analysis; the interesting constraint here is that we be able to specify these overall capacities of perception or intention in non-representational terms. The second feature, however, has been central. Part of the appeal of the teleosemantic approach was that it could account for cases of illusion and hallucination, which involve malfunctions. Some cases of illusion can still be covered by the story, since a state-type might be functioning in a certain way in a system (e.g. being produced in response to red cubes in conditions C) even when a given token is not produced in response to a red cube, because conditions are not in C. But we can also have malfunctions at the level of types. In synaesthesia produced by brain damage, red experiences may, with external conditions perfectly standard, be produced in response to square-shaped things. This systematic abnormal cause doesn’t undercut the fact that the squares are seen as red. An etiological theorist can point to the fact that the relevant states have the function of being produced in response to red things, and are currently malfunctioning. The systemic theorist lacks this resource.

A systemic capacity account of functions would be an account of function independent of representation, and so fit to take a place as part of the grounds of layer-one intentionality. It also meets our desiderata: it is not wildly extrinsic, and it can underpin learned as well as innate functions, since what matters is the way the system is working to discharge its capacity, not how the system came to be set up that way. But given the points just made, it may not count as a realism about “proper, normal” functions, if those are understood as allowing for wholesale malfunctioning of a state-type. We shouldn’t overstate this: not all cases of illusion or hallucination (misperception, or intentions not realized) are wholesale malfunction. But it does seem that wholesale, type-level malfunction is involved in at least some cases of misrepresentation.

I don’t think this blows the systemic theory of functions out of the water as an underpinning for layer-one intentionality. The etiological theorists, we saw in the previous post, were forced into a drastic revision of intuitive verdicts about swamppeople. And if we’re in the game of overturning intuitive verdicts, the systemic theorist might simply deny that a synaesthete’s red experiences are misrepresentations in the first place. They could fill this out in a variety of ways: by saying that a synaesthete’s chromatic quale now represents a thing as having the disjunctive property of being red-or-square; or by adopting an individual relativism about red, so that to be red for x is to be apt to produce red-qualia in x, in which case the right description is that the synaesthete’s experience accurately represents the square as being red-for-them. It is important to the credibility of this move that one grants my assertion above that mundane cases of illusion involving abnormal environments or visual cues can already be handled by the systemic function account. Once we’re into more unusual, harder cases, the revisionism looks not too costly at all.

Ultimately, then, the systemic capacity account does hold out the prospect of meeting all my commitments and desiderata simultaneously. And remember: my purpose is not to endorse any specific account of functions, but to explore the joint tenability of those commitments and desiderata.

While we were still discussing the etiological theory of functions, I noted that etiological theorists had a decent case that, in cases of drastic enough malfunction, functional properties are extrinsically possessed: it does seem that historical facts explain why we’re tempted to still say that the function of a watch cog is to turn adjacent cogs, even when it has been smoothed off so that it is no longer fit for that purpose. I also noted that it does not follow that functional properties are extrinsically possessed in all instances. We can emphasize this point by noting the possibility of a combined account of function that draws on both theories we have been discussing, thus:

I has the uber-function to F (within a system S and relative to capacity C) iff either I has the systemic-function to F (relative to S/C), or I has the etiological-function to have that systemic-function to F (relative to S/C).

Just as with the disjunctive property I discussed earlier, creatures who are fully functioning—you, me and swampman—will possess such properties independently of historical facts about the origin of our cognitive setup. But this account, unlike the pure systemic-function theory, provides for other creatures to possess the very same property in virtue of historical provenance. On this account, for example, a synaesthete’s red quale may represent the very same red property that yours and mine do, since the state was evolutionarily selected to be produced by the presence of that property. The combined account is not committed to the revisionary implications of either of its components. So this again supports my contention that the commitments and desiderata of my deployment of functions can be jointly satisfied.

