NoR, 1.1b. Precedents and methodology.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link.

In the last post, I introduced the question of the nature of representation, and set out the way I propose to tackle three core "layers" of representation. Layer (1) concerns the representational properties of perception and action/intention. Layer (2) concerns the representational properties of belief and desire. Layer (3) concerns the representational properties of language.

Precedents.
The strategies that I sketched have two main precedents. The approach to layer (1) is a teleoinformational account of content. Indeed, in the case of perception, to a first approximation I will just borrow Karen Neander's teleoinformational account of the content of sensory-perceptual representations. I will have things to say about ways to adjust and tweak the theory of perceptual content to fit my favoured setting, but the main thing I need to do is convince my readers that the teleoinformational view (Neander-style) has a natural generalization to action-intentional content, and so can serve as a complete theory of first-layer representation.

My approaches to layers (2) and (3) are instances of interpretationist accounts of representation. Here the central precedent, for me, is David Lewis's fascinating, highly influential, but at times frustratingly incomplete work on the topic. Lewis separated layers (2) and (3), and, at least at the level of generality spelled out above, he gave analyses very similar to those I outlined. When we dive into the details, however, we find that Lewis's account is schematic in many ways. What we can extract from Lewis is a space of theories of the nature of representation to explore, not a single, fully fleshed-out account. The project here is to work up a specific theory and tease out specific predictions. It will make sense to compare and contrast with Lewis's work (especially given the way it has been picked up, endorsed, and applied in the literature), but my project is not exegetical. I will riff on a theme that Lewis provides. Even in the overview we have so far, we see the first case of this, since Lewis never gave us details of how he proposed to handle layer (1), or explained how to do without such a layer.

Scope.
My three layers leave much unsaid. I cover only three of the examples that introduced this section (you won't find a discussion of either photographs or memories in what follows). Along with memories, there are many other mental states that are not discussed—among them affective intentional states like fearing the blob and hoping for release, and objectual states like admiring Nelson or attending to a laser-pointer. Some of these states may be analyzed into combinations of the representational states I do analyze, plus other material. Some of them could be given their own grounding using the resources deployed here (it's natural to try out a teleoinformational account of memories as a widening of layer 1, and to try covering additional intentional attitudes by adding extra dimensions to the interpretations grounded at layer 2). But it may be that new ideas are needed.

The human-made world is replete with further representations beyond the words and sentences that are my focus. There are non-verbal signals, where a generalization of my favoured approach to sentences—conventional expression of thoughts—has long been a popular option. Other artefacts may be better treated by an extension of the teleoinformational approach. Photographs, prima facie, seem more informational and less conventional. Within linguistic representation, there are plenty more challenges to explore. Consider stretches of dialogue, which surely have representational properties whose relation to the representational properties of the words and sentences used is complex (consider the mechanisms by which anaphora is resolved, or the co-indexing of variables). Or consider novels and stories: what is represented to be the case by a written work of fiction surely relates in some way to what its sentences say, but the generation of fictional truths is a complex business that is rightly a topic of study in its own right. And who's to say what the best story would be about the representational content of a (more or less abstract) painting? But again and again, in thinking through such cases it is natural to draw on a toolkit of experience, intention, belief, desire, and language. So I think of my chosen topics as a core. If we can get the nature of these kinds of representation right, that will be a platform for generalization and reduction, and a necessary foil for any autonomous treatment of the metaphysics of other sorts of representational phenomena.

The role of empirical data.
I make few appeals to empirical results in cognitive science, biology, psychology, or linguistics in building up my theory. Though the teleoinformational tradition—and the work of Neander that I borrow—draws upon such results, I won't be relying on that aspect of the work. That is not because I think these things are philosophically irrelevant; on another day, engaged in another project, I would happily dive into the details. But there are trade-offs in each research project, and by suppressing certain questions we can focus more intently on others. The question I set myself here is a "how possible" one. How is it possible, in principle, for facts about philosophically central sorts of representation to arise in a fundamentally physical world? I offer an account of one way that it could happen. That account will work for creatures with a certain kind of belief/desire psychology that relates, in the ways I will go on to describe, to perception/action and language. As we will see, articulating this in adequate detail already generates a fascinating landscape of questions.

Let's suppose you agree that my project has been successful. Then there'll be a further question: is this the way it works in us flesh-and-blood human beings? Is it the way it works in frogs? Does it go for the hyperintelligent aliens inhabiting yonder distant planet? It could turn out that the model I worked with really is a good description of some or all such cases, but more likely it will need tailoring to fit the details of this or that case. (So I here distance myself from the "analytic" version of a project like mine, on which the models I construct in the armchair have special authority because they lay out what is implicit in the very concept of belief/desire/representation, etc.) It would be nice if the tailoring proved to be modest, involving "more of the same": for example, further kinds of layer-1 teleoinformational states, a more complex interpretation with more subtly interrelated attitude types at layer 2, and refinements at layer 3 to suit the latest developments in linguistic semantics. Perhaps the tailoring could be more radical. In the limit, although representation could arise in the way I describe, it may arise in quite a different configuration in our case. So be it! Theorists need to speculate to accumulate, and I am happy taking the theoretical gamble involved in the how-possible project in which I am engaged.

There is a place in my project where some more specific, and contingent, assumptions become important. Though the layer-2 story about belief and desire is compatible with many different assumptions about the underlying cognitive architecture of the states that carry this kind of content, we get much more specific predictions about certain species of belief/desire (singular thought, general thought, normative thought, etc.) when we add in specific assumptions. So at various points I'll be assuming that beliefs and desires have vehicles with language-like "conceptual" structure, which enter into inferential relations (I'll go on to specify what these "inferential roles" might be). Together with the general radical-interpretation framework, this allows me to derive results about what the posited "concepts" pick out. What I'm aiming to establish in these sections are conditionals: if such-and-such a cognitive architecture is present, then so-and-so content will be assigned. Those conditionals are of interest to those who have independent arguments for the cognitive architecture in question, for they can apply modus ponens to get the results. Those neutral on the architecture but sympathetic to the conclusions about content derived, on the other hand, can view this as indirect support for the architecture in question—an explanatory/predictive success.
