
NoR: 1.1b. Precedents and methodology.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link.

In the last post, I introduced the question of the nature of representation, and sketched the way I propose to tackle three core “layers” of representation. Layer (1) concerns the representational properties of perception and action/intention. Layer (2) concerns the representational properties of belief and desire. Layer (3) concerns the representational properties of language.

Precedents.
The strategies that I sketched have two main precedents. The approach to layer (1) is a teleoinformational account of content. Indeed, in the case of perception, to a first approximation I will just borrow Karen Neander’s teleoinformational account of the content of sensory-perceptual representations. I will have things to say about ways to adjust and tweak the theory of perceptual content to fit my favoured setting, but the main thing I need to do is convince my readers that the teleoinformational view (Neander-style) has a natural generalization to action-intentional content, and so can serve as a complete theory of first-layer representation.

My approaches to layers (2) and (3) are instances of interpretationist accounts of representation. Here the central precedent, for me, is David Lewis’s fascinating, highly influential but at times frustratingly incomplete work on the topic. Lewis separated layers (2) and (3), and at least at the level of generality spelled out above, he gave analyses very similar to those I outline above. When we dive into the details, however, we find that Lewis’s account is schematic in many ways. What we can extract from Lewis is a space of theories of the nature of representation to explore, not a single, fully fleshed-out account. The project here is to work up a specific theory, and tease out specific predictions. It will make sense to compare and contrast with Lewis’s work (especially given the way that it has been picked up, endorsed and applied in the literature), but my project is not exegetical. I will riff on a theme that Lewis provides. Even just looking at the overview we have so far, we already see the first case of this, since Lewis never gave us details of how he proposed to handle layer (1), or explained how to do without such a layer.

Scope.
My three layers leave much unsaid. I cover only four of the examples that introduced this section (you won’t find a discussion of either photographs or memories in what follows). Along with memories, there are many other mental states that are not discussed—among them affective intentional states like fearing the blob or hoping for release, and objectual states like admiring Nelson or attending to a laser-pointer. Some of these states may be analyzed into combinations of the representational states I do analyze, plus other material. Some of them could be given their own grounding using the resources deployed here (it’s natural to try out a teleoinformational account for memories as a widening of layer 1, and to try covering additional intentional attitudes by adding extra dimensions to the interpretations grounded at layer 2). But it may be that new ideas are needed.

The human-made world is replete with further representations beyond the words and sentences that are my focus. There are non-verbal signals where a generalization of my favoured approach to sentences—conventional expression of thoughts—has long been a popular option. Other artefacts may be better treated by an extension of the teleoinformational approach. Photographs, prima facie, seem more informational and less conventional. Within linguistic representation, there are plenty more challenges to explore. Consider stretches of dialogue, which surely have representational properties whose relation to the representational properties of the words and sentences used is complex (consider the mechanisms by which anaphoric reference is fixed, or by which variables are co-indexed). Or consider novels and stories: what is represented to be the case by a written work of fiction surely relates in some way to what its sentences say, but the generation of fictional truths is a complex business that is rightly a topic of study in its own right. And who’s to say what the best story would be about the representational content of a (more or less abstract) painting? But again and again, in thinking through such cases it is natural to draw on a toolkit of experience, intention, belief, desire and language. So I think of my chosen topics as a core. If we can get the nature of these kinds of representation right, that will be a platform for generalization and reduction, and a necessary foil for any autonomous treatment of the metaphysics of other sorts of representational phenomena.

The role of empirical data.
I make few appeals to empirical results in cognitive science, biology, psychology or linguistics in building up my theory. Though the teleoinformational tradition—and the work of Neander that I borrow—does draw upon such results, I won’t be relying on that aspect of the work. That is not because I think these things are philosophically irrelevant; on another day, engaged in another project, I would happily dive into the details. But there are trade-offs in each research project, and by suppressing certain questions, we can focus more intently on others. The question I set myself here is a “how possible” one. How is it possible, in principle, for facts about philosophically central sorts of representation to arise in a fundamentally physical world? I offer an account of one way that it could happen. That account will work for creatures with a certain kind of belief/desire psychology that relates to perception/action and language in the ways I will go on to describe. As we will see, articulating this in adequate detail already generates a fascinating landscape of questions.

Let’s suppose you agree that my project has been successful. Then there’ll be a further question—is this the way it works in us flesh-and-blood human beings? Is it the way it works in frogs? Does it hold for the hyperintelligent aliens inhabiting yonder distant planet? It could turn out that the model I work with really is a good description of some or all such cases, but more likely, it will need tailoring to fit the details of this or that case. (So I here distance myself from the “analytic” version of a project like mine, where the models I construct in the armchair have special authority because they lay out what is implicit in the very concept of belief/desire/representation, etc.) It would be nice if the tailoring proved to be modest, involving “more of the same”: for example, further kinds of layer-1 teleoinformational states, a more complex interpretation with more subtly interrelated attitude types at layer 2, and refinements at layer 3 to suit the latest developments in linguistic semantics. But the tailoring required could be more radical. In the limit, although representation could arise in the way I describe, it may arise in a quite different configuration in our case. So be it! Theorists need to speculate to accumulate, and I am happy taking the theoretical gamble involved in the how-possible project in which I am engaged.

There is a place in my project where some more specific, and contingent, assumptions become important. Though the layer-2 story about belief and desire is compatible with many different assumptions about the underlying cognitive architecture of the states which carry this kind of content, we get much more specific predictions about certain species of belief/desire (singular thought, general thought, normative thought, etc.) when we add in specific assumptions. So at various points I’ll be assuming that beliefs and desires have vehicles with language-like “conceptual” structure, which enter into inferential relations (I’ll go on to specify what these “inferential roles” might be). Together with the general radical interpretation framework, this allows me to derive results about what the posited “concepts” pick out. What I’m aiming to establish in these sections are conditionals: that if such-and-such a cognitive architecture is present, then so-and-so content will be assigned. Those conditionals are of interest to those who have independent arguments for the cognitive architecture in question, for they can apply modus ponens to get the results. Those neutral on the architecture but sympathetic to the conclusions about content, on the other hand, can view this as indirect support for the architecture in question—an explanatory/predictive success.

NoR: 1.1. Layers of Representation.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link.

The experience of running downhill. The recollection of the coffee you drank this morning. The belief that everything is grounded in the physical. The desire to stop eating meat. A photograph of your mother. The words “it’s behind you”.

All share a common feature. They are all representations of something else; they have the spooky feature of “aboutness”. The photograph is a thing that can be put in an album and stored on a shelf. But it represents something else entirely—an individual human being. The belief (let’s assume) is a certain configuration of your brain, but it represents a general feature that the world we inhabit as a whole might have or fail to have. We have one thing (states of our brain, other biological systems or artefacts) representing quite different things: hills, coffee, monsters, or whatever. You won’t be able to think of anything that can’t be represented (suppose you were able to: then in thinking about it, you’d be representing it).

I want to know about the nature of representation—what representation is, how it gets generated, and how different kinds of representation relate to one another. What is the basis for representation and how does it arise out of (“get grounded in”, “reduce to”) that basis?

In this series of posts, I’m going to present an answer to that question. Here is the overarching structure of how the story goes.

(1) The most primitive kind of representation is the “aboutness” we find in perception and action/intention—the two most basic modes in which we and the world interact. This layer of representation consists in states of our head which, if functioning properly, are produced by particular aspects of our environment (perception) or bring about changes in our environment (action/intention). It is to be analyzed, say I, into a combination of teleological (“functioning properly”) and causal (“produced by/bringing about”) features of things. We can understand how this kind of representation can exist in a fundamentally physical world so long as we have an independent, illuminating grip on functions and causation.

(2) The next kind of representation is the “aboutness” of (degrees of) belief and desire. Where the representational content of perception and action was tightly bound to visible or manipulable features of the immediate environment, beliefs and desires can represent anything (certainly anything you can think of). This layer of representation consists in states of our head which (inter alia) update in response to the information that comes in from perceptual states, and which in combination lead to the formation of states of intention. It is to be analyzed, say I, by giving a story about the correct belief/desire interpretation of the agent—with the critical question being how to give an illuminating gloss on what makes an interpretation “correct”. This is where the layering comes in. In my telling of the story, the correct interpretation of an agent is the one which makes their actions/intentions, given their perceptual evidence, most reasonable. Accordingly, the story about belief and desire presupposes a prior and independent story about perception and action. We can understand how the belief and desire kinds of representation can exist in a fundamentally physical world so long as we have an independent, illuminating grip on “reasonableness” and so long as the first layer of representation is available to cash out appeals to perception and action/intention.

(3) The final layer of representation I discuss is the “aboutness” of words and sentences. So here, I’ll be stepping outside the head to consider the representational features of a very special class of human artefacts. This layer of representation consists in blasts of sound, bodily movements or marks on paper which express mental states: for example, asserting “grass is green” expresses a belief that grass is green. It is to be analyzed, say I, via the notion of a sentence “expressing” an attitude, where this is a matter of what conventional regularities relate sentences to thoughts in the linguistic community in question. We can understand how this kind of representation can exist in a fundamentally physical world, so long as we have an independent, illuminating grip on conventions and the attitudes expressed, and so long as the second layer of representation is available to cash out the appeal therein made to representational mental states.

So that’s the overview from 60,000 feet. The reality is a bit more complex than this simple three-layer structure suggests. There are many more details to be given, loose ends to be tied, puzzles and objections to be sorted through, and applications to be explored. The macro-structure itself may need some qualification, once we’ve worked this through—for example, it may be that for some attitudes and parts of language, the relative priority of layers (2) and (3) will be reversed. There will be time enough to sort through this later, so I won’t say more now.