# Category Archives: Uncategorized

## Conventions vs. ideal theory

I’m often in the market for a metaphysics of semantic properties of language. And what I’m shopping for is the best instance I can find of “top down” or “interpretationist” accounts. These have a two-step pattern. First: you identify a target set of pairings of sentences with some kind of semantic property. Second: you give a story about how those target pairings select the correct interpretation of all linguistic expressions.

One instance of this is a form of Lewis’s conventionalism. On this story, there is a collection of sentences-in-use, X, and for each of these, there are conventions of truthfulness and trust that link each S in X to a specific proposition p (construed as a set of possible worlds). That gives you the target pairings we need for the fitting step. Truthfulness is a conventionally-entrenched regularity of uttering S only if one believes p, and trust is a similarly entrenched regularity of forming the belief p if one hears S uttered.

In the selection step, the correct interpretation of the whole language—not just sentences but individual words, and sentences that are never used—is fixed as the *simplest* interpretation that “fits with” the pairings.

Clearly, there’ll be many difficulties and nuances in spelling this out, but for my purposes here, I’ll assume that to fit with a set of sentence-proposition pairs, the selected interpretation needs to map the first element of each such pair to the second element—so that the sentence, according to the interpretation, expresses the proposition it is conventionally associated with. I’ll also assume that considerations of fit take lexical priority over considerations of simplicity, so simplicity’s role is just to select among fitting interpretations.

Here’s another instance of the two-step pattern. On this story, from language-use we find a privileged set of sentences, an “ideal theory”. The pairing of sentences with semantic properties this induces is just the following: every sentence in the ideal theory is paired with The True. The second step is as before: the correct overall interpretation is the simplest theory that fits with these pairings.

The latter is the kind of story that sometimes goes under the label “global descriptivism” and is associated with Lewis’s response to Putnam. There’s controversy about whether it ever was a view that Lewis endorsed. In that context, the appeal to simplicity in the selection story is replaced by an appeal to naturalness or eligibility. I think that these amount to the same thing, given Lewis’s understanding of simplicity. But I won’t argue or further explain that here.

Are these stories compatible? Might they amount to the same thing? (I’m grateful to discussions here with Fraser MacBride that prompted these questions.) This all depends on what ideal theory amounts to. Consider the set of sentences-in-use, X. At every world w, the set of sentence-proposition pairs induces a map from X to truth values. Let $D_w$ be a complete world-description of w in some privileged world-making sublanguage L. Let $I_w$ be the set of sentences of the form “Necessarily, if $D_w$, then S”, for each S which is mapped to the True at w. Let $I$ be the union of the $I_w$ for arbitrary w.

Consider an interpretation that fits with the sentence-proposition pairs. This will make a sentence S in X true at w iff “Necessarily, if $D_w$, then S” is in $I$—that’s guaranteed by the way we constructed $I$. Does that interpretation make the members of $I$ true, as truth-maximization would demand? Yes it does, on the condition that the interpretation is correct for the world-making sublanguage, for “necessarily”, and for the connectives “if… then…” and “not”. In those circumstances the antecedent of the conditional is true only at world w; since S is true at w, the strict conditional is true. Conversely, suppose that we have an interpretation that makes true all of $I$, and which is again faithful to the world-making language, “necessarily” and “if… then…”. Since “Necessarily, if $D_w$, then S” is in $I$, it is made true, and that requires that the sentence S is true at w. In sum: making this particular “ideal theory” true is equivalent to fitting with the sentence-proposition pairs from which it is built—though only among a restricted range of interpretations that are already guaranteed to be faithful to the worldmaking language, etc.
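The equivalence can be checked mechanically in a toy setting. The sketch below is my own illustration, with made-up worlds and sentences; it takes the ideal theory in the X-complete form discussed further below, so that it also includes the conditionals “Necessarily, if $D_w$, then not-S” for sentences false at w. It enumerates every interpretation over two worlds and confirms that fitting the conventional pairs coincides with making the ideal theory true:

```python
from itertools import product

# Toy setting: two worlds, two sentences-in-use. Propositions are sets of worlds.
WORLDS = {"w1", "w2"}
X = ["A", "B"]

# Conventionally fixed sentence-proposition pairs (invented for illustration).
conv = {"A": frozenset({"w1"}), "B": frozenset({"w1", "w2"})}

# X-complete ideal theory: for each world w and sentence S, a conditional
# "Necessarily, if D_w then (not-)S", encoded as (w, S, polarity).
ideal = {(w, S, w in conv[S]) for w in WORLDS for S in X}

def fits(interp):
    """Fit: the interpretation maps each sentence to its conventional proposition."""
    return all(interp[S] == conv[S] for S in X)

def makes_ideal_true(interp):
    """For interpretations faithful to the worldmaking vocabulary and connectives:
    "Necessarily, if D_w then S" is true iff w is in interp[S], and the negated
    conditional is true iff w is not in interp[S]."""
    return all((w in interp[S]) == pol for (w, S, pol) in ideal)

# Enumerate all 16 assignments of propositions to the two sentences.
props = [frozenset(s) for s in (set(), {"w1"}, {"w2"}, {"w1", "w2"})]
agree = all(
    fits(dict(zip(X, a))) == makes_ideal_true(dict(zip(X, a)))
    for a in product(props, repeat=len(X))
)
print(agree)  # → True
```

The restriction to faithful interpretations is baked into `makes_ideal_true`: the code evaluates the strict conditionals by the intended semantics, rather than treating “necessarily” and “if… then…” as up for reinterpretation.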

I’m not sure that the need to be faithful to the worldmaking language is too big a deal, on this repackaging. One way of thinking about this is that we start with a set of expressions to interpret—a certain signature $\sigma$. The sentences-in-use X are included within this set. Then we as theorists consider an expanded signature $\sigma^+$, which we get by adjoining a new set of terms (the necessity operator, the conditional, the worldmaking vocabulary) for which we explicitly stipulate an interpretation. Using the expanded signature, we build the ideal theory, and then indirectly get a fix on the correct interpretation of the original signature by requiring that the ideal theory in the expanded signature be true. Since we have stipulated the interpretation of the added vocabulary, we introduce no new parameters.

In the above, I started with sentence-proposition pairs fixed by convention and extracted an “ideal theory”. One could reverse the process, if our ideal theory already consists in a bunch of strict conditionals whose antecedents are world descriptions and whose consequents are sentences in a set X. It’ll help if we assume the ideal theory is X-complete in the sense that for each world-description $D_w$, and sentence S in X, either “Necessarily, if $D_w$, then S”  or “Necessarily, if $D_w$, then not-S” is in the set. Each S in X can now be paired with the proposition consisting of all the worlds w such that  “Necessarily, if $D_w$, then S” occurs in the ideal theory. The same reasoning as before will show that maximizing the truth of ideal theory is equivalent to fitting the sentence-proposition pairs.

If we wanted to give a metasemantics that incorporated this second direction of explanation, then we have some additional challenges. We need an independent fix on what it takes for a conditional “Necessarily, if $D_w$, then S” to be included in ideal theory. Answers could be given: for example, we could say that such a conditional is included in the ideal theory relevant to interpreting agent x iff that agent is disposed to endorse sentence S, conditional on believing the world to satisfy $D_w$. I think of this as a Chalmers-style approach to these questions, though I haven’t yet done the work of going back to pin down how it relates to the manifold distinctions and projects included in his book “Constructing the World”. Here again, the actual language to be interpreted might not include the world-making vocabulary—that could be reserved to the theorist. But in this case, in giving the story about constructing the ideal theory, the theorist needs to use that vocabulary in specifying part of a psychological state of an individual—a possible belief state. So to apply this, even in principle, we would need some independent fix on what it takes for an individual to have propositional attitudes with contents corresponding to elements of the world-making language.

In Chalmers, we find the suggestion that there is a privileged set of concepts whose meaning is fixed by acquaintance, in a thick, Russellian sense. So one option would be to run the above acquaintance-based story as the metasemantics for a basic chunk of the language of thought, and then run (scare-quotes) “global” descriptivism for the rest of that same language.

There is a more Lewisian way to run the story though. Here we will firmly distinguish between interpreting public language and ascribing mental content. The convention-based story notoriously leans heavily on this, anyway. Our starting point for the metasemantics of public language will be a fix on the psychological states of individuals sufficiently rich to make sense of psychological states whose content we theorists describe using the world-making language. Dispositions of subjects to endorse public-language sentences under those conditions then look like legitimate resources for us to use. And using them, we can give a principled characterization of an ideal theory which (as argued previously) will be equivalent to fitting with certain sentence-proposition pairs.

So truth-maximization (of certain strict conditional sentences) and proposition-fit maximization do seem compatible if the targets are related in the right way–even equivalent. And it may even be that what we get from looking to the sentence-proposition pairs fixed by Lewisian conventions is the same as sentence-proposition pairs extracted from an ideal theory constructed by the above method, and vice versa—at least for subjects who were within a community where conventions of truthfulness and trust prevailed in the first place. That would, however, take further argument.

An interpretation of Lewis that I’ve favoured elsewhere was that he really believed in the convention-based metasemantics, and the stuff about global descriptivism and truth-maximization was just something adopted for dialectical purposes in the context of a discussion with Putnam (this is something that Wo Schwarz has pushed). A lot of the time in the literature one finds the global descriptivist/truth-maximizing theory being worked with, but with “ideal theory” being handled fairly loosely—when I do this myself, for example, I think of it as something like a global folk theory of the world. But given the above, I guess one interpretative option here is that Lewis had in mind the sort of equivalences described above, and so was happy to discuss the account in either formulation.

And here’s a final thought about this. Though dispositions-to-endorse might line up with conventions where such conventions exist, it’s pretty clear that subjects can have dispositions to endorse sentences even where the conditions for conventionality are violated. So one way of presenting this is that the ideal theory characterization, grounded in dispositions-to-endorse, is a general metasemantics for language that coincides *in the limit where there are conventions* with Lewis’s convention-based story, but which has far wider application. The prospect of that kind of generalization seems to me a good reason to look closer at ways to characterize this kind of ideal theory metasemantics and study its relation to convention.

## What’s functionalism anyway?

In reading up for my new project on Group Thinking, I’ve found people attaching a certain label to a view of the metaphysics of group belief and desire that I find quite attractive. That label is “functionalism”. I’ve found myself very confused about what that common label means, so what follows is where I’ve got to in sorting that out.

Now, at a really rough level, I expect anything deserving the name “functionalism” to have at least two theoretical categories: roles and realizers. For example, if you’re going to be a functionalist about the property of being in pain, you’ll be committed to (i) the idea that there is a functional role associated with pain; (ii) the idea that if anything is to be in pain, then it needs to have a realizer property, i.e. to instantiate a property that plays the functional role.

That allows us a lot of flexibility on how we flesh out the details beyond this. We might have various accounts of what sort of theories of functional roles to give. We might have various accounts of what the realization relation is—and whether we need to allow for multiple realizers, imperfect realizers, etc. We might differ in whether we identify the original property of being in pain with the role, the realizer, or something else. But unless we have an account with this two-part structure, it isn’t functionalism as I was taught it or as I teach it.

Okay, with that as the setup, let me say something about the kind of functionalism that I understand best. This starts with Lewis’s story about how to find explicit definitions of theoretical terms. We start with a theory that neologizes—that introduces a set of terms for the first time. That theory will also reuse some old vocabulary. Lewis assumed that the theory is regimented so that all the new terms are names. The old vocabulary will include predicates like “…has the property…” or “…stands in relation …. to …”, if necessary, so that we can do the work of new predicates by means of new names for the relevant properties. If we start with a theory $T(t_1,...,t_n)$, where the $t_i$ are the new terms, then the following is the unique-realization sentence for T:

$\exists y_1\ldots \exists y_n \forall x_1\ldots x_n(T(x_1,...,x_n)\leftrightarrow (x_1=y_1\wedge \ldots \wedge x_n=y_n))$

The following one-place predicate is then what we’ll mean by “the theoretical role of $t_1$”, or the “$t_1$-role”:

$\exists y_2\ldots \exists y_n \forall x_1\ldots x_n(T(x_1,...,x_n)\leftrightarrow (x_1=y_1\wedge \ldots \wedge x_n=y_n))$

The explicit definition Lewis offered of each new term, in old vocabulary, was just: the property that plays the relevant theoretical role. Using an iota for the definite description operator, for $t_1$ the definition is:

$t_1:=\iota y_1\exists y_2 \ldots \exists y_n \forall x_1\ldots x_n(T(x_1,...,x_n)\leftrightarrow (x_1=y_1\wedge \ldots \wedge x_n=y_n))$

Informally, the definition says that $t_1$ is the property that plays the $t_1$-role.
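For concreteness, here is the $n=2$ case, with a toy theory of my own invention (not Lewis’s): suppose $T(t_1,t_2)$ says that $t_1$ causes winces and $t_2$ causes $t_1$. The unique-realization sentence is:

$\exists y_1\exists y_2 \forall x_1\forall x_2(T(x_1,x_2)\leftrightarrow (x_1=y_1\wedge x_2=y_2))$

The $t_1$-role, a one-place predicate with $y_1$ free:

$\exists y_2 \forall x_1\forall x_2(T(x_1,x_2)\leftrightarrow (x_1=y_1\wedge x_2=y_2))$

And the definition of $t_1$:

$t_1:=\iota y_1\exists y_2 \forall x_1\forall x_2(T(x_1,x_2)\leftrightarrow (x_1=y_1\wedge x_2=y_2))$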

Now, Lewis proves several nice results about these definitions and their relation to the original theory $T$, using a certain understanding of the definite description operator. I won’t get into that here.

One last thing that will be important: the definite description on the right hand side of the definition sentences is, in general, a non-rigid designator. Since $T$ may be uniquely realized by different tuples of properties in different worlds, the definite description will in general pick out different properties at different worlds. And sometimes—with empirical investigation—we will be able to say something informative about the property that happens to be picked out at the actual world. For some name $N$ in our old vocabulary, rigidly designating a property, we may discover:

$\exists y_2 \ldots \exists y_n \forall x_1\ldots x_n(T(x_1,...,x_n)\leftrightarrow (x_1=N\wedge x_2=y_2\wedge \ldots \wedge x_n=y_n))$

From this and the definition sentence, it will follow that:

$t_1=N$

So here we have a model for how the identification of new theoretical terms with old, familiar terms could go. In these circumstances we would call $N$ the realizer of the $t_1$-role at the actual world. In general, $N_w$ will be the realizer of this role at world w iff the following holds at w: $\exists y_2 \ldots \exists y_n \forall x_1\ldots x_n(T(x_1,...,x_n)\leftrightarrow (x_1=N_w\wedge x_2=y_2\wedge \ldots \wedge x_n=y_n))$

It’s up for debate whether $t_1$ is a rigid or non-rigid designator. If it’s a rigid designator, then $t_1=N$ will be necessary if true, but the definition sentence will be contingent (presumably, an example of the contingent a priori). $t_1$ could equally be taken to be non-rigid, allowing the definition sentence to be necessarily true (as well as a priori). In that case, $t_1=N$ will be contingent (as well as a posteriori). It seems we could go either way on this, consistent with the rest of the framework.

I’ve introduced both role and realizer terminology in connection to the Lewis account of the definitions of theoretical terms. It is the model for how I understand role and realizer terminology in the context of functionalism. However, discussion of theoretical neologisms is one thing, and discussion of “functional” vocabulary is another. Lewis’s topic in “How to define theoretical terms” is the former, and that gives us a particular take on the way that theory and definition sentences relate. For Lewis, the definitions are “implicitly asserted” when we put forward $T$ as a term-introducing theory—presumably we’re doing something that’s equivalent to stipulating that they are to be (a priori) true. This is not an account that can be directly applied to terms—theoretical or otherwise—that are already in common currency. It is not an account, for example, of “pain”. In the case of pain, if “definitions” are to be offered, they have to be offered as a product of analysis, not as the product of stipulation.

Let’s turn, therefore, to a context where we are working only with terms that are already common currency. And let’s suppose that we have found a theory $T$, such that for a suitable set of target vocabulary $t_1,\ldots,t_n$, both $T(t_1,\ldots,t_n)$ and the unique realization sentence are true. The following will be true:

$t_1:=\iota y_1\exists y_2 \ldots \exists y_n \forall x_1\ldots x_n(T(x_1,...,x_n)\leftrightarrow (x_1=y_1\wedge \ldots \wedge x_n=y_n))$

We shouldn’t call these “definition sentences”, since it’s not clear in what sense, if any, they are “definitions”. To highlight this, note that as a limiting case, our “theory” could simply consist in saying “Red is Arnold’s favourite colour”, with Red as the target vocabulary. The unique realization sentence is then that there is a y such that for all x, x is Arnold’s favourite colour iff x=y—which is true enough. And the putative “definition sentence” would say: Red is the y such that for all x, x is Arnold’s favourite colour iff y=x. But though this is a true identity, it is quite clearly not a “definition” of the term Red, and is obviously contingent and a posteriori.

Not any old uniquely-realized theory of target old vocabulary will do, therefore. I take it that the step to an “analytic” functionalism of a Lewisian sort imposes the following constraint: we take an analytic/a priori $T(t_1,\ldots,t_n)$. Now if, in addition, the unique realization sentence for this vocabulary is analytic/a priori, then the “definition sentences” will be analytic/a priori. Even if the unique realization sentence is not analytic/a priori, the conditional whose antecedent is the unique realization sentence and whose consequent is a definition sentence will be analytic/a priori. So we could plausibly claim the definition sentences as “an analysis” of the relevant target vocabulary—perhaps an analysis modulo the assumption of unique realization. The conjecture, for the special case of analytic functionalism about pain, etc., will be that we could pull off this trick by letting T be a systematization of a set of a priori “platitudes” that uniquely characterize the typical causal role of the property of being in pain: causing distinctive kinds of behaviour, being caused by distinctive kinds of stimuli, and interacting with other (targeted) mental states in typical kinds of ways.

The assumption that we can find an (a priori) theory T that does the job just described is a major one. But if we can do it, then we can import all the distinctions and terminology from the theoretical terms case. We will have a one-place predicate that is a “theoretical role” for the target term “pain”—which given the nature of the T we’re envisaging we could aptly call a causal-functional role of “pain”. We would be up for discovering that the role is satisfied by a property rigidly designated by some N—say, C-fibres firing. And we could reason, in the fashion Lewis and Armstrong taught us, from the “definition sentence” for pain, plus the putative empirical fact that C-fibres play the pain role, to an identification of the property of being in pain with having one’s C-fibres fire.

So that’s the way I understand analytic functionalism. And I can understand other forms of functionalism as variations on the theme. For example, we could start with a metaphysically necessary (but not analytic or a priori) theory which necessarily uniquely characterizes a set of target vocabulary, and extract definition sentences from it, obtaining necessarily true (but not analytic or a priori) “definition sentences” that we might go on to present as “metaphysical analyses”. We could take a scientific theory—a theory which uniquely characterizes a set of target terms with nomic necessity—and extract “nomic analyses”, and so forth. In each case, the distinctive functionalist structure of role and realizer, and the relation between them, will be well understood. If functionalism is to be amended (e.g. to allow for imperfect realization, or non-unique realization) then I will want to figure out how to adjust the above theory to make the necessary changes.

It’s one thing to say that functionalisms can be represented as an instance of the how-to-define-theoretical-terms model of extracting definitions from theories. It’s quite another to say that every successful application of that model to common currency terms would be a functionalism. That further claim seems false to me.

For example, suppose we applied this kind of account to a term for which we already have an analysis ready-to-hand: the property of being a bachelor. An a priori uniquely characterizing theory says (let’s suppose): bachelorhood is the property of being male and being unmarried. So the “definition sentence” here is: bachelorhood is the y such that for all x, x is the intersection of being male and being unmarried iff y=x. What of the role and realizer properties here? The role property is being a y such that for all x, x is the intersection of being male and being unmarried iff y=x. What’s the realizer property?

Well, here’s a way of specifying a property that realizes the role in the minimal sense in which I introduced the terminology earlier: being a bachelor. Here’s another: the property that is the intersection of being unmarried and being male. But this seems dreadfully fishy. It doesn’t seem illuminating in the way typical identifications of realizers of functional roles would and should be. It might be true to say that pain realizes the pain role, and that the property of actually playing the pain role realizes the pain role. But in that paradigm case of functionalism what we are really interested in, and trust to be available, is some more illuminating characterization: e.g. that C-fibres firing plays the pain-role. And what we see from the bachelorhood case, I think, is that it’s entirely possible to apply all this analysis and for there to be no such illuminating identification of the realizer to be given at the end of the day.

To sum this up. In the paradigm cases of functionalism, we expect a two-step methodology. There’s first the step of identifying a relevant uniquely characterizing theory, from which by turning a crank we can extract “functional roles”. And then we expect a second stage, where we or others do further non-trivial work (in the paradigm cases, empirical work) that gives us an illuminating way of identifying the realizers of those roles, using a vocabulary that differs from that used in characterizing the role itself. The realizers will be some relatively natural “kind” or natural enough property, relative to a somehow-privileged vocabulary. In the paradigm functionalisms, there’s also a suitable distance between the vocabulary used to specify the role, and the vocabulary used in the illuminating identification of the realizer.

Here’s a way of thinking about all this. There’s a genus-level notion of role and realizer here, which we find in functionalism, in understanding theoretical neologisms, and so forth. But in order to have a functionalism worthy of the name, we need more than such minimal roles and realizers—we need roles that are genuinely “functional” and which contrast sufficiently with their natural-enough “realizers”. That vague characterization is probably enough for us to get on with the hard work of finding examples that fit this bill.

But if this is the right way to think of things, then we should resist the thought that whenever we extract definitions from a theory in the Lewis style, we’re engaged in functionalist analysis. And I definitely want to resist the thought that in undertaking that first kind of project, we are committed to there being “realizers” of the theoretical roles used in those definitions in a more-than-minimal sense. Sometimes, perhaps, it will follow from the content of the characterizing theory that realizers of the roles will be more-than-minimal—e.g. perhaps the role is a causal one, and we are independently committed to thinking that only sufficiently natural properties can stand in causal relations. Perhaps part of the characterizing theory itself is the claim that the relevant property is natural enough. That might guarantee that if successful, the analysis will turn out to be a functionalist one. But this needs to be argued out on a case-by-case basis.

To go back to the beginning: when people talk about functionalist analyses of believing that p and desiring that q, whether in application to groups or individuals, I think that often what they’re picking out are definitions of belief and desire that are extracted from an overall theory of belief and desire in the “theoretical role” way. But it’s a huge step from that to assume that one is committed to full-blown functionalism about belief and desire, with its more-than-minimal realizers of the roles so characterized. I think it’s misleading to label accounts that aren’t committed to more-than-minimal realizers as kinds of functionalism, and I think that’s one reason that I got myself puzzled at the way the terminology is (sometimes) used in this area.

## Nature of Representation book draft

… is now fully in being. This is a much reworked version of the themes of the series of blog posts below, themselves a distillation of work over the last five years.

## NoR 4.5: the base–words, population, convention.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Previous posts in this subsequence have taken convention as basic, and worked forward from that to an account of languages in use, correct compositional interpretation, attitude expressed, and the like. In this post, I’m going to outline the account of base facts, to hook this account of layer-3 representational facts back in to the facts about mental representation established at layer-2. I’ll also sketch how (in joint work with Gail Leckie) we have proposed extending this account to give a treatment of some other elements of the “base” for selecting the correct linguistic interpretation: the words that are interpreted, and the language-using population.

Lewis’s account of convention was as follows. A regularity R is a convention in a population P iff within P, the following hold, with at most a few exceptions:

1. Everyone in P conforms to R.
2. Everyone in P believes that everyone in P conforms to R.
3. This belief gives everyone in P a good reason to conform to R himself.
4. There is a general preference in P for general conformity to R rather than slightly-less-than-general conformity to R.
5. There is an alternative possible regularity R’ such that if it met (1) and (2), it would also meet (3) and (4).
6. All of (1-5) are common knowledge.

The relevant regularities, generalized to allow for states of acceptance of enriched content, are the following:

• (Truthfulness) Members of P utter s only if they accept p, where L(s)=p.
• (Trust) If a member of P hears another member of P utter s, she tends to come to accept p, where L(s)=p.

And so what naturally suggests itself is the following account of the linguistic “source intentionality”, the language-in-use appealed to in our previous discussions:

• (Lewis) Given an exogenously fixed specification of population P1 and typing of sentences, T1, L is the language of P1 for T1 iff there are conventions of (Truthfulness) and (Trust) in L in P1 for T1.

Let me note some features of this. First, the characterization of convention is full of appeals to the attitudes of members of the population: their beliefs and preferences, together with normative facts about reasons for conforming to a regularity. Together with the way that (Truthfulness) and (Trust) themselves appeal to psychological facts about agents, this makes clear that the earlier work done to ground belief/desire and other facts about mental representation is being drawn on heavily at this point.

I am not going to engage in detail with the various worries one might have about this account of convention, or the modifications one might introduce to evade them. It doesn’t really matter to me whether this is a good account of convention in general, so long as it’s a good characterization of the features of regularities in language use that feed into linguistic source intentionality. And any other characterization of convention that appealed to intentional resources and delivered the same results on our target cases would do just as well, at least to this point. But just as with my previous handling of mental source intentionality, my interest will be in extending the scope of the appeals to convention.

The need for extension is prompted by the appeal, in the account as currently formulated, to an exogenous typing of sentences and identification of language-using populations. We don’t get facts about sentence-types or language-using populations for free. What grounds facts concerning when two blasts of sound are of the same sentence type, or when two people belong to a single language-using population? As is familiar in the specialist literature on this, it’s extremely implausible that we have any way to identify sentence-types by types of shapes or sounds (for an excellent review of problem cases and the relevant literature, Nick Tasker’s PhD thesis and papers should be a first port of call). The worry is that there’s no way to pick out sentence-types independently of semantic facts. What other than semantic facts makes the ambiguous homophones/homographs “bank” and “bank” distinct words? It is no easier to imagine an independent way of picking out a population that uses a single language, except by the fact that they are all users of that very language. But of course, the latter is a semantic fact that could not feature in an exogenous characterization of populations (I’m grateful to Leeds’ Roger White for alerting me to that several years ago).

Leckie and I suggest a different model:

• (Endogenous) Given an utterance u, <P, T, L> is a language in use in utterance u iff P is a population and T a typing relation relative to which there are conventions of (Truthfulness) and (Trust) in L, and the speaker/hearer of u is a member of the population P; and u is a member of some equivalence class of the typing relation T.

Instead of determining L after fixing a particular population and typing relation, (Endogenous) treats the population and typing relation as variables whose values are fixed in whatever way is needed to produce conventions of (Truthfulness) and (Trust). The correct word-typing for English is as described by the T-role in a language-in-use for a population that includes the utterance I am presently making. The membership of the language-using population of which I am a part is as described by the P-role in that same language-in-use. And finally, the content-sentence pairings that constitute linguistic source intentionality for English can be read off that same triple.

It is important to understand that endorsing this account does not foreclose saying other, more immediately illuminating things about words and populations. If you thought you had an exogenous way of specifying a language-using population and a typing relation that feature in linguistic conventions, then all the better for (Endogenous)—that typing relation and population will be an illuminating independent specification of a typing relation that features in a language-in-use, according to our formulation. But of course, pessimism on that front motivated the shift to this one. It’s much more plausible that one could, via appeal to semantic facts, give a more illuminating characterization of the language-using population and typing relation. For example, in Nick Tasker’s PhD dissertation an intriguing account of the nature of words is offered, building on work in the metaphysics of artefacts by Amie Thomasson. An account of word-individuation (or at least, various necessary and sufficient conditions) is offered as part of the package, built on the more general model of individuation of artefact-kinds. But Tasker is clear from the start that among the determinants of word-individuation for him are facts about the semantic properties of the individual word tokens, their recognizability to a certain audience, and so forth. Tasker’s account might be exactly what we need to understand how words work, but also entirely unsuitable to be slotted in as an “exogenous” account of word-individuation as per the original model. But so long as word-types as he characterizes them figure in linguistic conventions, his account is consistent with (Endogenous).

In sum: since the reductive characterization of words and populations is given by (Endogenous) and not by an exogenous characterization, the project of saying interesting things about types and populations that figure in languages in use doesn’t have to be burdened by any reductive constraint. Metaphysically speaking, the bounds of the population, the relevant types, and the contents conventionally associated with sentences, are all jointly and simultaneously grounded in facts about patterns of linguistic usage and attitudes of speakers and hearers.

The worry about this kind of account is not that it’ll fail to count genuine sentences and language-using populations as sentences and populations. The worry to have is that it will overgenerate. After all, by choosing crazy typing relations and gerrymandered populations, we may be able to find all sorts of dubious regularities connecting uses of sentences (so typed) to attitudes. In the Leckie/Williams paper, we consider a number of different ways this might happen: for example, subdividing genuine populations and types (typing utterances by brown-eyed people separately from those by blue-eyed people); merging separate types together; or tailoring the population or typing so as to bias the resulting regularity (e.g. by restricting it to a population who apply “red” to more orangey things than is the norm). Our strategy in response is to work through such examples, and argue that none of them produces a genuine example of overgeneration. They are gerrymandered regularities of truthfulness and trust, sure—but, we argue, they each violate one or more clauses of the characterization of convention Lewis gave.

Suppose the Leckie/Williams project succeeds. Then the revised characterization of “language in use” means that we remove the need to list, in addition, the typing of sentences and the identification of populations among the base facts of the metaphysics of linguistic representation. And with that, the last tie between the layers of representation has been put in place.

## NoR 4.4: Beyond belief.

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

In the previous three posts, I’ve given my favoured interpretationist account of linguistic content: content is fixed by the compositional interpretation that best explains the base conventions of truthfulness and trust—the one that best manages the trade-off between fit, (subject-sensitive) simplicity and strength.

The base facts were conventions linking utterances to belief. So as well as feeding into an account of how words get their meaning, the conventions give us an excellent handle on what it might mean to say that in asserting a sentence, we are “expressing a belief”. Notice, though, that the beliefs conventionally expressed in this way will not necessarily have the same content as that of the sentence—this is the phenomenon we covered in the post Fixing Fit.

Is the state of mind expressed by a sentence always a belief? Might moral sentences express pro and con-attitudes, epistemic modals a state of uncertainty, and conditional sentences states of conditional belief? In this framework, there is a very natural way to understand what such claims mean—respectively:

• There is a conventional regularity of uttering “x ought phi” only if one’s contingency plan for x’s situation is to phi, and of coming to so plan when one hears the utterance “x ought phi”.
• There is a conventional regularity of uttering “It might be that p” only if p is compatible with what one believes, and of adjusting one’s beliefs to make p compatible with them when one hears the utterance “It might be that p”.
• There is a conventional regularity of uttering “if p, q” only if one believes q on the supposition that p, and of adjusting one’s beliefs to come to believe q on the supposition that p, when one hears the utterance “if p, q”.

These are not the only ways of formulating the connections in the conventional framework—for example, perhaps closer to the dialogical role of epistemic modals would be to present them as a test: the “sincerity” condition remains the same, but on the “trust” end, the convention is that the speaker checks that p is already compatible with their beliefs, or else challenges the utterer.

Are there regularities of this kind? One might wonder whether someone uttering the words “Harry ought to phi” regularly leads their audience to plan to phi in Harry’s circumstances. Attention is naturally drawn to cases where these normative claims are contested—where someone is saying that Harry ought to change career, or do more exercise, or avoid white lies. In those cases, we don’t respond to bare assertions simply by incorporating the plans expressed into our own—we would tend to ask for a bit more explanation and justification. But of course, the same could be said of contested factual claims. If someone claims that the government will fall next week, we want to know how they know before we’ll take them at their word. These are violations of the “trust” regularity for belief, unless we add the caveats mentioned in an earlier post: to restrict it to situations where the speakers have no interest in deception and hearers regard speakers as relevantly authoritative. The same qualifications are, unsurprisingly, necessary in this case. And once we make that adjustment, then it may well be that the cases above are indeed conventional regularities. (It’s worth remembering that one can come to plan to phi in x’s situation without changing one’s contingency plans for any situation. For example, if one hears someone say “Harry ought to change career”, one might hold fixed one’s opinion about the circumstances in which changing career is the thing to do, and simply come to believe that Harry is now in one of those circumstances. Lots of information-exchange using normative sentences can take this form.)

So there is a plausible way of extending the talk about sentences expressing beliefs to a more general account of sentence-types expressing attitudes of various kinds.

Now, this is all perfectly compatible with the following being true, at one and the same time:

• There is a conventional regularity of uttering “x ought phi” only if one believes x ought to phi, and of coming to believe x ought to phi upon hearing “x ought phi”.
• There is a conventional regularity of uttering “It might be that p” only if one believes it might be that p, and of coming to believe this upon hearing “it might be that p”.
• There is a conventional regularity of uttering “if p, q” only if one believes that if p, q, and of coming to believe this upon hearing “if p, q”.

After all, I already flagged that on this account, sentences of any kind will be conventionally associated with many different beliefs. So why not beliefs and other attitudes too?

There are questions here, however, about the appropriate order of explanation, and it’s here we make contact with work on broadly expressivist treatments of language. For one might think that our layer-2 story about fixing thought content did not get us to a point where we had a grip on thoughts with modal, deontic or conditional content. If that is the case, then although the latter three conventional associations with belief will come out eventually as true commentaries on an agent, they are not something to which we had earned a right at layer 2 of our metaphysics of representation. On the other hand, we might think that from patterns of belief and desire, we will have a fix on states of belief, supposition and planning (planning states with factual content are not something that I’ve covered so far, but I’d be happy to add them as an additional element to the kind of psychology that radical interpretation will select).

That leaves us in the following position: as we approach linguistic content, for some sentences, we are not in a position to appeal to belief-centric conventions of truthfulness and trust, since the belief-states that they might express have not yet been assigned content. The semantic facts that we can ground by means of the story just given will then be restricted to a fragment of language that leaves out these problematic bits of vocabulary. So we need to go back to the grindstone.

Expressivists about deontic and epistemic modals and conditionals will tend to think that this is the situation we find ourselves in—and they have a solution to offer. Rather than building our metasemantics for the problematic terms by looking at features of the belief expressed, they propose to work directly on the other attitudes—the plans, suppositions or ignorance—that stand to these sentences just as ordinary beliefs stand to factual sentences. Let us consider how this might go.

To start with, the story I have been giving would need to be generalized. The key “datapoints” that a semantic theory had to “fit” were the range of propositions that I said were “conventionally associated” with each sentence. Those contents are associated with a sentence by being the contents of the beliefs the sentence expresses (i.e. the beliefs that figure in conventions of truthfulness and trust). We can’t just mechanically transfer this to other attitudes: for example, the content of the plan expressed by “Harry ought to change career” might be: to change career in Harry’s circumstances. But we will get entirely the wrong results if we required our semantic theory to assign to the normative sentence the factual proposition that one changes career in Harry’s circumstances. The semantics needs to fit with a planning state, not a factual belief that the plan has been executed.

Let us take a leaf out of Gibbard’s book here. Let a world-hyperplan pair be the combination of a complete description of how the world is factually, together with a function from all possible choice situations to one of the options available in that situation. To accept a world-hyperplan pair is to believe that the world fits the description given by the world component, and to plan to do x in circumstance c iff the hyperplan maps c to x. To accept a set of world-hyperplan pairs is to rule out any combination of world and hyperplan that is outside that set—this amounts, in the general case, to a set of conditional commitments to plan a certain way if the world is thus-and-such. (Okay, you might want more details here. This is not the place to defend the possibility of such a redescription: if it is not legitimate, then that’s a problem for Gibbardian expressivists independent of my kind of metasemantics.)
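To make the structure vivid, here is a toy sketch in Python—the encodings and names are entirely my own illustrative inventions, far cruder than Gibbard’s official machinery—of how accepting a set of world-hyperplan pairs encodes factual belief and planning at once:

```python
# Toy model of Gibbard-style combined belief/planning states. A "world" is
# modelled as a frozenset of atomic facts; a "hyperplan" as a frozenset of
# (situation, option) pairs. All names here are illustrative only.

def believes(state, fact):
    """A state (a set of world-hyperplan pairs) commits the agent to a
    fact iff the fact holds at every uneliminated world."""
    return all(fact in world for (world, _plan) in state)

def plans_to(state, situation, option):
    """The state commits the agent to an option in a situation iff every
    uneliminated hyperplan selects that option there."""
    return all(dict(plan)[situation] == option for (_world, plan) in state)

# Two candidate worlds, two candidate hyperplans for one choice situation:
w1 = frozenset({"harry_is_unhappy"})
w2 = frozenset({"harry_is_happy"})
hp_change = frozenset({("harrys_situation", "change_career")})
hp_stay = frozenset({("harrys_situation", "stay")})

# Accepting "Harry ought to change career" rules out every pair whose
# hyperplan keeps Harry in place, while leaving the factual question open:
state = {(w1, hp_change), (w2, hp_change)}

print(plans_to(state, "harrys_situation", "change_career"))  # True
print(believes(state, "harry_is_unhappy"))                   # False
```

The structural point is that the single set of accepted pairs determines both what the agent believes (via the world components) and what they plan (via the hyperplan components)—which is exactly what lets us treat “accepting q” as a unified target for the conventions below.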

I will assume that our metaphysics of layer-2 representations gets us to a point where we can read off what a subject’s conditional plans are, in this sense. We can then redescribe the combined belief/planning states of this agent in terms of which sets of world-hyperplan pairs they accept. And that means we will have earned the right to redescribe this:

• There is a conventional regularity of uttering “x ought phi” only if one believes x ought to phi, and of coming to believe x ought to phi upon hearing “x ought phi”.

as follows for a suitable q (set of world-hyperplan pairs):

• There is a conventional regularity of uttering “x ought phi” only if one accepts q, and of coming to accept q upon hearing “x ought phi”.

And once in this format, we can extract the data the semantic theory is to fit, since we now have a new, more general kind of conventionally associated content: the combined belief/planning states q. The correct compositional interpretation will then be as before: the (subject-sensitive) simplest, strongest interpretation that fits these base facts. And contemporary expressivist semantic theory is exactly a specification of functions of this kind.

The crucial technique here is the Gibbardian transformation of a description of a subject’s psychology into the acceptance of enriched content—that’s what allows us to articulate the convention in a way that provides a target for compositional interpretations. So if we want to replicate this metasemantics for other kinds of expressive content, we need to perform the analogous redescription. It might be, for example, that to underpin epistemic modals, we need to describe an agent as accepting a set of world-information state pairs, representing combinations of factual states of the world and states of their own factual information to which they are open. To underpin conditionals, we need to credit the agent with accepting a set of world-update function pairs. And if this is to be a single story across the board, we will need to combine these and others into a highly complex summary of a possible opinionated psychology—world-hyperplan-information-update-etc—and then, on the basis of the facts about mental content already established at layer 2 in more familiar terms, say which of these possible opinionated psychologies are ones to which the subject is open.

None of this is easy or uncontentious! The existence of the conventions that tie the target sentences to non-doxastic attitudes, the Gibbardian redescriptions, and the availability of compositional interpretations of language are all points at which one might balk. But those who favour an expressivist theory of the discourse in question are likely to be sympathetic to these kinds of claims, and my central point in this post has been to show that the convention-based metasemantics can underwrite that project just as much as it can underwrite more traditional cognitivist projects.

(I’m very grateful to discussions with Daniel Elstein and Will Gamester that have shaped my thinking about the above. They are not to be blamed.)

## NoR 4.3: Subject-sensitive simplicity

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

In the previous post, I presented three underdetermination challenges. These amount to different ways of assigning the same truth-conditions to the sentences we use. Each is definitely incorrect, either because it assigns the wrong referents to words, or the wrong compositional rules to the language. But we can’t explain why it is incorrect by pointing to a lack of fit (or lack of strength) with the base facts (conventional associations between sentences and coarse-grained propositions). And for at least the last of the challenges—twisted compositional rules—we can’t explain why it is incorrect even if we suppose that our work on layer-2 intentionality earns us the right to a much more fine-grained set of facts about belief content.

My goal in this post is to review the obvious saving constraint: simplicity. I’ll recap the way that in Lewis’s hands, this turned into the idea that “natural properties” were “reference magnets” (we already saw this simplicity-naturalness connection as it arose in the context of inductive concepts). And I’ll put forward a very different treatment of simplicity, a “subject-sensitive” version on which each agent’s cognitive repertoire defines a set of contents that are reference-magnetic for their language. The effect is a new and better way to use the work already done on thought content to fix determinate linguistic content.

So first of all: simplicity. I have considered it once before. When considering concepts that were used in inductive generalizations, I sketched a way of deriving a constraint on content, via the rationality of inference to the best explanation. The best explanation, again, was determined by strength, fit with data, and simplicity. Applied to that case, the prediction was that the interpretation of the subject should be one which makes the content that they are treating as an explanatory generalization a simple hypothesis (all else equal). Overly “disjunctive”, less simple, interpretations of the concepts subjects deploy in such contexts are thus disfavoured.

Notice that in this case of belief/desire interpretation, there is no direct constraint that the interpreter’s story about “rationalization” should be simple. Rather, the constraint is that it rationalize the agent, inter alia making their beliefs justified. Simplicity entered the picture only when the subject’s cognitive architecture made simplicity epistemically relevant. By contrast, the proposal in the case of linguistic representation is that a role for simplicity falls out of the fact that we appealed to best explanation directly in the selectional ideology itself. That will mean that theses about simplicity play out here rather differently to what we saw before. In particular, simplicity in content-assignment is non-contingently, always and everywhere a determinant of content—there are no restrictions to special classes of words, as there were for inductive concepts in the earlier account.

I will be assuming that simplicity is in the first instance a property of theories, not of abstract functions. So in order to make sense of the idea of ranking compositional interpretations (abstract functions, remember) as more or less simple, we need to do some work. We look to the ways those functions can be most concisely expressed—by an axiomatic specification of the referents of lexical items, plus compositional rules. As well as some measure of the compactness of a given axiomatic specification, we will also need to make sure the specification is presented in some canonical language. In all this, I follow what I take to be Lewis’s treatment of simplicity (independently motivated, and deployed in e.g. his Humean theory of laws). And as I noted earlier, if we make one final move we can explain a famous feature of Lewis’s account of language.

That move is to identify the canonical language with “ontologese”, a language that features predicates only for metaphysically fundamental properties and relations, plus various bits of kit such as broadly logical concepts. (Lewis was never very clear what resources were in the canonical language beyond the natural properties—the best guess is that he thought he would do enough by just listing them.) Sider, the coiner of the term “ontologese”, suggested that we get a more principled and satisfactory theory by generalizing the idea of fundamentality, so that quantifiers, connectives etc. can be fundamental or not. On Sider’s version of the view, every primitive expression in the canonical language should denote something metaphysically fundamental.

Note the following (which I first presented in my “Eligibility and Inscrutability” paper from 2007). Suppose we have a pair of compositional interpretations of L, differing only in their interpretations of a single predicate “F”. The first says it denotes P, the second says it denotes Q. And suppose that the shortest way to define P in ontologese is longer than the shortest way to define Q in ontologese. Then the second interpretation will be more compactly expressible in ontologese—simpler—than the first. If the two theories are otherwise tied (on grounds of fit, predictiveness, etc.) at the top as candidates for being the best interpretation of L, then on these grounds, the second will win. So we derive that length of definability in ontologese—what Lewis calls “relative naturalness”—of the semantic values assigned as referents to words is one of the determinants of correct interpretation. We can also see that compositional rules, no less than reference-assignments, can be evaluated for relative naturalness, and on exactly the same grounds: contribution to simplicity.

Consider the Kripkenstein problem of deviant compositional rules. We have every reason to believe that the deviant rule takes longer to write out than the original—after all, the way we have of writing it out is to write down the original, add a conjunct restricting its application, and then add a further disjunction. So we have every reason to believe it’s a less natural compositional rule. So the theory that uses it is less simple. Since it has no compensating virtues over the standard interpretation, it is incorrect. Similar stories can be run for the skolemite and permuted interpretations, if those have not already been dealt with at an earlier stage of the metaphysics of representation.

I highlight again that one can accept much of this putative resolution of the underdetermination challenges without going all the way to relative naturalness. The identification of simplicity with compactness of expression in ontologese is a theory: and a very contentious one (even in application to theories in metaphysics and fundamental physics, and certainly for higher-level theories of social phenomena like language). We might short-cut all this simply by stopping with the very first claim: that simplicity partially determines best explanation. Add the assumption that the Kripkensteinian compositional rule is less simple than the “straight” alternative. If that is true (never mind what grounds that fact) our problem is over. There is work to do for those with an interest in the theory of simplicity, but the metaphysician of content can pack up and go home. The same structure applies also to permuted and skolemite interpretations—those interpretations can be ruled out if we assume that the interpretations involved are less simple than the standard.

The needed assumptions about simplicity are very plausible. So there’s a good case to be made that at this level of description, the Lewisian solution just works. And if one is content to treat it as another working primitive, we are done. But of course, if simplicity turned out to be some kind of hyper-subjective property linked to what each of us feels comfortable working with, then there’s a danger that linguistic content will inherit this hyper-subjectivity. And one might worry that it will be impossible to articulate simplicity as it applies to linguistic theories, without appealing to facts about linguistic representation. So there’s good reason to dig a little deeper. That also has the advantage of making the account more predictive—it’s a great virtue of the full Lewisian package that we can start from something on which we have an independent grip (what is more or less natural, in his sense) and derive consequences for linguistic representation. It would be nice to recover similar explanatory power.

One can dig deeper without going all the way to the point that Lewis reaches. Indeed, one can endorse the general identification of simplicity with compactness-of-expression-in-a-canonical-language without saying the canonical language is ontologese. Now in other work (“Lewis on reference and eligibility”, 2016), I floated the idea of “parochial” simplicity. This involves the theorist specifying—in a quite ad hoc and parochial manner—some language C that they favour for measuring simplicity. Relative to that choice of C, simplicity facts can be handled as before, and shortness of definability from C becomes a determinant of content (“reference magnetism”). Of course, if different theorists select different C, they may pick different interpretations as correct, and so in principle come up with different candidate accounts of semantic content. So this approach makes facts about linguistic content (insofar as they go beyond what we can extract from the constraint to “fit with the conventional base”), if not wildly subjective, at least parochial. I don’t find that as abhorrent as vast underdetermination of reference. Indeed, I think it’s the best version of a deflationary approach to linguistic representation. But I do not think it counts as a realist theory of linguistic content—and that is my present target.

Accordingly, I float another option. Let the canonical language be fixed not by the theorist’s choice, but by the subject’s conceptual repertoire. For this to make sense, we need to know what their conceptual repertoire is, and it needs to be in some medium in which it makes sense to carry out definitions. So here I am going back to the work we did earlier in the layer-2 metaphysics of representation, and adding the assumption that prior to public language, there is a sufficiently language-like medium for thought, whose components have fairly determinate content—the story of how they acquire that content is as given in the layer-2 subsequence of posts. I propose that we now let the simplicity of a theory (for subject x) be the compactness of its expression in x’s medium of thought. So, if x’s medium of thought is mentalese, with a certain range of basic concepts, then we can let the simplicity of a property be its minimal length of definition in mentalese, from that basic range. When it comes to language, the things that each agent can think about via an atomic concept will be reference magnetic, for them. How this kind of subject-sensitive magnetism relates to naturalness is entirely deferred to the level of metaphysics for thought-content.

(You might worry that the interpretation will be inexpressible for theorists who lack semantic and mathematical vocabulary involved in setting out the semantic theory. If that’s the case, then I will simply build into this account of simplicity for semantic theories that it should be judged by the subject’s conceptual repertoire supplemented with standard semantic and mathematical resource. This is analogous to Lewis’s supplementation of predicates with natural properties with other general-purpose resources, in fixing his “ontologese”).

My proposal gives up on the idea that “simplicity” is a subject-independent theoretical virtue, and so takes seriously the common idea that what is simple for me may not be simple for you, and vice versa. But given your conceptual repertoire and mine, we will both agree that the twisted compositional interpretation is less simple than the straight one, and that the permuted and skolemite interpretations are more complex than the standard. The agreement arises only because our differing conceptual resources overlap to a considerable extent: we both have the capacity to generalize unrestrictedly, for example.
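Here is a toy rendering of that idea—the concepts, the grue-style disjunctive property, and the cost measure are all my own illustrative inventions, not a serious model of mentalese: simplicity for an agent is measured by the size of the shortest definition from that agent’s primitive concepts, and two agents with different but overlapping repertoires still agree on the comparative verdicts.

```python
# Subject-sensitive simplicity, toy version: the cost of a content, for an
# agent, is the size of its shortest expression from that agent's primitive
# concepts. Expressions are nested tuples, e.g. ("not", "green").

def cost(expr, primitives):
    """Count symbol occurrences in an expression built from primitives."""
    if isinstance(expr, str):
        if expr not in primitives:
            raise ValueError(f"{expr} is not in this agent's repertoire")
        return 1
    # A compound counts its connective plus the cost of each subexpression.
    return 1 + sum(cost(sub, primitives) for sub in expr[1:])

green = "green"
# A gruesome disjunctive property, long-winded relative to these primitives:
grue = ("or", ("and", "green", "examined_before_t"),
              ("and", "blue", ("not", "examined_before_t")))

# Two agents with different but overlapping primitive repertoires:
a_primitives = {"green", "blue", "examined_before_t", "and", "or", "not"}
b_primitives = a_primitives | {"heavy", "metallic"}

# Both agents rank the disjunctive content as less simple:
for prims in (a_primitives, b_primitives):
    assert cost(green, prims) < cost(grue, prims)

print(cost(green, a_primitives), cost(grue, a_primitives))
```

The agreement in verdicts falls directly out of the overlap in primitives, mirroring the point about twisted, permuted and skolemite interpretations above; where repertoires diverged more radically, the simplicity rankings could diverge too.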

There is a wrinkle in this proposal to make simplicity subject-sensitive. We are targeting a metaphysics of public language, and a public language involves a diverse population, each with a potentially idiosyncratic conceptual repertoire. So who within this population gets to set the standards of simplicity? I propose: no one person does. Simplicity relative to the population as a whole is indeterminate, with each member of the population contributing their own precisification of the notion. Nevertheless, language-using populations will tend to overlap in conceptual resources, and so will agree on central verdicts about the relative simplicity of one theory over another—in particular, the permuted, skolemite and twisted interpretations are determinately less simple than the standard alternative.

(A good challenge to me, for enthusiasts to press: how can I say this about simplicity at the level of language, and also appeal to simplicity as a constraint on the content of inductive concepts? This is a challenge to which I hope to return.)

## NoR 4.2: Fixing Fit

This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link

Let us suppose that the base facts about whole-sentence content (for those sentences which feature in actual patterns of use) have been established. The story about how we fix word-content was the following: the correct compositional interpretation is whatever “best explains” the facts about whole-sentence content.

So what makes one compositional interpretation of a language better than another?  I will work with a familiar model: that betterness is determined by the tradeoff between virtues of fit, strength and simplicity. If the datapoints are facts about linguistic content (for a constrained range of sentences) then the contents assigned by the compositional theory should explain that data, which minimally requires being consistent with it. The theory should be strong, in that it should predict as much of this data as possible. But this is balanced by simplicity. As in an earlier post, the model for this is compactness of expression, when the interpretation is expressed in some canonical language.

“Fitting” the base facts about sentential content does not mean that the compositional interpretation should assign the same content to whole sentences that the base facts supply. It would be bold to bet that there is a unique content conventionally associated with each sentence, and an even bolder bet that that will turn out to be the semantic content that can be generated compositionally. No, “fit” is a looser, more interesting criterion than this. The constraint as I understand it is the following: given the content that the compositional interpretation assigns to the sentence, plus suitable auxiliary hypotheses (general information about rational agents and conventions of cooperation, special-purpose pragmatic principles), as much as possible of the range of contents that are conventionally associated with the sentence should be explicable. It’s easy to explain why there will be a regularity of truthfulness and trust connecting “Harry is a bachelor” to the content that Harry is male, on the basis of a compositional interpretation of it as having the content that Harry is a bachelor, since generally we believe the obvious logical consequences of what we believe. The reverse would not be easy. Again, general information about Gricean conventions of orderliness together with the standard compositional content will explain why “Harry had five drinks and drove home” is conventionally associated with the content, inter alia, that the drinking preceded the driving. So even if these regularities of truthfulness and trust count as conventions, the standard interpretations fit them well.

If Fit and Strength were the only determinants of “best explanation” of the language in use, then the account would be subject to counterexample. It is well known that the same sentence-level truth-conditions can be generated by a variety of lexical interpretations, some of which are obviously crazy. I introduced two in an earlier post:

• Permuted interpretations. Where the standard interpretation has “Tibbles” referring to Tibbles, and “is sleeping” picking out the property of sleeping, the permuted interpretation says that “Tibbles” refers to the image under the permutation p of Tibbles, and “is sleeping” picks out the property of being the image under p of something which is sleeping. Systematically implemented, the permuted interpretation can be shown to produce exactly the same truth-conditions at the level of whole sentences as does the standard interpretation. But, apparently, that means that the two fit with and predict the same sentence-level facts about conventions. So we can’t explain on this basis the obvious fact that the permuted interpretation is obviously and determinately incorrect.
• Skolemite interpretations. Where the standard interpretation has the quantifier “everything” (in suitable contexts) ranging over absolutely everything, the skolemite interpretation takes it to quantify restrictedly, only over a countable domain (this domain may vary counterfactually, but relative to every counterfactual situation the domain is countable). And (with a few caveats) we can show that the skolemite interpretation and the original are truth-conditionally equivalent. But, apparently, that means that the two fit with and predict the same sentence-level facts about conventions. So we can’t explain on this basis the obvious fact that the skolemite interpretation is obviously and determinately incorrect.
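The permuted case can be made concrete with a toy model (the domain, permutation and extensions below are my own illustrative inventions): reinterpret every name as the image of its old referent under a permutation p, and every predicate as “being the image under p of something in the old extension”, and the truth values of atomic sentences are provably undisturbed.

```python
# Toy permutation argument: a three-element domain, one predicate.
domain = {"tibbles", "fido", "rock"}
sleeping = {"tibbles", "fido"}        # standard extension of "is sleeping"

# A permutation p of the domain:
p = {"tibbles": "fido", "fido": "rock", "rock": "tibbles"}

# Standard interpretation: names denote the obvious things.
std_ref = {"Tibbles": "tibbles", "Fido": "fido", "Rock": "rock"}
std_ext = {"is sleeping": sleeping}

# Permuted interpretation: each name denotes the image under p of its old
# referent; the predicate picks out the image under p of the old extension.
perm_ref = {name: p[ref] for name, ref in std_ref.items()}
perm_ext = {"is sleeping": {p[x] for x in sleeping}}

def true_in(ref, ext, name, pred):
    """An atomic sentence is true iff the name's referent is in the
    predicate's extension, on the given interpretation."""
    return ref[name] in ext[pred]

# Every atomic sentence gets the same truth value on both interpretations:
for name in std_ref:
    assert (true_in(std_ref, std_ext, name, "is sleeping")
            == true_in(perm_ref, perm_ext, name, "is sleeping"))
print("sentence-level truth conditions preserved under permutation")
```

Since conventions of truthfulness and trust only constrain sentence-level contents, data of that shape cannot distinguish the two assignments—which is exactly the underdetermination pressed above.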

We met these kinds of deviant interpretations in the context of the metaphysics of mental representation. Under the assumption that thought had a language-like structure that allows us to pose such puzzles, I argued that normative facts about the way in which we handle universal quantification in thought and inductively generalize would solve the problem.

Now Lewis denied the starting point of this earlier story. He stuck resolutely to theorizing thought content in a coarse-grained way (as a set of worlds, and later a set of centred worlds/individuals), ignoring issues of individual representational states and any compositional structure they might have. That only delayed the day of reckoning, since once he reached this point in the story—with public language and its compositional structure—he had to face the challenge head on. Further, nothing that Lewis did earlier on helps him here. Remember, the raw materials for selecting interpretations are conventions of the form: utter S only if one believes p; and come to believe p if one hears someone utter S. And for Lewis the “p” here picks out a coarse-grained truth condition. But the permuted and skolemite interpretations fit that sort of data perfectly. So underdetermination looms for him.

(There is one way in which we might try to replay those earlier thoughts. Among the conventions of truthfulness and trust will be an association between, say, “everything is physical” and Jack being physical. That is obviously not going to be the semantic content, but we need to explain why there is a convention of truthfulness and trust there, given the semantic content we assign. Here is the suggestion: a restricted interpretation, even one that de facto includes Jack in the domain D of the quantifier, won’t afford such an explanation. That’s because the audience couldn’t rationally infer, just from the information that everything in D is physical, that Jack is physical, except under the presupposition that Jack is in D (so what we should expect, under a restricted interpretation, is that we only get a conventional association with: if Jack is in D, then he’s physical). This is a natural way to try to lever the earlier account I gave into the current setting. But unfortunately, I don’t think Lewis can appeal to it. For him, the contents that everything is physical and that everything in D is physical are identical—since they’re truth-conditionally equivalent, denoting the same set of worlds. So it’s not an option for Lewis to say that believing one supports believing Jack is physical, while believing the other supports believing only the conditional—that’s a distinction we can only make if we presuppose a finer-grained individuation of the two contents.)

So Lewis needs something more than fit and strength in his account of better explanation. But so does everyone else. At first it might not seem this way. After all, if we’ve already solved these puzzles at the level of thought-content, then surely linguistic content should be able to inherit this determinate content? There are two problems with the proposal. First, it’s surprisingly tricky to get a workable story of how determinate content is inherited by language from thought. And second, there are further underdetermination challenges beyond the two just mentioned for which this strategy won’t help.

On the first point, let us assume, pace Lewis, that the structure of thought is itself language-like, and that we confronted and solved the problem of permuted and skolemite interpretations as it arose at this lower layer of representation in the way I described earlier. We will have already earned the right to ascribe to agents, for example, a truly universal belief that everything is physical (modelled perhaps by a structured proposition, rather than a Lewisian set of worlds). The “inheritance” gambit then works as follows: there will be a regularity of uttering “everything is physical” only when the utterer believes that truly universal structured content. And to the extent that this regularity is entrenched in the beliefs and preferences of the community so that it counts as a convention (plausible enough), the constraint on linguistic interpretation will not simply be that it fit and predict data characterized by coarse-grained content, but that it fit and predict fine-grained data. Ta-da!

But we have already seen that “fit” cannot simply be a matter of identity between the content of thought and language. And theorizing thought in a fine-grained way amplifies this. None of our assumptions entail that for each sentence in public language, there is a belief whose content exactly matches its structured content. Here’s a toy example: suppose we have no single concept “bachelor” in our language, but do have the concepts “unmarried” and “adult male”. Then the fine-grained belief content conventionally associated with “Harry is a bachelor” may be a conjunctive structured proposition: Harry is unmarried and Harry is an adult male. But we shouldn’t require a semantic theory to compositionally assign that particular proposition to the atomic sentence in question—it may be impossible to do that. What seems attractive is to say that the assignment of the structured proposition <Harry, being a bachelor> to the sentence explains the conventional data well enough: after all, at the level of truth-conditions, it is obviously equivalent to the conventionally associated content. But certainly the structured contents ascribed by the permuted interpretation are also obviously truth-conditionally equivalent to the structured contents conventionally associated with the sentence, so fit equally well. Maybe matters are less obvious in the case of the skolemite interpretation, but it’s still necessarily and a priori equivalent. Given the needed flexibility in how to measure “fit”, it’s far from obvious we are on solid ground in insisting that the fine-grained content of thought must be the semantic content of the sentences. (There are moves one can make, of course: arguing that something fits better the closer the content assigned is to conventional content. But this is not obviously independently motivated, and as we’re about to see, won’t do all the work.)
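The bachelor example can be made concrete with a toy encoding (the tuple representation of structured propositions is just a hypothetical stand-in for real structure): the conjunctive thought content and the atomic sentence content differ in structure while carving out the very same set of worlds.

```python
# Toy illustration: two structured propositions that differ in structure
# yet have identical coarse-grained truth conditions.

WORLDS = range(4)  # worlds coded by two bits: unmarried? adult male?

def unmarried(w): return bool(w & 1)
def adult_male(w): return bool(w & 2)
def bachelor(w): return unmarried(w) and adult_male(w)

# Structured contents as nested tuples (a crude stand-in for real structure).
thought_content = ("AND", ("unmarried", "Harry"), ("adult_male", "Harry"))
sentence_content = ("bachelor", "Harry")

assert thought_content != sentence_content  # different structures

# Coarse-grained truth conditions: the set of worlds where each is true.
coarse_thought = {w for w in WORLDS if unmarried(w) and adult_male(w)}
coarse_sentence = {w for w in WORLDS if bachelor(w)}
assert coarse_thought == coarse_sentence  # same truth conditions
```

The point of the sketch: a fit-relation demanding identity of structured contents would wrongly rule out the attractive compositional assignment, while a fit-relation demanding only truth-conditional equivalence lets the deviant interpretations back in.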

But the real killer is the following challenge:

• Compositionally twisted interpretations. O’Leary-Hawthorne once asked Lewis to say what fixed the compositional rules that must be part of any compositional interpretation. One version of this challenge is as follows: the standard compositional rules say, for example, that if the semantic value of “a” is x, and the semantic value of “F” is the function f (at each world, mapping an object to a truth value), then the semantic value of the sentence “Fa” is the function that maps a world w to the True iff at w, f maps x to the True (or, if one wants a fine-grained content, then it is the structured proposition <f,x>). But a twisted rule might take the following disjunctive form: if “Fa” is a sentence that is tokened in the community, the semantic value of the whole is determined just as previously. But if “Fa” is never tokened, then its semantic value is a function that maps a world w to the False iff at w, f maps x to the True (respectively for fine-grained content: it is the structured proposition <neg(f),x>, where neg(f) is the function that, at each world, maps an object to the True iff f maps it to the False).
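A toy sketch of the twisted rule (the two-world, two-predicate set-up is entirely hypothetical): the twisted interpretation applies the standard compositional rule to tokened sentences and negates it elsewhere, so the two interpretations agree on every datapoint that usage could supply.

```python
# Toy sketch: a "twisted" compositional rule that agrees with the standard
# rule on all tokened sentences and diverges only on unused ones.

WORLDS = ["w1", "w2"]

# Semantic values of predicates: world -> object -> truth value.
F = {"w1": {"a_obj": True}, "w2": {"a_obj": False}}
G = {"w1": {"a_obj": False}, "w2": {"a_obj": False}}
preds = {"F": F, "G": G}
ref = {"a": "a_obj"}

TOKENED = {"Fa"}  # sentences the community actually uses

def standard_value(sentence):
    """Standard rule: 'Fa' is true at w iff f maps x to the True at w."""
    pred, name = sentence[0], sentence[1:]
    f, x = preds[pred], ref[name]
    return {w: f[w][x] for w in WORLDS}

def twisted_value(sentence):
    """Twisted rule: same as standard for tokened sentences, negated otherwise."""
    val = standard_value(sentence)
    if sentence in TOKENED:
        return val
    return {w: not val[w] for w in WORLDS}

# Agreement on every sentence in use...
assert twisted_value("Fa") == standard_value("Fa")
# ...divergence only where there is no usage data.
assert twisted_value("Ga") != standard_value("Ga")
```

Since the datapoints are exhausted by the tokened sentences, nothing in the usage facts selects between the two rules.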

Now, the trouble here is that the standard and the twisted interpretation agree on all the datapoints. They can even agree on the fine-grained structured content associated with each sentence ever used. So they’ll fit and predict the sentential data equally well (remember: the language in use which provides the data is restricted to those sentences where there are actually existing regularities). The “constraining content” we inherit from the work we did on pinning down determinate thoughts is already exhausted by this stage: that at most constrains how our interpretation relates to the datapoints, but the projection beyond this to unused sentences is in the purview of the metaphysics of linguistic content alone. This “Kripkensteinian” underdetermination challenge will remain, even if we battle to relocate some of the others to the level of thought.

Something more is required. And if what determines best explanation is fit, strength and simplicity, it looks like simplicity must do the job.