Category Archives: Semantics

Fundamental and derivative truths

I’ve posted a new version of my paper “Fundamental and derivative truths”. The new version notes a few more uses for the fundamental/derivative distinction, and clears up a few points.

As before, the paper is concerned with a way of understanding the—initially pretty hard to take—claim that tables exist, but don’t really exist. I think that that claim at least makes good sense, and arguably the distinction between what is really/fundamentally the case and what is merely the case is something we should believe in whether or not we endorse the particular claim about tables. I think in particular that it leads to a particularly attractive view on the nature of set theory, since it really does seem that we do want to be able to “postulate sets into existence” (y’know how things form sets? well consider the set of absolutely everything. On pain of contradiction that set can’t be something that existed beforehand…) The framework I like lets us make sober sense of that.

The current version tidies up a bunch of things; it pinpoints more explicitly the difference between comparatively “easy cases”—defending the compatibility of set-theoretic truths with a nominalist ontology—and “hard cases”—defending the compatibility of the Moorean corpus with a microphysical mereological nihilist ontology. I’ve got another paper focusing on some of the technicalities of the composition case.

This project causes me much grief, since it involves many many different philosophically controversial areas: philosophy of maths, metaphysics of composition, theory of ontological commitment, philosophy of language and in particular metasemantics, and so forth. That makes it exciting to work on, but hard to present to people in a digestible way. Nevertheless, I’m going to have another go at the CSMN workshop in Oslo later this month, focusing on the philosophy of language/theory of meaning aspects.

Kripkenstein’s monster

Though I’ve thought a lot about inscrutability and indeterminacy (well, I wrote my PhD thesis on it) I’ve always run a bit scared from the literature on Kripkenstein. Partly this is because the literature is so huge and sometimes intimidatingly complex. Partly it’s because I was a bit dissatisfied/puzzled with some of the foundational assumptions that seemed to be around, and was setting it aside until I had time to think things through.

Anyway, I’m now thinking about making a start on thinking about the issue. So this post is something in the way of a plea for information: I’m going to set out how I understand the puzzle involved, and invite people to disabuse me of my ignorance, recommend good readings, or point out where these ideas have already been worked out.

To begin with, let’s draw a rough divide between three types of facts:

  A. Paradigmatically naturalistic facts (patterns of assent and dissent, causal relationships, dispositions, etc.).
  B. Meaning-facts. (Of the form: “+” means addition, “67+56=123” is true, “Dobbin” refers to Dobbin.)
  C. Linguistic norms. (Of the form: one should utter “67+56=123” in such-and-such circs.)

Kripkenstein’s strategy is to ask us to show how facts of kind (A) can constitute facts of kinds (B) and (C). (An oddity here: the debate seems to have centred on a “dispositionalist” account of the move from (A) to (B). But that’s hardly a popular option in the literature on naturalistic treatments of content, where variants of radical interpretation (Lewis, Davidson), of causal (Fodor, Field) and teleological (Millikan) theories are far more prominent. Boghossian, in his state-of-the-art article in Mind, seems to say that these can all be seen as variants of the dispositionalist idea. But I don’t quite understand how. Anyway…)

One of the major strategies in Kripkenstein is to raise doubts about whether this or that constitutive story can really found facts of kind (C). Notice that if one assumes that (B) and (C) are a joint package, then this will simultaneously throw into doubt naturalistic stories about (B).

In what sense might they be a joint package? Well, maybe some sort of constraint like the following is proposed: unless putative meaning-facts make immediately intelligible the corresponding linguistic norms, then they don’t deserve the name “meaning facts” at all.

To see an application, suppose that some of Kripke’s “technical” objections to the dispositionalist position were patched (e.g. suppose one could non-circularly identify a disposition of mine to return the intuitively correct verdicts to each and every arithmetical sum). Still, then, there’s the “normative” objection: why are those verdicts the ones one should return in those circumstances? And (rightly or wrongly) the Kripkenstein challenge is that this normative explanation is missing. So (according to the Kripkean) these ain’t the meaning-facts at all.

There’s one purely terminological issue I’d like to settle at this point. I think we shouldn’t just build it into the definition of meaning-facts that they correspond to linguistic norms in this way. After all, there are lots of other theoretical roles for meaning besides supporting linguistic norms (e.g. a predictive/explanatory role with respect to understanding). I propose to proceed as follows. Firstly, let’s speak of “semantic” or “meaning” facts in general (picked out, if you like, via other aspects of the theoretical role of meaning). Secondly, we’ll look for arguments for or against the substantive claim that part of the job of a theory of meaning is to subserve, or make immediately intelligible, or whatever, facts like (C).

Onto details. The Kripkenstein paradox looks like it proceeds on the following assumptions. First, three principles are taken as targets (we can think of them as part of a “folk theory” of meaning):

  1. the meaning-facts are exactly as we take them to be: i.e. arithmetical truths are determinate “to infinity”; and
  2. the corresponding linguistic norms are determinate “to infinity” as well; and
  3. (1) and (2) are connected in the obvious way: if S is true, then in appropriate circumstances, we should utter S.

The “straight solutions” seem to tacitly assume that our story should take the following form. First, give some constitutive story about what fixes facts of kind (B). Then, supposing there are no obvious counterexamples (i.e. that the technical challenge is met), the Kripkensteinian looks to see whether this “really gives you meaning”, in the sense that we’ve also got a story underpinning (C). Given our earlier discussion, the Kripkensteinian challenge needs to be rephrased somewhat. Put the challenge as follows. First, the straight solution gives a theory of semantic facts, which is evaluated for success on grounds that set aside putative connections to facts of kind (C). Next, we ask the question: can we give an adequate account of facts of kind (C), on the basis of what we have so far? The Kripkensteinian suggests not.

The “sceptical solution” starts in the other direction. It takes as groundwork facts of kinds (A) and (C) (perhaps explaining facts of kind (C) on the basis of those of kind (A)?) and then uses this in constructing an account of (something like) (B). One Kripkensteinian thought here is to base some kind of vindication of (B)-talk on the (C)-style claim that one ought to utter sentences involving semantic vocabulary such as “‘+’ means addition”.

The basic idea one should be having at this point is more general however. Rather than start by assuming that facts like (B) are prior in the order of explanation to facts like (C), why not consider other explanatory orderings? Two spring to mind: linguistic normativity and meaning-facts are explained independently; or linguistic normativity is prior in the order of explanation to meaning-facts.

One natural thought in the latter direction is to run a “radical interpretation” line. The first element of a radical interpretation proposal is to identify a “target set” of T-sentences, which the meaning-fixing T-theory for a language is (ceteris paribus) constrained to generate. Davidson suggests we pick the T-sentences by looking at what sentences people de facto hold true in certain circumstances. But, granted (C)-facts, when identifying the target set of T-sentences one might instead appeal to what persons ought to utter in such-and-such circs.

There’s no obvious reason why such normative facts need be construed as themselves “semantic” in nature, nor any obvious reason why the naturalistically minded shouldn’t look for reductions of this kind of normativity (e.g. it might be a normativity on a par with that involved in weak hypothetical imperatives, as in the claim that I should eat this food in order to stay alive, which I take to be pretty unscary). So there’s no need to give up on the reductionist project in doing things this way. Nor is it only radical interpretation that could build in this sort of appeal to (C)-type facts in the account of meaning.

One nice thing about building normativity into the subvening base for semantic facts in this way is that we make it obvious that we’ll get something like (a perhaps restricted and hedged) form of (3). Running accounts of (B) and (C) separately would make the convergence of meaning-facts and linguistic norms seem like a coincidence, if it in fact holds in any form at all.

Is there anything particularly sceptical about the setup, so construed? Not in the sense in which Kripke’s own suggestion is. Two things about the Kripke proposal (as I suggested we read it). First, it’s clear that we’ve got some kind of projectionist/quasi-realist treatment of the semantic going on (it’s only the acceptability of semantic claims that’s being vindicated, not “semantic facts” as most naturalistic theories of meaning would conceive them). Further, the sort of norms to which we can reasonably appeal will be grounded in practices of praise and blame in a linguistic community to which we belong, and given the sheer absence of people doing very long sums, there just won’t be a practice of praising and blaming people for uttering “x+y=z” for sufficiently large choices of x, y and z. The linguistic norms we can ground in this way might be much more restricted than one might at first think: maybe only finitely many sentences S are such that something of the following form holds: we should assert S in circs c. Though there might be norms governing apparently infinitary claims, there is no reason to suppose in this setup that there are infinitely many type-(C) facts. That’ll mean that (2) and (3) are dropped.

In sum, Kripke’s proposal is sceptical in two senses: it is projectionist, rather than realist, about meaning-facts. And it drops what one might take to be a central plank of folk-theory of meaning, (2) and (3) above.

On the other hand, the modified radical interpretation or causal theory proposal I’ve been sketching can perfectly well be realist about meaning-facts, having them “stretch out to infinity” as much as you like (I’d be looking to combine the radical interpretation setting sketched earlier with something like Lewis’s eligibility constraints on correct interpretation, to secure semantic determinacy). So it’s not “sceptical” in the first sense in which Kripke’s theory is: it doesn’t involve any dodgy projectivism about meaning-facts. But it is a “sceptical solution” in the other sense, since it gives up the claims that linguistic norms “stretch out” to infinity, and that truth-conditions of sentences are invariably paired with some such norm.

[Thanks (I think) are owed to Gerald Lang for the title to this post. A quick google search reveals that others have had the same idea…]

Supervaluations and revisionism once more

I’ve just spent the afternoon thinking about an error I found in my paper “supervaluational consequence” (see this previous post). I’ve figured out how to patch it now, so thought I’d blog about it.

The background is the orthodox view that supervaluational consequence will lead to revisions of classical logic. The strongest case I know for this (due to Williamson) is the following. Consider the claim “p&~Determinately(p)”. This (it is claimed) cannot be true on any serious supervaluational model of our language. Equivalently, you can’t have p and ~Determinately(p) both true in a single model. If classical reductio were an ok rule of inference, therefore, you’d be able to argue from ~Determinately(p) to ~p. But nobody thinks that’s supervaluationally valid: any indeterminate sentence will be a counterexample to it. So classical reductio should be given up.

This is stronger than the more commonly cited argument: that supervaluational semantics vindicates the move from p to Determinately(p), but not the material conditional “if p then Determinately(p)” (a counterexample to conditional proof). The reason is that, if “Determinately” itself is vague, arguably the supervaluationist won’t be committed to the former move. The key here is the thought that as well as things that are determinately sharpenings of our language, there may be interpretations which are borderline-sharpenings. Perhaps interpretation X is an “admissible interpretation of our language” on some sharpenings, but not on others. If p is true at all the definite sharpenings, but false at X, then that may lead to a situation where p is supertrue, but Determinately(p) isn’t.

But orthodoxy says that this sort of situation (non-transitivity in the accessibility relation among interpretations of our language) does nothing to undermine the case for revisionism I mentioned in the first paragraph.

One thing I do in the paper is construct what seems to me a reasonable-looking toy semantics for a language, on which one can have both p and ~Determinately p. Here it is.

Suppose you have five colour patches, ranging from red to orange (non-red). Call them A,B,C,D,E.

Suppose that our thought and talk makes it the case that only interpretations which put the cut-off between B and D are determinately “sharpenings” of the language we use. Suppose, however, that there’s some fuzziness in what it is to be an “admissible interpretation”. For example, an interpretation that places the cut-off between B and C thinks that both interpretations placing the cut-off between C and D and interpretations placing the cut-off between A and B are admissible. And likewise, an interpretation that places the cut-off between C and D thinks that interpretations placing the cut-off between B and C are admissible, but also that interpretations placing the cut-off between D and E are admissible.

Modelling the situation with four interpretations, labelled AB, BC, CD, DE, for where they place the red/non-red cut-off, we can express the thought like this: each interpretation accesses (thinks admissible) itself and its immediate neighbours, but nothing else. But BC and CD are the sharpenings.

My first claim is that all this is a perfectly coherent toy model for the supervaluationist: nothing dodgy or “unintended” is going on.

Now let’s think about the truths values assigned to particular claims. Notice, to start with, that the claim “B is red” will be true at each sharpening. The claim “Determinately, B is red” will be true at the sharpening CD, but it won’t be true at the sharpening BC, for that accesses an interpretation on which B counts as non-red (viz. AB).

Likewise, the claim “D is not red” will be true at each sharpening, but “Determinately, D is not red” will be true at the sharpening BC, but fails at CD, due to the latter seeing the (non-sharpening) interpretation DE, at which D counts as red.

In neither of these atomic cases do we find “p and ~Det(p)” coming out true (that’s where I made a mistake previously). But by considering the following, we can find such a case:

Consider “B is red and D is not red”. It’s easy to see that this is true at each of the sharpenings, from what’s been said above. But also “Determinately(B is red and D is not red)” is false at each of the sharpenings. It’s false at BC because of the accessible interpretation AB at which B counts as non-red. It’s false at CD because of the accessible interpretation DE at which D counts as red.

So we’ve got “B is red and D is not red, & ~Determinately(B is red and D is not red)”. And we’ve got that in a perfectly reasonable toy model for a language of colour predicates.
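For readers who like to check this sort of thing mechanically, here is a quick sketch of the toy model in Python. The interpretation names, accessibility relation and choice of sharpenings are exactly as described above; everything else (function names, data layout) is my own illustration, not anything from the paper.

```python
# Toy supervaluational model: four interpretations named for where they
# put the red/non-red cut-off among colour patches A..E.
ORDER = ["AB", "BC", "CD", "DE"]
SHARPENINGS = ["BC", "CD"]   # only these are (determinately) sharpenings

# The patches each interpretation counts as red:
RED = {
    "AB": {"A"},
    "BC": {"A", "B"},
    "CD": {"A", "B", "C"},
    "DE": {"A", "B", "C", "D"},
}

def accessible(i):
    """Each interpretation thinks itself and its immediate neighbours admissible."""
    k = ORDER.index(i)
    return [ORDER[j] for j in (k - 1, k, k + 1) if 0 <= j < len(ORDER)]

def det(p, i):
    """Determinately(p) holds at i iff p holds at every interpretation i accesses."""
    return all(p(j) for j in accessible(i))

def supertrue(p):
    """p is supertrue iff p holds at every sharpening."""
    return all(p(i) for i in SHARPENINGS)

def red(patch, i):
    return patch in RED[i]

conj = lambda i: red("B", i) and not red("D", i)   # "B is red and D is not red"

print(supertrue(conj))                     # True: holds at both BC and CD
print(supertrue(lambda i: det(conj, i)))   # False: Det(conj) fails at BC and at CD
```

Running this confirms the claims in the post: “B is red and D is not red” is true at each sharpening, while its Determinately-prefixed counterpart fails at both (BC sees AB, where B is non-red; CD sees DE, where D is red).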

(Why do people think otherwise? Well, the standard way of modelling the consequence relation in settings where the accessibility relation is non-transitive is to think of the sharpenings as *all the interpretations accessible from some designated interpretation*. And that imposes additional structure which, for example, the model just sketched doesn’t satisfy. But the additional structure seems to me totally unmotivated, and I provide an alternative framework in the paper for freeing oneself from those assumptions. The key thing is not to try to define “sharpening” in terms of the accessibility relation.)

The conclusion: the best extant case for (global) supervaluational consequence being revisionary fails.

Gavagai again again

A new version of my discussion of Quine’s “argument from below” is now up online (shorter! punchier! better!) Turns out it was all to do with counterpart theory all along.

Here’s the blurb: Gavagai gets discussed all the time. But (unless I’m missing something in the literature) I’ve never seen an advocate of gavagai-style indeterminacy spell out in detail what exactly the deviant interpretations or translations are that incorporate the different ways of dividing reference (over rabbits, rabbit-stages or undetached rabbit-parts). And without this it is, to say the least, a bit hard to evaluate the supposed counterexamples to such interpretations! So the main job of the paper is to spell out, for a significant fragment of language, what the rival accounts of reference-division amount to.

One audience for the paper (who might not realize they are an audience for it initially) are folks interested in the stage theory/worm theory debate in the philosophy of persistence. The nouveau-Gavagai guy, according to me, is claiming that there’s no fact of the matter whether our semantics is stage-theoretic or worm-theoretic. I think there’s a reasonable chance that he’s right.

Stronger than this: so long as there are both 4D worms and instantaneous temporal parts thereof around (even if they’re “dependent entities” or “rabbit histories” or “mere sums” as opposed to Real Objects), the Gavagai guy asks you to explain why our words don’t refer to those worms or stages rather than whatever entities you think *really are* rabbits (say, enduring objects wholly present at each time).

By the way, even if these semantic indeterminacy results were right, I don’t think that this forecloses the metaphysical debate about which of endurance, perdurance or exdurance is the right account of *persistence*. But I do think that it forces us to think hard about what the difference is between semantic and metaphysical claims, and what sort of reasons we might offer for either.

Fundamental and derivative truths

After a bit of to-ing and fro-ing, I’ve decided to post a first draft of “Fundamental and derivative truths” on my work in progress page.

I’ve been thinking about this material a lot lately, but I’ve found it surprisingly difficult to formulate and explain. I can see how everything fits together: just not sure how best to go about explaining it to people. Different people react to it in such different ways!

The paper does a bunch of things:

  • offering an interpretation of Kit Fine’s distinction between things that are really true, and things that are merely true. (So, e.g. tables might exist, but not really exist.)
  • using Agustin Rayo’s recent proposal for formulating a theory of requirements/ontological commitments in explication.
  • putting forward a general strategy for formulating nihilist-friendly theories of requirements (set theoretic nihilism and mereological nihilisms being the illustrative cases used in the paper).
  • using this to give an account of “postulating” things into existence (e.g. sets, weirdo fusions).
  • sketching a general answer to the question: in virtue of what do our sentences have the ontological commitments they do (i.e. what makes a theory of requirements *the correct one* for this or that language?)

This is exploratory stuff: there’s lots more to be said about each of these, and plenty more issues (e.g. how does this relate to fictionalist proposals?). But I’m at a stage where feedback and discussion are perhaps the most important things, so making it public seems a natural strategy…

I’m going to be talking in more detail about the case of mereological nihilism at the CMM structure in metaphysics workshop.

Supervaluational consequence again

I’ve just finished a new version of my paper supervaluational consequence. A pdf version is available here. I thought I’d post something explaining what’s going on therein.

Let’s start at the beginning. Classical semantics requires, inter alia, the following. For every expression, there has to be a unique intended interpretation. This single interpretation will assign to each name a single referent, and to each predicate a set of individuals. Similarly for other grammatical categories.

But sometimes, the idea that there are such unique referents, extensions and so on, looks absurd. What supervaluationism (in the liberal sense I’m interested in) gives you is the flexibility to accommodate this. Supervaluationism requires, not a single intended interpretation, but a set of interpretations.

So if you’re interested in the problem of the many, and think that there’s more than one optimal candidate referent for “Kilimanjaro”; if you’re interested in theory change, and think that relativistic mass and rest mass are equi-optimal candidate properties to be what “mass” picks out; if you are interested in inscrutability of reference, and think that rabbit-slices and undetached rabbit parts, as well as rabbits themselves, are in the running to be in the extension of “rabbit”; if you’re interested in counterfactuals, and think that it’s indeterminate which world is the closest one where Bizet and Verdi were compatriots; if you think vagueness can be analyzed as a kind of multiple-candidate indeterminacy of reference; if you find any of these ideas plausible, then you should care about supervaluationism.
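The basic supervaluational recipe these applications share can be made concrete in a few lines. Here is a minimal sketch for the problem-of-the-many case, assuming just two candidate precise referents for “Kilimanjaro” that differ over a single borderline rock; the candidates and predicates are toy stand-ins of my own, not from the paper.

```python
# Two equally good candidate referents for "Kilimanjaro", differing only
# over whether a borderline rock r is part of the mountain.
candidates = [
    {"name": "K1", "includes_r": True},
    {"name": "K2", "includes_r": False},
]

def supertrue(p):
    """True on every admissible interpretation (here: every candidate)."""
    return all(p(c) for c in candidates)

def superfalse(p):
    """False on every admissible interpretation."""
    return all(not p(c) for c in candidates)

is_mountain = lambda c: True              # every candidate is a mountain
includes_r = lambda c: c["includes_r"]    # candidates disagree here

print(supertrue(is_mountain))                          # True
print(supertrue(includes_r), superfalse(includes_r))   # False False
```

“Kilimanjaro is a mountain” comes out supertrue (true on every candidate), while “Kilimanjaro includes rock r” is neither supertrue nor superfalse: exactly the sort of indeterminacy the set-of-interpretations picture is designed to capture.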

It would be interesting, therefore, if supervaluationism undermined the tenets of the kind of logic that we rely on. For either, in the light of the compelling applications of supervaluationism, we will have to revise our logic to accommodate these phenomena; or else supervaluationism as a theory of these phenomena is itself misconceived. Either way, there’s lots at stake.

Orthodoxy is that supervaluationism is logically revisionary, in that it involves admitting counterexamples to some of the most familiar classical inferential moves: conditional proof, reductio, argument by cases, contraposition. There’s a substantial heterodox movement which recommends a heterodox way of defining supervaluational consequence (so-called “local consequence”) which is entirely non-revisionary.

My paper aims to do a number of things:

  1. to give persuasive arguments against the local consequence heterodoxy
  2. to establish, contra orthodoxy, that standard supervaluational consequence is not revisionary (this, granted a certain assumption)
  3. to show that, even if the assumption is refused, the usual case for revisionism is flawed
  4. to give a final fallback option: even if supervaluational consequence is revisionary, it is not damagingly so, for it in no way involves revision of inferential practice.

It convinces me that supervaluationists shouldn’t feel bad: they probably don’t revise logic, and if they do, it’s in a not-terribly-significant way.

Existence and just more theory

I’ve been spending much time recently in coffee shops with colleagues talking about the stuff that’s coming up in the fantastically named RIP Being conference (happening in Leeds this weekend). Hopefully I won’t be treading on toes if I draw out one strand of those conversations that I’ve been finding particularly interesting.


The story for me begins with an old paper by Hartry Field. His series of papers in the 70s is one of the all-time great runs: from “Tarski’s theory of truth” through “Quine and the correspondence theory”, “Theory Change”, “Logic, meaning and conceptual role”, “Conventionalism and Instrumentalism in semantics” and finishing off with “Mental representation”. (All references can be found here.) Not all of them are reprinted in his collection Truth and the absence of fact, which seems a pity. The papers I mentioned above really seemed to me to lay out the early Fieldian programme in most of its details. Specifically, in missing out the papers “Logic, meaning …” and “Conventionalism and instrumentalism…”, you miss out on the early Field’s take on how the cognitive significance of language relates to semantic theory; and the most interesting discussion I know of concerning what Putnam’s notorious “just more theory” argument might amount to.

The “just more theory” move is supposed to be the following. It’s familiar that you can preserve sensible truth conditions by assigning wildly permuted reference-schemes to language (see my other recent posts for more details and links). But, prima facie, these permuted reference schemes are going to violate some plausible constraints on what it takes for a term to refer to something (e.g. that the object be causally connected to the term). Now, some theorists of meaning don’t build causal constraints into their metasemantic account. Davidson, early Lewis and the view Putnam describes as “standard” in his early paper are among these (I call these “interpretationisms” elsewhere). But the received view, I guess, is to assume that some such causal constraint will be in play.

Inscrutability argument dead in the water? No, says Putnam. For look! The permuted interpretation has the resources to render true sentences like “reference is a relation which is causally constrained”. For just as, on the permuted interpretation, “reference” will be assigned as semantic value some weirdo twisted relation Reference*, so on the same interpretation “causation” will be assigned some weirdo twisted relation Causation*. And it’ll turn out to be true that Reference* and Causation* match up in the right way. So (you might think) how can metasemantic theories rule in favour of the sensible interpretation over this twisted one? For no matter which of these we imagine to be the real interpretation of our language, everything we say will come out true.
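The permutation trick behind all this is easy to make concrete. Here is a toy sketch: the domain, the names and the single predicate are my own inventions for illustration, but the construction (name n refers* to the permuted image of its old referent; a predicate’s extension* is the permuted image of its old extension) is the standard one, and every atomic sentence provably keeps its truth value.

```python
# A three-object domain and a permutation of it.
domain = ["rabbit", "carrot", "hat"]
perm = {"rabbit": "carrot", "carrot": "hat", "hat": "rabbit"}

# Intended interpretation: reference for names, extensions for predicates.
ref = {"r": "rabbit", "c": "carrot"}
ext = {"Furry": {"rabbit"}}

# Permuted interpretation: push everything through perm.
ref_star = {n: perm[o] for n, o in ref.items()}
ext_star = {P: {perm[o] for o in s} for P, s in ext.items()}

def true_atom(name, pred, ref_map, ext_map):
    """An atomic sentence P(n) is true iff n's referent is in P's extension."""
    return ref_map[name] in ext_map[pred]

# Every atomic sentence gets the same truth value on both schemes:
for n in ref:
    for P in ext:
        assert true_atom(n, P, ref, ext) == true_atom(n, P, ref_star, ext_star)
print("permuted scheme preserves all truth values")
```

So "Furry(r)" is true on both schemes even though on the twisted one "r" refers* to a carrot: exactly the situation in which, Putnam urges, appeals to a causal constraint look like "just more theory" to be reinterpreted.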

Well, most people I speak to think this is a terrible argument. (For a particularly effective critique of Putnam—showing how badly things go if you allow him the “just more theory” move—see this paper by Tim Bays.) I’ll take it the reasons are pretty familiar (if not, Lewis’s “Putnam’s paradox” has a nice presentation of a now-standard response). Anyway, what’s interesting about Field’s paper is that it gives an alternative reading of Putnam’s challenge, which makes it much more interesting.

Let’s start by granting ourselves that we’ve got a theory which really has tied down reference pretty well. Suppose, for example, that we say “Billy” refers to Billy in virtue of appropriate causal connections between tokenings of that word and the person, Billy. The “Wild” inscrutability results threatened by permutation arguments simply don’t hold.

But now we can ask the following question: what’s special about that metasemantic theory you’re endorsing? Why should we be interested in Reference (=Causal relation C)? What if we tried to do all the explanatory work that we want semantics for in terms of a different relation, Reference*? We could then have a metasemantic* theory of reference*, which would explain that it is constrained to match a weirdo relation causation*. But notice that the relation “S expresses* proposition* p” (definable via reference*) and “S expresses proposition p” (definable via reference) are coextensional. Now, if all the explanatory work we want semantics to do (e.g. explaining why people make those sounds when they believe the world is that way) only ever makes appeal to what propositions sentences express, then there just isn’t any reason (other than convenience) to talk about semantic properties rather than semantic* ones.

The conclusion of these considerations isn’t the kind of inscrutability I’m familiar with. It’s not that there’s some agreed-upon semantic relation, which is somehow indeterminate. It’s rather that (the consideration urges) it’ll be an entirely thin and uninteresting matter that we choose to pursue science via appeal to the determinate semantic properties rather than the determinate semantic* properties. You might think of this as a kind of metasemantic inscrutability, in contrast to the more usual semantic inscrutability: setting aside mere convenience, there’s no reason why we ought to give this metasemantic theory rather than that one.

Now, let’s turn to a different kind of inscrutability challenge. For one reason or another, lots of people are very worried over whether we can really secure determinate quantification over an absolutely unrestricted domain. Just suppose you’re convinced that there are no abstracta. Suppose you’re very careful never to say anything that commits you to their existence. However, suppose you’re wrong: abstracta exist. Intuitively, when you say “There are no abstracta, and I’m quantifying over absolutely everything!” you’re speaking falsely. But this is only so if your quantifiers range over the abstracta out there as well as the concreta: and why should that be? In virtue of what can your word “everything” range over the unrestricted domain? After all, what you say would be true if I interpreted the word as ranging over only concreta. I’d just take you to be saying that no concreta exist within your domain, and that you were quantifying over absolutely everything in your domain. Both of these are true, given that your domain happens to contain only concreta!

Bringing in causality doesn’t look like it helps here; nor does the form of reference-magnetism that Lewis endorsed, which demands that our predicates latch onto relatively natural empirical kinds. Ted Sider, in a paper he’s presenting at the RIP conference, advocates extending the Lewis point to make appeal to logical “natural kinds” (such as existence) at this point. However, let me sketch instead a variant of the Sider thought that seems more congenial to me (I’ll sketch at the end how to transfer it back).

My take on Lewis’s theory is the following. First, identify a “meaning building language”. This will contain only predicates for empirical natural kinds, plus some other stuff (quantifiers, connectives, perhaps terms for metaphysically basic things such as mereological notions). Now, what it is for a semantic theory for a natural language to be the correct one, is for there to be a semantic theory phrased in the meaning-building language, which (a) assigns to sentences of the natural language truth-conditions which fit with actual patterns of assent and dissent; and (b) is as syntactically simple as possible. (I defend this take on what Lewis is doing here).

Now, clearly we need to use some logical resources in constructing the semantic theory. Which should we allow? Sider’s answer: the logically natural ones. But for the moment let’s suppose we don’t want to commit ourselves to logically natural kinds. Well, why don’t we just stipulate that the meaning-building language is going to contain this, that, and the next logical operator/connective? In the case of predicates, there’s the worry that our meaning-building language should contain predicates for all the empirical kinds there are or could be: since we don’t know what these are, we need to give a general definition such as “the meaning-building language will contain predicates for all and only natural kinds”. But there seems no comparable reason not simply to lay it down that “the meaning-building language will contain negation, conjunction and the existential quantifier”.

Indeed, we could go one further, and simply stipulate that the existential quantifier it contains is the absolutely unrestricted one. The effect will be just like the one Sider proposes: this metasemantic proposal has a built-in bias towards ascribing truly unrestricted generality to the quantifiers of natural language, because it is syntactically simpler to lay down clauses for such quantifiers in the meaning-building language than for the restricted alternatives. You quantify over everything, not just concreta, because the semantic theory that ascribes this to you is more eligible than one that doesn’t, where eligibility is a matter of how simple the theory is when formulated in the meaning-building language just described.

Ok. So finally finally I get to the point. It seems to me that Field’s form of Putnam’s worries can be put to work here too. Let’s grant that the metasemantic theory just described delivers the right results about semantic properties of my language; and shows my unrestricted quantification to be determinate. But why choose just that metasemantic theory? Why not, for example, describe a metasemantic theory where semantic properties are determined by syntactic simplicity of a semantic theory in a meaning building language where the sole existential quantifier is restricted to concreta? Maybe we should grant that our way picks out the semantic properties: but we’ve yet to be told why we should be interested in the semantic properties, rather than the semantic* properties delivered by the rival metasemantic theory just sketched. Metasemantic inscrutability threatens once more.

(I think the same challenge can be put to the Sider-style proposal: e.g., consider the Lewis* metasemantic theory whereby the meaning-building language contains expressions for all those entities (of whatever category) which are natural*: i.e. are the intersection of genuinely natural properties (empirical or logical) with restricted domain D.)

I have suspicions that metasemantic inscrutability will turn out to be a worrying thing. That’s a substantive claim: but it’s got to be a matter for another posting!

(Major thanks here go to Andy and Joseph for discussions that shaped my thoughts on this stuff; though they are clearly not to be blamed.)

Rigidity and inscrutability

In response to something Dan asks in the comments in the previous post, I thought it might be worth laying out one reason why I’m thinking about “rich” forms of rigidity at the moment.

Vann McGee published a paper on inscrutability of reference recently. The part of it I’m particularly interested in deals with the permutation argument for radical inscrutability. The idea of the permutation arguments, in brief, is this: twist the assignments of reference to terms as much as you like. By making compensating twists to the assignments of extensions to predicates, you can make sure the twists “cancel out”, so that the distribution of truth values among whole sentences matches exactly the “intended interpretation”. So (big gap) there’s no fact of the matter whether the twisted interpretation or the intended interpretation is the correct description of the semantic facts. (For details (ad nauseam) see e.g. this stuff.)
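The extensional version of the trick is easy to exhibit concretely. Here is a minimal sketch: pick an arbitrary permutation p of the domain, twist reference by p, and compensate by mapping each predicate’s extension through p; atomic sentences then get the same truth value on both interpretations. The domain, names and extensions are toy assumptions of mine.

```python
# Minimal model-theoretic sketch of the permutation argument: twisted
# reference plus compensated predicate extensions leaves the truth
# values of atomic sentences untouched. All data is illustrative.

domain = {"billy", "taj_mahal", "great_wall"}

reference = {"Billy": "billy"}
extension = {"is_a_person": {"billy"}}

# an arbitrary permutation of the domain
p = {"billy": "taj_mahal", "taj_mahal": "great_wall", "great_wall": "billy"}

# twist reference, and compensate predicate extensions through p
twisted_reference = {name: p[obj] for name, obj in reference.items()}
twisted_extension = {pred: {p[obj] for obj in ext}
                     for pred, ext in extension.items()}

def true_atomic(pred, name, ref, ext):
    # an atomic sentence "name is pred" is true iff the referent of the
    # name lies in the extension of the predicate
    return ref[name] in ext[pred]

# same truth value on the intended and the twisted interpretation:
print(true_atomic("is_a_person", "Billy", reference, extension))
print(true_atomic("is_a_person", "Billy", twisted_reference, twisted_extension))
```

The two interpretations disagree wildly about what “Billy” refers to, while agreeing on every sentence-level verdict; that is the inscrutability claim in miniature.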

Anyway, Vann McGee is interested in extending this argument to the intensional case. Very interesting to me, since I’d been thinking about that too. I started to get worried when I saw that McGee argued that permutation arguments go wrong when you extend them to the intensional case. That seemed bad, because I thought I’d proved a theorem that they go over smoothly to really rich intensional settings (ch. 5, in the above). And, y’know, he’s Vann McGee, and I’m not, so the default assumption was that he wins!

But actually, I think what he was saying doesn’t call into question the technical stuff I was working on. What it does is show that the permuted interpretations that I construct do strange things with rigidity. Hence my now wanting to think about rigidity a little more.

McGee’s nice point is this: if you permute the reference scheme wrt each world in turn, you end up disrupting facts about rigidity. To illustrate, suppose that A is the actual world, and W a non-actual one. Choose a permutation for A that sends Billy to the Taj Mahal, and a permutation for W that sends Billy to the Great Wall of China. Then the permuted interpretation of the language will assign to “Billy” an intension that maps A to the Taj Mahal, and W to the Great Wall of China. In the familiar way, we make compensating twists to the extension of each predicate wrt each world, and the intensions of sentences turn out invariant. But of course, “Billy” is no longer a rigid designator.
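McGee’s point can be exhibited by running the previous construction world by world. In this sketch a different permutation is chosen at each world, sentence intensions come out invariant, but the permuted intension of “Billy” is no longer constant. The worlds, objects and choice of permutations are toy assumptions matching the example in the text.

```python
# World-by-world permutation: sentence intensions survive, but the name
# "Billy" loses its rigidity. Illustrative data only.

worlds = ["A", "W"]  # A the actual world, W a non-actual one

# a different permutation of the domain at each world
perms = {
    "A": {"billy": "taj_mahal", "taj_mahal": "billy",
          "great_wall": "great_wall"},
    "W": {"billy": "great_wall", "great_wall": "billy",
          "taj_mahal": "taj_mahal"},
}

# original interpretation: "Billy" is rigid; "wise" is world-indexed
billy_intension = {w: "billy" for w in worlds}
wise_intension = {"A": {"billy"}, "W": set()}

# permute reference and compensate the predicate, world by world
twisted_billy = {w: perms[w][billy_intension[w]] for w in worlds}
twisted_wise = {w: {perms[w][o] for o in wise_intension[w]} for w in worlds}

# the intension of "Billy is wise" is the same on both interpretations...
original = {w: billy_intension[w] in wise_intension[w] for w in worlds}
twisted = {w: twisted_billy[w] in twisted_wise[w] for w in worlds}
assert original == twisted

# ...but the twisted "Billy" maps A and W to different objects:
print(twisted_billy)  # {'A': 'taj_mahal', 'W': 'great_wall'}
```

So the compensation trick still works sentence by sentence; the casualty is rigidity, which is exactly the feature McGee presses on.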

(McGee offers this as one horn of a dilemma concerning how you extend the permutation argument to the intensional case. The other horn concerns permuting the reference scheme for all worlds at once, with the result that you end up assigning objects as the reference of e in w, when that object doesn’t exist in w. I’ve also got thoughts about that horn, but that’s another story).

McGee’s dead right, and when I looked at (one form of) my recipe for extending the permutation argument to what I called the “Carnapian” intensional case, I saw that this is exactly what I got. However, the substantial question is whether or not the non-rigidity of “Billy” on the permuted interpretation gives you any reason to rule out that interpretation as “unintended”. And this question obviously turns on the status of rigidity in the first place.

Now, if the motivation for thinking names rigid were just that assigning names rigid extensions allows us to assign the right truth conditions to “Billy is wise”, then it looks like the McGee point has little force against the permutation argument. Because the permuted interpretation does just as well at generating the right truth conditions! So what we should conclude is that it becomes inscrutable whether or not names are rigid: the argument that names are rigid is undermined.

However, maybe there’s something deeper and spookier about rigidity, above and beyond getting-the-truth-conditions-right. Maybe, I thought, that’s what people are onto with the de jure rigidity stuff. And anyway, it’d be nice to get clear on all the motivations for rigidity that are in the air, to see whether we could get some (perhaps conditional) McGee-style argument against permutation inscrutability going.

p.s. one thing that I certainly hadn’t realized before reading McGee was that the permuted interpretations I was offering as part of an inscrutability argument had non-rigid variables! As McGee points out, unless this were the case, you’d get the wrong results for sentences where a quantifier binds a variable across a modal operator. I hadn’t clicked this, since I was working with Lewis’s general-semantics system, where variables are handled via an extra intensional index: it had quite passed me by that I was doing something so kooky to them. You live and learn!

Varieties of Rigidity

This post over on metaphysical values by Ross Cameron has got me thinking about reference and rigidity.

There’s a familiar distinction between singular terms that are “de facto” rigid and those that are “de jure” rigid. Paradigm example of the former: “the smallest prime”; paradigm example of the latter: “Socrates” (or, variables).

I’m not sure exactly how “de jure” rigidity is typically characterized. I’ve seen it done through slogans such as: what the name contributes to the truth conditions expressed by sentences in which it figures is just the object it stands for. I’ve seen it done like this: a name is de jure rigid if its rigidity is “due to” the semantics of language, and not to metaphysical facts about the world.
Those two definitions seem to come apart: “the actual inventor of the zip” is plausibly de jure rigid in the second, but not the first, sense.

Let’s concentrate on the first sense of de jure rigidity (so a constraint on getting this right is that actualized descriptions won’t count as de jure rigid in this sense). How could we tighten it up? Well, the task is pretty easy if your semantic theory takes the right shape. For example, suppose you have a semantic theory which in the first instance assigns structured propositions to sentences, and then says what truth conditions these propositions (and thus sentences) have. Then you can say precisely what it is for a name to “contribute an object” to the truth conditions of sentences in which it figures: it’s for you to shove an object into the structured prop associated with the sentence.

Notice two things:
(1) this is a semantic characterization: you can read off from the semantics of the language whether or not a given term is de jure rigid. (In this sense, it’s like the characterization of “rigidity” as “referring to the same thing wrt every world”).
(2) this is a local characterization: it only works if you’re working within the right semantic framework (the structured-props one). You can’t use it if you’re working e.g. with Davidsonian truth theories, or possible world semantics.

This raises a natural question: how can we capture de jure rigidity in this, that and the next semantic framework? What interests me is what we can do to this end, working with a general semantics in the sense of Lewis (1970). I can’t see any way to read off de jure rigidity from semantic theory.

But if we appeal to metasemantics (i.e. the theory of how semantic facts get fixed) it looks like we have some options. Suppose, for example you’re one of the word-first guys: that is, like early Field, Fodor, Stalnaker et al, you think that the metasemantic story operates first at the level of lexical items (names, predicates), and then we can offer a reduction of the semantic properties of complex expressions (e.g. definite descriptions, sentences) to the semantic properties of their parts. The de jure rigid terms will be those whose semantic properties are fixed in the following way:

(1) term T refers (simpliciter) to an object X.
(2) term T has as its intension that function from worlds to objects which, at each world w, picks out the entity that is identical to what T refers to (simpliciter).
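The two-step story above amounts to deriving a constant function from reference simpliciter. Here is a small sketch of that derivation, contrasted with a de facto rigid description and a non-rigid one. The worlds and referents (including the zip inventors) are purely illustrative assumptions.

```python
# Sketch of the word-first fixing of de jure rigid intensions: step (1)
# fixes reference simpliciter, step (2) derives the constant function
# from worlds to that referent. Toy worlds and referents throughout.

worlds = ["w1", "w2"]

def de_jure_rigid_intension(referent):
    # step (2): at every world, pick out what the term refers to
    # (simpliciter) at step (1)
    return {w: referent for w in worlds}

socrates = de_jure_rigid_intension("socrates")        # rigid by fiat
smallest_prime = {"w1": 2, "w2": 2}                    # de facto rigid
inventor_of_zip = {"w1": "judson", "w2": "sundback"}   # non-rigid

def is_rigid(intension):
    # rigidity as sameness of value across all worlds
    return len(set(intension.values())) == 1

print(is_rigid(socrates), is_rigid(smallest_prime), is_rigid(inventor_of_zip))
# True True False: the intensions alone don't separate de jure from
# de facto rigidity; the difference lives in how they were fixed
```

Which is the puzzle in miniature: once the constant function is in place, nothing in the semantic value itself records that it was fixed the de jure way.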

So here’s my puzzle: this looks like a characterization that turns essentially on the word-first metasemantic theory. Fair do’s, if you like that kind of thing. But I’m more sympathetic to metasemantic theories like Lewis’s, where the semantic properties of language get determined holistically. If you’re an “interpretationist” (and if you haven’t got the semantic characterizations to help you out, because you’re working with a trad possible world semantics), is there any content in the notion of de jure rigidity? More on this to follow.

Semantics for nihilists

Microphysical mereological nihilists believe that only simples exist—things like leptons and quarks, perhaps. You can be a mereological nihilist without being a microphysical mereological nihilist (e.g. you can believe that ordinary objects are simples, or that the whole world is one great lumpy simple. Elsewhere I use this observation to respond to some objections to microphysical mereological nihilism). But it’s not so much fun.

If you’re a microphysical mereological nihilist, you’re likely to start getting worried that you’re committed to an almost universal error-theory of ordinary discourse. (Even if you’re not worried by that, your friends and readers are likely to be.) So the MMN-ists tend to find ways of sweetening the pill. Van Inwagen paraphrases ordinary statements like “the cat is on the mat” into plural talk (“the things arranged cat-wise are located above the things arranged mat-wise”). Dorr wants us to go fictionalist (“According to the fiction of composition, the cat is on the mat”). There’ll be some dispute at this point about the status of these substitutes. I don’t want to get into that here though.

I want to push for a different strategy. The way to do semantics is to do possible world semantics. And to do possible world semantics, you don’t merely talk about things and sets of things drawn from the actual world: you assign possible-worlds intensions as semantic values. For example, the possible-worlds semantic value of “is a cordate” is going to be something like a function from possible worlds to the things which have hearts in those worlds. And I assume (contra e.g. Williamson) that there could be something that doesn’t exist in the actual world, but nevertheless has a heart. I’m assuming that this function is a set, and sets that have merely possible objects in their transitive closure are at least as dubious, ontologically speaking, as merely possible objects themselves.
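To make the shape of these semantic values concrete, here is a toy sketch of a possible-worlds intension for “is a cordate”, with an existential sentence evaluated at a world in the obvious way (true iff the intension maps that world to a non-empty set). The worlds and individuals, including the merely possible one, are made-up assumptions.

```python
# Toy possible-worlds intension for "is a cordate": a function from
# worlds to the set of heart-havers at that world, including a merely
# possible heart-haver at a non-actual world. All data is illustrative.

actual = "@"
cordate_intension = {
    "@":  {"felix", "rex"},        # actual cordates
    "w1": {"felix", "pegasus"},    # a merely possible cordate at w1
    "w2": set(),                   # a world with no cordates at all
}

def there_exist_cordates_at(world, intension):
    # the existential is true at a world iff the intension maps that
    # world to a non-empty set
    return bool(intension[world])

print(there_exist_cordates_at(actual, cordate_intension))  # True
print(there_exist_cordates_at("w2", cordate_intension))    # False
```

On the strategy proposed below, the MMN-ist helps themselves to exactly this structure, reading the actual-world entries as “virtual” objects vindicated by the pluriverse sentence.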

Philosophers prepared to do pw-semantics, therefore, owe some account of this talk about stuff that doesn’t actually exist, but might have done. And so they give some theories. The one that I like best is Ted Sider’s “ersatz pluriverse” idea. You can think of this as a kind of fictionalism about possibilia-talk. You construct a big sentence that accurately describes all the possibilities. Statements about possibilia will be ok so long as they follow from the pluriverse sentence. (I know this is pretty sketchy: best to look at Sider’s version for the details.)

Let’s call the possibilia talk vindicated by the construction Sider describes the “initial” possibilia talk. Sider mentions various things you might want to add into the pluriverse sentence. If you want to talk about sets containing possible objects drawn from different worlds (e.g. to do possible world semantics), then you’ll want to put some set-existence principles into your pluriverse sentence. If you want to talk about transworld fusions, you need to put some mereological principles into the pluriverse sentence. If you add a principle of universal composition into the pluriverse sentence, your pluriverse sentence will allow you to go along with David Lewis’s talk of arbitrary fusions of possibilia.

Now Sider himself believes that, in reality, universal composition holds. The microphysical mereological nihilist does not believe this. The pluriverse sentence we are considering says that in the actual world, there are lots of composite objects. Sider thinks this is a respect in which it describes reality aright; the MMN-ist will think that this is a respect in which it misdescribes reality.

I think the MMN-ist should use the pluriverse sentence we’ve just described to introduce possibilia talk. They will have to bear in mind that in some respects, it misdescribes reality: but after all, *everyone* has to agree with that. Sider thinks it misdescribes reality in saying that merely possible objects, and transworld fusions and sets thereof, exist—the MMN-ist simply thinks that its inaccuracy extends to the actual world. Both sides, of course, can specify exactly which bits they think accurately describe reality, and which are artefactual.

The MMN-ist, along with everyone else, already has the burden of vindicating possibilia-talk (and sets of possibilia, etc) in order to get the ontology required for pw-semantics. But when the MMN-ist follows the pluriverse route (and includes composition principles within the pluriverse sentence), they get a welcome side-benefit. Not only do they gain the required “virtual” other-worldly objects; they also get “virtual” actual-worldly objects.

The upshot is that when it comes to doing possible-world semantics, the MMN-ist can happily assign to “cordate” an intension that (at the actual world) contains macroscopic objects, just as Sider and others assign to “cordate” an intension that (at other worlds) contains merely possible objects. And sentences such as “there exist cordates” will be true in exactly the same sense as they are for Sider: the intension maps the actual world to a non-empty set of entities.

So we’ve no need for special paraphrases, or special-purpose fictionalizing constructions, in pursuit of some novel sense in which “there are cordates” is true for the MMN-ist. The flipside is that we can’t read off metaphysical commitments from such true existential sentences. Hey ho.

(cross-posted on Metaphysical Values)