# Category Archives: Indeterminacy

## Supervaluations and revisionism once more

I’ve just spent the afternoon thinking about an error I found in my paper “supervaluational consequence” (see this previous post). I’ve figured out how to patch it now, so thought I’d blog about it.

The background is the orthodox view that supervaluational consequence will lead to revisions of classical logic. The strongest case I know for this (due to Williamson) is the following. Consider the claim “p&~Determinately(p)”. This (it is claimed) cannot be true on any serious supervaluational model of our language. Equivalently, you can’t have both p and ~Determinately(p) true in a single model. If classical reductio were an ok rule of inference, therefore, you’d be able to argue from ~Determinately(p) to ~p. But nobody thinks that’s supervaluationally valid: any indeterminate sentence will be a counterexample to it. So classical reductio should be given up.

This is stronger than the more commonly cited argument: that supervaluational semantics vindicates the move from p to Determinately(p), but not the material conditional “if p then Determinately(p)” (a counterexample to conditional proof). The reason is that, if “Determinately” itself is vague, arguably the supervaluationist won’t be committed to the former move. The key here is the thought that as well as things that are determinately sharpenings of our language, there may be interpretations which are borderline-sharpenings. Perhaps interpretation X is an “admissible interpretation of our language” on some sharpenings, but not on others. If p is true at all the definite sharpenings, but false at X, then that may lead to a situation where p is supertrue, but Determinately(p) isn’t.

But orthodoxy says that this sort of situation (non-transitivity in the accessibility relation among interpretations of our language) does nothing to undermine the case for revisionism I mentioned in the first paragraph.

One thing I do in the paper is construct what seems to me a reasonable-looking toy semantics for a language, on which one can have both p and ~Determinately p. Here it is.

Suppose you have five colour patches, ranging from red to orange (non-red). Call them A,B,C,D,E.

Suppose that our thought and talk makes it the case that only interpretations which put the cut-off between B and D are determinately “sharpenings” of the language we use. Suppose, however, that there’s some fuzziness around in what it is to be an “admissible interpretation”. For example, an interpretation that places the cut-off between B and C thinks that both interpretations placing the cut-off between C and D, and interpretations placing the cut-off between A and B, are admissible. And likewise, an interpretation that places the cut-off between C and D thinks that interpretations that place the cut-off between B and C are admissible, but also thinks that interpretations that place the cut-off between D and E are admissible.

Modelling the situation with four interpretations, labelled AB, BC, CD, DE, for where they place the red/non-red cut-off, we can express the thought like this: each interpretation accesses (thinks admissible) itself and its immediate neighbours, but nothing else. But BC and CD are the sharpenings.

My first claim is that all this is a perfectly coherent toy model for the supervaluationist: nothing dodgy or “unintended” is going on.

Now let’s think about the truth values assigned to particular claims. Notice, to start with, that the claim “B is red” will be true at each sharpening. The claim “Determinately, B is red” will be true at the sharpening CD, but it won’t be true at the sharpening BC, for that accesses an interpretation on which B counts as non-red (viz. AB).

Likewise, the claim “D is not red” will be true at each sharpening, but “Determinately, D is not red” will be true at the sharpening BC, but fails at CD, due to the latter seeing the (non-sharpening) interpretation DE, at which D counts as red.

In neither of these atomic cases do we find “p and ~Det(p)” coming out true (that’s where I made a mistake previously). But by considering the following, we can find such a case:

Consider “B is red and D is not red”. It’s easy to see that this is true at each of the sharpenings, from what’s been said above. But also “Determinately(B is red and D is not red)” is false at each of the sharpenings. It’s false at BC because of the accessible interpretation AB at which B counts as non-red. It’s false at CD because of the accessible interpretation DE at which D counts as red.

So we’ve got “B is red and D is not red, & ~Determinately(B is red and D is non-red).” And we’ve got that in a perfectly reasonable toy model for a language of colour predicates.
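For concreteness, here’s a minimal Python sketch of the toy model just described (the encoding is mine and purely illustrative): interpretations are named for where they place the cut-off, each accesses itself and its immediate neighbours, and “Determinately p” holds at an interpretation just when p holds at everything it accesses.

```python
# Patches A..E; interpretations named by where they put the red/non-red cut-off.
PATCHES = "ABCDE"
INTERPS = ["AB", "BC", "CD", "DE"]
SHARPENINGS = ["BC", "CD"]

def accesses(interp):
    # Each interpretation thinks itself and its immediate neighbours admissible
    i = INTERPS.index(interp)
    return [INTERPS[j] for j in (i - 1, i, i + 1) if 0 <= j < len(INTERPS)]

def red(patch, interp):
    # "patch is red" at interpretation XY iff the patch falls before the cut-off
    return PATCHES.index(patch) < PATCHES.index(interp[1])

def det(prop, interp):
    # "Determinately p" at an interpretation: p holds at everything it accesses
    return all(prop(w) for w in accesses(interp))

def conj(w):
    # The conjunction "B is red and D is not red"
    return red("B", w) and not red("D", w)

supertrue = all(conj(s) for s in SHARPENINGS)           # conj true at every sharpening
det_fails = all(not det(conj, s) for s in SHARPENINGS)  # Det(conj) false at every sharpening
```

Running this bears out the diagnosis above: the conjunction is true at both sharpenings while its determinisation fails at both, even though neither conjunct on its own gives a case of “p and ~Det(p)”.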

(Why do people think otherwise? Well, the standard way of modelling the consequence relation in settings where the accessibility relation is non-transitive is to think of the sharpenings as *all the interpretations accessible from some designated interpretation*. And that imposes additional structure which, for example, the model just sketched doesn’t satisfy. But the additional structure seems to me totally unmotivated, and I provide an alternative framework in the paper for freeing oneself from those assumptions. The key thing is not to try to define “sharpening” in terms of the accessibility relation.)

The conclusion: the best extant case for (global) supervaluational consequence being revisionary fails.

## Gavagai again again

A new version of my discussion of Quine’s “argument from below” is now up online (shorter! punchier! better!) Turns out it was all to do with counterpart theory all along.

Here’s the blurb: Gavagai gets discussed all the time. But (unless I’m missing something in the literature) I’ve never seen an advocate of gavagai-style indeterminacy spell out in detail what exactly the deviant interpretations or translations are that incorporate the different ways of dividing reference (over rabbits, rabbit-stages or undetached rabbit-parts). And without this it is, to say the least, a bit hard to evaluate the supposed counterexamples to such interpretations! So the main job of the paper is to spell out, for a significant fragment of language, what the rival accounts of reference-division amount to.

One audience for the paper (who might not realize they are an audience for it initially) are folks interested in the stage theory/worm theory debate in the philosophy of persistence. The nouveau-Gavagai guy, according to me, is claiming that there’s no fact of the matter whether our semantics is stage-theoretic or worm-theoretic. I think there’s a reasonable chance that he’s right.

Stronger than this: so long as there are both 4D worms and instantaneous temporal parts thereof around (even if they’re “dependent entities” or “rabbit histories” or “mere sums” as opposed to Real Objects), the Gavagai guy asks you to explain why our words don’t refer to those worms or stages rather than whatever entities you think *really are* rabbits (say, enduring objects wholly present at each time).

By the way, even if these semantic indeterminacy results were right, I don’t think that this forecloses the metaphysical debate about which of endurance, perdurance or exdurance is the right account of *persistence*. But I do think that it forces us to think hard about what the difference is between semantic and metaphysical claims, and what sort of reasons we might offer for either.

## Supervaluational consequence again

I’ve just finished a new version of my paper supervaluational consequence. A pdf version is available here. I thought I’d post something explaining what’s going on therein.

Let’s start at the beginning. Classical semantics requires, inter alia, the following. For every expression, there has to be a unique intended interpretation. This single interpretation will assign to each name a single referent. To each predicate, it will assign a set of individuals. Similarly for other grammatical categories.

But sometimes, the idea that there are such unique referents, extensions and so on, looks absurd. What supervaluationism (in the liberal sense I’m interested in) gives you is the flexibility to accommodate this. Supervaluationism requires, not a single intended interpretation, but a set of interpretations.

So if you’re interested in the problem of the many, and think that there’s more than one optimal candidate referent for “Kilimanjaro”; if you’re interested in theory change, and think that relativistic mass and rest mass are equi-optimal candidate properties to be what “mass” picks out; if you are interested in inscrutability of reference, and think that rabbit-slices, undetached rabbit parts as well as rabbits themselves are in the running to be in the extension of “rabbit”; if you’re interested in counterfactuals, and think that it’s indeterminate which world is the closest one where Bizet and Verdi were compatriots; if you think vagueness can be analyzed as a kind of multiple-candidate indeterminacy of reference; if you find any of these ideas plausible, then you should care about supervaluationism.

It would be interesting, therefore, if supervaluationism undermined the tenets of the kind of logic that we rely on. For either, in the light of the compelling applications of supervaluationism, we will have to revise our logic to accommodate these phenomena; or else supervaluationism as a theory of these phenomena is itself misconceived. Either way, there’s lots at stake.

Orthodoxy is that supervaluationism is logically revisionary, in that it involves admitting counterexamples to some of the most familiar classical inferential moves: conditional proof, reductio, argument by cases, contraposition. There’s a substantial heterodox movement which recommends an alternative way of defining supervaluational consequence (so-called “local consequence”) which is entirely non-revisionary.

My paper aims to do a number of things:

1. to give persuasive arguments against the local consequence heterodoxy
2. to establish, contra orthodoxy, that standard supervaluational consequence is not revisionary (this, granted a certain assumption)
3. to show that, even if the assumption is refused, the usual case for revisionism is flawed
4. to give a final fallback option: even if supervaluational consequence is revisionary, it is not damagingly so, for it in no way involves revision of inferential practice.

It convinces me that supervaluationists shouldn’t feel bad: they probably don’t revise logic, and if they do, it’s in a not-terribly-significant way.

One role this blog is playing is allowing me to put down thoughts before I lose them.

So here’s another idea I’ve been playing with. If you think about the literature on vagueness, it’s remarkable that each of the main players seems to be broadly reductionist about vagueness. The key term here is “definitely”. The Williamsonian epistemicist reduces “definitely” to a concept constructed out of knowability. The supervaluationist typically appeals to semantic indecision, on one reading, that reduces vagueness to semantic facts; on another reading, that reduces vagueness to metasemantic facts concerning the link between semantic facts and their subvening base. Things are a little less clear with the degree theorist, but if “definite truth” is identified with “truth to degree 1”, then what they’re doing is reducing vagueness to semantic facts again.

If you think of the structure of the debate like this, then it makes sense of some of the dialectic on higher-order vagueness. For example, if vagueness is nothing but semantics, then the question immediately arises: what about those cases where semantic facts themselves appear to be vague? The parallel question for the epistemicist is: what about cases where it’s vague whether such-and-such is knowable? The epistemicists look like they’ve got a more stable position at this point, though exactly why is hard to spell out.

Consider other debates, e.g. in the philosophy of modality. Sure, there are reductionist views: Lewis wanting to reduce modality to what goes on in other concrete space-times; people who want to reduce it to a priori consistency; and so on. But a big player in that debate is the modalist, who just takes “possibility” and “necessity” as primitive, and refuses to offer a reductive story.

It seems to me pretty clear that a position analogous to modalism should be a central part of the vagueness literature; but I’m not aware of any self-conscious proponents of this position. Let me call it “primitivism” about vagueness. I think that perhaps some self-described semantic theorists would be better classified as primitivists.

At the end of ch 5 of the “Vagueness” book, Tim Williamson has just finished beating up on traditional supervaluationism, which equates truth with supertruth. He then briefly considers people who drop that identification. Here’s my take on this position. Proponents say that semantically, there’s a single precisification of our language which is the intended one, but which one it is is (semantically) vague. Truth is truth on the intended precisification; but definite truth is defined to be truth on all the precisifications which aren’t determinately unintended. Definite truth (supertruth) and truth come apart. This position, from a logical point of view, is entirely classical; satisfies bivalence; and looks like it thereby avoids many of Williamson’s objections to supervaluationism.

I think Williamson puts exactly the right challenge to this line. In what sense is this a semantic theory of vagueness? After all, “Definitely” hasn’t been characterized in semantic terms: rather, it’s been characterized using that very notion again in the metalanguage. One might resist this, claiming that “Definitely” should be defined using the term “admissible precisification” or some such. But then one wonders what account could be made of “admissible”: it plays no role in defining semantic notions such as “true” or “consequence” for this theorist. What sense can be made of it?

I think the challenge can be met by metasemantic versions of supervaluationism, who give a substantive theory of what makes a precisification admissible. I take that to be something like the McGee/McLaughlin line, and I spent a chapter of my thesis trying to lay out precisely what was involved. But that’s another story.

What I want to suggest now is that Primitivism about vagueness gives us a genuinely distinct option. This accepts Williamson’s contention that when we drop supertruth=truth, “nothing articulate” remains of the semantic theory of vagueness. But it questions the idea that this should lead us towards epistemicism. Let’s just take determinacy (or lack of it) as a fundamental part of reality, and then use it in constructing theories that make sense of the phenomenon of vagueness. Of course, there’s nothing positive this theorist has to say that distinguishes her from reductive rivals such as the epistemicist; but she has plenty of negative things to say disclaiming various reductive theses.

## The present time

One notorious issue for presentists (and other kinds of A-theorist) is the following: special relativity tells us (I gather) that among the slices of space-time that “look like time slices”, there’s no one that is uniquely privileged as “the present” (i.e. simultaneous with what’s going on here-now). But the presentist says that only the present exists. So it looks like her metaphysics entails that there is a metaphysically privileged time-slice: the only one that exists. (Of course, I suppose the science is just telling us that there’s no physically significant sense in which one is privileged, and it’s not obvious the presentist is saying anything that conflicts with that. But it does seem worrying…)

One option is to retreat into “here-now”ism: the only things that exist are those that exist right here right now. No problems with relativity there.

I was idly wondering about the following line: say that it’s (ontically) vague which time-slice is present, and so (for the presentist) say that it’s ontically vague what exists. As I’m thinking of it, there’ll be some kind of here-now-ish element to the metaphysics. From the point of view of a certain position p in space-time, all that exists are those “time-like” slices of space-time that contain the point; so it will be determinately the case that p exists. But for every other space-time point q, there will (I take it) be a reference frame according to which p and q are non-simultaneous. So it won’t determinately be the case that q exists.

The details are going to get quite involved. I think some hard thinking about higher-order indeterminacy will be in order. But here’s a quick sketch: choose a point r such that there’s a choice of reference-frame that makes q and r simultaneous. Then it sort of seems to me that, from p’s perspective, the following should hold:

r doesn’t exist
determinately, r doesn’t exist
not determinately determinately r doesn’t exist

The idea is that while r isn’t “present” (and so fails to exist), relative to the perspective of some of the things that are present, it is present.

What I’d like to do is model this in a “supervaluation-style” framework like the one I talk about here. First, consider the set of all centred time-like-slices. It’ll end up determinate that one and only one of these exists: but it’ll be a vague matter which one. Let centred time-like-slice x access centred time-slice y iff the centre of y is somewhere in the time-slice x.

Now take a set of time-slices P which are all and only those with common centre p. These are the ontic candidates for being the present time. Next, consider the set P*, containing a set of time-slices which are all and only those accessed by some time-slice in P. And similarly construct P**, P*** etc etc etc.

Now, among space-time points, only the “here-now” point p determinately exists. All and only points which are within some time-slice in P don’t determinately fail to exist. All and only points which are within some time-slice in P* don’t determinately determinately fail to exist. All and only points which are within some time-slice in P** don’t determinately determinately determinately fail to exist. And so on. (If you like, existence shades off into greater and greater indeterminacy as we look further away from the privileged here-now point).
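Here’s one way to make the construction concrete, as a rough Python sketch. All the specifics are my own toy assumptions, not anything from the physics: a discrete 1+1-dimensional grid of points (t, x), “time-slices” as lines of slope v through a centre point, and a small arbitrary sample of slopes. A slice accesses another iff it contains the other’s centre, and P* is generated by iterating that accessibility step.

```python
# Toy 1+1-dimensional "space-time": a candidate "time-slice" is a line of
# slope v through a centre point; the slopes here are an illustrative sample.
SLOPES = [-0.5, 0.0, 0.5]
GRID = range(-3, 4)
POINTS = [(t, x) for t in GRID for x in GRID]

def slice_points(centre, v):
    # All grid points lying on the slice of slope v through the centre
    t0, x0 = centre
    return {(t, x) for (t, x) in POINTS if t - t0 == v * (x - x0)}

def star(slices):
    # One accessibility step: all slices whose centre lies on some given slice
    centres = set().union(*(slice_points(c, v) for (c, v) in slices))
    return {(c, v) for c in centres for v in SLOPES}

def covered(slices):
    # The space-time points lying on some slice in the set
    return set().union(*(slice_points(c, v) for (c, v) in slices))

p = (0, 0)                      # the here-now point
P = {(p, v) for v in SLOPES}    # ontic candidates for being the present time
P_star = star(P)                # slices one level of indeterminacy further out
```

In this toy grid the point (1, 1) lies on no candidate slice in P, so it determinately fails to exist; but it lies on a slice in P*, so it doesn’t determinately determinately fail to exist — the pattern sketched for r above.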

Well, I’m no longer sure that this deserves the name “presentism”. Kit Fine distinguishes some versions of A-theory in a paper in “Modality and tense” which this view might fit better with (the Fine-esque way of setting this up would be to have the whole of space-time existing, but only some time-slices really or fundamentally existing. The above framework then models vagueness in what really or fundamentally exists). It is anyway up to its neck in ontic vagueness, which you might already dislike. But I’ve no problem with ontic vagueness, and insofar as I can simulate being a presentist, I quite like this option.

There should be other variants too for different forms of A-theory. Consider, for example, the growing block view of reality (the time-slices in the model can be thought of as the front edges of a growing block: as we go through time, more slices get added to the model). The differences may be interesting: for the growing block, future space-time points determinately don’t exist, but they don’t det …det fail to exist for some amount of iterations of “det”; while past space-time points determinately exist, but they don’t det …. det exist for some amount of iterations of “det”.

Any thoughts most welcome, and references to any related literature particularly invited!

## Existence and just more theory

I’ve been spending much time recently in coffee shops with colleagues talking about the stuff that’s coming up in the fantastically named RIP Being conference (happening in Leeds this weekend). Hopefully I won’t be treading on toes if I draw out one strand of those conversations that I’ve been finding particularly interesting.


The story for me begins with an old paper by Hartry Field. His series of papers in the 70’s is one of the all-time great runs: from “Tarski’s theory of truth” through “Quine and the correspondence theory”, “Theory Change”, “Logic, meaning and conceptual role”, “Conventionalism and Instrumentalism in semantics” and finishing off with “Mental representation”. (All references can be found here). Not all of them are reprinted in his collection Truth and the absence of fact, which seems a pity. The papers I mentioned above really seemed to me to lay out the early Fieldian programme in most of the details. Specifically, in missing out the papers “Logic, meaning …” and “Conventionalism and instrumentalism…”, you miss out on the early-Field’s take on how the cognitive significance of language relates to semantic theory; and the most interesting discussion I know of concerning what Putnam’s notorious “just more theory” argument might amount to.

The “just more theory” move is supposed to be the following. It’s familiar that you can preserve sensible truth conditions by assigning wildly permuted reference-schemes to language (see my other recent posts for more details and links). But, prima facie, these permuted reference schemes are going to vitiate some plausible conditions of what it takes for a term to refer to something (e.g. that the object be causally connected to the term). Now, some theorists of meaning don’t build causal constraints into their metasemantic account. Davidson, early Lewis and the view Putnam describes as “standard” in his early paper, are among these (I call these “interpretationisms” elsewhere). But the received view, I guess, is to assume that some such causal constraint will be in play.

Inscrutability argument dead-in-the-water? No, says Putnam. For look! the permuted interpretation has the resources to render true sentences like “reference is a relation which is causally constrained”. For just as, on the permuted interpretation, “reference” will be assigned as semantic value some weirdo twisted relation Reference*, so on the same interpretation “causation” will be assigned some weirdo twisted relation Causation*. And it’ll turn out to be true that Reference* and Causation* match up in the right way. So (you might think) how can metasemantic theories rule in favour of the sensible interpretation over this twisted one? For no matter which of these we imagine to be the real interpretation of our language, everything we say will come out true.

Well, most people I speak to think this is a terrible argument. (For a particularly effective critique of Putnam—showing how badly things go if you allow him the “just more theory” move—see this paper by Tim Bays.) I’ll take it the reasons are pretty familiar (if not, Lewis’s “Putnam’s paradox” has a nice presentation of a now-standard response). Anyway, what’s interesting about Field’s paper is that it gives an alternative reading of Putnam’s challenge, which makes it much more interesting.

Let’s start by granting ourselves that we’ve got a theory which really has tied down reference pretty well. Suppose, for example, that we say “Billy” refers to Billy in virtue of appropriate causal connections between tokenings of that word and the person, Billy. The “Wild” inscrutability results threatened by permutation arguments simply don’t hold.

But now we can ask the following question: what’s special about that metasemantic theory you’re endorsing? Why should we be interested in Reference (=Causal relation C)? What if we tried to do all the explanatory work that we want semantics for, in terms of a different relation Reference*? We could then have a metasemantic* theory of reference*, which would explain that it is constrained to match a weirdo relation causation*. But notice that the relation “S expresses* proposition* p” (definable via reference*) and “S expresses proposition p” (definable via reference) are coextensional. Now, if all the explanatory work we want semantics to do (e.g. explaining why people make those sounds when they believe the world is that way) only ever makes appeal to what propositions sentences express, then there just isn’t any reason (other than convenience) to talk about semantic properties rather than semantic* ones.

The conclusion of these considerations isn’t the kind of inscrutability I’m familiar with. It’s not that there’s some agreed-upon semantic relation, which is somehow indeterminate. It’s rather that (the consideration urges) it’ll be an entirely thin and uninteresting matter that we choose to pursue science via appeal to the determinate semantic properties rather than the determinate semantic* properties. You might think of this as a kind of metasemantic inscrutability, in contrast to the more usual semantic inscrutability: setting aside mere convenience, there’s no reason why we ought to give this metasemantic theory rather than that one.

Now, let’s turn to a different kind of inscrutability challenge. For one reason or another, lots of people are very worried over whether we can really secure determinate quantification over an absolutely unrestricted domain. Just suppose you’re convinced that there are no abstracta. Suppose you’re very careful to never say anything that commits you to their existence. However, suppose you’re wrong: abstracta exist. Intuitively, when you say “There are no abstracta, and I’m quantifying over absolutely everything!” you’re speaking falsely. But this is only so if your quantifiers range over the abstracta out there as well as the concreta: and why should that be? In virtue of what can your word “everything” range over the unrestricted domain? After all, what you say would be true if I interpreted the word as ranging over only concreta. I’d just take you to be saying that no concreta exist (within your domain), and that you were quantifying over absolutely everything in your domain. Both of these are true, given that your domain happens to contain only concreta!

Bringing in causality doesn’t look like it helps here; nor would the form of reference-magnetism that Lewis endorsed, which demands that our predicates latch onto relatively natural empirical kinds. Ted Sider, in a paper he’s presenting at the RIP conference, advocates extending the Lewis point to make appeal to logical “natural kinds” (such as existence) at this point. However, let me sketch instead a variant of the Sider thought that seems more congenial to me (I’ll sketch at the end how to transfer it back).

My take on Lewis’s theory is the following. First, identify a “meaning building language”. This will contain only predicates for empirical natural kinds, plus some other stuff (quantifiers, connectives, perhaps terms for metaphysically basic things such as mereological notions). Now, what it is for a semantic theory for a natural language to be the correct one, is for there to be a semantic theory phrased in the meaning-building language, which (a) assigns to sentences of the natural language truth-conditions which fit with actual patterns of assent and dissent; and (b) is as syntactically simple as possible. (I defend this take on what Lewis is doing here).

Now, clearly we need to use some logical resources in constructing the semantic theory. Which should we allow? Sider’s answer: the logically natural ones. But for the moment let’s suppose we don’t want to commit ourselves to logically natural kinds. Well, why don’t we just stipulate that the meaning-building language is going to contain this, that, and the next logical operator/connective? In the case of predicates, there’s the worry that our meaning-building language should contain predicates for all the empirical kinds there are or could be: since we don’t know what these are, we need to give a general definition such as “the meaning building language will contain predicates for all and only natural kinds”. But there seems no comparable reason not simply to lay it down that “the meaning building language will contain negation, conjunction and the existential quantifier”.

Indeed, we could go one further, and simply stipulate that the existential quantifier it contains is the absolutely unrestricted one. The effect will be just like the one Sider proposes: this metasemantic proposal has a built-in-bias towards ascribing truly unrestricted generality to the quantifiers of natural language, because it is syntactically simpler to lay down clauses for such quantifiers in the meaning-building language, than for the restricted alternatives. You quantify over everything, not just concreta, because the semantic theory that ascribes you this is more eligible than one that doesn’t, where eligibility is a matter of how simple the theory is when formulated in the meaning-building language just described.

Ok. So finally finally I get to the point. It seems to me that Field’s form of Putnam’s worries can be put to work here too. Let’s grant that the metasemantic theory just described delivers the right results about semantic properties of my language; and shows my unrestricted quantification to be determinate. But why choose just that metasemantic theory? Why not, for example, describe a metasemantic theory where semantic properties are determined by syntactic simplicity of a semantic theory in a meaning building language where the sole existential quantifier is restricted to concreta? Maybe we should grant that our way picks out the semantic properties: but we’ve yet to be told why we should be interested in the semantic properties, rather than the semantic* properties delivered by the rival metasemantic theory just sketched. Metasemantic inscrutability threatens once more.

(I think the same challenge can be put to the Sider-style proposal: e.g., consider the Lewis* metasemantic theory whereby the meaning-building language contains expressions for all those entities (of whatever category) which are natural*: i.e. are the intersection of genuinely natural properties (empirical or logical) with restricted domain D.)

I have suspicions that metasemantic inscrutability will turn out to be a worrying thing. That’s a substantive claim: but it’s got to be a matter for another posting!

(Major thanks here go to Andy and Joseph for discussions that shaped my thoughts on this stuff; though they are clearly not to be blamed.)

## Rigidity and inscrutability

In response to something Dan asks in the comments in the previous post, I thought it might be worth laying out one reason why I’m thinking about “rich” forms of rigidity at the moment.

Vann McGee published a paper on inscrutability of reference recently. The part of it I’m particularly interested in deals with the permutation argument for radical inscrutability. The idea of the permutation arguments, in brief, is: twist the assignments of reference to terms as much as you like. By making compensating twists to the assignments of extensions to predicates, you can make sure the twists “cancel out” so that the distribution of truth values among whole sentences matches exactly the “intended interpretation”. So (big gap) there’s no fact of the matter whether the twisted-interpretation or rather the intended-interpretation is the correct description of the semantic facts. (For details (ad nauseam) see e.g. this stuff)

Anyway, Vann McGee is interested in extending this argument to the intensional case. V interesting to me, since I’d been thinking about that too. I started to get worried when I saw that McGee argued that permutation arguments go wrong when you extend them to the intensional case. That seemed bad, coz I thought I’d proved a theorem that they go over smoothly to really rich intensional settings (ch.5, in the above). And, y’know, he’s Vann McGee, and I’m not, so the default assumption was that he wins!

But actually, I think what he was saying doesn’t call into question the technical stuff I was working on. What it does is show that the permuted interpretations that I construct do strange things with rigidity. Hence my now wanting to think about rigidity a little more.

McGee’s nice point is this: if you permute the reference scheme wrt each world in turn, you end up disrupting facts about rigidity. To illustrate, suppose that A is the actual world, and W a non-actual one. Choose a permutation for A that sends Billy to the Taj Mahal, and a permutation for W that sends Billy to the Great Wall of China. Then the permuted interpretation of the language will assign to “Billy” an intension that maps A to the Taj Mahal, and W to the Great Wall of China. In the familiar way, we make compensating twists to the extension of each predicate wrt each world, and the intensions of sentences turn out invariant. But of course, “Billy” is no longer a rigid designator.
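McGee’s point can be checked with a tiny model (a sketch of my own, using a two-world toy language): permute the domain differently at each world, compensate in the predicate’s extensions, and the truth value of “Billy is wise” at each world is untouched even though “Billy” now picks out different things at different worlds.

```python
# Two worlds and a tiny domain. Intended interpretation: "Billy" rigidly
# denotes Billy, and "wise" gets an extension at each world.
WORLDS = ["A", "W"]                      # A = actual, W = non-actual
ref = {"A": "Billy", "W": "Billy"}       # a rigid name
wise = {"A": {"Billy"}, "W": set()}      # extension of "wise" at each world

# A different permutation of the domain at each world, as in McGee's construction
perm = {
    "A": {"Billy": "TajMahal", "TajMahal": "Billy", "GreatWall": "GreatWall"},
    "W": {"Billy": "GreatWall", "GreatWall": "Billy", "TajMahal": "TajMahal"},
}

# Twist the name world-by-world, and compensate in the predicate's extensions
ref2 = {w: perm[w][ref[w]] for w in WORLDS}
wise2 = {w: {perm[w][o] for o in wise[w]} for w in WORLDS}

def true_at(w, name_intension, extension):
    # Truth of "Billy is wise" at world w, relative to an interpretation
    return name_intension[w] in extension[w]

# The sentence's intension is invariant under the twist...
invariant = all(true_at(w, ref, wise) == true_at(w, ref2, wise2) for w in WORLDS)
# ...but "Billy" is no longer rigid: it maps A to the Taj Mahal, W to the Great Wall
non_rigid = ref2["A"] != ref2["W"]
```

The compensating twist on “wise” is just the usual permutation-argument move, applied world by world; it’s the world-by-world part that breaks rigidity.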

(McGee offers this as one horn of a dilemma concerning how you extend the permutation argument to the intensional case. The other horn concerns permuting the reference scheme for all worlds at once, with the result that you end up assigning objects as the reference of e in w, when that object doesn’t exist in w. I’ve also got thoughts about that horn, but that’s another story).

McGee’s dead right, and when I looked at (one form of) my recipe for extending the permutation argument to what I called the “Carnapian” intensional case, I saw that this is exactly what I got. However, the substantial question is whether or not the non-rigidity of “Billy” on the permuted interpretation gives you any reason to rule out that interpretation as “unintended”. And this question obviously turns on the status of rigidity in the first place.

Now, if the motivation for thinking names rigid were just that assigning names rigid extensions allows us to assign the right truth conditions to “Billy is wise”, then it looks like the McGee point has little force against the permutation argument. Because the permuted interpretation does just as well at generating the right truth conditions! So what we should conclude is that it becomes inscrutable whether or not names are rigid: the argument that names are rigid is undermined.

However, maybe there’s something deeper and spookier about rigidity, above and beyond getting-the-truth-conditions-right. Maybe, I thought, that’s what people are onto with the de jure rigidity stuff. And anyway, it’d be nice to get clear on all the motivations for rigidity that are in the air, to see whether we could get some (perhaps conditional) McGee-style argument against permutation inscrutability going.

p.s. one thing that I certainly hadn’t realized before reading McGee, was that the permuted interpretations I was offering as part of an inscrutability argument had non-rigid variables! As McGee points out, unless this were the case, you’d get the wrong results for sentences where a quantifier scopes over a modal operator. I hadn’t clicked this, since I was working with Lewis’s general-semantics system, where variables are handled via an extra intensional index: it had quite passed me by that I was doing something so kooky to them. You live and learn!