Emergence, Supervenience, and Indeterminacy

While Ross Cameron, Elizabeth Barnes and I were up in St Andrews a while back, Jonathan Schaffer presented one of his papers arguing for Monism: the view that the whole is prior to the parts, and the world is the one “fundamental” object.

One interesting argument along the way was that contemporary physics supports the priority of the whole, at least to the extent that properties of some systems can’t be reduced to properties of their parts. People certainly speak that way sometimes. Here, for example, is Tim Maudlin (quoted by Schaffer):

The physical state of a complex whole cannot always be reduced to those of its parts, or to those of its parts together with their spatiotemporal relations… The result of the most intensive scientific investigations in history is a theory that contains an ineliminable holism. (1998: 56)

The sort of case that supports this is when, for example, a quantum system featuring two particles determinately has zero total spin. The issue is that there also exist systems that duplicate the intrinsic properties of the parts of this system, but which do not have the zero-total-spin property. So the zero-total-spin property doesn’t appear to be fixed by the properties of the system’s parts.

Thinking this through, it seemed to me that one can systematically construct such cases for “emergent” properties if one is a believer in ontic indeterminacy of whatever form (and thinks of it the way that Elizabeth and I would urge you to). For example, suppose you have two balls, both indeterminate between red and green. Compatibly with this, it could be determinate that the fusion of the two is uniform; and, equally compatibly, it could be determinate that the fusion of the two is variegated. The distributional colour of the whole doesn’t appear to be fixed by the colour-properties of the parts.

I also wasn’t sure I believed in the argument, so posed. It seems to me that one can easily reductively define “uniform colour” in terms of properties of the parts: for a whole to have uniform colour, there must be some colour such that each of its parts has that colour. (Notice that here, no irreducible colour-predications of the whole are involved.) And surely properties you can reductively define in terms of F, G, H are paradigmatically not emergent with respect to F, G and H.

What seems to be going on is not a failure of properties of the whole to supervene on the total distribution of properties among its parts, but rather a failure of the total distribution of properties among the parts to supervene on the simple atomic facts concerning those parts.

That’s really interesting, but I don’t think it supports emergence, since I don’t see why someone who wants to believe that only simples instantiate fundamental properties should be debarred from appealing to distributions of those properties: for example, that they are not both red, and not both green (this fact will suffice to rule out the whole being uniformly coloured). Minimally, if there’s a case for emergence here, I’d like to see it spelled out.
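The point about distributions can be made concrete with a toy model. Here precisifications are just assignments of colours to the two balls; the whole setup (names, the two-precisification scenarios) is my own illustrative construction, not anything from the post’s sources:

```python
# Two balls, each indeterminate between red and green. A "scenario" is a
# list of precisifications; a precisification assigns a colour to each ball.

# Scenario A: determinately uniform fusion (balls always match).
A = [{"a": "red", "b": "red"}, {"a": "green", "b": "green"}]
# Scenario B: determinately variegated fusion (balls never match).
B = [{"a": "red", "b": "green"}, {"a": "green", "b": "red"}]

def indeterminate_colour(scen, ball):
    # A ball's colour is indeterminate iff it varies across precisifications.
    return len({p[ball] for p in scen}) > 1

def determinately_uniform(scen):
    return all(len(set(p.values())) == 1 for p in scen)

def determinately_variegated(scen):
    return all(len(set(p.values())) > 1 for p in scen)

# The atomic colour facts about each part, taken singly, match across scenarios:
for ball in ("a", "b"):
    assert indeterminate_colour(A, ball) and indeterminate_colour(B, ball)

# Yet the fusion's distributional colour differs determinately:
assert determinately_uniform(A)
assert determinately_variegated(B)

# What differs is the distribution facts themselves: in A it's determinate
# that the balls match, in B that they don't.
assert all(p["a"] == p["b"] for p in A)
assert all(p["a"] != p["b"] for p in B)
```

Scenarios A and B agree on every atomic fact (and atomic-indeterminacy fact) about each ball taken singly, but differ over the distribution facts; and it’s those facts that fix whether the fusion is uniformly coloured, as the reductive definition predicts.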

If that’s right though, then application of supervenience tests for emergence have to be handled with great care when we’ve got things like metaphysical indeterminacy flying around. And it’s just not clear anymore whether the appeal in the quantum case with which we started is legitimate or not.

Anyway, I’ve written up some of the thoughts on this in a little paper.

Fundamental and derivative truths

I’ve posted a new version of my paper “Fundamental and derivative truths”. The new version notes a few more uses for the fundamental/derivative distinction, and clears up a few points.

As before, the paper is concerned with a way of understanding the—initially pretty hard to take—claim that tables exist, but don’t really exist. I think that that claim at least makes good sense, and arguably the distinction between what is really/fundamentally the case and what is merely the case is something we should believe in whether or not we endorse the particular claim about tables. I think in particular that it leads to a particularly attractive view on the nature of set theory, since it really does seem that we do want to be able to “postulate sets into existence” (y’know how things form sets? well consider the set of absolutely everything. On pain of contradiction that set can’t be something that existed beforehand…) The framework I like lets us make sober sense of that.

The current version tidies up a bunch of things: it pinpoints more explicitly the difference between comparatively “easy cases”—defending the compatibility of set-theoretic truths with a nominalist ontology—and “hard cases”—defending the compatibility of the Moorean corpus with a microphysical mereological nihilist ontology. I’ve got another paper focusing on some of the technicalities of the composition case.

This project causes me much grief, since it involves many many different philosophically controversial areas: philosophy of maths, metaphysics of composition, theory of ontological commitment, philosophy of language and in particular metasemantics, and so forth. That makes it exciting to work on, but hard to present to people in a digestible way. Nevertheless, I’m going to have another go at the CSMN workshop in Oslo later this month, focusing on the philosophy of language/theory of meaning aspects.

A couple of bits of news.

First, I’ve finished a (much extended) draft of the reply I gave to Hugh Mellor’s paper “Microcomposition” at the Leeds RIP Being conference (the name still amuses: that’s the Royal Institute of Philosophy, folks, not a metametaphysical jibe). The paper’s called “Working parts” and presents some arguments against the view that mereological relations are metaphysically primitive. Hugh’s position is that they should be analyzed in terms of locational and causal relations, and I think there’s a lot to be said for that view. Comments, as ever, very welcome. The paper is available here.

Second, from the end of this month I’m going to be taking over as secretary of the Analysis Committee. The trust does all sorts of good things, from awarding Analysis studentships to giving out conference grants; and, of course, its members are the figures in the background of the fantastic journal Analysis. I’m really excited to be involved.

A puzzle about supervenience arguments for dualism

Suppose there’s a qualitative duplicate of the actual world (it might be a world with haecceitistic differences from the actual one, but it doesn’t have to be). Call the actual world A, and its duplicate, B.

I’m conscious in world A. Call the extension at the actual world of the things which are conscious S. There are cauliflowers in world B. Call the extension at B of the things which are cauliflowers, S*. Now consider the gruesome intension cauli-consc, which has S as its extension at world A, and S* as its extension in world B (it doesn’t matter what its extension is in other worlds: maybe it applies to all and only conscious cauliflowers).

Is there a property that things have iff they are cauli-consc? So long as “property” is intended in an ultra-lightweight sense (a sense in which any old possible-worlds intension corresponds to a property) then there shouldn’t be any trouble with this.

However. Cauli-consc is a property that doesn’t supervene on the pattern of instantiation of fundamental physical properties. After all, A and B are alike in all physical respects. But they differ as to where cauli-consc is instantiated.

Cauli-consc is a property, instantiated in the actual world, that doesn’t supervene on physical properties! Does that mean that the fact that I’m cauli-consc is a “further fact about our world, over and above the physical facts” (Chalmers 1996, p. 123)? That is, do we have to say that, if there are such qualitative duplicates of the actual world, then materialism is shown to be wrong by cauli-consc?

Surely not. But the interesting question is: if some properties (like cauli-consc) can fail to supervene on the physical features of the world, what is it that blocks the inference from failure of supervenience on physical features of the world to the refutation of materialism? For what principled reason is this property “bad”, such that we can safely ignore its failure to supervene?

Here’s a way to put the general worry I’m having. Supervenience physicalism is often formulated as follows (from Lewis, I believe): any physical duplicate of the actual world is a duplicate simpliciter. But if duplication is understood (again following Lewis) as the sharing of natural properties by corresponding parts, then to get a counterexample to physicalism you’d need not only to demonstrate that a certain property fails to supervene on the physical features of the world, but also that some natural property fails to supervene: otherwise you won’t get a failure of duplication among physical duplicates. The case of cauli-consc is supposed to dramatize the gap here. Sometimes it looks like you can get properties which fail to supervene, but which don’t seem to threaten materialism.

However, when you look at the failure-to-supervene arguments for dualism, you find that people stop once they take themselves to have established that a given property fails to supervene, without showing, in addition, that some natural property does so. (For example, Chalmers 1996, p. 132 assumes that it’s enough to show that the 1-intension of “consciousness” fails to supervene, without also arguing that it’s a natural property.)

Now, I think in particular cases I can see how to run the arguments to address this issue. Add as a premise that, e.g., the 1-intensions of the words of our language supervene on the total qualitative character of the world, so that we’re guaranteed that if there’s a world in which “1-consciousness” is instantiated and another where it isn’t, those can’t be qualitative duplicates. If we now find a failure of 1-consciousness to supervene on physical features of the world, we’ll be able to argue for the existence of physical duplicate worlds differing over 1-consciousness, which we now know can’t be qualitative duplicates. (In effect, the suggestion is that the sense in which cauli-consc is bad is exactly that it fails to supervene on the total qualitative state of the world.)

That all seems reasonable to me, but it does start to add potentially deniable premises to the argument against materialism. (For example, I’m not sure it should be uncontroversial that consciousness supervenes on the total qualitative state of the world. Is it really so clear, for example, that there are no haecceitistic elements to consciousness: that a world containing me might contain a conscious being, while a qualitative duplicate containing some other individual doesn’t?)

So I’m not sure whether the elaboration of the Zombie argument for dualism I’ve just sketched is the way Chalmers et al want to go. I’d be interested to know how they have/would respond (references welcome, as ever).

Metametaphysics in Barcelona/some distinctions (x-post from MV)

Logos are holding a meta-metaphysics conference in Barcelona in 2008. The CFP is now out, with a deadline of April 1st 2008.

I went to a Logos conference back in 2005, when I was just finishing up as a graduate student. It was a great experience: Barcelona is an amazing city to be in, Logos were fantastic hosts, and the conference was full of interesting people and talks. I also had what was possibly the best meal of my life at the conference dinner. This time, the format is preread, which I’ve really enjoyed in the past.

Here’s a quick note on the “metametaphysics” stuff. Following the Boise conference on this stuff, it seemed to me that under the label “metametaphysics” go a number of interesting projects that need a bit of disentangling. Here’s three, for starters.

First, there’s the “terminological disputes” project. Consider a first-order metaphysical question like: “under what circumstances do some things make up a further thing?” (van Inwagen’s special composition question). This project notes the range of seemingly rival answers to the question (all the time! some of the time! never!) and asks whether there’s any genuine disagreement between the rival views (and if so, what sort of disagreement this is). The guiding question here is: under what conditions is a metaphysical/philosophical debate merely terminological (or whatever).

Note that the question here really doesn’t look like it has much to do with metametaphysics per se, as opposed to metaphilosophy in general. Metaphysics is just a source of case studies, in the first instance. Of course, it might turn out that metaphysics turns out to be full of terminological disputes, whereas phil science or epistemology or whatever isn’t. But equally, it might turn out that metaphysics is all genuine, whereas e.g. the Gettier salt mines are full of terminological disputes.

In contrast to this, there’s the “first-order metametaphysics” (set of) project(s). This’d take key notions that are often used as starting points/framework notions for metaphysical debates, and reflect philosophically upon those. E.g.: (1) The notion of naturalness as used by Lewis. Is there such a notion? If so, are there natural quantifiers and objects and modifiers as well as natural properties? Does appeal to naturalness commit one to realism about properties, or can something like Sider’s operator-view of naturalness be made to work? (2) Ontological commitment. Is Armstrong right that (at least in some cases) to endorse a sentence “A is F” is to commit oneself to F-ness, as well as to things which are F? Might the ontological commitments of our theories be far less than Quine would have us believe (as some suggest)? (3) The unrestricted existential quantifier. Is there a coherent such notion? How should its semantics be given? Is such a quantifier a Tarskian logical constant?

These debates might interest you even if you have no interesting thoughts in general about how to demarcate genuine vs. terminological disputes. Thinking about this stuff looks like it can be carried out in very much first-order terms, with rival theories of a key notion (naturalness, say) proposed and evaluated. Of course, this sort of first-order examination might be a particularly interesting kind of first-order philosophy to one engaged in the terminological disputes project.

The third sort of project we might call “anti-Quine/Lewis metametaphysics”. You might think the following. In recent years, there’s been a big trend for doing metaphysics with a Realist backdrop; in particular, the way that Armstrong and Lewis invite us to do metaphysics has been very influential among the young and impressionable. A bunch of presuppositions have become entrenched, e.g. a Quinean view of ontological commitment, the appeal to naturalness etc. So, without in the first instance attacking these presuppositions, one might want to develop an alternative framework in comparable detail which allows the formulation of alternatives. One natural starting point is to go with neoCarnapian thoughts about what the right thing to say about the SCQ is (e.g. it can be answered by stipulation). That sort of line is incompatible with the sort of view on these questions that Quine and Lewis favour. What’s the backdrop relative to which it makes sense? What are the crucial Quine-Lewis assumptions that need to be given up?

Now, this sort of project differs from the first kind of project in being (a) naturally restricted to metaphysics; and (b) not committed to any sort of demarcation of terminological disputes vs. genuine disputes. It differs from the second kind of project, since, at least in the first instance, we needn’t assume that the differences between the frameworks will reduce to different attitudes to ontological commitment, or naturalness, or whatever. On the other hand, it’s attractive to look for some underlying disagreement over the nature of ontological commitment, or naturalness, or whatever, to explain how the worldviews differ. So it may well be that a project of this kind leads to an interest in the first-order metametaphysics projects.

I’m not sure that these projects form a natural philosophical kind. What does seem to be right is that investigation of one might lead to interest in the others. There’s probably a bunch more distinctions to be drawn, and the ones I’ve pointed to probably betray my own starting points. But in my experience of this stuff, you do find people getting confused about the ambition of each other’s projects, and dismissing the whole field of metametaphysics because they identify it with some one of the projects that they themselves don’t find particularly interesting, or regard as hard to make progress with. So it’d probably be helpful if someone produced an overview of the field that teased the various possible projects apart (references anyone?).

Williamson on vague states of affairs

In connection with the survey article mentioned below, I was reading through Tim Williamson’s “Vagueness in reality”. It’s an interesting paper, though I find its conclusions very odd.

As I’ve mentioned previously, I like a way of formulating claims of metaphysical indeterminacy that’s semantically similar to supervaluationism (basically, we have ontic precisifications of reality, rather than semantic sharpenings of our meanings. It’s similar to ideas put forward by Ken Akiba and Elizabeth Barnes).

Williamson formulates the question of whether there is vagueness in reality, as the question of whether the following can ever be true:

(EX)(Ex)Vague[Xx]

Here X is a property-quantifier, and x an object quantifier. His answer is that the semantics force this to be false. The key observation is that, as he sets things up, the value assigned to a variable at a precisification and a variable assignment depends only on the variable assignment, and not at all on the precisification. So at all precisifications, the same value is assigned to the variable. That goes for both X and x; with the net result that if “Xx” is true relative to some precisification (at the given variable assignment) it’s true at all of them. That means there cannot be a variable assignment that makes Vague[Xx] true.

You might think this is cheating. Why shouldn’t variables receive different values at different precisifications (formally, it’s very easy to do)? Williamson says that, if we allow this to happen, we’d end up making things like the following come out true:

(Ex)Def[Fx&~Fx’]

It’s crucial to the supervaluationist’s explanatory programme that this come out false (it’s supposed to explain why we find the sorites premise compelling). But consider a variable assignment to x which at each precisification maps x to the object which marks the F/non-F cutoff relative to that precisification. It’s easy to see that on this “variable assignment”, Def[Fx&~Fx’] comes out true, underpinning the truth of the existential.

Again, suppose that we were taking the variable assignment to X to be a precisification-relative matter. Take some object o that intuitively is perfectly precise. Now consider the assignment to X that maps X at precisification 1 to the whole domain, and X at precisification 2 to the null set. Consider “Vague[Xx]”, where o is assigned to x at every precisification, and the assignment to X is as above. The sentence will be true relative to these variable assignments, and so we have “(EX)Vague[Xx]” relative to an assignment of o to x which is supposed to “say” that o has some vague property.

Although Williamson’s discussion is about the supervaluationist, the semantic point equally applies to the (pretty much isomorphic) setting that I like, and which is supposed to capture vagueness in reality. If one makes the variable assignments non-precisification relative, then trivially the quantified indeterminacy claims go false. If one makes the variable assignments precisification-relative, then it threatens to make them trivially true.
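The dilemma can be made vivid with a toy model (my own sketch; the two-precisification setup and the names are made up for illustration). A sentence counts as Vague iff it is true on some precisification and false on another:

```python
# Toy model of Williamson's dilemma about variable assignments.

PRECISIFICATIONS = ["p1", "p2"]
DOMAIN = ["o"]  # a single, intuitively perfectly precise object

def vague(truth_at):
    # truth_at: precisification -> bool. Vague iff truth-value varies.
    return {truth_at(p) for p in PRECISIFICATIONS} == {True, False}

# Horn 1: assignments ignore precisifications, so "Xx" gets the same
# value everywhere and Vague[Xx] is false on EVERY assignment.
def rigid_assignment(X_extension, x):
    return lambda p: x in X_extension

assert not vague(rigid_assignment(frozenset(["o"]), "o"))
assert not vague(rigid_assignment(frozenset(), "o"))

# Horn 2: precisification-relative assignments. We can cook up an
# assignment making Vague[Xx] true even for the precise object o:
# X is mapped to the whole domain at p1 and the empty set at p2.
def relative_assignment(X_extension_at, x):
    return lambda p: x in X_extension_at[p]

cooked = {"p1": frozenset(DOMAIN), "p2": frozenset()}
assert vague(relative_assignment(cooked, "o"))
```

On the first option the quantified indeterminacy claims go trivially false; on the second, the cooked-up assignment makes them trivially true, which is the two-horned result the post describes.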

The thought I have is that the problem here is essentially one of mixing up abundant and natural properties. At least for property-quantification, we should go for the precisification-relative notion. It will indeed turn out that “(EX)Vague[Xx]” will be trivially true for every choice of x. But that’s no more surprising than the analogous result in the modal case: quantifying over abundant properties, it turns out that every object (even things like numbers) has a great range of contingent properties: being such that grass is green, for example. Likewise, in the vagueness case, everything has a great many vague properties: being such that the cat is alive, for example (or whatever else is your favourite example of ontic indeterminacy).

What we need to get a substantive notion is to restrict these quantifiers to interesting properties. So, for example, the way to ask whether o has some vague sparse property is to ask whether the following is true: “(EX:Natural(X))Vague[Xx]”. The extrinsically specified properties invoked above won’t count.

If the question is formulated in this way, then we can’t read off from the semantics whether there will be an object and a property such that it is vague whether the former has the latter. For this will turn, not on the semantics for quantifiers alone, but upon which among the variable assignments correspond to natural properties.

Something similar goes for the case of quantification over states of affairs. (ES)Vague[S] would be either vacuously true or vacuously false, depending on the semantics we assign to the variable “S”. But if our interest is in whether there are sparse states of affairs such that it is vague whether they obtain, what we should do is, e.g., let the assignment of values to S be functions from precisifications to truth values, and then ask the question:

(ES:Natural(S))Vague[S].

Where a function from precisifications to truth values is “natural” if it corresponds to some relatively sparse state of affairs (e.g. there being a live cat on the mat). So long as there’s a principled story about which states of affairs these are (and it’s the job of metaphysics to give us that) everything works fine.

A final note. It’s illuminating to think about the exactly analogous point that could be made in the modal case. If values are assigned to variables independently of the world, we’ll be able to prove that the following is never true on any variable assignment:

Contingently[Xx].

Again, the extensions assigned to X and x are non-world dependent, so if “Xx” is true relative to one world, it’s true at them all. Is this really an argument that there is no contingent instantiation of properties? Surely not. To capture the intended sense of the question, we have to adopt something like the tactic just suggested: first allow world-relative variable assignment, and then restrict the quantifiers to the particular instances of this that are metaphysically interesting.

Ontic vagueness

I’ve been frantically working this week on a survey article on metaphysical indeterminacy and ontic vagueness. Mind bending stuff: there really is so much going on in the literature, and people are working with *very* different conceptions of the thing. Just sorting out what might be meant by the various terms “vagueness de re”, “metaphysical vagueness”, “ontic vagueness”, “metaphysical indeterminacy” was a task (I don’t think there are any stable conventions in the literature). And that’s not to mention “vague objects” and the like.

I decided in the end to push a particular methodology, if only as a stalking horse to bring out the various presuppositions that other approaches will want to deny. My view is that we should think of “indefinitely” roughly parallel to the way we do “possibly”. There are various disambiguations one can make: “possibly” might mean metaphysical possibility, epistemic possibility, or whatever; “indefinitely” might mean linguistic indeterminacy, epistemic unclarity, or something metaphysical. To figure out whether you should buy into metaphysical indeterminacy, you should (a) get yourself in a position to at least formulate coherently theories involving that operator (i.e. specify what its logic is); and (b) run the usual Quinean cost/benefit analysis on a case-by-case basis.

The view of metaphysical indeterminacy most opposed to this is one that would identify it strongly with vagueness de re, paradigmatically there being some object and some property such that it is indeterminate whether the former instantiates the latter (this is how Williamson seems to conceive of matters in a 2003 article). If we had some such syntactic criterion for metaphysical indeterminacy, perhaps we could formulate everything without postulating a plurality of disambiguations of “definitely”. However, it seems that this de re formulation would miss out some of the most paradigmatic examples of putative metaphysical vagueness, such as the de dicto formulation: It is indeterminate whether there are exactly 29 things. (The quantifiers here to be construed unrestrictedly).

I also like to press the case against assuming that all theories of metaphysical indeterminacy must be logically revisionary (endorsing some kind of multi-valued logic). I don’t think the implication works in either direction: multi-valued logics can be part of a semantic theory of indeterminacy; and some settings for thinking about metaphysical indeterminacy are fully classical.

I finish off with a brief review of the basics of Evans’ argument, and the sort of arguments (like the one from Weatherson in the previous post) that might convert metaphysical vagueness of apparently unrelated forms into metaphysically vague identity, arguably susceptible to Evans’ argument.

From vague parts to vague identity

(Update: as Dan notes in the comment below, I should have clarified that the initial assumption is supposed to be that it’s metaphysically vague what the parts of Kilimanjaro (Kili) are. Whether we should describe the conclusion as deriving a metaphysically vague identity is a moot point.)

I’ve been reading an interesting argument that Brian Weatherson gives against “vague objects” (in this case, meaning objects with vague parts) in his paper “Many many problems”.

He gives two versions. The easiest one is the following. Suppose it’s indeterminate whether Sparky is part of Kili, and let K+ and K- be the usual minimal variations of Kili (K+ differs from Kili only in determinately containing Sparky, K- only by determinately failing to contain Sparky).

Further, endorse the following principle (scp): if A and B coincide mereologically at all times, then they’re identical. (Weatherson’s other arguments weaken this assumption, but let’s assume we have it, for the sake of argument).

The argument then runs as follows:
1. either Sparky is part of Kili, or she isn’t. (LEM)
2. If Sparky is part of Kili, Kili coincides at all times with K+ (by definition of K+)
3. If Sparky is part of Kili, Kili=K+ (by 2, scp)
4. If Sparky is not part of Kili, Kili coincides at all times with K- (by definition of K-)
5. If Sparky is not part of Kili, Kili=K- (by 4, scp).
6. Either Kili=K+ or Kili=K- (1, 3,5).

At this point, you might think that things are fine. As my colleague Elizabeth Barnes puts it in this discussion of Weatherson’s argument, you might simply think at this point that only the following has been established: that it is determinate that either Kili=K+ or Kili=K-, but that it is indeterminate which.

I think we might be able to get an argument for this. First of all, presumably all the premises of the above argument hold determinately. So the conclusion holds determinately. We’ll use this in what follows.

Suppose that D(Kili=K+). Then it would follow that Sparky was determinately a part of Kili, contrary to our initial assumption. So ~D(Kili=K+). Likewise ~D(Kili=K-).

Can it be that they are determinately distinct? If D(~Kili=K+), then assuming that (6) holds determinately, D(Kili=K+ or Kili=K-), we can derive D(Kili=K-), which contradicts what we’ve already proven. So ~D(~Kili=K+) and likewise ~D(~Kili=K-).

So the upshot of the Weatherson argument, I think, is this: it is indeterminate whether Kili=K+, and indeterminate whether Kili=K-. The moral: vagueness in composition gives rise to vague identity.
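Here’s a toy two-precisification model of that upshot (my own illustration): on one precisification Sparky is part of Kili, so Kili=K+; on the other she isn’t, so Kili=K-.

```python
# D(claim): determinately true = true on every precisification.
# A claim is indeterminate iff neither it nor its negation is determinate.

identity_facts = {
    "w+": {"Kili=K+": True,  "Kili=K-": False},  # Sparky part of Kili
    "w-": {"Kili=K+": False, "Kili=K-": True},   # Sparky not part of Kili
}
PREC = ["w+", "w-"]

def D(claim):
    return all(identity_facts[p][claim] for p in PREC)

def D_not(claim):
    return all(not identity_facts[p][claim] for p in PREC)

def indeterminate(claim):
    return not D(claim) and not D_not(claim)

# Conclusion (6) holds determinately: on each precisification, one
# disjunct of "Kili=K+ or Kili=K-" is true.
assert all(identity_facts[p]["Kili=K+"] or identity_facts[p]["Kili=K-"]
           for p in PREC)

# Neither identity is determinate, and neither is determinately false:
assert indeterminate("Kili=K+")
assert indeterminate("Kili=K-")
```

The model just recapitulates the D-reasoning in the text: the disjunction is determinate, but each disjunct is indeterminate, which is the vague-identity upshot.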

Of course, there are well known arguments against vague identity. Weatherson doesn’t invoke them, but once he reaches (6) he seems to think the game is up, for what look to be Evans-like reasons.

My working hypothesis at the moment, however, is that whenever we get vague identity in the sort of way just illustrated (inherited from other kinds of ontic vagueness), we can wriggle out of the Evans reasoning without significant cost. (I go through some examples of this in this forthcoming paper). The over-arching idea is that the vagueness in parthood, or whatever, can be plausibly viewed as inducing some referential indeterminacy, which would then block the abstraction steps in the Evans proof.

Since Weatherson’s argument is supposed to be a general one against vague parthood, I’m at liberty to fix the case in any way I like. Here’s how I choose to do so. Let’s suppose that the world contains two objects, Kili and Kili*. Kili* is just like Kili, except that determinately, Kili and Kili* differ over whether they contain Sparky.

Now, think of reality as indeterminate between two ways: one in which Kili contains Sparky, the other where it doesn’t. What of our terms “K+” and “K-“? Well, if Kili contains Sparky, then “K+” denotes Kili. But if it doesn’t, then “K+” denotes Kili*. Mutatis mutandis for “K-“. Since it is indeterminate which option obtains, “K+” and “K-” are referentially indeterminate, and one of the abstraction steps in the Evans proof fails.

Now, maybe it’s built into Weatherson’s assumptions that the “precise” objects like K+ and K- exist, and perhaps we could still cause trouble. But I’m not seeing cleanly how to get it. (Notice that you’d need more than just the axioms of mereology to secure the existence of [objects determinately denoted by] K+ and K-: Kili and Kili* alone would secure the truth that there are fusions including Sparky and fusions not including Sparky). But at this point I think I’ll leave it for others to work out exactly what needs to be added…

The fuzzy link

Following up on one of my earlier posts on quantum stuff, I’ve been reading up on an interesting literature on relating ordinary talk to quantum mechanics. As before, caveats apply: please let me know if I’m making terrible technical errors, or if there’s relevant literature I should be reading/citing.

The topic here is GRW. This way of doing things, recall, involved random localizations of the wavefunction. Let’s think of the quantum wavefunction for a single particle system, and suppose it’s initially pretty wide. So the amplitude of the wavefunction pertaining to the “position” of the particle is spread out over a wide span of space. But, if one of the random localizations occurs, the wavefunction collapses into a very narrow spike, within a tiny region of space.

But what does all this mean? What does it say about the position of the particle? (Here I’m following the Albert/Loewer presentation, and ignoring alternatives, e.g. Ghirardi’s mass-density approach).

Well, one traditional line was that talk of position is only well-defined when the particle is in an eigenstate of the position observable. Since on GRW the particle’s wavefunction is pretty much spread all over space, on this view talk of a particle’s location would never be well-defined.

Albert and Loewer’s suggestion is that we alter the link. As previously, think of the wavefunction as giving a measure over different situations in which the particle has a definite location. Rather than saying x is located within region R iff the set of situations in which the particle lies in R is measure 1, they suggest that x is located within region R iff the set of situations in which the particle lies in R is almost measure 1. The idea is that even if not all of a particle’s wavefunction places it right here, the vast majority of it is within a tiny subregion here. On the Albert/Loewer suggestion, we get to say on this basis, that the particle is located in that tiny subregion. They argue also that there are sensible choices of what “almost 1” should be that’ll give the right results, though it’s probably a vague matter exactly what the figure is.

Peter Lewis points out oddities with this. One oddity is that conjunction-introduction will fail. It might be true that marble i is in a particular region, for each i between 1 and 100, and yet fail to be true that all these marbles are in the box.

Here’s another illustration of the oddities. Take a particle with a localized wavefunction. Choose some region R around the peak of the wavefunction which is minimal, such that enough of the wavefunction is inside for the particle to be within R. Then subdivide R into two pieces (the left half and the right half) such that the wavefunction is nonzero in each. The particle is within R. But it’s not within the left half of R. Nor is it within the right half of R (in each case by modus tollens on the Albert/Loewer biconditional). But R is just the sum of its left half and its right half. So either we’re committed to some very odd combination of claims about location, or something is going wrong with modus tollens.
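Both of Lewis’s oddities can be run through numerically. Here’s a toy Python sketch of the threshold rule; the threshold of 0.9 and all of the measures below are made-up illustrative figures, not anything Albert and Loewer commit to.

```python
# Toy model of the Albert/Loewer threshold rule. The threshold (0.9) and all
# the measures below are illustrative numbers only.

THRESHOLD = 0.9  # the assumed "almost measure 1" figure

def located_in(measure):
    """Albert/Loewer rule: the particle is within R iff the set of
    situations placing it in R has measure at least the threshold."""
    return measure >= THRESHOLD

# Lewis's marbles: each of 100 marbles is in the box to measure 0.95, but
# (assuming independence, for the sake of the toy model) the measure of
# ALL of them being in the box is 0.95**100, which is tiny.
each_marble = 0.95
assert all(located_in(each_marble) for _ in range(100))  # each is in the box
assert not located_in(each_marble ** 100)                # but not all of them

# The subdivided region: R carries measure 0.9, split evenly between its
# left and right halves.
measure_R = 0.9
measure_left = measure_right = measure_R / 2
print(located_in(measure_R))      # True: the particle is within R
print(located_in(measure_left))   # False: it is not within the left half
print(located_in(measure_right))  # False: nor within the right half
```

Any sharp threshold short of measure 1 will generate both failures in the same way; nothing hangs on the particular figures.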

So clearly this proposal is looking like it’s pretty revisionary of well-entrenched principles. While I don’t think it indefensible (after all, logical revisionism from science isn’t a new idea) I do think it’s a significant theoretical cost.

I want to suggest a slightly more general, and I think, much more satisfactory, way of linking up the semantics of ordinary talk with the GRW wavefunction. The rule will be this:

“Particle x is within region R” is true to degree equal to the wavefunction-measure of the set of situations where the particle is somewhere in region R.

On this view, then, ordinary claims about position don’t have a classical semantics. Rather, they have a degreed semantics (in fact, exactly the degreed-supervaluational semantics I talked about in a previous post). And ordinary claims about the location of a well-localized particle aren’t going to be perfectly true, but only almost-perfectly true.
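The rule can be sketched numerically. In the snippet below, the Gaussian packet and its width are illustrative stand-ins for a post-collapse GRW wavefunction (not GRW’s actual collapse kernel), and the degree of truth of a location claim is just the |psi|² measure the region carries.

```python
# A numerical sketch of the degreed semantics: the degree of truth of
# "particle x is within region R" is the wavefunction-measure |psi|^2
# that R carries. The Gaussian below is illustrative only.

import math

WIDTH = 1.0  # packet width, in arbitrary units (illustrative)

def density(x):
    """|psi(x)|^2 for a normalized Gaussian packet centred at 0."""
    return math.exp(-(x / WIDTH) ** 2) / (WIDTH * math.sqrt(math.pi))

def degree_within(a, b, n=100_000):
    """Degree of truth of 'x is within [a, b]': the measure |psi|^2
    assigns to [a, b], estimated by midpoint-rule integration."""
    dx = (b - a) / n
    return sum(density(a + (i + 0.5) * dx) * dx for i in range(n))

# Locating the particle within a few widths of the peak comes out
# almost-perfectly true, but not perfectly true:
print(degree_within(-3, 3))   # ~0.99998 (i.e. erf(3))
# A tighter location claim gets a lower degree:
print(degree_within(-1, 1))   # ~0.8427 (i.e. erf(1))
```

The point of the sketch is just that location claims come in a continuum of degrees, with claims about a well-localized particle sitting very close to, but short of, degree 1.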

Now, it’s easy but unwarranted to slide from “not perfectly true” to “not true”. The degree theorist in general shouldn’t concede that. It’s an open question for now how to relate ordinary talk of truth simpliciter to the degree-theorist’s setting.

One advantage of setting things up in this more general way is that we can take “off the peg” results about what sort of behaviour we can expect the language to exhibit. An example: it’s well known that if you have a classically valid argument in this sort of setting, then the degree of untruth of the conclusion cannot exceed the sum of the degrees of untruth of the premises. This amounts to a “safety constraint” on arguments: we can put a cap on how badly wrong things can go, though there’ll always be the phenomenon of slight degradations of truth value across arguments, unless we’re working with perfectly true premises. So there’s still some point in classifying arguments like conjunction introduction as “valid” on this picture, for that captures a certain kind of important information.
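The safety constraint can be checked for conjunction introduction in a toy model, with degrees of truth modelled as proportions of a finite set of equiprobable situations (an illustrative stand-in for the wavefunction measure; the particular numbers are made up).

```python
# Checking the "safety constraint" for conjunction introduction: the
# untruth of the conclusion can't exceed the sum of the untruths of the
# premises. Degrees are modelled as proportions of a finite set of
# equiprobable situations (illustrative numbers only).

SITUATIONS = set(range(1000))

def degree(truth_set):
    """Degree of truth: proportion of situations where the claim holds."""
    return len(truth_set & SITUATIONS) / len(SITUATIONS)

def untruth(truth_set):
    return 1 - degree(truth_set)

# Premises: "marble 1 is in the box" and "marble 2 is in the box", each
# true to degree 0.95, with only partly overlapping failure sets.
marble1_in = set(range(950))
marble2_in = set(range(50, 1000))
both_in = marble1_in & marble2_in  # conjunction introduction

# The conclusion is less true than either premise...
print(round(degree(both_in), 3))  # 0.9
# ...but its untruth is capped by the premises' untruths summed:
assert untruth(both_in) <= untruth(marble1_in) + untruth(marble2_in) + 1e-12
```

The cap holds however the failure sets overlap, since the situations where the conjunction fails are just the union of the situations where a conjunct fails.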

Say that the figure Albert and Loewer identified as sufficient for particle-location was 1-p. Then the way to generate something like the Albert and Loewer picture on this view is to identify truth with truth-to-degree-1-p. In the marbles case, the degrees of falsity of each premise “marble i is in the box” collectively “add up” in the conclusion to give a degree of falsity beyond the permitted limit. In the subdivided-region case the same mechanism is at work: neither half of R carries enough of the measure on its own, even though R itself does.

An alternative to the Albert-Loewer suggestion for making sense of ordinary talk is to go for a universal error-theory, supplemented with the specification of a norm for assertion. To do this, we allow the identification of truth simpliciter with truth-to-degree-1. Since ordinary assertions of particle location won’t be true to degree 1, they’ll be untrue. But we might say that such sentences are assertible provided they’re “true enough”: true to the Albert/Loewer figure of 1-p, for example. No counterexamples to classical logic would threaten (Peter Lewis’s cases would all be unsound, for example). Admittedly, a related phenomenon would arise: we’d be able to go by classical reasoning from a set of premises all of which are assertible to a conclusion that is unassertible. But there are plausible mundane examples of this phenomenon, as exhibited in the preface “paradox”, for example.

But I’d rather not go for either the error-theoretic approach or the identification of a “threshold” for truth, as the Albert-Loewer inspired proposal suggests. I think there are more organic ways to handle utterance-truth within a degree-theoretic framework. It’s a bit involved to go into here, but the basic ideas are extracted from recent work by Agustin Rayo, and involve only allowing “local” specifications of truth simpliciter, relative to a particular conversational context. The key thing is that on the semantic side, once we have the degree theory, we can take “off the peg” an account of how such degree theories interact with a general account of communication. So combining the degree-based understanding of what validity amounts to (in terms of limiting the creep of falsity into the conclusion) with this degree-based account of assertion, I think we’ve got a pretty powerful, pretty well understood overview of how ordinary language position-talk works.

Kripkenstein’s monster

Though I’ve thought a lot about inscrutability and indeterminacy (well, I wrote my PhD thesis on it) I’ve always run a bit scared from the literature on Kripkenstein. Partly this is because the literature is so huge and sometimes intimidatingly complex. Partly it’s because I was a bit dissatisfied/puzzled with some of the foundational assumptions that seemed to be around, and was setting it aside until I had time to think things through.

Anyway, I’m now thinking about making a start on thinking about the issue. So this post is something in the way of a plea for information: I’m going to set out how I understand the puzzle involved, and invite people to disabuse me of my ignorance, recommend good readings, or point out where these ideas have already been worked out.

To begin with, let’s draw a rough divide between three types of facts:

  (A) Paradigmatically naturalistic facts (patterns of assent and dissent, causal relationships, dispositions, etc.).
  (B) Meaning-facts. (Of the form: “+” means addition; “67+56=123” is true; “Dobbin” refers to Dobbin.)
  (C) Linguistic norms. (Of the form: one should utter “67+56=123” in such-and-such circs.)

Kripkenstein’s strategy is to ask us to show how facts of (A) can constitute facts of kind (B) and (C). (An oddity here: the debate seems to have centred on a “dispositionalist” account of the move from A to B. But that’s hardly a popular option in the literature on naturalistic treatments of content, where variants of radical interpretation (Lewis, Davidson), of causal (Fodor, Field) and teleological (Millikan) theories are far more prominent. Boghossian in his state of the art article in Mind seems to say that these can all be seen as variants of the dispositionalist idea. But I don’t quite understand how. Anyway…)

One of the major strategies in Kripkenstein is to raise doubts about whether this or that constitutive story can really ground facts of kind (C). Notice that if one assumes that (B) and (C) are a joint package, then this will simultaneously throw into doubt naturalistic stories about (B).

In what sense might they be a joint package? Well, maybe some sort of constraint like the following is proposed: unless putative meaning-facts make immediately intelligible the corresponding linguistic norms, then they don’t deserve the name “meaning facts” at all.

To see an application, suppose that some of Kripke’s “technical” objections to the dispositionalist position were patched (e.g. suppose one could non-circularly identify a disposition of mine to return the intuitively correct verdicts to each and every arithmetical sum). Still, there’s the “normative” objection: why are those verdicts the ones one should return in those circumstances? And (rightly or wrongly) the Kripkenstein challenge is that this normative explanation is missing. So (according to the Kripkean) these ain’t the meaning-facts at all.

There’s one purely terminological issue I’d like to settle at this point. I think we shouldn’t just build it into the definition of meaning-facts that they correspond to linguistic norms in this way. After all, there are lots of other theoretical roles for meaning besides supporting linguistic norms (e.g. a predictive/explanatory role with respect to understanding). I propose to proceed as follows. Firstly, let’s speak of “semantic” or “meaning” facts in general (picked out, if you like, via other aspects of the theoretical role of meaning). Secondly, we’ll look for arguments for or against the substantive claim that part of the job of a theory of meaning is to subserve, or make immediately intelligible, or whatever, facts like (C).

On to details. The Kripkenstein paradox looks like it proceeds on the following assumptions. First, three principles are taken as target (we can think of them as part of a “folk theory” of meaning):

  1. the meaning-facts are exactly as we take them to be: i.e. arithmetical truths are determinate “to infinity”; and
  2. the corresponding linguistic norms are determinate “to infinity” as well; and
  3. (1) and (2) are connected in the obvious way: if S is true, then in appropriate circumstances, we should utter S.

The “straight solutions” seem to tacitly assume that our story should take the following form. First, give some constitutive story about what fixes facts of kind (B). Then (supposing there are no obvious counterexamples, i.e. that the technical challenge is met) the Kripkensteinian looks to see whether this “really gives you meaning”, in the sense that we’ve also got a story underpinning (C). Given our earlier discussion, the Kripkensteinian challenge needs to be rephrased somewhat. Put the challenge as follows. First, the straight solution gives a theory of semantic facts, which is evaluated for success on grounds that set aside putative connections to facts of kind (C). Next, we ask: can we give an adequate account of facts of kind (C) on the basis of what we have so far? The Kripkensteinian suggests not.

The “sceptical solution” starts in the other direction. It takes as groundwork facts of kind (A) and (C) (perhaps explaining facts of kind (C) on the basis of those of kind (A)?) and then uses this in constructing an account of (something like) (B). One Kripkensteinian thought here is to base some kind of vindication of (B)-talk on the (C)-style claim that one ought to utter sentences involving semantic vocabulary such as “‘+’ means addition”.

The basic idea one should be having at this point is more general, however. Rather than start by assuming that facts like (B) are prior in the order of explanation to facts like (C), why not consider other explanatory orderings? Two spring to mind: linguistic normativity and meaning-facts are explained independently; or linguistic normativity is prior in the order of explanation to meaning-facts.

One natural thought in the latter direction is to run a “radical interpretation” line. The first element of a radical interpretation proposal is to identify a “target set” of T-sentences, which the meaning-fixing T-theory for a language is (cp) constrained to generate. Davidson suggests we pick the T-sentences by looking at what sentences people de facto hold true in certain circumstances. But, granted (C)-facts, when identifying the target set of T-sentences one might instead appeal to what persons ought to utter in such and such circs.

There’s no obvious reason why such normative facts need be construed as themselves “semantic” in nature, nor any obvious reason why the naturalistically minded shouldn’t look for reductions of this kind of normativity (it might be a normativity on a par with that involved in weak hypothetical imperatives, as in the claim that I should eat this food in order to stay alive, which I take to be pretty unscary). So there’s no need to give up on the reductionist project in doing things this way. Nor is it only radical interpretation that could build in this sort of appeal to (C)-type facts in the account of meaning.

One nice thing about building normativity into the subvening base for semantic facts in this way is that we make it obvious that we’ll get something like (a perhaps restricted and hedged) form of (3). Running accounts of (B) and (C) separately would make the convergence of meaning-facts and linguistic norms seem like a coincidence, if it in fact holds in any form at all.

Is there anything particularly sceptical about the setup, so construed? Not in the sense in which Kripke’s own suggestion is. Two things about the Kripke proposal (as I suggested we read it). First, it’s clear that we’ve got some kind of projectionist/quasi-realist treatment of the semantic going on (it’s only the acceptability of semantic claims that’s being vindicated, not “semantic facts” as most naturalistic theories of meaning would conceive them). Second, the sort of norms to which we can reasonably appeal will be grounded in practices of praise and blame in a linguistic community to which we belong, and given the sheer absence of people doing very long sums, there just won’t be a practice of praising and blaming people for uttering “x+y=z” for sufficiently large choices of x, y and z. The linguistic norms we can ground in this way might be much more restricted than one might at first think: maybe only finitely many sentences S are such that something of the following form holds: we should assert S in circs c. Though there might be norms governing apparently infinitary claims, there is no reason to suppose in this setup that there are infinitely many type-(C) facts. That’ll mean that (2) and (3) are dropped.

In sum, Kripke’s proposal is sceptical in two senses: it is projectionist, rather than realist, about meaning-facts. And it drops what one might take to be a central plank of folk-theory of meaning, (2) and (3) above.

On the other hand, the modified radical interpretation or causal theory proposal I’ve been sketching can perfectly well be a realist about meaning-facts, having them “stretch out to infinity” as much as you like (I’d be looking to combine the radical interpretation setting sketched earlier with something like Lewis’s eligibility constraints on correct interpretation, to secure semantic determinacy). So it’s not “sceptical” in the first sense in which Kripke’s theory is: it doesn’t involve any dodgy projectivism about meaning-facts. But it is a “sceptical solution” in the other sense, since it gives up the claims that linguistic norms “stretch out” to infinity, and that truth-conditions of sentences are invariably paired with some such norm.

[Thanks (I think) are owed to Gerald Lang for the title to this post. A quick google search reveals that others have had the same idea…]