Category Archives: Indeterminacy

More on norms

One of the things that’s confusing about truth norms for belief is how exactly they apply to real people—people with incomplete information.

Even if we work with “one should: believe p only if p is true”, the puzzle remains. After all, I guess we can each be pretty confident that we fail to satisfy the truth-norm. I’m confident that at least one of my current beliefs is untrue. I’m in preface-paradox-land, and there doesn’t seem any escape. It doesn’t feel like I’m criticizable in any serious way for being in this situation. What is the better option? (OK, you could say: switch to describing your doxastic state in terms of credences rather than all-or-nothing beliefs, but for now I’m playing the all-or-nothing-belief game.)

So I’m not criticizable just for having beliefs which are untrue. And I’m not criticizable for knowing that I have beliefs which are untrue. Here’s how I’d like to put it. There are lots of very specific norms, which can be schematized as “one should: believe that p only if p is true”. It’s when I know, of one particular instance, that I’m violating this “external” norm, that I seem to be criticizable.

Let’s turn to the indeterminate case. Suppose that it’s indeterminate whether p, and I know this. And consider three options.

  1. Determinately, I believe p.
  2. Determinately, I believe ~p.
  3. It’s indeterminate whether I believe p.

I’m going to ignore the “suspension of belief case”. I’ll assume in (3) we’re considering a case where the indeterminacy in my belief is such that, determinately, I believe p iff p is true.

In cases (1) and (2), for the specific p in question, I can know that it’s indeterminate whether I’m violating the external norm. But for (3), it’s determinate that I’m not violating this norm.

It’s very natural to think that I’m pro tanto criticizable if I get into situation (1) or (2) here, when (3) is open to me (that is, I better have some overriding reason for going this way if I’m to avoid criticism). If this is one way in which criticism gets extracted out of external truth-norms, then it looks like indeterminate belief is the appropriate response to known indeterminacy.

But that isn’t by any means the only option here. We might reason as follows. What’s common ground by this point is that it’s indeterminate whether (1) or (2) violates the norm. So it’s not determinate that (1) or (2) do violate the norm. So it’s not determinate that a necessary condition for my beliefs being criticizable is met. So it’s at worst indeterminate whether I’m criticizable in this situation.

I can’t immediately see anything wrong with this suggestion. But I think that nevertheless, (3) is a better state to be in than (1) or (2). So here’s a different way of getting at this.

I’m going to now switch to talking in terms of credences *as well as* beliefs. Suppose that I believe, and am credence 1, that p is indeterminate. And suppose that I believe that p—but I’m not credence 1 in it. Suppose I’m credence 0.9 in p instead (this’d fit nicely, for example, with a “high threshold” account of the relationship between credence and all-out belief, but all I need is the idea that this sort of thing can happen, rather than any sort of general theory about what goes on here. It couldn’t happen if e.g. to believe p was to have credence 1 in p).

In this situation, I have 0.1 credence in ~p, and so 0.1 credence in p not being true (in the situation we’re envisaging, I’m credence 1 in the T-scheme that allows this semantic ascent).

I’m also going to assume that not only do I believe p, but I’m perfectly confident of this—credence 1 that I believe p. So I’m credence 0.1 in “I believe p & p is not true”—so credence 0.1 in the negation of “I believe p only if p is true”. So I’m at least credence 0.1 that I’ve violated the norm.

Contrast this with the situation where it’s indeterminate whether I believe p, and p is indeterminate, in such a way that “p is true iff I believe p” comes out determinately true. If I’m fully confident of all the facts here, I will have zero credence that I’ve violated the norm.

That is, if you go for option (1) or (2) above, when you’re certain that p is indeterminate, and are less than absolutely certain of p, then it looks to me that you’ll thereby give some credence to your having violated the alethic norm (with respect to the particular p in question). If you go for (3), on the other hand, you can be certain that you haven’t violated the alethic norm.
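
Just to make the arithmetic explicit, here’s a little sketch of my own, using the illustrative numbers from above:

    # Sketch of the credence arithmetic above (illustrative numbers only).
    # Options (1)/(2): I'm certain I believe p, have credence 0.9 in p, and am
    # certain of the relevant instance of the T-scheme.
    cr_p = 0.9                  # credence in p
    cr_believe_p = 1.0          # credence that I believe p
    cr_p_not_true = 1 - cr_p    # via the T-scheme: credence that p is not true

    # Credence in "I believe p & p is not true", the negation of "I believe p
    # only if p is true". Since I'm certain of the first conjunct, my credence
    # in the conjunction just is my credence in the second conjunct.
    cr_violation = cr_p_not_true
    print(cr_violation)         # 0.1 (modulo floating point)

    # Option (3): it's indeterminate whether I believe p, set up so that
    # "I believe p iff p is true" is determinately true. Being certain of that,
    # my credence that I've violated the norm is 0.
    print(0.0)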

It seems to me that, faced with the choice between states which, by their own lights, may violate alethic norms, and states which, by their own lights, definitely don’t violate alethic norms, we’d be criticizable unless we opted for the second rather than the first, so long as all else is equal. So I do think this line of thought supports the (anyway plausible) thought that it’s (3), rather than (1) or (2), which is the appropriate response to known indeterminate cases, given a truth-norm for belief.

(As noted in the previous post, this is all much quicker if the truth-norm were: one should: determinately(believe p only if p is true). But I do think the case for (3) would be much more powerful if we can argue for it on the basis of the pure truth-norm rather than this decorated version.)

Indeterminacy day at Leeds

This past Saturday “indeterminacy day” was held at Leeds. Or, to give it its more prosaic title: “Metaphysical Indeterminacy: the state of the art”.

There were four speakers (Katherine Hawley, Daniel Nolan, Peter van Inwagen and myself). We had quite a few people turn up from around the country to participate in the discussions—we were very pleased to see so many grad students around—thanks to everyone who came along and helped make the event such fun!

I’m going to write up a short report on what happened for The Reasoner (probably focussing more on the intellectual content than on the emergency evacuation procedures that ended in a locked courtyard). But I thought I’d take the chance to post the slides I talked to on the day. They’re available here.

I wanted to do two things with the talk. One was to give an overview of how we’ve been thinking about these things here at Leeds (on reflection, I should have been more explicit that I was drawing on previous work here—particularly joint work with Elizabeth Barnes. I’ve added some more explicit pointers in the posted slides). But I also wanted to go beyond this, to urge that one thing we want from any “theory” of indeterminacy is some account of its cognitive role—what rational constraints (if any) believing that p is indeterminate puts on one’s attitude to p. (To fix ideas, think about chance: knowing that there’s a 0.5 chance of p (all else equal) means you should have 0.5 credence in p. That’s a pretty specific doxastic role. On the other hand, knowing that p is contingent is compatible with any old credence in p.)

Now, in the talk, I said that this can help to articulate what people are complaining about when they say that they *just don’t understand* the notion of metaphysical indeterminacy. I reckon people shouldn’t say that the notion of indeterminacy with a metaphysical/worldly source is literally unintelligible (I reckon that’s way too strong a claim to be plausible—Elizabeth and I chat about this a bit in the joint paper). But I’m sympathetic to the thought that someone can complain they don’t “fully grasp” the concept of a specific sentential operator P if they’re entirely in the dark about its cognitive role (how credences in P(q) should constrain attitudes to q). A fair enough answer to this challenge is to say that there are no constraints. For P=it is contingent whether, that seems plausible. But there’s something compelling about the thought that someone who e.g. doesn’t appreciate that something like the principal principle governs chance, doesn’t grasp the concept of chance itself.

What makes the challenge to spell out cognitive role particularly pressing for the view that Elizabeth and I set up in the joint paper, is that we don’t get much of a steer from other aspects of what we say, as to what the cognitive role should be. We say that metaphysical indeterminacy is a primitive/fundamental operator (compare what some would like to say about modality or tense). No help from this about cognitive role—as there might be for someone who said that indeterminacy is a special case of some wider phenomenon whose cognitive role we had a prior grip on (e.g. ignorance). Moreover, in the joint paper the logic of indeterminacy that we defend is pretty thoroughly classical. And so there’s no obvious way of appealing to features of (the putative) logic of indeterminacy to get guidance. Others with a more revisionary/committal take on the logic of indeterminacy may well be able to point to features of the logic as implicitly answering the cognitive role question (that’s a strategy that Hartry Field has been advocating recently).

Some qualifications (arising from good questions put during the workshop, esp. by Daniel Nolan).

(i) I certainly shouldn’t suggest that being able to explicitly articulate the cognitive role of a concept C is required in order to fully grasp C. Surely we can at most require that one *implement* the cognitive role (accord with whatever rules it specifies, not necessarily articulate them).

(ii) If one thinks in *general* that concepts are in part individuated by cognitive role, then we’ll have a general reason for thinking that in order for someone to come to fully grasp C, from a position where they don’t yet grasp it, they’ll need to be given resources to fix C’s cognitive role. On this view you won’t count as having attitudes to contents featuring the concept C at all, unless those contents are structured in the way prescribed by C’s cognitive role.

(iii) Even if you don’t go for a strong concept-individuation claim, you might be sympathetic to the general thought that it’s right to classify people as having a greater or lesser grasp on a concept, to the extent that their deployment of the concept conforms to what’s laid down by its cognitive role.

(iv) There may be cases where we count people as fully competent with a concept, even though they don’t accord with its cognitive role, if they regard themselves as having (or can plausibly be interpreted as tacitly believing that there are) special reasons to depart from the cognitive role.

(v) If a theorist whose subject-matter is C doesn’t explicitly or implicitly convey information about the cognitive role of C, it’ll be appropriate for someone without an anterior concept of C to complain that they haven’t been put in a position to become fully competent deployers of C.

Ok, so claims (i-v) sound eminently suitable for counterexamples—I’d be very pleased to hear people’s thoughts about them in the comments. My thought is that when Elizabeth and I say we’re theorizing about a metaphysically primitive indeterminacy operator, whose logic is pretty much entirely classical, then unless we say some more, people are entitled to complain in the way described in (v).

One thing I’d’ve talked about a bit more (if the Fire Alarm hadn’t interrupted!) is various ways of adding bits that implicitly fix cognitive role. Think about the following rather “external” norm of belief:

  • One should: believe p only if p is true.

Now, suppose that it’s indeterminate whether p is true (as it will be when p is indeterminate, on the position put forward in the joint paper). Then if it’s determinately true that one believes p, it’ll be indeterminate whether the conditional “believe(p) only if p is true” holds (compare: if A is necessary and B is contingent, then A->B is contingent). Likewise, determinately believing ~p in these circumstances leads to it being indeterminate whether you’ve violated the norm.

As Ross pointed out in the talk, on these formulations, suspending both belief and disbelief in p is a way of determinately satisfying the norms. Maybe that’s an attractive result. If we strengthened the norms to biconditionals, then (determinately) not believing doesn’t lead to any worse status. And the biconditional versions don’t look implausible as articulating some kind of doxastic ideal: what a believer aiming at the truth, and not resource-limited, should do.

If we leave things here, the conclusion is that when it’s indeterminate whether Harry is bald, it’s indeterminate whether (determinately) believing that Harry is bald violates the truth-norm on belief (and the same goes for other salient options). You can’t come right out and say that someone who without hesitation *believes Harry is bald* is determinately doing something wrong. But notice: suppose someone without hesitation determinately believes it’s wrong to believe Harry is bald. Then you equally can’t say that it’s determinately wrong to believe what they believe. And of course this iterates!

This seems pretty vertigo-inducing to me. Notice that we shouldn’t ignore the option of it being indeterminate what a subject believes. In that situation, one might *determinately* meet the truth-norm even in biconditional version. (Compare: if A and B are both contingent, it can be necessarily true that A<–>B).

It’s tempting to think that, determinately, what you *should* do in these circumstances is to make it the case that it’s indeterminate whether you believe p. For only then can you avoid the worry of someone criticizing you, and being not-determinately-wrong to do so! But of course, this really would be something over and above what we’ve said so far.

What *would* enforce the idea that when it’s indeterminate whether p, it should be indeterminate whether you believe p, is the following formulation of the truth norm:

  • One should: determinately (believe p iff p is true).

If p is indeterminate, then determinately believing p, or determinately not believing p, would each falsify the claim that the biconditional is *determinately* true; so, on the revised formulation, one isn’t doing as one should (and it’s determinately true to say so).
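
To see the contrast between the two formulations all in one place, here’s a toy supervaluation-style model (my own illustration, not anything from the joint paper): two precisifications, with p true on one and false on the other, and the three doxastic options just discussed.

    # Toy model: two precisifications; "determinately phi" = phi true on both.
    precisifications = ["s1", "s2"]
    truth = {"p": {"s1": True, "s2": False}}            # indeterminate whether p
    truth["~p"] = {s: not truth["p"][s] for s in precisifications}

    # What I believe on each precisification, under three doxastic options:
    options = {
        "determinately believe p":              {"s1": {"p"},  "s2": {"p"}},
        "determinately believe ~p":             {"s1": {"~p"}, "s2": {"~p"}},
        "indeterminate belief, in step with p": {"s1": {"p"},  "s2": set()},
    }

    def status(vals):
        return "det. true" if all(vals) else ("det. false" if not any(vals) else "indeterminate")

    for label, beliefs in options.items():
        # plain norm, instance for each proposition X: believe X only if X is true
        plain = [all((X not in beliefs[s]) or truth[X][s] for X in truth)
                 for s in precisifications]
        # body of the decorated norm, for p: believe p iff p is true
        iff_p = [("p" in beliefs[s]) == truth["p"][s] for s in precisifications]
        print(label, "| plain norm:", status(plain), "| believe p iff p true:", status(iff_p))

    # The first two options make both norm-sentences indeterminate; only the
    # third makes "believe p iff p is true" determinately true, and so only it
    # satisfies the revised, determinately-prefixed formulation.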

So I think that given the truth-norm (or, better, the narrow-scoped version just laid down) there’s some prospect of arguing that there’s a cognitive role for indeterminacy implicit in the kind of non-revisionary framework of the joint paper. There’s work to do to figure out how to go about meeting these constraints—what sort of mental setup it takes for it to be indeterminate whether you believe p, and what to say about rational action, in particular, given this. But we’ve got a starting point.

Indeterminate survival: in draft

So, finally, I’ve got another draft prepared. This is a paper focussing on Bernard Williams’ concerns about how to think and feel about indeterminacy in questions of one’s own survival.

Suppose that you know there’s an individual in the future who’s going to get harmed. Should you invest a small amount of money to alleviate the harm? Should you feel anxious about the harm?

Well, obviously if you care about the guy (or just have a modicum of humanity) you probably should. But if it was *you* that was going to suffer the harm, there’d be a particularly distinctive frisson. From a prudential point of view, you’d be compelled to invest minor funds for great benefit. And you really should have that distinctive first-personal phenomenology associated with anxiety on one’s own behalf. Both of these de se attitudes seem important features of our mental life and evaluations.

The puzzle I take from Williams is: are the distinctively first-personal feelings and expectations appropriate in a case where you know that it’s indeterminate whether you survive as the individual who’s going to suffer?

Williams thought that by reflecting on such questions, we could get an argument against accounts of personal identity that land us with indeterminate cases of survival. I’d like to play the case in a different direction. It seems to me pretty unavoidable that we’ll end up favouring accounts of personal identity that allow for indeterminate cases. So if, when you combine such cases with this or that theory of indeterminacy, you end up saying silly things, I want to take that as a blow to that account of indeterminacy.

It’s not knock-down (what is in philosophy?) but I do think that we can get leverage in this way against rejectionist treatments of indeterminacy, at least as applied to these kinds of cases. Rejectionist treatments include those of folks who think that the characteristic attitude to borderline cases is primarily a rejection of the law of excluded middle; and (probably) those of folks who think that in such cases we should reject bivalence, even if LEM itself is retained.

In any case, this is definitely something I’m looking for feedback/comments on (particularly on the material on how to think about rational constraints on emotions, which is rather new territory for me). So thoughts very welcome!

Primitivism about indeterminacy: a worry

I’m quite tempted by the view that “it is indeterminate whether” might be one of those fundamental, brute bits of machinery that go into constructing the world. Imagine, for example, you’re tempted by the thought that in a strong sense the future is “open”, or “unfixed”. Now, maybe one could parlay that into something epistemic (lack of knowledge of what the future is to be), or semantic (indecision over which of the existing branching futures is “the future”), or maybe mere non-existence of the future would capture some of this unfixity thought. But I doubt it. (For discussion of what the openness of the future looks like from this perspective, see Ross and Elizabeth’s forthcoming Phil Studies piece.)

The open future is far from the only case you might consider—I go through a range of possible arenas in which one might be friendly to a distinctively metaphysical kind of indeterminacy in this paper—and I think treating “indeterminacy” as a perfectly natural bit of kit is an attractive way to develop that. And, if you’re interested in some further elaboration and defence of this primitivist conception see this piece by Elizabeth and myself—and see also Dave Barnett’s rather different take on a similar idea in a forthcoming piece in AJP (watch out for the terminological clashes–Barnett wants to contrast his view with that of “indeterminists”. I think this is just a different way of deploying the terminology.)

I think everyone should pay more attention to primitivism. It’s a kind of “null” response to the request for an account of indeterminacy—and it’s always interesting to see why the null response is unavailable. I think we’ll learn a lot about the compulsory questions that a theory of indeterminacy must answer, from seeing what goes wrong when the theory of indeterminacy is as minimal as you can get.

But here I want to try to formulate a certain kind of objection to primitivism about indeterminacy. Something like this has been floating around in the literature—and in conversations!—for a while (Williamson and Field, in particular, are obvious sources for it). I also think the objection if properly formulated would get at something important that lies behind the reaction of people who claim *just not to understand* what a metaphysical conception of indeterminacy would be. (If people know of references where this kind of idea is dealt with explicitly, then I’d be really glad to know about them).

The starting assumption is: saying “it’s an indeterminate case” is a legitimate answer to the query “is that thing red?”. Contrast the following. If someone asks “is that thing red?” and I say “it’s contingent whether it’s red”, then I haven’t made a legitimate conversational move. The information I’ve given is simply irrelevant to its actual redness.

So it’s a datum that indeterminacy-answers are in some way relevant to redness (or whatever) questions. And it’s not just that “it is indeterminate whether it is red” has “it is red” buried within it – so does the contingency “answer”, but it is patently irrelevant.

So what sort of relevance does it have? Here’s a brief survey of some answers:

(1) Epistemicist. “It’s indeterminate whether p” has the sort of relevance that answering “I don’t know whether p” has. Obviously it’s not directly relevant to the question of whether p, but at least expresses the inability to give a definitive answer.

(2) Rejectionist (like truth-value gap-ers, inc. certain supervaluationists, and LEM-deniers like Field, intuitionists). Answering “it’s indeterminate” communicates information which, if accepted, should lead you to reject both p, and not-p. So it’s clearly relevant, since it tells the inquirer what their attitudes to p itself should be.

(3) Degree theorist (whether degree-supervaluationist like Lewis, Edgington, or degree-functional person like Smith, Machina, etc). Answering “it’s indeterminate” communicates something like the information that p is half-true. And, at least on suitable elaborations of degree theory, we’ll then know how to shape our credences in p itself: we should have credence 0.5 in p if we have credence 1 that p is half true.

(4) Clarification request (maybe some contextualists?). “It’s indeterminate whether p” conveys that somehow the question is ill-posed, or inappropriate. It’s a way of responding whereby we refuse to answer the question as posed, but invite a reformulation. So we’re asking the person who asked “is it red?” to refine their question to something like “is it scarlet?” or “is it reddish?” or “is it at least not blue?” or “does it have wavelength less than such-and-such?”.

(For a while, I think, it was assumed that every serious account of indeterminacy would say that if p was indeterminate, one couldn’t know p (think of parallel discussion of “minimal” conceptions of vagueness—see Patrick Greenough’s Mind paper). If that were right then (1) would be available to everybody. But I don’t think that that’s at all obvious—and in particular, I don’t think it’s obvious that the primitivist would endorse it, or, if they did, what grounds they would have for saying so.)

There are two readings of the challenge we should pull apart. One is purely descriptive. What kind of relevance does indeterminacy have, on the primitivist view? The second is justificatory: why does it have that relevance? Both are relevant here, but the first is the most important. Consider the parallel case of chance. There we know what, descriptively, we want the relevance of “there’s a 20% chance that p” to be: someone learning this information should, ceteris paribus, fix their credence in p to 0.2. And there’s a real question about whether a metaphysical primitive account of chance can justify that story (that’s Lewis’s objection to a putative primitivist treatment of chance facts).

The justification challenge is important, and how exactly to formulate a reasonable challenge here will be a controversial matter. E.g. maybe route (4), above, might appeal to the primitivist. Fine—but why is that response the thing that indeterminacy-information should prompt? I can see the outlines of a story if e.g. we were contextualists. But I don’t see what the primitivist should say.

But the more pressing concern right now is that for the primitivist about indeterminacy, we don’t as yet have a helpful answer to the descriptive question. So we’re not even yet in a position to start engaging with the justificatory project. This is what I see as the source of some dissatisfaction with primitivism—the sense that as an account it somehow leaves something important unexplained. Until the theorist has told me something more, I’m at a loss about what to do with the information that p is indeterminate.

Furthermore, at least in certain applications, one’s options on the descriptive question are constrained. Suppose, for example, that you want to say that the future is indeterminate. But you want to allow that one can rationally have different credences for different future events. So I can be 50/50 on whether the sea battle is going to happen tomorrow, and almost certain I’m not about to quantum tunnel through the floor. Clearly, then, nothing like (2) or (3) is going on, where one can read off strong constraints on strength of belief in p from the information that p is indeterminate. (1) doesn’t look like a terribly good model either—especially if you think we can sometimes have knowledge of future facts.

So if you think that the future is primitively unfixed, indeterminate, etc—and friends of mine do—I think (a) you owe a response to the descriptive challenge; (b) then we can start asking about possible justifications for what you say; (c) your choices for (a) are very constrained.

I want to finish up by addressing one response to the kind of questions I’ve been pressing. I ask: what is the relevance of answering “it’s indeterminate” to first-order questions? How should I alter my beliefs on receipt of the information? What does it tell me about the world, or the epistemic state of my informant?

You might be tempted to say that your informant communicates, minimally, that it’s at best indeterminate whether she knows that p. Or you might try claiming that in such circumstances it’s indeterminate whether you *should* believe p (i.e. there’s no fact of the matter as to how you should shape your credences on the question of whether p). Arguably, you can derive these from the determinate truth of certain principles (determinacy, truth as the norm of belief, etc) plus a bit of logic. Now, that sort of thing sounds like progress at first glance – even if it doesn’t lay down a recipe for shaping my beliefs, it does sound like it says something relevant to the question of what to do with the information. But I’m not sure that it really helps. After all, we could say exactly parallel things with the “contingency answer” to the redness question with which we began. Saying “it’s contingent that p” does entail that it’s at best contingent whether one knows that p, and at best contingent whether one should believe p. But that obviously doesn’t help vindicate contingency-answers to questions of whether p. So it seems that the kind of indeterminacy-involving elaborations just given, while they may be *true*, don’t really say all that much.

Probabilities and indeterminacy

I’ve just learned that my paper “Vagueness, Conditionals and Probability” has been accepted for the first formal epistemology festival in Konstanz this summer. It looks like the perfect place for me to get feedback on, and generally learn more about, the issues raised in the paper. So I’m really looking forward to it.

I’m presently presenting some of this work as part of a series of talks at Arche in St Andrews. I’m learning lots here too! One thing that I’ve been thinking about today relates directly to the paper above.

One of the main things I’ve been thinking about is how credences, evidential probability and the like should dovetail with supervaluationism. I’ve written about this a couple of times in the past, so I’ll briefly set out one sort of approach that I’ve been interested in, and then sketch something that just occurred to me today.

The basic question is: what attitude should we take to p, if we are certain that p is indeterminate? Here’s one attractive line of thought. First of all, it’s a familiar thought that logic should impose some rationality constraints on belief. Let’s formulate this minimally as the constraint that, for the rational agent, probability (credence or evidential probability) can never decrease across a valid argument:

A\models B \Rightarrow p(A)\leq p(B)

Now take one of the things that supervaluational logics are often taken to imply, where ‘D’ is read as ‘it is determinate that’:

A\models DA

Then we note that this and the logicality constraint on probabilities entail that

p(A)\leq p(DA)

So in particular, if we fully reject A being determinate (e.g. if we fully accept that it’s indeterminate) then the probability of the RHS will be zero, and so by the inequality, the probability of the LHS is zero too. (The particular supervaluational consequence I’m appealing to is controversial, since it follows only in settings which seem inappropriate for modelling higher-order indeterminacy, but we can argue for the same result in other ways by adding a couple of extra assumptions. This’ll do us for now though.)

The result is that if we’re fully confident that A is indeterminate, we should have probability zero in both A and in not-A. That’s interesting, since we’re clearly not in Kansas anymore—this result is incompatible with classical probability theory. Hartry Field has argued in the past for the virtues of this result as giving a fix on what indeterminacy is, and I’m inclined to think that it captures something at the heart of at least one way of conceiving of indeterminacy.

Rather than thinking about indeterminate propositions as having point-valued probabilities, one might instead favour a view whereby they get interval values. One version of this can be defined in this setting. For any A, let u(A) be defined to be 1-p(\neg A). This quantity—how little one accepts the negation of a proposition—might be thought of as the upper bound of an interval whose lower bound is the probability of A itself. So rather than describe one’s doxastic attitudes to known indeterminate A as being “zero credence” in A, one might prefer the description of them as themselves indeterminate—in a range between zero and 1.
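
Here’s the arithmetic of the two descriptions, traced out in a little sketch (this isn’t an implementation of the semantics, just the constraints at work):

    # Being certain that A is indeterminate, I fully reject "determinately A",
    # and likewise "determinately ~A":
    p_DA = 0.0
    p_DnotA = 0.0

    # Logicality constraint: if X entails Y then p(X) <= p(Y). With A |= DA
    # (and, applying the same supervaluational pattern to ~A, ~A |= D~A):
    p_A = 0.0       # since p(A) <= p(DA) = 0
    p_notA = 0.0    # since p(~A) <= p(D~A) = 0

    # Interval-valued redescription: lower bound p(A), upper bound u(A) = 1 - p(~A).
    u_A = 1 - p_notA
    print((p_A, u_A))   # (0.0, 1.0) -- the whole unit interval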

There’s a different way of thinking about supervaluational probabilities, though, which is in direct tension with the above. Start with the thought that at least for supervaluationism conceived as a theory of semantic indecision, there should be no problem with the idea of perfectly sharp classical probabilities defined over a space of possible worlds. The ways the world can be, for this supervaluationist, are each perfectly determinate, so there’s no grounds as yet for departing from orthodoxy.

But we also want to talk about the probabilities of what is expressed by sentences such as “that man is bald” where the terms involved are vague (pick your favourite example if this one won’t do). The supervaluationist thought is that this sentence picks out a sharp proposition only relative to a precisification. What shall we say of the probability of what this sentence expresses? Well, there’s no fact of the matter about what it expresses, but relative to each precisification, it expresses this or that sharp proposition—and in each case our underlying probability measure assigns it a probability.

Just as before, it looks like we have grounds for assigning to sentences, not point-like probability values, but range-like values. The range in question will be a subset of [0,1], and will consist of all the probability-values which some precisification of the claim acquires. Again, we might gloss this as saying that when A is indeterminate, it’s indeterminate what degree of belief we should have in A.

But the two recipes deliver utterly different results. Suppose, for example, I introduce a predicate into English, “Teads”, which has two precisifications: on one it applies to all and only coins which land Heads, on the other it applies to all and only coins which land Tails (i.e. which don’t land Heads). Consider the claim that the fair coin I’ve just flipped will land Teads. Notice that we can be certain that this sentence will be indeterminate—whichever way the coin lands, Heads or Tails, the claim will be true on one precisification and false on the other.

What would the logic-based argument give us? Since we assign probability 1 to indeterminacy, it’ll say that we should assign probability 0, or a [0,1] interval, to the coin landing Teads.

What would the precisification-based argument give us? Think of the two propositions the claim might express: that the coin will land heads, or that the coin will land tails. Either way, it expresses a proposition that is probability 1/2. So the set of probability values associated with the sentence will be point-like, having value 1/2.

Of course, one might think that the point-like value stands in an interesting relationship to the [0,1] range—namely being its midpoint. But now consider cases where the coin is biased in one way. For example, if the coin is biased to degree 0.8 towards heads, then the story for the logic-based argument will remain the same. But for the precisification-based person the values will change to {0.8,0.2}. So we can’t just read off the values the precisificationist arrives at from what we get from the logic-based argument. Moral: in cases of indeterminacy, thinking of probabilities in the logic-based way wipes out all information other than that the claim in question is indeterminate.
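
To make the contrast vivid, here’s a toy calculation of my own for the “Teads” example, on both recipes:

    from fractions import Fraction

    def precisification_values(pr_heads):
        # Probabilities of "the coin lands Teads" on the two precisifications:
        # one reading makes it the Heads-proposition, the other the Tails-proposition.
        return {pr_heads, 1 - pr_heads}

    # Fair coin: the set of precisification-relative values is point-like.
    print(sorted(precisification_values(Fraction(1, 2))))   # [Fraction(1, 2)]

    # Coin biased 0.8 towards Heads: the values track the bias.
    print(sorted(precisification_values(Fraction(4, 5))))   # [Fraction(1, 5), Fraction(4, 5)]

    # The logic-based recipe only registers that "Teads" is certainly
    # indeterminate, so it delivers credence 0 (or the whole [0,1] interval)
    # in both cases -- the information about the bias is wiped out.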

This last observation can form the basis for criticism of supervaluationism in a range of circumstances in which we want to discriminate between attitudes towards equally indeterminate sentences. And *as an argument* I take it seriously. I do think there should be logical constraints on rational credence, and if the logic for supervaluationism is as it’s standardly taken to be, that enforces the result. If we don’t want the result, we need to argue for some other logic. Doing so isn’t cost free, I think—working within the supervaluational setting, bumps tend to arise elsewhere when one tries to do this. So the moral I’d like to draw from the above discussion is that there must be two very different ways of thinking about indeterminacy that both fall under the semantic indecision model. These two conceptions are manifest in the different attitudes towards indeterminacy described above. (This has convinced me, against my own previous prejudices, that there’s something more-than-merely terminological to the question of “whether truth is supertruth”).

But let’s set that aside for now. What I want to do is just note that *within* the supervaluational setting that goes with the logic-based argument and thinks that all indeterminate claims should be rejected, there shouldn’t be any objection to the underlying probability measure mentioned above, and given this, one shouldn’t object to introducing various object-language operators. In particular, let’s consider the following definition:

“P(S)=n” is true on i, w iff the measure of {u: “S” is true on u,i}=n

But it’s pretty easy to see that the (super)truths about this operator will reflect the precisification-based probabilities described earlier. So even if the logic-based argument means that our degree of belief in indeterminate A should be zero, still there will be object-language claims we could read as “P(the coin will land Teads)=1/2” that will be supertrue. (The appropriate moral from the perspective of the theorist in question would be that whatever this operator expresses, it isn’t a notion that can be identified with degree of belief.)
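
Here’s a toy model of that operator at work (again my own sketch), with two equiprobable worlds (Heads, Tails) and the two precisifications of “Teads”:

    from fractions import Fraction

    measure = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}   # measure on worlds
    teads_true_at = {      # "the coin lands Teads", per world and precisification
        ("heads", "i1"): True,  ("tails", "i1"): False,
        ("heads", "i2"): False, ("tails", "i2"): True,
    }

    def P(truth_at, i):
        # the measure of {u : the sentence is true at u, relative to precisification i}
        return sum(m for u, m in measure.items() if truth_at[(u, i)])

    # "P(the coin lands Teads)=1/2" holds relative to both precisifications, so
    # it is supertrue -- even though, on the logic-based line, one's credence in
    # "the coin lands Teads" itself is 0.
    print([P(teads_true_at, i) for i in ("i1", "i2")])   # [Fraction(1, 2), Fraction(1, 2)]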

If this is right, then arguments that I’m interested in using against certain applications of the “certainty of indeterminacy entails credence zero” position have to be handled with extreme care. So, for example, in the paper mentioned right at the beginning of this post, I appeal to empirical data about folk judgements about the probabilities of conditionals. I was assuming that I could take this data as information on what the folk view about credences of conditionals is.

But if, compatibly with taking the “indeterminacy entails zero credence” view of conditionals, one could have within a language a P-operator which behaves in the ways described above, this isn’t so clear anymore. Explicit probability reports might be reporting on the P-operator, rather than subjective credence. So everything becomes rather delicate and very confusing.

Arche talks

In a few weeks time (31st March-5th April) I’m going to be visiting the Arche research centre in St Andrews, and giving a series of talks. I studied at Arche for my PhD, so it’ll be really good to go back and see what’s going on.

The talks I’m giving relate to the material on indeterminacy and probability (in particular, evidential probability or partial belief). The titles are as follows:

  • Indeterminacy and partial belief I: The open future and future-directed belief.
  • Indeterminacy and partial belief II: Conditionals and conditional belief.
  • Indeterminacy and partial belief III: Vague survival and de se belief.

A lot of these are based around exploring the consequences of the view that if p is indeterminate, and one knows this (or is certain of it) then one shouldn’t invest any probability in p. In the case of the open future, of conditionals, and in vague survival—for rather different reasons in each case—this seems highly problematic.

But why should you believe that key principle about how attitudes to indeterminacy constrain attitudes to p? The case I’ve been focussing on up till now has concerned a truth-value gappy position on indeterminacy. With a broadly classical logic governing the object language, one postulates truth-value gaps in indeterminate cases. There’s then an argument directly from this to the sort of revisionism associated with supervaluationist positions in vagueness. And from there, and a certain consistency requirement on rational partial belief (or evidence) we get the result. The consistency requirement is simply the claim, for example, that if q follows from p, one cannot rationally invest more confidence in p than one invests in q (given, of course, that one is aware of the relevant facts).

The only place I appeal to what I’ve previously called the “Aristotelian” view of indeterminacy (truth-value gaps but LEM retained) is in arguing for the connection between attitudes to determinately-p and attitudes to p. But I’ve just realized something that should have been obvious all along—which is that there’s a quick argument to something similar for someone who thinks indeterminacy is marked by a rejection of excluded middle. Assume, to begin with, that the paracompletist nonclassicist will think that in borderline cases, characteristically, one should reject the relevant instance of excluded middle. So if one is fully convinced that p is borderline, one should utterly reject pv~p.

It’s dangerous to generalize about non-classical systems, but the ones I’m thinking of all endorse the claim p|-pvq—i.e. disjunction introduction. So in particular, an instance of excluded middle will follow from p.

But if we utterly reject pv~p in a borderline case (assign it credence 0), then by the probability-logic link we should utterly reject (assign credence 0) anything from which it follows.
In particular, we should assign credence 0 to p. And by parallel reasoning, we should assign credence 0 to ~p.

[Edit: there’s a question, I think, about whether the non-classicist should take us to utterly reject LEM in a borderline case (i.e. degree of partial belief=0). The folklore non-classicist, at least, might suggest that on her conception degrees of truth should be expert functions for partial beliefs—i.e. absent uncertainty about what the degrees of truth are, one should conform the partial beliefs to the degrees of truth. Nick J. J. Smith has a paper where he works out a view that has this effect, from what I can see. It’s available here and is well worth a read. If a paradigm borderline case for the folklore nonclassicist is one where the degrees of truth of p, not-p and pv~p are all 0.5, then one’s degree of belief in each of them should be 0.5. And there’s no obvious violation of the probability-logic link here. (At least in this specific case. The logic will have to be pretty constrained if it isn’t to violate the probability-logic connection somewhere.)]
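
For what it’s worth, the 0.5 case checks out on the strong Kleene truth-functions (a quick sanity check of my own on the bracketed thought):

    # Degrees of truth with the strong Kleene / fuzzy clauses for ~ and v.
    def neg(a): return 1 - a
    def disj(a, b): return max(a, b)

    p = 0.5                              # paradigm borderline case
    print(p, neg(p), disj(p, neg(p)))    # 0.5 0.5 0.5

    # If degrees of truth act as expert functions for partial belief, credence
    # in p, ~p and pv~p should each be 0.5 here, not 0. And the probability-
    # logic link isn't obviously violated: e.g. p |= pv~p, and 0.5 <= 0.5.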

If all this is correct, then I don’t need to restrict myself to discussing the consequences of the Aristotelian/supervaluational sort of view. Everything will generalize to cover the nonclassical cases—and will cover both the folklore nonclassicist and the no-interpretation nonclassicist discussed in the previous posts (here’s a place where there’s convergence).

[A folklore non-classicist might object that for them, there isn’t a unique “logic” for which to run the argument. If one focuses on truth-preservation, one gets say a Kleene logic; if one focuses on non-falsity preservation, one gets an LP logic. But I don’t think this thought really goes anywhere…]

Non-classical logics: the no interpretation account

In the previous post, I set out what I took to be one folklore conception of a non-classicist treatment of indeterminacy. Essential elements were (a) the postulation of not two, but several truth statuses; (b) the treatment of “it is indeterminate whether” (or degreed variants thereof) as an extensional operator; (c) the generalization to this setting of a classicist picture, where logic is defined as truth preservation over a range of reinterpretations, one amongst which is the interpretation that gets things right.

I said in that post that I thought that folklore non-classicism was a defensible position, though there are some fairly common maneuvers which I think the folklore non-classicist would be better off ditching. One of these is the idea that the intended interpretation is describable “only non-classically”.

However, there’s a powerful alternative way of being a non-classicist. The last couple of weeks I’ve had a sort of road to Damascus moment about this, through thinking about non-classicist approaches to the Liar paradox—and in particular, by reading Hartry Field’s articles and new book where he defends a “paracomplete” (excluded-middle rejecting) approach to the semantic paradoxes and work by JC Beall on a “paraconsistent” (contradiction-allowing) approach.

One interpretative issue with the non-classical approaches to the Liar and the like is that a crucial element is a truth-predicate that works in a way very unlike the notion of “truth” or “perfect truth” (“semantic value 1”, if you want neutral terminology) that feature in the many-valued semantics. But that’s not necessarily a reason by itself to start questioning the folklore picture. For it might be that “truth” is ambiguous—sometimes picking up on a disquotational notion, sometimes tracking the perfect truth notion featuring in the nonclassicists semantics. But in fact there are tensions here, and they run deep.

Let’s warm up with a picky point. I was loosely throwing around terms like “3-valued logic” in the last post, and mentioned the (strong) Kleene system. But then I said that we could treat “indeterminate whether p” as an extensional operator (the “tertium operator” that makes “indet p” true when p is third-valued, and otherwise false). But that operator doesn’t exist in the Kleene system—the Kleene system isn’t expressively complete with respect to the truth functions definable over three values, and this operator is one of the truth-functions that isn’t there. (Actually, I believe if you add this operator, you do get something that is expressively complete with respect to the three valued truth-functions).

One might take this to be just an expressive limitation of the Kleene system. After all, one might think, in the intended interpretation there is a truth-function behaving in the way just described lying around, and we can introduce an expression that picks up on it if we like.

But it’s absolutely crucial to the nonclassical treatments of the Liar that we can’t do this. The problem is that if we have this operator in the language, then “exclusion negation” is definable—an operator “neg” such that “neg p” is true when p is false or indeterminate, and false otherwise (this will correspond to “not determinately p”, where “determinately p” is p&~indeterminate p, and ~ is so-called “choice” negation, i.e. |~p|=1-|p|). “p v neg p” will be a tautology; and arbitrary q will follow from the pair {p, neg p}. But this is exactly the sort of device that leads to so-called “revenge” puzzles—Liar paradoxes that are paradoxical even in the 3-valued system. Very roughly, it looks as if on reasonable assumptions a system with exclusion negation can’t have a transparent truth predicate in it (something where p and T(p) are intersubstitutable in all extensional contexts). It’s the whole point of Field’s and Beall’s approaches to retain something with this property. So they can’t allow that there is such a notion around (so for example, Beall calls such notions “incoherent”).
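
To see the point in miniature, here’s a small sketch of my own with the three Kleene values, the tertium operator, and the exclusion negation it lets you define:

    # Values: 1 = true, 0.5 = third value, 0 = false; 1 is the only designated value.
    def choice_neg(a): return 1 - a                     # |~p| = 1 - |p|
    def conj(a, b): return min(a, b)
    def disj(a, b): return max(a, b)
    def indet(a): return 1 if a == 0.5 else 0           # the tertium operator
    def det(a): return conj(a, choice_neg(indet(a)))    # determinately p: p & ~indet p
    def excl_neg(a): return choice_neg(det(a))          # exclusion negation "neg p"

    for v in (1, 0.5, 0):
        print(v, excl_neg(v), disj(v, excl_neg(v)))     # last column is always 1

    # "p v neg p" is a tautology, and p and neg p never both take the designated
    # value -- so arbitrary q follows from {p, neg p}, which is why letting the
    # tertium operator into the language reopens revenge-style trouble for a
    # transparent truth predicate.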

What’s going on? Aren’t these approaches just denying us the resources to express the real Liar paradox? The key, I think, is a part of the nonclassicist picture that Beall and Field are quite explicit about and which totally runs against the folklore conception. They do not buy into the idea that model theory is ranging over a class of “interpretations” of the language among which we might hope to find the “intended” interpretation. The core role of the model theory is to give an extensionally adequate characterization of the consequence relation. But the significance of this consequence relation is not to be explained in model-theoretic terms (in particular, in terms of one among the models being intended, so that truth-preservation on every model automatically gives us truth-preservation simpliciter).

(Field sometimes talks about the “heuristic value” of this or that model and explicitly says that there is something more going on than just the use of model theory as an “algebraic device”. But while I don’t pretend to understand exactly what is being invoked here, it’s quite clear that the “added value” doesn’t consist in some classical 3-valued model being “intended”.)

Without appeal to the intended interpretation, I just don’t see how the revenge problem could be argued for. The key thought was that there is a truth-function hanging around just waiting to be given a name, “neg”. But without the intended interpretation, what does this even mean? Isn’t the right thought simply that we’re characterizing a consequence relation using rich set-theoretic resources—in terms of which we can draw distinctions that correspond to nothing in the phenomenon being modelled?

So it’s absolutely essential to the nonclassicist treatment of the Liar paradox that we drop the “intended interpretation” view of language. Field, for one, has a ready-made alternative approach to suggest—a Quinean combination of deflationism about truth and reference, with perhaps something like translatability being invoked to explain how such predicates can be applied to expressions in a language other than ones own.

I’m therefore inclined to think of the non-classicism—at least about the Liar—as a position that *requires* something like this deflationist package. Whereas the folklore non-classicist I was describing previously is clearly someone who takes semantics seriously, and who buys into a generalization of the powerful connections between truth and consequence that a semantic theory of truth affords.

When we come to the analysis of vagueness and other (non-semantic-paradox-related) kinds of indeterminacy, it’s now natural to consider this “no interpretation” non-classicism. (Field does exactly this—he conceives of his project as giving a unified account of the semantic paradoxes and the paradoxes of vagueness. So at least *this* kind of nonclassicism we can confidently attribute to a leading figure in the field.) All the puzzles described previously for the non-classicist position are thrown into a totally new light once we make this move.

To begin with, there’s no obvious place for the thought that there are multiple truth statuses. For you get that by looking at a many-valued model, and imagining that to be an image of what the intended interpretation of the language must be like. And that is exactly the move that’s now illegitimate. Notice that this undercuts one motivation for going towards a fuzzy logic—the idea that one represents vague predicates as somehow smoothly varying in truth status. Likewise, the idea that we’re just “iterating a bad idea” in multiplying truth values doesn’t hold water on this conception—since the many values assigned to sentences in models just don’t correspond to truth statuses.

Connectedly, one shouldn’t say that contradictions can be “half true” (nor that excluded middle is “half true”). It’s true that (on, say, the Kleene approach) you won’t have ~(p&~p) as a tautology. Maybe you could object to *that* feature. But that to me doesn’t seem nearly as difficult to swallow as a contradiction having “some truth to it”, despite the fact that from a contradiction, everything follows.

One shouldn’t assume that “determinately” should be treated as the tertium operator. Indeed, if you’re shooting for a combined non-classical theory of vagueness and semantic paradoxes, you *really* shouldn’t treat it this way, since as noted above this would give you paradox back.

There is therefore a central and really important question: what is the non-classical treatment of “determinately” to be? Sample answer (lifted from Field’s discussion of the literature): define D(p) as p&~(p–>~p), where –> is a certain fuzzy logic conditional. This, Field argues, has many of the features we’d intuitively want a determinately operator to have; and in particular, it allows for non-trivial iterations. So if something like this treatment of “determinately” were correct, then higher-order indeterminacy wouldn’t be obviously problematic (Field himself thinks this proposal is on the right lines, but that one must use another kind of conditional to make the case).
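
Just to get a feel for how that sample definition behaves, here’s a quick calculation, plugging in the Łukasiewicz conditional for concreteness (Field’s own preferred conditional is different, so treat this purely as an illustration):

    def neg(a): return 1 - a
    def conj(a, b): return min(a, b)
    def luk_cond(a, b): return min(1, 1 - a + b)    # Lukasiewicz conditional

    def D(a):
        return conj(a, neg(luk_cond(a, neg(a))))    # D(p) := p & ~(p --> ~p)

    for v in (1, 0.75, 0.5, 0.25, 0):
        print(v, D(v), D(D(v)))
    # 1 -> 1, 1;  0.75 -> 0.5, 0;  0.5 and below -> 0, 0.
    # D pushes borderline values down while leaving full truth alone, and
    # iterating it makes a difference (DD can differ from D), which is what
    # leaves room for non-trivial higher-order indeterminacy.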

“No interpretation” nonclassicism is an utterly, completely different position from the folklore nonclassicism I was talking about before. For me, the reason to think about indeterminacy and the semantic and vagueness-related paradoxes in the first place is that they shed light on the nature of language, representation, logic and epistemology. And on these sorts of issues, the no-interpretation nonclassicism and the folklore version take diametrically opposed positions; and, flowing from this, the appropriate ways of arguing for or against these views are just very different.

“Supervaluationism”: the word

I’ve got progressively more confused over the years about the word “supervaluations”. It seems lots of people use it in slightly different ways. I’m going to set out my understanding of some of the issues, but I’m very happy to be contradicted—I’m really in search of information here.

The first occurrence I know of is van Fraassen’s treatment of empty names in a 1960s JP article. IIRC, the view there is that language comes with a partial intended interpretation function, specifying the references of non-empty names. When figuring out what is true in the language, we look at what is true on all the full interpretations that extend the intended partial interpretation. And the result is that “Zeus is blue” will come out neither true nor false, because on some completions of the intended interpretation the empty name “Zeus” will designate a blue object, and on others it won’t.

So that gives us one meaning of a “supervaluation”: a certain technique for defining truth simpliciter out of the model-theoretic notions of truth-relative-to-an-index. It also, so far as I can see, closes off the question of how truth and “supertruth” (=truth on all completions) relate. Supervaluationism, in this original sense, just is the thesis that truth simpliciter should be defined as truth-on-all-interpretations. (Of course, one could argue against supervaluationism in this sense by arguing against the identification; and one could also consistently with this position argue for the ambiguity view that “truth” is ambiguous between supertruth and some other notion—but what’s not open is to be a supervaluationist and deny that supertruth is truth in any sense.)

Notice that there’s nothing in the use of supervaluations in this sense that enforces any connection to “semantic theories of vagueness”. But the technique is obviously suggestive of applications to indeterminacy. So, for example, Thomason in 1970 uses the technique within an “open future” semantics. The idea there is that the future is open between a number of currently-possible histories. And what is true about the future is what happens on all these histories.

In 1975, Kit Fine published a big and technically sophisticated article mapping out a view of vagueness arising from partially assigned meanings, which used among other things supervaluational techniques. Roughly, the basic move was to assign each predicate an extension (the set of things to which it definitely applies) and an anti-extension (the set of things to which it definitely doesn’t apply). An interpretation is “admissible” only if it assigns to a predicate a set of objects that is a superset of the extension, and which doesn’t overlap the anti-extension. There are other constraints on admissibility too: so-called “penumbral connections” have to be respected.

Now, Fine does lots of clever stuff with this basic setup, and explores many options (particularly in dealing with “higher-order” vagueness). But one thing that’s been very influential in the folklore is the idea that based on the sort of factors just given, we can get our hands on a set of “admissible” fully precise classical interpretations of the language.

Now the supervaluationist way of working with this would tell you that truth=truth on each admissible interpretation, and falsity=falsity on all such interpretations. But one needn’t be a supervaluationist in this sense to be interested in all the interesting technologies that Fine introduces, or the distinctive way of thinking about semantic indecision he introduces. The supervaluational bit of all this refers only to one stage of the whole process—the step from identifying a set of admissible interpretations to the definition of truth simpliciter.

However, “supervaluationism” has often, I think, been identified with the whole Finean programme. In the context of theories of vagueness, for example, it is often used to refer to the idea that vagueness or indeterminacy arises as a matter of some kind of unsettledness as to which precise extensions our expressions pick out (“semantic indecision”). But even if the topic is indeterminacy, the association with *semantic indecision* wasn’t part of the original conception of supervaluations—Thomason’s use of them in his account of indeterminacy about future contingents illustrates that.

If one understands “supervaluationism” as tied up with the idea of semantic indecision theories of vagueness, then it does become a live issue whether one should identify truth with truth on all admissible interpretations (Fine himself raises this issue). One might think that the philosophically motivated semantic machinery of partial interpretations, penumbral connections and admissible interpretations is best supplemented by a definition of truth in the way that the original VF-supervaluationists favoured. Or one might think that truth-talk should be handled differently, and that the status of “being true on all admissible assignments” shouldn’t be identified with truth simpliciter (say because the disquotational schemes fail).

If you think that the latter is the way to go, you can be a “supervaluationist” in the sense of favouring a semantic indecision theory of vagueness elaborated along Kit Fine’s lines, without being a supervaluationist in the sense of using Van Fraassen’s techniques.

So we’ve got at least these two disambiguations of “supervaluationism”, potentially cross-cutting:

(A) Formal supervaluationism: the view that truth=truth on each of a range of relevant interpretations (e.g. truth on all admissible interpretations (Fine); on all completions (Van Fraassen); or on all histories (Thomason)).
(B) Semantic indeterminacy supervaluationism: the view that (semantic) indeterminacy is a matter of semantic indecision: there being a range of classical interpretations of the language, which, all-in, have equal claim to be the right one.

A couple of comments on each. (A), of course, needs to be tightened up in each case by saying which range of classical interpretations is the relevant one to quantify over. Notice that a standard way of defining truth in logic books is actually supervaluationist in this sense. Because if you define what it is for a formula “p” to be true as its being true relative to all variable assignments, then open formulae which vary in truth value from variable-assignment to variable-assignment end up exactly analogous to formulae like “Zeus is blue” in Van Fraassen’s setting: they will be neither true nor false.

Even when it’s clear we’re talking about supervaluationism in the sense of (B), there’s continuing ambiguity. Kit Fine’s article is incredibly rich, and as mentioned above, both philosophically and technically he goes far beyond the minimal idea that semantic vagueness has something to do with the meaning-fixing facts not settling on a single classical interpretation.

So there's room for an understanding of "supervaluationism" in the semantic-indecision sense that is also minimal, and which does not commit itself to Fine's ideas about partial interpretations, conceptual truths as "penumbral constraints", and so on. David Lewis in "Many, but Almost One", as I read him, has this more minimal understanding of the semantic indecision view—I guess it goes back to Hartry Field's material on inscrutability and indeterminacy and "partial reference" in the early 1970s, and Lewis's own brief comments on related ideas in his (1969).

So even if your understanding of "supervaluationism" is the (B)-sense, and we're thinking only in terms of semantic indeterminacy, you still owe an account of whether you have in mind a minimal "semantic indecision" notion à la Lewis, or the far richer elaboration of that view inspired by Fine. Once you've settled this, you can go on to say whether or not you're a supervaluationist in the formal, (A)-sense—and that's the debate in the vagueness literature over whether truth should be identified with supertruth.

Finally, there's the question of whether the "semantic indecision" view (B), should be spelled out in semantic or metasemantic terms. One possible view has the meaning-fixing facts picking out not a single interpretation, but a great range of them, which collectively play the role of "semantic value" of the term. That's a semantic or "first-level" (in Matti Eklund's terminology) view of semantic indeterminacy. Another possible view has the meaning-fixing facts trying to fix on a single interpretation which will give the unique semantic value of each term in the language, but it being unsettled which one they favour. That's a metasemantic or "second-level" view of the case.

If you want to complain that the second view is spelled out quite metaphorically, I've some sympathy (I think at least in some settings it can be spelled out a bit more tightly). One might also want to press the case that the distinction between semantic and metasemantic here is somewhat terminological—a matter of which facts we choose to label "semantic". Again, I think there might be something to this. There are also questions about how this relates to the earlier distinctions—it's quite natural to think of Fine's elaboration as a paradigmatically semantic (rather than metasemantic) conception of semantic supervaluationism. It's also quite natural to take the metasemantic idea to go with a conception that is non-supervaluational in the (A) sense. (Perhaps the Lewis-style "semantic indecision" rhetoric might be taken to suggest a metasemantic reading all along, in which case it is not a good way to cash out what the common ground among (B)-theorists is.) But there's room for a lot of debate and negotiation on these and similar points.

Now all this is very confusing to me, and I'm sure I've used the terminology confusingly in the past. It kind of seems to me that, ideally, we'd go back to using "supervaluationism" in the (A) sense (on which truth=supertruth is analytic of the notion); and that we'd then talk of "semantic indecision" views of vagueness of various forms, with their formal representation stretching from the minimal Lewis version to the rich Fine elaboration, and their semantic/metasemantic status specified. In any case, by depriving ourselves of the commonly used terminology, we'd force ourselves to spell out exactly what subject matter we're discussing.

As I say, I’m not sure I’ve got the history straight, so I’d welcome comments and corrections.

Aristotelian indeterminacy and partial beliefs

I've just finished a first draft of the second paper of my research leave—its title is the same as this post's. There are a few different ways to think about this material, but since I hadn't posted for a while I thought I'd write up something about how it connects with/arises from some earlier concerns of mine.

The paper I’m working on ends up with arguments against standard “Aristotelian” accounts of the open future, and standard supervaluational accounts of vague survival. But one starting point was an abstract question in the philosophy of logic: in what sense is standard supervaluationism supposed to be revisionary? So let’s start there.

The basic result—allegedly—is that while all classical tautologies are supervaluational tautologies, certain classical rules of inference (such as reductio, proof by cases, conditional proof, etc) fail in the supervaluational setting.

Now I’ve argued previously that one might plausibly evade even this basic form of revisionism (while sticking to the “global” consequence relation, which preserves traditional connections between logical consequence and truth-preservation). But I don’t think it’s crazy to think that global supervaluational consequence is in this sense revisionary. I just think that it requires an often-unacknowledged premise about what should count as a logical constant (in particular, whether “Definitely” counts as one). So for now let’s suppose that there are genuine counterexamples to conditional proof and the rest.
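For concreteness, here is a rough sketch of the sort of counterexample usually given (my reconstruction, not spelled out in this post, and assuming that "Definitely", written Δ, does count as a logical constant):

```latex
% Rough sketch of the standard counterexample, assuming \Delta ("Definitely")
% counts as a logical constant. Global consequence preserves supertruth, so
\[
  p \;\models_{\mathrm{global}}\; \Delta p
  \qquad\text{but}\qquad
  \not\models_{\mathrm{global}}\; p \rightarrow \Delta p ,
\]
% since where p is borderline it is true on some but not all admissible
% interpretations, so \Delta p is true on none of them, and hence
% p \rightarrow \Delta p is not supertrue. Conditional proof would take us from
% the left-hand entailment to the right-hand validity, and so fails.
```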

The standard move at this point is to declare this revisionism a problem for supervaluationists. Conditional proof, argument by cases: all these are theoretical descriptions of widespread, sensible and entrenched modes of reasoning. It is objectionably revisionary to give them up.

Of course some philosophers quite like logical revisionism, and would want to face down directly the accusation that there's anything wrong with such revisionism. But there's a more subtle response available. One can admit that the letter of conditional proof and the rest is given up, while maintaining that the pieces of reasoning we normally call "instances of conditional proof" are all covered by supervaluationally valid inference principles. So there's no piece of inferential practice that's thrown into doubt by the revisionism of supervaluational consequence: all that happens is that the theoretical representation of that practice has to take a slightly more subtle form than one might expect (but still quite a neat and elegant one).

One thing I mention in that earlier paper but don't go into is a different way of drawing out the consequences of logical revisionism. Forget inferential practice and the like. Another way in which logic connects with the rest of philosophy is via probability (in the sense of rational credences, or Williamson's epistemic probabilities, or whatever). As I sketched in a previous post, so long as you accept a basic probability-logic constraint, which says that the probability of a tautology should be 1 and the probability of a contradiction should be 0, the revisionary supervaluational setting quickly forces you to a non-classical theory of probability: one that allows a disjunction to have probability 1 where each disjunct has probability 0. (Maybe we shouldn't call such a thing "probability": I take it that's a terminological matter.)
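Here is a minimal toy model of the kind of non-classical "probability" at issue (my own sketch, with made-up names; credence is taken to be the total weight of epistemically possible situations at which a sentence is supertrue):

```python
# Toy sketch: "probability" as the weight of situations at which a sentence is
# supertrue (true on every admissible precisification). The single borderline
# situation below is made up for illustration.
from fractions import Fraction

# Each situation pairs a weight with its admissible precisifications; here there
# is one situation, at which p is borderline (its precisifications disagree).
situations = [
    (Fraction(1), [{"p": True}, {"p": False}]),
]

def supertrue(precisifications, sentence):
    """A sentence is supertrue iff true on every admissible precisification."""
    return all(sentence(prec) for prec in precisifications)

def prob(sentence):
    """Credence = total weight of situations at which the sentence is supertrue."""
    return sum(weight for weight, precs in situations if supertrue(precs, sentence))

p         = lambda prec: prec["p"]
not_p     = lambda prec: not prec["p"]
p_or_notp = lambda prec: p(prec) or not_p(prec)   # a classical tautology

print(prob(p))          # 0: p is supertrue in no situation
print(prob(not_p))      # 0: ~p is supertrue in no situation
print(prob(p_or_notp))  # 1: the disjunction is supertrue everywhere
```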

Folk like Hartry Field have argued, completely independently of this connection to supervaluationism, that this is the right and necessary way to handle probabilities in the context of indeterminacy. I've heard others say, and argue, that we want something closer to classicism (maybe tweaked to allow sets of probability functions, etc.). And there are Dutch Book arguments to consider in favour of the classical setting (though I think the responses to these from the perspective of non-classical probabilities are quite convincing).

I've got the feeling the debate is at a stand-off, at least at this level of generality. I'm particularly unmoved by people swapping intuitions about the degrees of belief it is appropriate to have in borderline cases of vague predicates, and the like (NB: I don't think that Field ever argues from intuition like this, but others do). Sometimes introspection suggests intriguing things (for example, Schiffer makes the interesting suggestion that one's degree of belief in a conjunction of two vague propositions typically matches one's degree of belief in the propositions themselves). But I can't see any real dialectical force here. In my own case, I don't have robust intuitions about these cases. And if I'm to go on testimonial evidence about others' intuitions, it's just too unclear what people are reporting on for me to feel comfortable taking their word for it. I'm worried, for example, that they might just be reporting the phenomenological level of confidence they have in the proposition in question: surely that needn't coincide with one's degree of belief in the proposition (think of an exam you are highly nervous about, but are fairly certain you will pass… your behaviour may well manifest a high degree of belief, even in the absence of the phenomenological trappings of confidence). In paradigm cases of indeterminacy, it's hard to see how to do better than this.

However, I think in application to particular debates we might be able to make much more progress. Let us suppose that the topic for the day is the open future, construed, minimally, as the claim that while there are definite facts about the past and present, the future is indefinite.

Might we model this indefiniteness supervaluationally? Something like this idea (with possible futures playing the role of precisifications) is pretty widespread, perhaps orthodoxy (among friends of the open future). It’s a feature of MacFarlane’s relativistic take on the open future, for example. Even though he’s not a straightforward supervaluationist, he still has truth-value gaps, and he still treats them in a recognizably supervaluational-style way.

The link between supervaluational consequence and the revisionary behaviour of partial beliefs should now kick in. For if you know with certainty that some P is neither true nor false, we can argue that you should invest no credence at all in P (or in its negation). Likewise, in a framework of evidential probabilities, P gets no evidential probability at all (nor does its negation).
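Compressing that argument into a couple of steps (my gloss, leaning on certainty in the T-scheme that licenses the semantic ascent and descent):

```latex
% Compressed reconstruction (my gloss). Write cr for credence. Being certain
% that P is neither true nor false gives
\[
  \mathrm{cr}(\text{``}P\text{'' is true}) = 0 \;\Rightarrow\; \mathrm{cr}(P) = 0,
  \qquad
  \mathrm{cr}(\text{``}\neg P\text{'' is true}) = 0 \;\Rightarrow\; \mathrm{cr}(\neg P) = 0,
\]
% while the probability-logic constraint still demands cr(P \lor \neg P) = 1,
% since excluded middle remains a supervaluational tautology. Hence zero
% credence in P and in its negation, despite full credence in their disjunction.
```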

But think what this says in the context of the open future. It’s open which way this fair coin lands: it could be heads, it could be tails. On the “Aristotelian” truth-value conception of this openness, we can know that “the coin will land heads” is gappy. So we should have credence 0 in it, and none of our evidence supports it.

But that’s just silly. This is pretty much a paradigmatic case where we know what partial belief we have and should have in the coin landing heads: one half. And our evidence gives exactly that too. No amount of fancy footwork and messing around with the technicalities of Dempster-Shafer theory leads to a sensible story here, as far as I can see. It’s just plainly the wrong result. (One doesn’t improve matters very much by relaxing the assumptions, e.g. taking the degree of belief in a failure of bivalence in such cases to fall short of one: you can still argue for a clearly incorrect degree of belief in the heads-proposition).

Where does that leave us? Well, you might reject the logic-probability link (I think that’d be a bad idea). Or you might try to argue that supervaluational consequence isn’t revisionary in any sense (I sketched one line of thought in support of this in the paper cited). You might give up on it being indeterminate which way the coin will land—i.e. deny the open future, a reasonably popular option. My own favoured reaction, in moods when I’m feeling sympathetic to the open future, is to go for a treatment of metaphysical indeterminacy where bivalence can continue to hold—my colleague Elizabeth Barnes has been advocating such a framework for a while, and it’s taken a long time for me to come round.

All of these reactions will concede the broader point—that at least in this case, we’ve got an independent grip on what the probabilities should be, and that gives us traction against the Supervaluationist.

I think there are other cases where we can find similar grounds for rejecting the structure of partial beliefs/evidential probabilities that supervaluational logic forces upon us. One is simply a case where empirical data on folk judgements has been collected—in connection with indicative conditionals. I talk about this in some other work in progress here. Another, which I talk about in the current paper and which I'm particularly interested in, concerns cases of indeterminate survival. The considerations here are much more involved than in the indeterminacy we find in connection with the open future or conditionals. But I think the case against the sort of partial beliefs supervaluationism induces can be made out.

All these results turn on very local issues. None, so far as I can see, generalizes to the case of paradigmatic borderline cases of baldness and the rest. I think that makes the arguments even more interesting: potentially, they can serve as a kind of diagnostic, showing that this style of theory of indeterminacy is suitable over here, and that style over there. That's a useful thing to have in one's toolkit.

Emergence, Supervenience, and Indeterminacy

While Ross Cameron, Elizabeth Barnes and I were up in St Andrews a while back, Jonathan Schaffer presented one of his papers arguing for Monism: the view that the whole is prior to the parts, and the world is the one “fundamental” object.

An interesting argument along the way was that contemporary physics supports the priority of the whole, at least to the extent that the properties of some systems can't be reduced to the properties of their parts. People certainly speak that way sometimes. Here, for example, is Tim Maudlin (quoted by Schaffer):

The physical state of a complex whole cannot always be reduced to those of its parts, or to those of its parts together with their spatiotemporal relations… The result of the most intensive scientific investigations in history is a theory that contains an ineliminable holism. (1998: 56)

The sort of case that supports this is one where, for example, a quantum system featuring two particles determinately has zero total spin. The issue is that there also exist systems that duplicate the intrinsic properties of the parts of this system, but which do not have the zero-total-spin property. So the zero-total-spin property doesn't appear to be fixed by the properties of its parts.
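The standard textbook illustration of the sort of case in play (my addition; I'm assuming the familiar spin-singlet example is the kind of thing Maudlin has in mind) runs as follows.

```latex
% Standard illustration (my addition, assuming the familiar spin case is what is
% intended). The two-particle singlet and triplet states
\[
  |\psi^{-}\rangle = \frac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\bigr),
  \qquad
  |\psi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle + |{\downarrow\uparrow}\rangle\bigr)
\]
% have total spin 0 and total spin 1 respectively, yet in either state the
% reduced state of each particle taken alone is the maximally mixed I/2. The
% intrinsic states of the parts are duplicated while the zero-total-spin
% property of the whole is not shared.
```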

Thinking this through, it seemed to me that one can systematically construct such cases for "emergent" properties if one is a believer in ontic indeterminacy of whatever form (and thinks of it in the way that Elizabeth and I would urge you to). For example, suppose you have two balls, both indeterminate between red and green. Compatibly with this, it could be determinate that the fusion of the two is uniform; and it could be determinate that the fusion of the two is variegated. The distributional colour of the whole doesn't appear to be fixed by the colour-properties of the parts.

I also wasn't sure I believed in the argument, as posed. It seems to me that one can easily reductively define "uniform colour" in terms of the properties of the parts: for the whole to have uniform colour, there must be some colour such that each of the parts has that colour. (Notice that here, no irreducible colour-predications of the whole are involved.) And surely properties you can reductively define in terms of F, G, H are paradigmatically not emergent with respect to F, G and H.

What seems to be going on is not a failure of the properties of the whole to supervene on the total distribution of properties among its parts, but rather a failure of the total distribution of properties among the parts to supervene on the simple atomic facts concerning those parts.

That’s really interesting, but I don’t think it supports emergence, since I don’t see why someone who wants to believe that only simples instantiate fundamental properties should be debarred from appealing to distributions of those properties: for example, that they are not both red, and not both green (this fact will suffice to rule out the whole being uniformly coloured). Minimally, if there’s a case for emergence here, I’d like to see it spelled out.
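To see the supervenience point in miniature, here is a toy model (my own, with invented colour predicates) in which the atomic facts about each ball are the same in the two scenarios, but the distributional facts about the pair differ:

```python
# Toy model. Each scenario lists the admissible precisifications of the colours
# of the two balls; in both scenarios each ball, taken on its own, is
# indeterminate between red and green.
uniform_scenario    = [("red", "red"), ("green", "green")]   # fusion determinately uniform
variegated_scenario = [("red", "green"), ("green", "red")]   # fusion determinately variegated

def determinately(scenario, claim):
    """A claim holds determinately iff it holds on every precisification."""
    return all(claim(ball1, ball2) for ball1, ball2 in scenario)

def ball_is_indeterminate(scenario, i):
    """Ball i's colour varies across precisifications, so is indeterminate."""
    return len({prec[i] for prec in scenario}) > 1

# The reductive definition of uniform colour: some colour both balls share.
uniform = lambda ball1, ball2: ball1 == ball2

for name, scenario in [("uniform", uniform_scenario), ("variegated", variegated_scenario)]:
    print(name,
          ball_is_indeterminate(scenario, 0),   # True in both scenarios
          ball_is_indeterminate(scenario, 1),   # True in both scenarios
          determinately(scenario, uniform))     # True in one, False in the other
```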

If that's right, though, then applications of supervenience tests for emergence have to be handled with great care when we've got things like metaphysical indeterminacy flying around. And it's just not clear any more whether the appeal to the quantum case with which we started is legitimate or not.

Anyway, I’ve written up some of the thoughts on this in a little paper.