More on norms

One of the things that’s confusing about truth norms for belief is how exactly they apply to real people—people with incomplete information.

Even if we work with “one should: believe p only if p is true”, puzzles remain. After all, I guess we can each be pretty confident that we fail to satisfy the truth-norm. I’m confident that at least one of my current beliefs is untrue. I’m in preface-paradox-land, and there doesn’t seem to be any escape. It doesn’t feel like I’m criticizable in any serious way for being in this situation. What is the better option? (OK, you could say: switch to describing your doxastic state in terms of credences rather than all-or-nothing beliefs, but for now I’m playing the all-or-nothing-belief game.)

So I’m not criticizable just for having beliefs which are untrue. And I’m not criticizable for knowing that I have beliefs which are untrue. Here’s how I’d like to put it. There are lots of very specific norms, which can be schematized as “one should: believe p only if p is true”. It’s when I know, of one particular instance, that I’m violating this “external” norm, that I seem to be criticizable.

Let’s turn to the indeterminate case. Suppose that it’s indeterminate whether p, and I know this. And consider three options.

  1. Determinately, I believe p.
  2. Determinately, I believe ~p.
  3. It’s indeterminate whether I believe p.

I’m going to ignore the “suspension of belief” case. I’ll assume that in (3) we’re considering a case where the indeterminacy in my belief is such that, determinately, I believe p iff p is true.

In cases (1) and (2), for the specific p in question, I can know that it’s indeterminate whether I’m violating the external norm. But in (3), it’s determinate that I’m not violating this norm.

It’s very natural to think that I’m pro tanto criticizable if I get into situation (1) or (2) here, when (3) is open to me (that is, I’d better have some overriding reason for going this way if I’m to avoid criticism). If this is one way in which criticism gets extracted from external truth-norms, then it looks like indeterminate belief is the appropriate response to known indeterminacy.

But that isn’t by any means the only option here. We might reason as follows. What’s common ground by this point is that it’s indeterminate whether (1) or (2) violates the norm. So it’s not determinate that (1) or (2) violates the norm. So it’s not determinate that a necessary condition for my beliefs being criticizable is met. So it’s at worst indeterminate whether I’m criticizable in this situation.

I can’t immediately see anything wrong with this suggestion. But I think that, nevertheless, (3) is a better state to be in than (1) or (2). So here’s a different way of getting at this.

I’m going to now switch to talking in terms of credences *as well as* beliefs. Suppose that I believe, and am credence 1, that p is indeterminate. And suppose that I believe that p—but I’m not credence 1 in it. Suppose I’m credence 0.9 in p instead (this’d fit nicely, for example, with a “high threshold” account of the relationship between credence and all-out belief, but all I need is the idea that this sort of thing can happen, rather than any sort of general theory about what goes on here. It couldn’t happen if e.g. to believe p was to have credence 1 in p).

In this situation, I have 0.1 credence in ~p, and so 0.1 credence in p not being true (in the situation we’re envisaging, I’m credence 1 in the T-scheme that allows this semantic ascent).

I’m also going to assume that not only do I believe p, but I’m perfectly confident of this—credence 1 that I believe p. So I’m credence 0.1 in “I believe p & p is not true”—so credence 0.1 in the negation of “I believe p only if p is true”. So I’m at least credence 0.1 that I’ve violated the norm.
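The bookkeeping here can be laid out explicitly (a minimal sketch using the numbers from the example above):

```python
# Credences from the example: 0.9 in p, certainty that p is indeterminate,
# and certainty that I believe p.
cr_p = 0.9            # credence in p
cr_not_p = 1 - cr_p   # credence in ~p, and so in "p is not true" (via the T-scheme)
cr_believe_p = 1.0    # perfect confidence that I believe p

# When cr("I believe p") = 1, credence in the conjunction
# "I believe p & p is not true" just equals credence in "p is not true".
cr_violation = cr_not_p
print(cr_violation)  # at least this much credence that the truth-norm is violated
```

Nothing here turns on the particular value 0.9; any credence in p short of 1, combined with certainty that one believes p, yields some positive credence in norm-violation.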

Contrast this with the situation where it’s indeterminate whether I believe p, and p is indeterminate, in such a way that “p is true iff I believe p” comes out determinately true. If I’m fully confident of all the facts here, I will have zero credence that I’ve violated the norm.

That is, if we go for option (1) or (2) above, when you’re certain that p is indeterminate, and are less than absolutely certain of p, then it looks to me that you’ll thereby give some credence to your having violated the alethic norm (with respect to the particular p in question). If you go for (3), on the other hand, you can be certain that you haven’t violated the alethic norm.

It seems to me that faced with the choice between states which, by their own lights, may violate alethic norms, and states which, by their own lights, definitely don’t violate alethic norms, we’d be criticizable unless we opt for the second rather than the first so long as all else is equal. So I do think this line of thought supports the (anyway plausible) thought that it’s (3), rather than (1) or (2), which is the appropriate response to known indeterminate cases, given a truth-norm for belief.

(As noted in the previous post, this is all much quicker if the truth-norm were: one should: determinately (believe p only if p is true). But I do think the case for (3) would be much more powerful if we can argue for it on the basis of the pure truth-norm rather than this decorated version.)

Indeterminacy day at Leeds

This past Saturday “indeterminacy day” was held at Leeds. Or, to give it its more prosaic title: “Metaphysical Indeterminacy: the state of the art”.

There were four speakers (Katherine Hawley, Daniel Nolan, Peter van Inwagen and myself). We had quite a few people turn up from around the country to participate in the discussions—we were very pleased to see so many grad students around—thanks to everyone who came along and helped make the event such fun!

I’m going to write up a short report on what happened for The Reasoner (probably focussing more on the intellectual content than on the emergency evacuation procedures that ended in a locked courtyard). But I thought I’d take the chance to post the slides I talked to on the day. They’re available here.

I wanted to do two things with the talk. One was to give an overview of how we’ve been thinking about these things here at Leeds (on reflection, I should have been more explicit that I was drawing on previous work here—particularly joint work with Elizabeth Barnes. I’ve added some more explicit pointers in the posted slides). But I also wanted to go beyond this, to urge that one thing that we want from any “theory” of indeterminacy is some account of its cognitive role—what rational constraints (if any) believing that p is indeterminate puts on one’s attitude to p. (To fix ideas, think about chance: knowing that there’s a 0.5 chance of p (all else equal) means you should have 0.5 credence in p. That’s a pretty specific doxastic role. On the other hand, knowing that p is contingent is compatible with any old credence in p.)

Now, in the talk, I said that this can help to articulate what people are complaining about when they say that they *just don’t understand* the notion of metaphysical indeterminacy. I reckon people shouldn’t say that the notion of indeterminacy with a metaphysical/worldly source is literally unintelligible (I reckon that’s way too strong a claim to be plausible—Elizabeth and I chat about this a bit in the joint paper). But I’m sympathetic to the thought that someone can complain they don’t “fully grasp” the concept of a specific sentential operator P if they’re entirely in the dark about its cognitive role (how credences in P(q) should constrain attitudes to q). A fair enough answer to this challenge is to say that there are no constraints. For P=it is contingent whether, that seems plausible. But there’s something compelling about the thought that someone who e.g. doesn’t appreciate that something like the principal principle governs chance, doesn’t grasp the concept of chance itself.

What makes the challenge to spell out cognitive role particularly pressing for the view that Elizabeth and I set up in the joint paper is that we don’t get much of a steer from other aspects of what we say as to what the cognitive role should be. We say that metaphysical indeterminacy is a primitive/fundamental operator (compare what some would like to say about modality or tense). No help from this about cognitive role—as there might be for someone who said that indeterminacy is a special case of some wider phenomenon whose cognitive role we had a prior grip on (e.g. ignorance). Moreover, in the joint paper the logic of indeterminacy that we defend is pretty thoroughly classical. And so there’s no obvious way of appealing to features of (the putative) logic of indeterminacy to get guidance. Others with a more revisionary/committal take on the logic of indeterminacy may well be able to point to features of the logic as implicitly answering the cognitive role question (that’s a strategy that Hartry Field has been advocating recently).

Some qualifications (arising from good questions put during the workshop, esp. by Daniel Nolan).

(i) I certainly shouldn’t suggest that being able to explicitly articulate the cognitive role of a concept C is required in order to fully grasp C. Surely we can at most require that one *implement* the cognitive role (accord with whatever rules it specifies, not necessarily articulate them).

(ii) If one thinks in *general* that concepts are in part individuated by cognitive role, then we’ll have a general reason for thinking that in order for someone to come to fully grasp C, from a position where they don’t yet grasp it, they’ll need to be given resources to fix C’s cognitive role. On this view you won’t count as having attitudes to contents featuring the concept C at all, unless those contents are structured in the way prescribed by C’s cognitive role.

(iii) Even if you don’t go for a strong concept-individuation claim, you might be sympathetic to the general thought that it’s right to classify people as having greater/lesser grasp on a concept, to the extent that their deployment of the concept conforms to what’s laid down by its cognitive role.

(iv) There may be cases where we count people as fully competent with a concept, even though they don’t accord with its cognitive role, if they regard themselves as having (or can plausibly be interpreted as tacitly believing that there are) special reasons to depart from the cognitive role.

(v) If a theorist whose subject-matter is C doesn’t explicitly or implicitly convey information about the cognitive role of C, it’ll be appropriate for someone without a prior concept of C to complain that they haven’t been put in a position to become a fully competent deployer of C.

Ok, so claims (i)–(v) sound eminently suitable for counterexamples—I’d be very pleased to hear people’s thoughts about them in the comments. My thought is that when Elizabeth and I say we’re theorizing about a metaphysically primitive indeterminacy operator whose logic is pretty much entirely classical, then—unless we say some more—people are entitled to complain in the way described in (v).

One thing I’d’ve talked about a bit more (if the Fire Alarm hadn’t interrupted!) is various ways of adding bits that implicitly fix cognitive role. Think about the following rather “external” norm of belief:

  • One should: believe p only if p is true.

Now, suppose that it’s indeterminate whether p is true (as it will be when p is indeterminate, on the position put forward in the joint paper). Then if it’s determinately true that one believes p, it’ll be indeterminate whether the conditional “believe(p) only if p is true” holds (compare: if A is necessary and B is contingent, then A -> B is contingent). Likewise, determinately believing ~p in these circumstances leads to it being indeterminate whether you’ve violated the norm.
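The modal comparison in the parenthesis can be checked in a toy two-world model (a sketch; worlds here stand in for the precisifications of the indeterminacy case, and both the conditional and the biconditional come out contingent):

```python
# Toy two-world model: A necessary, B contingent.
worlds = ["w1", "w2"]
A = {"w1": True, "w2": True}    # true at every world: necessary
B = {"w1": True, "w2": False}   # true at some worlds, false at others: contingent

def status(prop):
    """Classify a proposition by its truth values across the worlds."""
    vals = {prop[w] for w in worlds}
    if vals == {True}:
        return "necessary"
    if vals == {False}:
        return "impossible"
    return "contingent"

A_implies_B = {w: (not A[w]) or B[w] for w in worlds}
A_iff_B = {w: A[w] == B[w] for w in worlds}
print(status(A_implies_B))  # contingent
print(status(A_iff_B))      # contingent
```

Since A holds at every world, both A -> B and A <-> B inherit B’s truth value world by world, and so inherit its contingency.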

As Ross pointed out in the talk, on these formulations, suspending belief and disbelief in p is a way of determinately satisfying the norms. Maybe that’s an attractive result. If we strengthened the norms to biconditionals, then (determinately) not believing doesn’t lead to any worse status. And the biconditional versions don’t look implausible as articulating some kind of doxastic ideal: what a believer aiming at the truth, and not resource-limited, should do.

If we leave things here, the conclusion is that when it’s indeterminate whether Harry is bald, it’s indeterminate whether (determinately) believing that Harry is bald violates the truth-norm on belief (and the same goes for other salient options). You can’t say all-out that someone who without hesitation *believes Harry is bald* is determinately doing something wrong. But notice: suppose someone without hesitation determinately believes that it’s wrong to believe Harry is bald. Then you equally can’t say that it’s determinately wrong to believe what they believe. And of course this iterates!

This seems pretty vertigo-inducing to me. Notice that we shouldn’t ignore the option of it being indeterminate what a subject believes. In that situation, one might *determinately* meet the truth-norm even in its biconditional version. (Compare: if A and B are both contingent, it can be necessarily true that A <-> B.)

It’s tempting to think that, determinately, what you *should* do in these circumstances is to make it the case that it’s indeterminate whether you believe p. For only then can you avoid the worry of someone criticizing you without its being determinately wrong for them to do so! But of course, this really would be something over and above what we’ve said so far.

What *would* enforce the idea that when it’s indeterminate whether p, it should be indeterminate whether you believe p, is the following formulation of the truth norm:

  • One should: determinately (believe p iff p is true).

If p is indeterminate, then determinately believing p or determinately not believing p would each violate the claim that the biconditional is *determinately* true, and on the revised formulation, one isn’t doing as one should (and it’s determinately true to say so).

So I think that given the truth-norm (or, better, the narrow-scoped version just laid down) there’s some prospect of arguing that there’s a cognitive role for indeterminacy implicit in the kind of non-revisionary framework of the joint paper. There’s work to do to figure out how to go about meeting these constraints—what sort of mental setup it takes for it to be indeterminate whether you believe p, and what to say about rational action, in particular, given this. But we’ve got a starting point.

UK philosophy rankings: RAE, gourmet, etc

There’s been quite a bit of discussion going on about how to interpret UK RAE results (see here and here). The raw output of the exercise isn’t a single figure, but a whole “vector” of information. You get a percentage of research activity that’s ranked “world class”, another percentage that’s “internationally excellent”, another for “international recognition” and lastly for “national recognition”. The figures also tell you how many people were submitted for evaluation.

There are many ways of working with the data to produce different rankings. That means that everyone should be sceptical, I think, of people picking on one particular choice of “scoring” method and declaring that to be the thing that the RAE tells us about research quality in UK departments. In particular, there’s an obvious and massive choice point: you have to decide whether to look at average ratings (ignoring the quantity of high-quality research) or whether you’re going to take numbers into account. The figures, and induced rankings, change markedly depending on which you choose to focus on.

Jo Wolff in the Guardian writes:

“Really there seem to be two sensible ways of compiling rankings given the information we have so far. One is GPA, or perhaps a weighted version with extra weight to 4* outputs. This gives you an account of the average quality of the work submitted by the faculty (assuming that you trust the judgement of the panels). And the other is to multiply GPA by the number of people submitted, which will give a better prediction of eventual cash flows, and also an account of total quality.”

These aren’t the only natural ways of extracting rankings (see Brian Weatherson’s interesting discussion for several others that seem equally sensible). But they are representatives of the two obvious methods: looking at some kind of ranking by pure percentage quality (i.e. ignoring the quantity of research); or looking at some kind of ranking by the quantity of excellent research.
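Both methods are easy to make precise. Here’s a sketch with invented quality profiles (the department names and numbers below are placeholders, not RAE figures):

```python
# Two hypothetical departments: (staff submitted, fraction at 4*, 3*, 2*, 1*).
profiles = {
    "Dept A": (12, 0.25, 0.45, 0.25, 0.05),  # small, higher average quality
    "Dept B": (40, 0.20, 0.40, 0.30, 0.10),  # large, lower average quality
}

def gpa(p4, p3, p2, p1):
    """Grade-point average of the quality profile, on the 0-4 scale."""
    return 4 * p4 + 3 * p3 + 2 * p2 + 1 * p1

# Ranking 1: average quality, ignoring size.
by_gpa = sorted(profiles, key=lambda d: gpa(*profiles[d][1:]), reverse=True)

# Ranking 2: "research power" = GPA multiplied by numbers submitted.
by_power = sorted(profiles, key=lambda d: profiles[d][0] * gpa(*profiles[d][1:]),
                  reverse=True)

print(by_gpa)    # the small, high-average department comes first
print(by_power)  # the large department overtakes it
```

The point of the toy example is just the rank reversal: the very same data yields opposite orderings depending on whether size counts.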

On GPA, the ranking goes like this:

University College London (3.15)
University of St Andrews (3.15)
King’s College London (3.05)
University of Reading (3.05)
University of Sheffield (3.05)
University of Cambridge (HPS) (2.95)
London School of Economics (2.95)
University of Oxford (2.95)
University of Stirling (2.95)
University of Bristol (2.9)
University of Essex (2.9)
Birkbeck College (2.85)
University of Cambridge (Philosophy) (2.85)
University of Leeds (2.8)
Middlesex University (2.8)
University of Nottingham (2.8)
University of Edinburgh (2.8)
University of Sussex (2.7)
University of Warwick (2.65)
University of York (2.65)

On what’s sometimes called “research power” (GPA multiplied by numbers), the ranking goes like this:

University of Oxford (233.0205)
University of Cambridge (HPS) (103.25)
University of Leeds (75.6)
King’s College London (64.1415)
University of St Andrews (60.417)
University of Warwick (58.3)
University of Sheffield (57.95)
University of Cambridge (Philosophy) (54.15)
University College London (51.282)
University of Edinburgh (47.6)
University of Durham (46.8)
University of Bristol (46.4)
University of Nottingham (43.4)
Birkbeck College (41.325)
London School of Economics (39.9725)
University of York (39.75)
University of Reading (36.6)
University of Manchester (33.75)
University of Glasgow (32.5)
University of Stirling (32.45)

If you’re in UCL or Reading, clearly you prefer looking at the first table; and if you’re in Oxford or Leeds, you prefer looking at the second!

As I say, there are many many more tables to be constructed, if you like that kind of thing. And in the end, all this is about perception and coffee-time chat – the real practical impact of these stats will be financial (which as Jo notes, is likely to be based on some variant of the second table).

Here’s one exercise that I find quite illuminating. Rather than trying to use the tables to argue about who is better than whom, we could try to use the tables to figure out what reputational surveys (of which the obvious exemplar in our field is the gourmet report) plausibly track. Here’s the UK gourmet rankings for 2006, ordered by local mean (for direct comparison with the locally-determined RAE ratings):

1. Oxford (4.7)
2=. Cambridge (4)
2=. St Andrews (4)
4=. Sheffield (3.9)
4=. UCL (3.9)
6. Birkbeck (3.7)
7. KCL (3.6)
8=. Leeds (3.3)
8=. LSE (3.3)
10. Bristol (3.2)
11=. Nottingham (3.1)
11=. Reading (3.1)
13=. Edinburgh (3.0)
13=. Warwick (3.0)

Of course, the 2006 gourmet rankings are a year adrift of the census date for the RAE (which was late 2007). So we’re not dealing with exactly the same departments. But it seems close enough for the comparison to the RAE to be interesting.

The question is: is there a chart extractable from the RAE that looks like the gourmet rankings? If so, that might give a clue as to what kinds of things a reputational survey such as the gourmet report tracks.

Neither of the suggestions we’ve seen so far seems a very good candidate for this job, whatever their other merits. The ranking by GPA totally scrambles the order (of course, one convinced that GPA is the one true analysis of “betterness” among departments could use this to criticize the influence that the gourmet rankings have in the field—but my purposes are *not* to engage with that question, but just to look at what the gourmet ranking might track). The ranking by numbers times GPA gives a slightly better match—but still we have some very noticeable disparities. Leeds and KCL shoot up relative to the gourmet rankings, Sheffield and UCL shoot down.

Here’s a table that’s much closer. You get this by multiplying the percentage of research activity rated in the top, “world class”, rank by numbers. As a story about how reputations get judged, this has some plausibility: how much top-notch stuff is going on at place X? Here’s the table we get:

1. University of Oxford (27.6465)
2. University of Cambridge (HPS) (12.25)
3. University of St Andrews (7.672)
4. King’s College London (7.3605)
5. University College London (7.326)
6. University of Sheffield (6.65)
7. University of Leeds (5.4)
8. University of Bristol (4.8)
9. University of Cambridge (Philosophy) (4.75)
10. London School of Economics and Political Science (4.7425)
11. University of Nottingham (3.875)
12. Birkbeck College (3.625)
13. University of Reading (3.6)
14. University of Edinburgh (3.4)
15. University of Warwick (3.3)

This is actually surprisingly close, considering the different methodologies of the surveys! Aside from Birkbeck and KCL, every department is within a couple of slots of its gourmet ranking (KCL doing worse in the gourmet ranking than on this version of the RAE ranking; Birkbeck the opposite). Birkbeck is the only really big disparity between the tables—but I don’t know any way of fixing for that. And of course we couldn’t ever expect a perfect fit: the surveys are asking different things, took place a year apart, etc.

Cambridge is fractured into HPS and Philosophy in the RAE and not in the gourmet rankings, but we can’t really do anything about that either. HPS in Cambridge does have a large portion of historians of science who probably don’t register for gourmet report purposes, but also—of course—several philosophers of science and philosophers working in related areas. So there’s no simple way to determine a unified “Cambridge philosophy” ranking from the RAE.

As already emphasized, this doesn’t mean we should start declaring that high ranking on either table is the one true ranking of research quality—who ever thought that there’s a linear ordering to capture in the first place? I’m sure there will be those who point to the merits of the GPA table, and I’d certainly like to stick up for a ranking which takes quantity of world class *and* internationally excellent research into account. If we start to take that into account, Leeds performs excellently-to-stunningly (again see the Weatherson tables for some illustrations).

But the only claim I want to make here—and tentatively at that—is that insofar as you’re looking for some account of what explains research reputation, quantity of world class research activity is the most plausible candidate.

[Update: I thought I’d have a try at evaluating in a more objective way the judgements about which tables are “closer” to the gourmet rankings than which others. The method was this: assign values to each dept on each ranking (in the case of ties for ranks n–m, I assigned the value (n+m)/2). I then calculated the average difference between the values for each department on the gourmet report, compared to the values for those departments on the tables above.
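The method can be sketched in code (the mini-rankings below are invented, purely to illustrate the tie rule and the averaging):

```python
# Sketch of the comparison method: tie-averaged rank values, then mean displacement.
def rank_values(ordered_groups):
    """Map each dept to a rank value; depts tied for ranks n..m all get (n+m)/2."""
    values, pos = {}, 1
    for group in ordered_groups:          # each group is a list of tied depts
        n, m = pos, pos + len(group) - 1
        for dept in group:
            values[dept] = (n + m) / 2
        pos = m + 1
    return values

def avg_displacement(ranking_a, ranking_b):
    """Average absolute difference in rank value over depts common to both."""
    common = ranking_a.keys() & ranking_b.keys()
    return sum(abs(ranking_a[d] - ranking_b[d]) for d in common) / len(common)

# Hypothetical four-department rankings, with one tie in the first.
gourmet = rank_values([["Oxford"], ["Cambridge", "St Andrews"], ["Sheffield"]])
rae = rank_values([["Cambridge"], ["Oxford"], ["Sheffield"], ["St Andrews"]])
print(avg_displacement(gourmet, rae))
```

A lower average displacement means the two tables order the departments more similarly; the tie rule keeps tied departments from being penalized for an arbitrary ordering among themselves.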

Now above I claimed that the GPA figures were “totally scrambled” relative to the gourmet rankings. Sure enough, the average difference between positions was just over 3.8 by my workings. But I also said that the GPA*numbers did better (albeit still pretty badly). But by my workings, the average difference was pretty much just the same—just under 3.8. Also, in each case, six departments have positions on the respective RAE rankings that differ by more than four places from their positions on the gourmet rankings. So both are looking pretty much equally “scrambled” relative to the gourmet rankings.

What about the “WC*numbers” ranking that I suggested approximated the gourmet rankings quite nicely? Well, the average difference here was 1.5; and only one department shifts by more than 4 places. So this does seem to bear out the claim that that parsing of the RAE data is a much closer match to the reputational survey than either of the others.

But does this justify my final (tentative) claim—that the gourmet rankings may be tracking what this last RAE ranking represents, i.e. quantity of top-notch research? Well, we’d really have to do similar calculations for lots of other options to explore this fully. I’d like to see, for example, how the above compares to the kind of eight-way metascore that Brian Weatherson calls his “One True” RAE ranking. From what I can tell, its average divergence from the gourmet is not that much worse than the “quantity of world class” ranking, but it’s hard to tell exactly without redoing all of Brian’s calculations. I did look at a much cruder metascore (just working with the GPA and GPA*numbers given above): this still does much better at approximating the gourmet rankings than either of the constituent rankings on their own. The average difference is around 2.4, with only three departments more than 4 places away from their gourmet rankings. (For those who are interested, the crude metascore gives exactly the same rankings 1-7 as Brian’s: i.e. St Andrews through Leeds; but diverges slightly thereafter.)]

[Update 2: I just found an interesting blogpost by Chris Bertram on Crooked Timber from 2004, attempting a similar comparison between gourmet rankings and the (then) RAE. There certainly wasn’t the sort of flexibility available at that point, as the RAE results at the time were enormously cruder. The author and some of the participants from that post are also contributing to the current discussion on Brian Leiter’s blog. ]

Conditionals in Budapest

This event looks fabulous—over a week on conditionals in the company of Stanley, Loewer, Edgington, Hajek, Kratzer, and Stalnaker.

I’m actually due to be in Australia during July, but if I were in Europe, I’d be there.

What I did on my holidays

Sorry for the no-blog-updating of recent times. I’ve just got back to work after a year of research leave, and with teaching and administrating, and lots of fun talks and reading groups to go to, I’ve been pretty overwhelmed. For the time being, here’s a roundup of stuff I’ve been thinking about recently, plus some updates:

My joint paper with Elizabeth Barnes defending (metaphysically) vague parthood is coming out in PPQ.

Erkenntnis have accepted a paper on conditionals and indeterminacy for a special issue on conditionals and ranking functions. Special thanks to Franz Huber et al., who both organized an excellent conference in Konstanz on these topics and are co-editing the special volume in which this is to appear.

A paper on conditional excluded middle (which at times has been called “There is no might argument against conditional excluded middle” but is now called “Defending conditional excluded middle”) is now forthcoming in Nous. I’m bad at naming things—one of the main papers I cite is Stalnaker’s “A defense of conditional excluded middle”. But I guess no one’s going to confuse Bob Stalnaker’s paper for mine, though it might have been best to avoid the similarity. This isn’t the first time I’ve ended up with potentially confusing titles.

I’ve also got a few papers in draft—comments very welcome!

A paper on personal identity and indeterminacy. I give a (doubly qualified) defence of Bernard Williams’ scepticism about whether we can “comprehend” alleged indeterminacy in cases of our own survival. The most significant qualification is that I think this only has bite if we accept a “rejectionist” account of the cognitive role of indeterminacy. But that rejectionist take is something that is at least arguably required by anyone who thinks indeterminacy leads to truth value gaps, or to the rejection of appropriate instances of the law of excluded middle. And that covers a great range of cases. This issue of the cognitive role of indeterminacy is also in play in the conditionals and indeterminacy paper mentioned above, and also appears in:

A paper on the claim that future contingents are indeterminate. You get into *big* trouble here if you go for one of those rejectionist takes on indeterminacy (as, for example, a standard interpretation says that Aristotle did). I consider a fictionalist response for the friend of the open future: rather than having beliefs about future contingents, we should instead “opine” about them to various degrees—where opining is construed as a kind of fictional belief. I give some reasons, though, for scepticism about whether this approach will ultimately work.

Finally, a paper on the idea that survival is an “intrinsic matter” (an idea that is in play, in particular, in discussion of cases of personal fission). The most obvious ways of formulating this are unsustainable, for reasons to do with maximality. I define a notion of part-intrinsicality that captures (what I think is) a maximality-compatible formulation of what’s right about the idea that being a person, or surviving as a person, is an “intrinsic matter”. I then use it to derive some perhaps uncomfortable results about how to respond to the “problem of the many”.

I’d like to particularly thank the UK Arts and Humanities Research Council for funding my year of research leave in 2007-8 in which much of the above work was written.

I’m going to be giving talks based on the indeterminate survival paper in Oxford and Cambridge in a couple of weeks time. I had a fun time in Manchester last week giving the same talk. I’ll also be presenting the part-intrinsic survival paper in Cambridge to the wonderfully named “Serious Metaphysics Group”.

Counting delineations

I presented my paper on indeterminacy and conditionals in Konstanz a few days ago. The basic question that paper poses is: if we are highly confident that a conditional is indeterminate, what sorts of confidence in the conditional itself are open to us?

Now, one treatment I’ve been interested in for a while is “degree supervaluationism”. The idea, from the point of view of the semantics, is to replace appeal to a single intended interpretation (with truth=truth at that interpretation) or set of “intended interpretations” (with truth=truth at all of them) with a measure over the set of interpretations (with truth to degree d = being true at exactly measure d of the interpretations). A natural suggestion, given that setting, is that if you know (/are certain) S is true to measure d, then your confidence in S should be d.

I’d been thinking of degree-supervaluationism in this sense, and the more standard set-of-intended-interpretations supervaluationism, as distinct options. But (thanks to Tim Williamson) I realize now that there may be an intermediate option.

Suppose that S = “the number 6 is bleh”. And we know that linguistic conventions settle that numbers <5 are bleh, and numbers >7 are not bleh. The available delineations of “bleh”, among the integers, are ones where the first non-bleh number is 5, 6, 7 or 8. These will count as the “intended interpretations” for a standard supervaluational treatment, so “6 is bleh” will be indeterminate—in this context, neither true nor false.

I’ve discussed in the past several things we could say about rational confidence in this supervaluational setting. But one (descriptive) option I haven’t thought much about is to say that you should proportion your confidence to the number of delineations on which “6 is bleh” comes out true. In the present case, our confidence that 6 is bleh should be 0.5, our confidence that 5 is bleh should come out 0.75, and our confidence that 7 is bleh should come out 0.25.
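On this rule, the credences fall out by bare counting. A minimal sketch, treating the four delineations as on a par:

```python
# Count-the-delineations credences for the "bleh" example.
# Each delineation fixes the first non-bleh integer: 5, 6, 7 or 8.
delineations = [5, 6, 7, 8]

def credence_bleh(n):
    """Share of delineations on which 'n is bleh' (i.e. n below the cutoff) is true."""
    true_on = [cutoff for cutoff in delineations if n < cutoff]
    return len(true_on) / len(delineations)

print(credence_bleh(6))  # 0.5  — true on the cutoff-7 and cutoff-8 delineations
print(credence_bleh(5))  # 0.75 — true on three of the four delineations
print(credence_bleh(7))  # 0.25 — true only on the cutoff-8 delineation
```

This makes vivid how the proposal differs from degree-supervaluationism proper: here the numbers are forced by the count, with no room for weighting one delineation more heavily than another.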

Notice that this *isn’t* the same as degree-supervaluationism. For that just required some measure or other over the space of interpretations. And even if that measure is non-zero only on interpretations which place the first non-bleh number in 5–8, there are many options available. E.g. we might have a measure that assigns 0.9 to the interpretation which makes 5 the first non-bleh number, and distributes 0.0333… to each of the others. In other words, the degree-supervaluationist needn’t think that the measure is a measure *of the number of delineations*. I usually think of it (in the finite case), intuitively, as a measure of the “degree of intendedness” of each interpretation. In a sense, the degree-supervaluationists I was thinking of conceive of the measure as telling us to what extent usage and eligibility and other subvening facts favour one interpretation or another. But the kind of supervaluationists we’re now considering won’t buy into that at all.

I should mention that even if, descriptively, it’s clear what the proposal here is, it’s less clear how the count-the-delineations supervaluationists would go about justifying the rule for assigning credences that I’m suggesting for them. Maybe the idea is that we should seek some kind of compromise between the credences that would be rational if we took D to be the unique intended interpretation, for each D in our set of “intended interpretations” (see this really interesting discussion of compromise for a model of what we might say—the bits at the end on mushy credence are particularly relevant). And there’ll be some oddities that this kind of theorist will have to adopt—e.g. for a range of cases, they’ll be assigning significant credence to sentences of the form “S and S isn’t true”. I find that odd, but I don’t think it blows the proposal out of the water.

Where might this be useful? Well, suppose you believe in B-theoretic branching time, and are going to “supervaluate” over the various future-branches (so “there will be a sea-battle” will have a truth-value gap, since it is true on some but not all). (This approach originates with Thomason, and is still present, with tweaks, in recent relativistic semantics for branching time). “Branches” play the role of “interpretations”, in this setting. I’ve argued in previous work that this kind of indeterminacy about branching futures leads to trouble on certain natural “rejectionist” readings of what our attitudes to known indeterminate p should be. But a count-the-branches proposal seems pretty promising here. The idea is that we should proportion our credences in p to the *number* of branches on which p is true.

Of course, there are complicated issues here. Maybe there are just two qualitative possibilities for the future, R and S. We know R has a 2/3 chance of obtaining, and S a 1/3 chance of obtaining. In the B-theoretic branching setting, an R-branch will exist, and an S-branch will exist. Now, one model of the metaphysics at this point is that we don’t allow qualitatively duplicate future branches: so there are just two future-branches in existence, the R one and the S one. On a count-the-branches recipe, we’ll get the result that we should have 1/2 credence that R will obtain. But that conflicts with what the instruction to proportion our credences to the known chances would give us. Maybe R is primitively attached to a “weight” of 2/3—but our count-the-branches recipe didn’t say anything about that.

An alternative is that we multiply indiscernible futures. Maybe there are two, indiscernible R futures, and only one S future. Then apportioning the credences in the way mentioned won’t get us into trouble. And in general, if we think that whenever the chance (at moment m) that p is k, the proportion of p-futures among all futures is k, then we’ll have a recipe that coheres nicely with the principal principle.
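
The multiplication move can be made concrete with a toy model (my own sketch; the branch labels are invented for illustration):

```python
from fractions import Fraction

# Two qualitatively indiscernible R-futures and one S-future, so that
# branch-counting recovers the known chances: 2/3 for R, 1/3 for S.
branches = ["R", "R", "S"]

def credence(holds):
    """Proportion of branches on which the given proposition holds."""
    return Fraction(sum(1 for b in branches if holds(b)), len(branches))

print(credence(lambda b: b == "R"))  # 2/3, matching the chance of R
print(credence(lambda b: b == "S"))  # 1/3, matching the chance of S
```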

Let me be clear that I’m not suggesting that we identify chances with numbers-of-branches. Nor am I suggesting that we’ve got some easy route here for justifying the principal principle. The only thing I want to say is that *if* we’ve got a certain match between chances and numbers of future branches, then two recipes for assigning credences won’t conflict.

(I emphasized earlier that count-the-precisifications supervaluationism had less flexibility than degree-supervaluationism where the relevant measure was unconstrained by counting considerations. In a sense, what the above little discussion highlights is that when we move from “interpretations” to “branches” as the locus of supervaluational indeterminacy, this difference in flexibility evaporates. For in the case where that role is played by actually existing futures, then there’s at least the possibility of multiplying qualitatively indiscernible futures. That sort of maneuver has little place in the original, intended-interpretations settings, since presumably we’ve got an independent fix on what the interpretations are, and we can’t simply postulate that the world gives us intended interpretations in proportions that exactly match the credences we independently want to assign to the cases.)

OPP feed.

Wo has made public an automated feed for online papers in philosophy. This is a wonderfully useful resource… I’m gonna be using it lots.

I wonder whether other branches of academia have similar resources?

HT: Online papers in philosophy, Wo’s weblog

Branching worlds

I’ve recently discovered some really interesting papers on how to think about belief in a future with branching time. Folks are interested in branching time as it (putatively) emerges out of “decoherence” in the Everett interpretation of standard quantum mechanics.

The first paper linked to above is forthcoming in BJPS, by Simon Saunders and David Wallace. In it, they argue for a certain kind of parallel between the semantics for personal fission cases and the semantics most charitably applied to language users in branching time, and argue that this sheds light on the way that beliefs should behave.

Now, lots of clever people are obviously thinking about this, and I haven’t absorbed all the discussion yet. But since it’s really cool stuff, and since I’ve been thinking about related material recently (charity-based metasemantics, fission cases, semantics in branching time) I thought I’d sit down and figure out how things look from my point of view.

I’m sceptical, in fact, whether personal fission itself (and associated de se uncertainty about who one will be) will really help us out here in the way that Saunders and Wallace think. Set aside for now the question of whether faced with a fission case you should feel uncertain which fission-product you will end up as (for discussion of that question, on the assumption that it’s indeterminate which of the Lewisian continuing persons is me, see the indeterminate survival paper I just posted up). But suppose that we do get some sense in which, when you’re about to fission, you have de se uncertainty about where you’ll be, even granted full knowledge of the de dicto facts.

The Saunders-Wallace idea is to try to generalize this de se ignorance as an explanation of the ignorance we’d have if we were placed in a branching universe, and knew what was to happen on every branch. We’d know all the de dicto truths about multiple futures—and we would literally be about to undergo fission, since I’d be causally related in the right kind of ways to multiple person stages in the different futures. So—they claim—ignorance of who I am maps onto ignorance of what I’m about to see next (whether I’m about to see the stuff in the left branch, or in the right). And that explains how we can get ignorance in a branching world, and so lays the groundwork for explaining how we can get a genuine notion of uncertainty/probability/degree of belief off the ground.

I’m a bit worried about the generality of the purported explanation. The basic worry is that to get a complete story about beliefs in branching universes, we’re going to need to justify degrees of belief in matters that happen, if at all, long after we would go out of existence. And so it just doesn’t seem likely that we’re going to get a complete story about uncertainty from consideration of uncertainty about which branch I myself am located within.

To dramatize, consider an instantaneous, omniscient agent. She knows all the de dicto truths about the world (in every future branch) and also exactly where she is located—so no de se ignorance either. But still, this agent might care about other things, and have a certain degree of belief as to whether, e.g., the sea-battle will happen in the future. The kind of degree of belief she has (and any associated “ignorance”) can’t, I think, be a matter of de se ignorance. And I think, for events that happen if at all in the far future, we’re relevantly like the instantaneous omniscient agent.

What else can we do? Well—very speculatively—I think there’s some prospect for using the sort of charity-based considerations David Wallace has pointed to in the literature for getting a direct, epistemic account of why we should adopt this or that degree of belief in borderline cases. The idea would be that we *minimize inaccuracy of our beliefs* by holding true sentences to exactly the right degrees.

A first caveat: this hangs on having the *right* kind of semantic theory in the background. A Thomason-style supervaluationist semantics for the branching future just won’t cut it, nor will MacFarlane-style relativistic tweaks. I think one way of generalizing the “multiple utterances” idea of Saunders and Wallace holds out some prospect of doing better—but best of all would be a degree-theoretic semantics.

A second caveat: what I’ve got (if anything) is epistemic reason for adopting certain kinds of graded attitude. It’s not clear to me that we have to think of these graded attitudes as a kind of uncertainty. And it’s not so clear why expected utility, as calculated from these attitudes, should be a guide to action. On the other hand, I don’t see clearly the argument that they *don’t* or *shouldn’t* have this pragmatic significance.

So I’ve written up a little note on some of these issues—the treatment of fission that Saunders-Wallace use, the worries about limitations to the de se defence, and some of the ideas about accuracy-based defences of graded beliefs in a branching world. It’s very drafty (far more so than anything I usually put up as work in progress). To some extent it seems like a big blog post, so I thought I’d link to it from here in that spirit. Comments very welcome!

Indeterminate survival: in draft

So, finally, I’ve got another draft prepared. This is a paper focussing on Bernard Williams’ concerns about how to think and feel about indeterminacy in questions of one’s own survival.

Suppose that you know there’s an individual in the future who’s going to get harmed. Should you invest a small amount of money to alleviate the harm? Should you feel anxious about the harm?

Well, obviously if you care about the guy (or just have a modicum of humanity) you probably should. But if it was *you* that was going to suffer the harm, there’d be a particularly distinctive frisson. From a prudential point of view, you’d be compelled to invest minor funds for great benefit. And you really should have that distinctive first-personal phenomenology associated with anxiety on one’s own behalf. Both of these de se attitudes seem important features of our mental life and evaluations.

The puzzle I take from Williams is: are the distinctively first-personal feelings and expectations appropriate in a case where you know that it’s indeterminate whether you survive as the individual who’s going to suffer?

Williams thought that by reflecting on such questions, we could get an argument against accounts of personal identity that land us with indeterminate cases of survival. I’d like to play the case in a different direction. It seems to me pretty unavoidable that we’ll end up favouring accounts of personal identity that allow for indeterminate cases. So if, when you combine such cases with this or that theory of indeterminacy, you end up saying silly things, I want to take that as a blow to that account of indeterminacy.

It’s not knock-down (what is in philosophy?) but I do think that we can get leverage in this way against rejectionist treatments of indeterminacy, at least as applied to these kinds of cases. Rejectionist treatments include those of folks who think that the characteristic attitude to borderline cases is primarily a rejection of the law of excluded middle; and (probably) those of folks who think that in such cases we should reject bivalence, even if LEM itself is retained.

In any case, this is definitely something I’m looking for feedback/comments on (particularly on the material on how to think about rational constraints on emotions, which is rather new territory for me). So thoughts very welcome!

Primitivism about indeterminacy: a worry

I’m quite tempted by the view that “it is indeterminate whether” might be one of those fundamental, brute bits of machinery that go into constructing the world. Imagine, for example, you’re tempted by the thought that in a strong sense the future is “open”, or “unfixed”. Now, maybe one could parlay that into something epistemic (lack of knowledge of what the future is to be), or semantic (indecision over which of the existing branching futures is “the future”) or maybe mere non-existence of the future would capture some of this unfixity thought. But I doubt it. (For discussion of what the openness of the future looks like from this perspective, see Ross and Elizabeth’s forthcoming Phil Studies piece).

The open future is far from the only case you might consider—I go through a range of possible arenas in which one might be friendly to a distinctively metaphysical kind of indeterminacy in this paper—and I think treating “indeterminacy” as a perfectly natural bit of kit is an attractive way to develop that. And, if you’re interested in some further elaboration and defence of this primitivist conception see this piece by Elizabeth and myself—and see also Dave Barnett’s rather different take on a similar idea in a forthcoming piece in AJP (watch out for the terminological clashes–Barnett wants to contrast his view with that of “indeterminists”. I think this is just a different way of deploying the terminology.)

I think everyone should pay more attention to primitivism. It’s a kind of “null” response to the request for an account of indeterminacy—and it’s always interesting to see why the null response is unavailable. I think we’ll learn a lot about what compulsory questions a theory of indeterminacy must answer, from seeing what goes wrong when the theory of indeterminacy is as minimal as you can get.

But here I want to try to formulate a certain kind of objection to primitivism about indeterminacy. Something like this has been floating around in the literature—and in conversations!—for a while (Williamson and Field, in particular, are obvious sources for it). I also think the objection, if properly formulated, would get at something important that lies behind the reaction of people who claim *just not to understand* what a metaphysical conception of indeterminacy would be. (If people know of references where this kind of idea is dealt with explicitly, then I’d be really glad to know about them).

The starting assumption is: saying “it’s an indeterminate case” is a legitimate answer to the query “is that thing red?”. Contrast the following. If someone asks “is that thing red?” and I say “it’s contingent whether it’s red”, then I haven’t made a legitimate conversational move. The information I’ve given is simply irrelevant to its actual redness.

So it’s a datum that indeterminacy-answers are in some way relevant to redness (or whatever) questions. And it’s not just that “it is indeterminate whether it is red” has “it is red” buried within it – so does the contingency “answer”, but it is patently irrelevant.

So what sort of relevance does it have? Here’s a brief survey of some answers:

(1) Epistemicist. “It’s indeterminate whether p” has the sort of relevance that answering “I don’t know whether p” has. Obviously it’s not directly relevant to the question of whether p, but at least expresses the inability to give a definitive answer.

(2) Rejectionist (like truth-value gap-ers, inc. certain supervaluationists, and LEM-deniers like Field, intuitionists). Answering “it’s indeterminate” communicates information which, if accepted, should lead you to reject both p, and not-p. So it’s clearly relevant, since it tells the inquirer what their attitudes to p itself should be.

(3) Degree theorist (whether degree-supervaluationist like Lewis, Edgington, or degree-functional person like Smith, Machina, etc). Answering “it’s indeterminate” communicates something like the information that p is half-true. And, at least on suitable elaborations of degree theory, we’ll then know how to shape our credences in p itself: we should have credence 0.5 in p if we have credence 1 that p is half true.

(4) Clarification request. (maybe some contextualists?) “it’s indeterminate that p” conveys that somehow the question is ill-posed, or inappropriate. It’s a way of responding whereby we refuse to answer the question as posed, but invite a reformulation. So we’re asking the person who asked “is it red?” to refine their question to something like “is it scarlet?” or “is it reddish?” or “is it at least not blue?” or “does it have wavelength less than such-and-such?”.

(For a while, I think, it was assumed that every serious account of indeterminacy would say that if p was indeterminate, one couldn’t know p (think of parallel discussion of “minimal” conceptions of vagueness—see Patrick Greenough’s Mind paper). If that was right then (1) would be available to everybody. But I don’t think that that’s at all obvious — and in particular, I don’t think it’s obvious the primitivist would endorse it, and if they did, what grounds they would have for saying so).

There are two readings of the challenge we should pull apart. One is purely descriptive. What kind of relevance does indeterminacy have, on the primitivist view? The second is justificatory: why does it have that relevance? Both are relevant here, but the first is the most important. Consider the parallel case of chance. There we know what, descriptively, we want the relevance of “there’s a 20% chance that p” to be: someone learning this information should, ceteris paribus, fix their credence in p to 0.2. And there’s a real question about whether a metaphysical primitive account of chance can justify that story (that’s Lewis’s objection to a putative primitivist treatment of chance facts).

The justification challenge is important, and how exactly to formulate a reasonable challenge here will be a controversial matter. E.g. maybe route (4), above, might appeal to the primitivist. Fine—but why is that response the thing that indeterminacy-information should prompt? I can see the outlines of a story if e.g. we were contextualists. But I don’t see what the primitivist should say.

But the more pressing concern right now is that for the primitivist about indeterminacy, we don’t as yet have a helpful answer to the descriptive question. So we’re not even yet in a position to start engaging with the justificatory project. This is what I see as the source of some dissatisfaction with primitivism – the sense that as an account it somehow leaves something important unexplained. Until the theorist has told me something more, I’m at a loss about what to do with the information that p is indeterminate.

Furthermore, at least in certain applications, one’s options on the descriptive question are constrained. Suppose, for example, that you want to say that the future is indeterminate. But you want to allow that one can rationally have different credences for different future events. So I can be 50/50 on whether the sea battle is going to happen tomorrow, and almost certain I’m not about to quantum tunnel through the floor. Clearly, then, nothing like (2) or (3) is going on, where one can read off strong constraints on strength of belief in p from the information that p is indeterminate. (1) doesn’t look like a terribly good model either—especially if you think we can sometimes have knowledge of future facts.

So if you think that the future is primitively unfixed, indeterminate, etc—and friends of mine do—I think (a) you owe a response to the descriptive challenge; (b) then we can start asking about possible justifications for what you say; (c) your choices for (a) are very constrained.

I want to finish up by addressing one response to the kind of questions I’ve been pressing. I ask: what is the relevance of answering “it’s indeterminate” to first-order questions? How should I alter my beliefs in receipt of the information, what does it tell me about the world or the epistemic state of my informant?

You might be tempted to say that your informant communicates, minimally, that it’s at best indeterminate whether she knows that p. Or you might try claiming that in such circumstances it’s indeterminate whether you *should* believe p (i.e. there’s no fact of the matter as to how you should shape your credences on the question of whether p). Arguably, you can derive these from the determinate truth of certain principles (determinacy, truth as the norm of belief, etc) plus a bit of logic. Now, that sort of thing sounds like progress at first glance – even if it doesn’t lay down a recipe for shaping my beliefs, it does sound like it says something relevant to the question of what to do with the information. But I’m not sure that it really helps. After all, we could say exactly parallel things with the “contingency answer” to the redness question with which we began. Saying “it’s contingent that p” does entail that it’s at best contingent whether one knows that p, and at best contingent whether one should believe p. But that obviously doesn’t help vindicate contingency-answers to questions of whether p. So it seems that the kind of indeterminacy-involving elaborations just given, while they may be *true*, don’t really say all that much.