Monthly Archives: October 2007

London Logic and Metaphysics Forum (x-posted from MV)

If you’re in London on a Tuesday evening, what better to do than to take in a talk by a young philosopher on logic or metaphysics?

Spotting this gap in the tourist offerings, the clever folks in the capital have set up the London Logic and Metaphysics Forum. It looks an exciting programme, though I have my doubts about the joker on 11 Dec…

Tues 30 Oct: David Liggins (Manchester)

Tues 13 Nov: Oystein Linnebo (Bristol & IP)
Compositionality and Frege’s Context Principle

Tues 27 Nov: Ofra Magidor (Oxford)
Epistemicism about vagueness and meta-linguistic safety

Tues 11 Dec: Robbie Williams (Leeds)
Is survival intrinsic?

8 Jan: Stephan Leuenberger (Leeds)

22 Jan: Antony Eagle (Oxford)

5 Feb: Owen Greenhall (Oslo & IP)

4 Mar: Guy Longworth (Warwick)

Full details can be found here.

In Rutgers

As Brian Weatherson reports here, there’s a metaphysics/phil physics conference at Rutgers this weekend (26-28th). I’m in Rutgers for the week, and am responding to one of the papers at the event. I’m looking forward to what looks like a really interesting conference.

Tonight (24th) I’m giving a talk to a phil language group at Rutgers. I’m going to be presenting some material on modal accounts of indicative conditionals (à la Stalnaker, Weatherson, Nolan). This piece has evolved quite a bit during the last few weeks as I’ve been working on it. A bit unexpectedly, I’ve ended up with an argument for Weatherson’s views.

Briefly, the idea is to look at what mileage we can get out of paradigmatic instances of the identification of the probability of a conditional “If A, B” with the conditional probability of B on A (CCCP). We know that in general that identification is highly problematic, thanks to notorious impossibility results from David Lewis and, more recently, Ned Hall and Al Hajek. But I think it’s interesting to divide the issue into two halves:
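For readers who haven’t seen the impossibility results, here is the shape of Lewis’s original triviality argument, reconstructed from memory. It assumes CCCP holds not just for P but also for P conditioned on B and on ¬B, and that P(A∧B) and P(A∧¬B) are both positive:

```latex
\begin{align*}
P(A \to B \mid B)      &= P(B \mid A \wedge B) = 1 \\
P(A \to B \mid \neg B) &= P(B \mid A \wedge \neg B) = 0 \\
P(A \to B) &= P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B) = P(B)
\end{align*}
```

Since CCCP also gives P(A→B) = P(B|A), we’d have P(B|A) = P(B) for arbitrary A and B — i.e. every pair of propositions probabilistically independent, which is absurd.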

First, what would a modal account of indicative conditionals that obeys (a handful of paradigmatic) instances of CCCP have to look like? I think there’s a lot we can say about this: of the salient options, it’ll look a lot like Weatherson’s theory; it’ll have to have a particular take on what kind of vagueness can affect the conditional; it’ll have to say that any proposition you know should have probability 1.

Second, is this package sustainable in the face of impossibility results? Al Hajek (in his papers in the Eells/Skyrms probability and conditionals volume) does a really nice job of formulating the challenges here. If we’re prepared to give up some instances of CCCP in recherché cases (like left-embedded conditionals, things of the form “if (if A, B), C”), then many of the general impossibility results won’t apply. But nevertheless, there are a bunch of puzzles that remain: in particular, concerning how even the paradigmatic instances can survive when we receive new information.

I’ll mostly be talking about the first part of the talk this evening.

Edgington vs. Stalnaker

One of the things I’m thinking about at the moment is Stalnaker-esque treatments of indicative conditionals. Stalnaker’s story, roughly, is that indicative conditionals have almost exactly the same truth conditions as (on his theory) counterfactuals do. That is, A>B is true at w iff B is true at the nearest A-world to w. The difference comes only in the fine details about which worlds count as nearest. For counterfactuals, Stalnaker, like Lewis, thinks that some sort of similarity does the job. For indicatives, Stalnaker thinks that the nearness ordering is rooted in the same similarity metric, but distorted by the following overriding principle: if A and w are consistent with what we collectively presuppose, then the nearest A-worlds will also be consistent with what we collectively presuppose. In the jargon, all worlds outside the “context set” are pushed further out than they would be on the counterfactual ordering.

I’m interested in this sort of “push worlds” modal account of indicatives. (Others in a similar family include Daniel Nolan’s theory, whereby it’s knowledge that does the pushing rather than collective presuppositions.) Lots of criticisms of Stalnaker’s theory don’t engage with the fine details of what he says about the closeness ordering, but with more general aspects (e.g. its inability to sustain Adams’ thesis that the conditional probability is the probability of the conditional; its handling of Gibbard cases; its sensitivity to fine factors of conversational context). An exception, however, is an argument that Dorothy Edgington puts forward in her SEP survey article (which, by the way, I very much recommend!)
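To make the “push worlds” idea vivid, here’s a toy model. It’s entirely my own sketch, not Stalnaker’s formalism: the world labels, the numeric distance function and the set-based representation of the context set are all illustrative assumptions.

```python
# Toy "push-worlds" evaluation of an indicative conditional A > B.
# distance[w] encodes the counterfactual similarity ordering from the
# world of evaluation; context_set holds the worlds compatible with
# what is collectively presupposed.

def nearest_a_world(a_worlds, distance, context_set):
    # Indicative ordering: every world outside the context set is pushed
    # further out than every world inside it; similarity breaks ties.
    # Python compares the (bool, distance) tuples lexicographically.
    return min(a_worlds, key=lambda w: (w not in context_set, distance[w]))

def indicative(antecedent, consequent, worlds, distance, context_set):
    # A > B is true iff B holds at the nearest A-world on the
    # distorted (indicative) ordering.
    a_worlds = [w for w in worlds if w in antecedent]
    return nearest_a_world(a_worlds, distance, context_set) in consequent

# Example: w1 is most similar, but lies outside the context set, so the
# indicative looks to w2 instead (a counterfactual ordering would pick w1).
worlds = {"w1", "w2", "w3"}
distance = {"w1": 1, "w2": 2, "w3": 3}
context_set = {"w2", "w3"}
A = {"w1", "w2"}   # worlds where the antecedent holds
B = {"w2"}         # worlds where the consequent holds
print(indicative(A, B, worlds, distance, context_set))  # True
```

The point of the sketch is just the `key` in `nearest_a_world`: demoting out-of-context worlds wholesale is the only difference between the indicative and counterfactual orderings.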

Here’s the case. Let’s suppose that Jill is uncertain how much fuel is in Jane’s car. The tank has a capacity for 100 miles’ worth, but Jill has no knowledge of what level it is at. Jane is going to drive it until it runs out of fuel. For Jill, the probability that the car is driven for exactly n miles, given that it’s driven for no more than fifty, is 1/50 (for each n from 1 to 50).

Suppose that in fact the tank is full. The most similar antecedent-worlds to actuality, arguably, are those where the tank is 50 per cent full, and so where Jane drives 50 miles. The same goes for any world where the tank is more than 50 per cent full. So, if nearness of worlds is determined by similarity, the conditional is true as uttered at each of the worlds where the tank is more than 50 per cent full. So without knowing the details of the level of the tank, we should be at least 50 per cent confident that if it goes for no more than 50 miles, it’ll go for exactly 50 miles. But this seems all wrong. Varying the numbers we can make the case even worse: we should be almost sure of “If it goes for no more than 3 miles, it’ll go for exactly 3 miles”, even though we regard 3, 2 and 1 as equiprobable fuel levels.
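Since the numbers are easy to miscount, here’s a quick check of the case. The discretisation into whole-mile fuel levels 1–100, each equally likely for Jill, is my own assumption; the similarity reading of the conditional is the one just described.

```python
# Edgington's car case: fuel levels 1..100 miles' worth, equiprobable.
levels = range(1, 101)

def similarity_true(level, n):
    """Is 'if the car goes no more than n miles, it goes exactly n miles'
    true at a world with this fuel level, on the naive similarity reading?
    If level <= n the antecedent is actually true, so the conditional
    holds iff level == n. If level > n, the most similar antecedent-world
    is the exactly-n world, so the conditional comes out true."""
    return level == n if level <= n else True

def prob_conditional(n):
    # Jill's probability that the conditional is true (similarity reading).
    return sum(similarity_true(l, n) for l in levels) / len(levels)

def conditional_prob(n):
    # The intuitive conditional probability: P(exactly n | no more than n).
    return 1 / n

print(prob_conditional(50), conditional_prob(50))  # 0.51 0.02
print(prob_conditional(3), conditional_prob(3))    # 0.98 0.3333...
```

So the similarity reading puts the probability of the conditional at 0.51 where the conditional probability is 0.02, and at 0.98 where it is 1/3 — exactly the mismatch Edgington exploits.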

Of course, that’s only to take into account the comparative similarity of worlds in determining the ordering, and Stalnaker and Nolan have the distorting factor to appeal to: worlds that are incompatible with something we presuppose/know to be true, can be pushed further out. But it doesn’t seem in this case that anything relevant is being presupposed/known.

I don’t think this objection works. To see that something is going wrong, notice that the argument, if successful, would work against other theories too. Consider, for example, Stalnaker’s theory of the counterfactual conditional. Take the case as before, but suppose we’re a day later and Jill doesn’t know how far Jane drove. Consider the counterfactual “Had it stopped after no more than 50 miles, it’d have gone for exactly 50 miles”. By the previous reasoning, the most similar worlds to over-50 worlds are exactly-50 worlds; so we should be half confident of the truth of that conditional. Varying the numbers, we should be almost sure that “If it had gone no more than 3, it’d have gone exactly 3”, despite regarding fuel levels of 3, 2 and 1 as equally likely. But these all seem like bizarre results.

Moral: the counterfactual ordering of worlds isn’t fixed by the kind of similarity that Edgington appeals to: the sort of similarity whereby a world in which the car stops after 53 miles is more similar to one in which the car stops after 50 miles than to one in which the car stops after 3 miles. Of course, in some sense (perhaps an “overall” sense) those similarity judgements are just right. But we know from the Fine/Bennett cases that the sense of similarity that supports the right counterfactual verdicts can’t be all-in similarity (those cases concern counterfactuals starting “if Nixon had pushed the nuclear button in the 70’s…”; all-in similarity arguably says that the closest such worlds are ones where no missiles are released, leading to the wrong results).

Spelling out what the right notion of similarity is is tricky. Lewis gave us one recipe. In effect, we look for a little miracle that’ll suffice to let the counterfactual world diverge from actual history to bring about the antecedent. Then we let events run on according to actual laws, and see what happens. So in worlds where the tank is full, say, let’s look for the little miracle required to make it run for no more than 50 miles, and run things on. What are the plausible candidates? Perhaps Jane decides to take an extra journey yesterday, or forgets to fill up her car two days ago. Small miracles could suffice to get us into those sorts of worlds. But those sorts of divergences don’t really suggest that she’ll end up with exactly 50 miles’ worth of fuel in the tank, and so this approach undermines the case for “If at most 50, then exactly 50” being true in antecedent-false worlds. (Which is a good thing!)

If that’s the right thing to say in the counterfactual case, the indicative case too will be sorted. For it’s designed to be a case where presuppositions/knowledge don’t have a relevant distorting effect. And so, once more, the case for “If the car goes for at most 50, then it’ll go for exactly 50” doesn’t work.

I think that the basic interest of push-worlds theories of indicatives like Stalnaker’s and Nolan’s is to connect up the counterfactual and indicative orderings: whether there’s anything informative to say about the counterfactual ordering of worlds itself is an entirely different matter. So if the glosses of the position lead to problems, it’s best to figure out whether the problems lie with the gloss of the counterfactual ordering (which should then be assessed in connection with that familiar and well-worked-through literature) or with the push-worlds maneuver itself (which has, I think, been less fully examined). I think Edgington’s objection is really connected with the first facet, and I’ve tried to say why I think a more detailed theory will make the problem dissolve. But even if it did turn out to be a problem, the push-worlds thesis itself would still be standing.

(Incidentally, I do think Edgington’s setup (which she attributes to a student, James Studd) has wider interest. It looks to me like Jackson’s modal theory of counterfactuals, and Davis’ modal theory of indicatives, both deliver the wrong results in this case.)

[Actually, now I’ve written this out, it strikes me that maybe the anti-Stalnaker argument is fixable. The trick would be to specify the background state of the world so as to make the results for counterfactual probabilities seem plausible, but such that (given Jill’s ignorance of the background conditions) the indicative probabilities still seem wrong. So maybe the example is at least a recipe for a counterexample to Stalnaker, even if the original case is resistible as described.]