Chancy counterfactuals—three options

I was chatting to Rich Woodward earlier today about Jonathan Bennett's attitude to counterfactuals about chancy events. I thought I’d put down some of the thoughts I had arising from that conversation.

The basic thought is this. Suppose that, were A to happen, it would be overwhelmingly likely that B—but not probability 1 that B would occur. Take some cup I’m holding—if I were to drop it out the window, it’s overwhelmingly likely that it would fall to the floor and break, rather than shoot off sideways or quantum tunnel through the ground. But (we can suppose) there’s a non-zero—albeit minuscule—chance that the latter things would happen. (You don’t need to go all quantum to get this result—as Adam Elga and Barry Loewer have emphasized recently, if we have counterfactuals about macroevents, the probabilities involved in statistical mechanics also attribute tiny but nonzero probability to similarly odd things happening).

The question is, how should we evaluate the counterfactual “Drop>Break” taking into account the fact that given that Drop, there’d be a non-zero but tiny chance that ~Break?

Let’s take as our starting point a Lewisian account of the counterfactual—“A>B” is to be true (at w) iff B is true at all the closest A-worlds to w. Then the worry many people have is that though the vast majority of closest possible Drop-worlds will be Break worlds, there’ll be a residual tiny minority of worlds where it won’t break—where quantum tunnelling or freaky statistical mechanical possibilities are realized. But since Lewis’s truth-conditions require that Break be true at *all* the closest Drop-worlds, even that tiny minority suffices to make the counterfactual “Drop>Break” false.

As goes “Drop>Break”, so goes almost every ordinary counterfactual you can think of. Almost every counterfactual would be false, if the sketch just given is right. Some people think that’s the right result. We’ll come back to it below.

Lewis’s own response is to deny that the freaky worlds are among the closest worlds. His idea is that freakiness (or as he calls it, the presence of “quasi-miracles”) itself is one of the factors that pushes worlds further away from actuality. That’s been recently criticised by John Hawthorne among others. I’m about to be in print defending a generally Lewisian line on these matters—though the details are different from Lewis’s and (I hope) less susceptible to counterexample.

But if you didn’t take that line, what should you say about the case? A tempting line of thought is to alter Lewis’s clause—requiring not truth at all the closest worlds but truth at most, or the overwhelming majority of them. (Of course, this idea presumes it makes sense to talk of relative proportions of worlds—let’s spot ourselves that).

This has a marked effect on the logic of counterfactuals—in particular, the agglomeration rule (A>B, A>C, therefore A>B&C) would have to go (Hawthorne points this out in his discussion, IIRC). To see how this could happen, suppose that there are 3 closest A-worlds, and that X needs to be true at 2 of them in order for “A>X” to be true. Then let the three worlds be, respectively, a B&C-world, a ~B&C-world, and a B&~C-world. “A>B” and “A>C” each come out true (B holds at two of the worlds, as does C), but “A>B&C” is false, since B&C holds at only one. This produces a countermodel to agglomeration.
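
For concreteness, here is that countermodel as a toy sketch (the representation of worlds as dictionaries and the two-thirds threshold are my own illustrative assumptions, not part of anyone’s official semantics):

    # The three closest A-worlds from the countermodel above.
    closest_a_worlds = [
        {"B": True,  "C": True},   # B & C
        {"B": False, "C": True},   # ~B & C
        {"B": True,  "C": False},  # B & ~C
    ]

    def strict(consequent, worlds):
        # Lewis-style clause: true iff the consequent holds at every closest world.
        return all(consequent(w) for w in worlds)

    def near_miss(consequent, worlds, threshold=2/3):
        # Altered clause: true iff the consequent holds at at least `threshold`
        # of the closest worlds (here, at 2 of the 3).
        return sum(consequent(w) for w in worlds) / len(worlds) >= threshold

    B = lambda w: w["B"]
    C = lambda w: w["C"]
    B_and_C = lambda w: w["B"] and w["C"]

    print(near_miss(B, closest_a_worlds))        # True:  "A>B"
    print(near_miss(C, closest_a_worlds))        # True:  "A>C"
    print(near_miss(B_and_C, closest_a_worlds))  # False: "A>B&C" fails, so agglomeration fails
    print(strict(B, closest_a_worlds))           # False: the strict clause doesn't even grant the premises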

Agglomeration strikes me as a bad thing to give up. I’m not sure I have hugely compelling reasons for this, but it seems to me that a big part of the utility of counterfactuals lies in our being able to reason under a counterfactual supposition. Given agglomeration you can start by listing a bunch of counterfactual consequences (X, Y, Z), reason in standard ways (e.g. perhaps X, Y, Z entail Q) and then conclude that, under that counterfactual supposition, Q. This is essentially an inference of the following form:

  1. A>X
  2. A>Y
  3. A>Z
  4. X, Y, Z ⊨ Q

Therefore: A>Q.

And in general I think this should be generalized to arbitrarily many premises. If we have that, counterfactual reasoning seems secure.

But agglomeration is just a special case of this, where Q=X&Y&Z (more generally, the conjunction of the various consequents). So if you want to vindicate counterfactual reasoning of the style just mentioned, it seems agglomeration is going to be at the heart of it. I think giving some vindication of this pattern is non-negotiable. To be honest though, it’s not absolutely clear that making it logically valid is obviously required. You might instead try to break this apart into a fairly reliable but ampliative inference from A>X, A>Y, A>Z to A>X&Y&Z, and then appeal to this and the premise X&Y&Z ⊨ Q to reason logically to A>Q. So it’s far from a knock-down argument, but I still reckon it’s on to something. For example, anyone who wants to base a fictionalism on counterfactuals (were the fiction to be true then…) better take an interest in this sort of thing, since on it turns whether we can rely on multi-premise reasoning to preserve truth-according-to-the-fiction.

Jonathan Bennett is one who considers altering the truth clauses in the way just sketched (he calls it the “near miss” proposal—and points out a few tweaks that are needed to ensure e.g. that we don’t get failures of modus ponens). But he advances a second non-Lewisian way of dealing with the above cases.

The idea is to abandon evaluations of counterfactuals being true or false, and simply assign them degrees of goodness. The degree of goodness of a counterfactual “A>B” is equal to the proportion of the closest A worlds that are B worlds.
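
As a toy illustration of that recipe (the worlds and the count of a million are made up; the proposal itself doesn’t tell us how to individuate or count worlds):

    # Degree of goodness of a simple counterfactual "A>B": the proportion of
    # the closest A-worlds at which B holds. The setup is purely illustrative.
    def degree_of_goodness(consequent, closest_antecedent_worlds):
        holds = sum(1 for w in closest_antecedent_worlds if consequent(w))
        return holds / len(closest_antecedent_worlds)

    # Pretend there are a million closest Drop-worlds, exactly one of which is
    # a freaky world where the cup quantum-tunnels and fails to break.
    closest_drop_worlds = [{"Break": True}] * 999_999 + [{"Break": False}]

    print(degree_of_goodness(lambda w: w["Break"], closest_drop_worlds))  # 0.999999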

There are at least two readings of this. One is that we ditch the idea of truth-evaluation of counterfactual conditionals altogether, much as some have suggested we ditch truth-evaluation of indicatives. I take it that Edgington favours something like this, but it’s unclear whether that’s Bennett’s idea. The alternative is that we allow “strict truth” talk for counterfactuals, defined by a strict clause—truth at all the closest worlds—but then think that this strict requirement is never met, and so it’d be pointless to actually evaluate counterfactual utterances by reference to this strict requirement. Rather, we should evaluate them on the sliding scale given by the proportions. Really, this is a kind of error theory—but one supplemented by a substantive and interesting-looking account of the assertibility conditions.

Both seem problematic to me. The main issue I have with the idea that we drop truth-talk altogether is the same as the issue I have with indicative conditionals—I don’t see how to deal with the great variety of embedded contexts in which we find the conditionals—conjunctions, other conditionals, attitude contexts, etc etc. That’s not going to impress someone who already believes in a probabilistic account of indicative conditionals, I guess, since they’ll have ready to hand a bunch of excuses, paraphrases, and tendencies to bite selected bullets. Really, I just don’t think this will wash—but, anyway, we know this debate.

The other thought is to stick with an unaltered Lewisian account, and accept an error theory. At first, that looks like an advance over the previous proposal, since there’s no problem in generalizing the truth-conditional story about embedded contexts—we just take over the Lewis account wholesale. Now this is something of an advance on a brute error-theory, since we’ve got some positive guidance about the assertibility conditions for simple counterfactuals—they’re good to the extent that B is true in a high proportion of the closest A-worlds. And that will make paradigmatic ordinary counterfactuals like “Drop>Break” overwhelmingly good.

But really I’m not sure this is much of an advance over the Edgington-style picture. Because even though we’ve got a compositional story about truth-conditions, we don’t as yet have an idea about how to plausibly extend the idea of “degrees of goodness” beyond simple counterfactuals.

As an illustration, consider “If I were to own a china cup, then if I were to drop it out the window, it’d break”. Following simple-mindedly the original recipe in the context of this embedded conditional, we’d look for the proportion of closest Own-worlds where the counterfactual “Drop>Break” is true. But because of the error-theoretic nature of the current proposal, at none (or incredibly few) of those worlds would the counterfactual be true. But that’s the wrong result—the conditional is highly assertible. So the simple-minded application of the original account goes wrong in this case.

Of course, what you might try to do is to identify the assertibility conditions of “Own>(Drop>Break)” with e.g. those of “(Own&Drop)>Break”—so reducing the problem of assertibility for this kind of embedding, by way of paraphrase, to one where the recipe gives plausible results. But that’s to adopt the same kind of paraphrase-to-easy-cases strategy that Edgington likes, and if we’re going to have to do that all the time (including in hard cases, like attitude contexts and quantifiers) then I don’t see that a great deal of advance is made by allowing the truth-talk—and I’m just as sceptical as in the Edgington-style case that we’ll actually be able to get enough paraphrases to cover all the data.

There are other, systematic and speculative, approaches you might try. Maybe we should think of non-conditionals as having “degrees of goodness” of 1 or 0, and then quite generally think of the degree of goodness of “A>B” as the expected degree of goodness of B among the closest A-worlds—that is, we look at the closest A-worlds and the degree of goodness of B at each of these, and “average out” to get a single number we can associate with “A>B”. That’d help in the “Own>(Drop>Break)” case—in a sense, instead of looking at the expected truth value of “Drop>Break” among closest Own-worlds, we’d be looking at the expected goodness-value of “Drop>Break” among Own-worlds. (We’d also need to think about how degrees of goodness combine in the case of truth functional compounds of conditionals—and that’s not totally obvious. Jeffrey and Stalnaker have a paper on “Conditionals as Random Variables” which incorporates a proposal something like the above. IIRC, they develop it primarily in connection with indicatives to preserve the equation of conditional probability with the probability of the conditional. That last bit is no part of the ambition here, but in a sense, there’s a similar methodology in play. We’ve got an independent fix for associating degrees with simple conditionals—not the conditional subjective probability as in the indicative case—rather, the degree is fixed by the proportion of closest antecedent worlds where the (non-conditional) consequent holds. In any case, that’s where I’d start looking if I wanted to pursue this line).
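
Here is one way that recursive idea might be sketched (a toy rendering of my own, with made-up worlds and closeness facts; it is not Stalnaker and Jeffrey’s machinery, just something in the same spirit):

    # Toy "expected degree of goodness": non-conditionals get degree 1 or 0 at a
    # world; "A>B" gets the average of B's degrees across the closest A-worlds.
    # The worlds, closeness facts and numbers below are illustrative assumptions.
    def degree(sentence, world):
        if isinstance(sentence, str):
            # Non-conditional sentence: degree 1 if true at the world, else 0.
            return 1.0 if world["facts"][sentence] else 0.0
        antecedent, consequent = sentence          # a conditional, as a pair
        closest = world["closest"][antecedent]     # the closest antecedent-worlds
        return sum(degree(consequent, w) for w in closest) / len(closest)

    # One freaky ~Break world among a thousand closest Drop-worlds.
    drop_worlds = ([{"facts": {"Break": True}, "closest": {}}] * 999
                   + [{"facts": {"Break": False}, "closest": {}}])

    # Suppose every closest Own-world is like this one.
    own_world = {"facts": {}, "closest": {"Drop": drop_worlds}}
    base_world = {"facts": {}, "closest": {"Own": [own_world]}}

    # "Own>(Drop>Break)" comes out good to degree 0.999, even though "Drop>Break"
    # is strictly true at none of the closest Own-worlds.
    print(degree(("Own", ("Drop", "Break")), base_world))  # 0.999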

Is this sort of idea best combined with the Edgington-style “drop truth” line or the error-theoretic evaluation of conditionals? Neither, it seems to me. Just as previously, the compositional semantics based on “truth” seems to do no work at all—the truth value of compounds of conditionals will be simply irrelevant to their degrees of goodness. So it seems like a wheel spinning idly to postulate truth-values as well as these “degrees of goodness”. But also, it doesn’t seem to me that the proposal fits very well with the spirit of Edgington’s “drop truth” line. For while we’re not running a compositional semantics on truth and falsity, we are running something that looks for all the world like a compositional semantics on degrees of goodness. Indeed, it’s pretty tempting to think of these “degrees of goodness” as degrees of truth—and think that what we’ve really done is replace binary truth-evaluation of counterfactuals with a certain style of degree-theoretic evaluation of them.

So I reckon that there are three reasonably stable approaches. (1) The Lewis-style approach where freaky worlds are further away than they’d otherwise be on account of their freakiness—where the Lewis-logic is maintained and ordinary counterfactuals are true in the familiar sense. (2) The “near miss” approach, where the logic is revised but ordinary counterfactuals are true in the familiar sense. (3) Then there’s the “degree of goodness” approach—which people might be tempted to think of in the guise of an error theory, or as an extension of the Adams/Edgington-style “no truth value” treatment of indicatives—but which I think will have to end up being something like a degree-theoretic semantics for conditionals, albeit of a somewhat unfamiliar sort.

I suggested earlier that an advantage of the Lewis approach over the “near miss” approach was that agglomeration formed a central part of inferential practice with conditionals. I think this is also an advantage that the Lewis account has over the degree-theoretic approach. How exactly to make this case isn’t clear, since it isn’t altogether obvious what the *logic* of the degree-theoretic setting should be—but the crucial point is that “A>X1”, …, “A>Xn” can all be good to a very high degree, while “A>X1&…&Xn” is good to a very low degree. Unless we restrict ourselves to starting points which are good to degree 1, we’ll have to be wary of degradation of degree of goodness while reasoning under counterfactual suppositions, just as on the near miss proposal we’d have to be wary of degradation from truth to falsity. So the Lewisian approach I favour is, I think, the only one of the approaches currently on the table which makes classical reasoning under counterfactual suppositions fully secure.
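
To put toy numbers on that degradation (a back-of-envelope calculation of my own, not anything from the sources discussed): suppose each of a hundred premises “A>X1”, …, “A>X100” is good to degree 0.99, and suppose the 1% of closest A-worlds falsifying each conjunct never overlap from conjunct to conjunct. Then every closest A-world falsifies some conjunct or other, so “A>X1&…&X100” is good to degree 0. In general, all the recipe guarantees is that the conjunction’s degree is at least 1 minus the sum of the individual premises’ shortfalls from 1.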

16 responses to “Chancy counterfactuals—three options”

  1. Daniel Elstein

    Hi Robbie,
    I’m a bit surprised you’re so attached to agglomeration. I thought the standard examples where the consequent affects the closeness relation will be counterexamples to agglomeration.

    P: Caesar was in command during the Korean War.
    Q: He (Caesar) used nuclear weapons.
    R: He (Caesar) used catapults.

    Then we plausibly have both P > Q and P > R, but obviously not P > (Q & R).

    And for similar reasons, I don’t believe in unrestricted reasoning under counterfactual suppositions. Now, it may be that agglomeration can be suitably restricted. Maybe what we want is agglomeration only for cases where the same worlds are closest when we evaluate each of the counterfactuals. But if we’re going to do that anyway, it might be okay to say that we only have agglomeration where the consequents hold at all the closest worlds (i.e. the strict case) and the previous condition is satisfied too. So I’m not convinced that preserving agglomeration, or even preserving agglomeration with the first restriction, is an advantage for the Lewisian view.

    But probably you’ll tell me that there’s a well-known reply to my naive worry about agglomeration…

  2. Hi Daniel—

    I really struggle to hear the counterfactuals you mention being true in the same breath. The conjunction “If Caesar was in command in the Korean War, he’d have used nuclear weapons; and (also) if Caesar was in command he’d have used catapults” sounds totally wrong to me—especially if you strengthen the two consequents (as I guess is usually intended) to “… used nuclear weapons and not catapults”; “used catapults and not nuclear weapons”. So I’m very much inclined to think that this is either a case of indeterminacy (with one or the other being true, but not both) or context-sensitivity.

    I guess that’s sort of covered by the idea of consequents affecting (context and so) the closeness relations. But I do think there’s something fairly principled in appealing to them here—counterexamples to putatively valid arguments that rely on context change aren’t really counterexamples. I don’t really how the what we need to say about these cases is going to generalize to excuse or explain other (putative) failures of agglomeration.

    What do you think? Is there still a worry with this? Maybe there’s a way of packaging the failures of the agglomeration that’d make this seem easier…

  3. The phrase “I don’t really how the what we need” is, I submit, a classic of its kind. You can almost hear the splutter. It should have read “I don’t really see how what we need say about these cases….”.

  4. Hi Robbie,

    Might be interesting to consider conditional excluded middle here (that I think you’ve defended elsewhere). It does get the chancy counterfactuals true or false, and (I think) preserves the inferences you’re after. And I’m not sure it is question begging. If the question is how should we evaluate the counterfactual “Drop>Break” taking into account the fact that given that Drop, there’d be a non-zero but tiny chance that ~Break?, then it is difficult to believe that ~Break occurs in the closest world. The question seems epistemological: how likely do you think it is that you’re in one of those plate-tunneling worlds, since you either are in such a world or you’re not? Assuming centering, I can give you what I’m pretty sure is the truth-value for ‘Drop>Break’. My estimation would not differ at all from the case in which the plate had already been dropped and you asked me what I’m sure happened.

  5. Daniel Elstein

    I agree that the conjunction of the counterfactuals I mention seems false. But I don’t see a nice way of denying that they’re both individually true. Now I’d rather not have an exception to the soundness of &I, so I end up saying that the conjunction is true but unassertible. Anyway, I don’t think the counterexample to agglomeration depends on the conjunction being true, because the agglomeration rule treats the counterfactuals as separate premises. I suppose you could say that the rule as stated is equivalent to the rule which goes from (A > B) & (A > C) to A > (B & C), given that we have &I and &E.

    Anyway, you say that it’s either indeterminate which of P > Q and P > R is true, or they’re context sensitive. But as I understand the standard view, it’s the closeness relation which is context sensitive. That doesn’t make the counterfactuals themselves context-sensitive: if it’s the consequent which acts as the relevant context to fix the closeness relation, then each conditional is true given the background facts and the closeness relation fixed by their respective consequents, and thus neither sentence is context-sensitive, if by that you mean its truth-value is affected by contextual factors external to the sentence itself.

    Now it might be that there can be only one closeness relation per sentence, which would prevent the conjunction (P > Q) & (P > R) from being true, but I don’t really see why to accept that, especially because it causes a failure in &I. The only way that I can see to get the indeterminacy view going is if you think that each counterfactual is to be evaluated according to the same closeness relation, but it’s indeterminate which the correct closeness relation is. I’m not sure I have arguments against that, but really??

    I guess the main point of your reply is that “counterexamples to putatively valid arguments that rely on context change aren’t really counterexamples.” But I think there are two kinds of context change. There’s the kind that produces a kind of equivocation in the premises, and of course that leads to the argument being invalid because the relevant rule of inference isn’t genuinely instantiated. But I don’t think that’s the kind of context shift we’re dealing with here: the formulas don’t equivocate on the sentence letters; the only thing they could be thought to equivocate on is ‘>’ itself. But I don’t think that saying that this argument equivocates on ‘>’ shows that it isn’t really a counterexample to agglomeration. The point of the example is that ‘>’ is not well-behaved in ways that we expect connectives to be well-behaved. And that’s exactly the problem with agglomeration.

    Given all that, the point isn’t that the same phenomenon explains other failures of agglomeration, but rather that once we recognise that we can’t have unrestricted agglomeration, it’s no more ad hoc to go for the very restricted version of agglomeration (that requires the consequents to hold at all the closest worlds) rather than the less restricted version (that only requires the same closeness relation). If we’re not going to have full agglomeration anyway, why is it an advantage to have the only somewhat restricted version?

  6. I think some of the proposed theories are going to end up saying “If you were to buy a ticket in the lottery then you would lose” is just as good as “If I were to drop this cup it would break”, since presumably the chance of the cup failing to break if it fell is better than 1 in a million (I’ve definitely once or twice had the experience of dropping a fragile wine glass that miraculously survives the fall – though not out a second story window). However, the lottery case here seems almost as bad as saying that I know the ticket is going to lose the lottery.

    Also, Alan Hajek has a draft “Most Counterfactuals are False” up on his website that you might be interested in checking out. http://philrsss.anu.edu.au/people-defaults/alanh/index.php3

  7. Hi Mike,

    Yes, interesting to think about how this interacts with CEM. I do want to argue for it—though in the current context I was going with the Lewisian flow. I’m not sure the discussion would turn epistemological if we went that direction. It’s true you get “Drop>Break v Drop>not-Break”. But in cases where the Lewisian would say that there are Break and ~Break worlds tied for closeness, the canonical CEM response I thought would be to say that it was a vague matter which of the Break or ~Break worlds were closest. Of course, you could then take an epistemic view of vagueness, and get back a kind of Molinism about counterfactuals, but that’d be a pretty strong stance to adopt.

    What threatens if you go the non-epistemic way here is that most ordinary counterfactuals are at best indeterminate. And that doesn’t seem great to me. For example, if you went for a supervaluationist account of this (as e.g. Stalnaker suggests) then, despite having an instance of CEM, you’d get that most ordinary counterfactuals were neither true nor false.

    Actually, this connects to stuff about indeterminacy I’ve been thinking about recently. It might not be too bad that “Drop>Break” be indeterminate, if we could still rationally have very high confidence in it. I’ve recently been thinking about arguments to the effect that it’s irrational to invest any confidence in indeterminate propositions. That’s the sort of issue I think we’d have to look at (again, if you’re an epistemicist, you shouldn’t be worried about any of this—it’ll be exactly as you describe it).

    Anyway, however we sort this out, CEM does undoubtedly have an impact on the above—e.g. the “near miss” proposal will be ruled out, since that’ll force failures of CEM just as Lewis’s “all worlds” proposal does. I’m not sure how the “degree of goodness” proposals interact with it—as I mentioned above, they seemed like an alternative semantics to me, and since I’m not sure how to read off the logic from that semantics, I don’t know how to evaluate how it dovetails with CEM (presumably it’ll depend a great deal on how we combine degrees of goodness across disjunctions).

    Nicely (though I haven’t thought this through before), my favoured combination of CEM and the Lewisian response to chancy counterfactuals (pushing freaky/atypical worlds further out) looks like a stable one.

  8. Hi Kenny,

    Thanks for mentioning the Hajek paper—I should have referenced it in the post actually. I like it lots but I should go back and read it again now I’ve got more opinions on some of the matters it deals with.

    On the relation to lottery beliefs. My favoured view doesn’t have that consequence—at least for simple tickets-in-a-barrel lotteries. Winning the lottery isn’t itself a freaky enough event to push a world further away from actuality (by my lights). Freakiness for me amounts to a world being non-random by the lights of the (chancy) laws of nature of the base world. Me winning the lottery doesn’t make a world non-random. A fair coin coming up heads infinitely many times in a row (or even a billion billion times in a row) will do. (Randomness is officially a primitive in the theory, though I hope in various special cases it’ll coincide with mathematical understandings of that notion).

    You can think up “lotteries” where worlds in which particular tickets win will be pushed further away by the lights of this theory—but they’ll have to be delicately set up, and I’m just not sure why we should have very firm intuitions about these things.

    Now, this means that the whole theory is set up specifically to deal with really freaky outcomes—like the quantum tunnelling or baseballs-through-the-window stat mechanics examples, and not with everyday chances of dropping glasses just right so they won’t break. I should have a think about what I want to say about the less recherché cases…

  9. Oh, I should have said, I do think that the “near miss” proposal is going to have the issue you mention—specifically, so long as you know you’re not going to play, you’ll be able to know (and assert) “Play>Lose”. That seems uncomfortable—some reason not to go this way, it seems to me.

    In the same circs, you’re right that “Play>Lose” might have degree of goodness equal to that of “Drop>Break”. I’m not so sure whether that’s a problem—if only because if we’ve ditched truth talk for degree of truth talk, it’s less clear how e.g. we should relate knowledge (or assertibility) to this.

  10. Hi Daniel,

    You write: “That doesn’t make the counterfactuals themselves context-sensitive: if it’s the consequent which acts as the relevant context to fix the closeness relation, then each conditional is true given the background facts and the closeness relation fixed by their respective consequents, and thus neither sentence is context-sensitive, if by that you mean its truth-value is affected by contextual factors external to the sentence itself.”

    I don’t think that’s right—I can perfectly well imagine contexts (e.g. where we’re discussing what certain generals would have done given current military training) where one of the conditionals you mention seems unambiguously false. I don’t think that just what literally occurs in the consequent fixes what closeness relation is relevant. Actually, I don’t think the mere occurrence of material in the consequent is what does the work in an ordinary assertion of “Korea>Catapults”. Rather, it seems to me it’s just the fact that we’ve asserted that very counterfactual—and so accommodation, charity etc. push us to look for a context where the sentence comes out true. That fits nicely with what I take as data—that when that very clause is embedded in more complex contexts (conjunctions, negations) or when the background context is super-explicit, we’ll evaluate the clause as false.

    Re indeterminacy—just to be clear (though I don’t know whether you intended to attribute anything in conflict with this), I didn’t want to say that the conjunction was indeterminate, but just that both conjuncts were. Like A and ~A each being indeterminate, while A&~A is superfalse. More generally—I do think that in a single context, we should only have one closeness relation, and evaluate both conjuncts with respect to that. But I also think it’s fairly obvious that contexts can change through the course of a speech act (“they think it’s all over… it is now”). So I’m sure you can find a speech act that forces change in the closeness relation midway through—it’s just that in fact I don’t think that this’ll happen here. Also, I do think that where you’ve got *relevant* context change in the premises, you can’t automatically expect tokens of valid argument types to preserve truth.

    I was thinking of this as equivocation in “>” (specifically, in the closeness relation that fixes what semantic value the connective gets). Actually, in the Caesar cases I’ve also got some sympathy with the thought that it’s not so clear what proposition “Caesar fought in Korea” is intended to express. E.g. how should we take the tense? As Caesar fought in Korea in the years BC? Or fought in Korea in the 20th Century? Or just fought in Korea at some point before now? Does “fought in Korea” refer to any fighting that occurs on the Korean peninsula, or is it idiomatic for “fought in the Korean War”? To get a clear-cut example of context-sensitivity of “>” itself, I’d want to be sure we’re not getting noise from this kind of thing. Anyway, I’m tempted to think that “>” is no more badly behaved in these sorts of respects than other modals—they tend to be context-sensitive, but modal logic looks good and useful to me, always provided we adopt standard contextual hygiene when reasoning.

  11. Daniel’s point about the consequent fixing a context sounds wrong to me too. What’s in the consequent is a relevant factor when it comes to interpreting a speaker so that they come out as saying something true (or non-obviously false, or saying something that’s worth saying, or whatever), and in that regard, I guess the consequent is important when thinking about which similarity ordering is contextually salient. But it’s not like the consequent fixes the similarity ordering.

    Robbie: as we know, the subsequent point about the indeterminacy being in ‘>’ is really similar to what Stalnaker says in connection to CEM. In the Lewis setting, where we don’t have the uniqueness assumption, is the idea that it’s indeterminate which set of A-worlds is supposed to count as closest?

    Just on the Edgington “paraphrase to easy cases” move. That has always struck me as really weird – in other cases where people want to paraphrase, it’s almost demanded that we can give a systematic treatment, not that we can piecemeal give paraphrases of either individual sentences or sentence types. Is this case supposed to be special because various embeddings don’t make sense and so we don’t need to show how they can be paraphrased? But I don’t see how that’s supposed to get her off the hook wrt, say, propositional attitude reports.

  12. Hi Rich—Yeah, that’s exactly it. I was thinking that indeterminacy in closeness would be an option for Lewis too (or context sensitivity for that matter). Presumably the nicest thing is if we could locate some term in his analysis of closeness which was arguably indeterminate (or context sensitive) in the Caesar case, and so predict the relevant data. But I haven’t thought that through. It seems kinda like the kangaroo example with which Lewis kicks off the book “Counterfactuals”—but I don’t know whether he returned to such cases later to explain how they worked once we have the more informative analysis of similarity in the “Time’s Arrow” paper.

    I sympathize with the worries about Edgington’s strategy—I’d feel so much more happy if she offered a systematic paraphrase rather than a couple of strategies that need to be thought through on a case by case basis. But I’ve never been sure how to *argue* that she needs to provide something systematic (non-question-beggingly).

    I think it’s striking, though, that this isn’t just a problem for Edgington. Jackson equally appeals to these rather unsystematic paraphrases to explain the assertibility conditions of compounded and embedded conditionals—he borrows the conditional probability story for assertibility of *simple* conditionals, but that gives us no guide for the others. He discusses this at the end of his book.

    In both Jackson and the error-theoretic account of counterfactuals discussed in this post, you can just about get away with saying surprising things about the truth-conditions of ordinary conditionals by giving a systematic story about the assertibility conditions of *simple* conditionals—but assertibility conditions in the non-simple cases seem just as hard for Jackson and the error-theorist as for Edgington (so far as I can see at the moment).

  13. Rich Woodward

    Hey Robbie,

    I’m wondering why we can’t just say that the antecedent time in the Caesar examples is context sensitive (or indeterminate, if you’d prefer). It strikes me that in those cases, we use the content of the consequent to get a grip on when the antecedent time is. So in the catapult case, charity leads us to fix the antecedent time at around 50BC, whereas in the missiles case, charity leads us to fix the antecedent time at around 2000AD.

  14. Hi Rich,

    (As mentioned when we chatted earlier today) there’s a question about what “antecedent time” means in this case. Lewis’s official analysis doesn’t mention time at all (that’s partially the point of it) so we’d have to locate indeterminacy elsewhere… maybe the counterpart relation? Jackson et al I think do have a time built into the similarity story, so maybe then that’ll work.

    I think there is some additional work to do even if it’s unambiguously present-day—e.g. whether Caesar has training as a Roman general (time-travelling?) or knows how to use nukes…

  15. Robbie,

    I’ve been thinking about the counterpart relation giving rise to indeterminacy and it seems right to me.

    Take the Kangaroo case and consider

    (1) If kangaroos didn’t have tails, they’d fall over

    In some contexts, it seems natural to say that this is true. In others, it looks false. What seems to be going on, I think, is that it’s context-sensitive what properties something must have in order to qualify as a kangaroo. Let’s sort the worlds into two types (there are probably more, but let’s simplify).

    Type w1 worlds: there is a species in w1 that has an evolutionary history just like actual kangaroos, but due to a series of small miracles, all the members of this species lose their tails. They all fall over.

    Type w2 worlds: there is a species in w2 that has an evolutionary history radically different from actual kangaroos. In particular, they’ve evolved in such a way that they don’t have tails. They don’t fall over. But they have a great many of the other properties that actual kangaroos have (and, indeed, the creatures in w1 worlds).

    On some counterpart relations, it seems to me that the creatures in w1 worlds deserve the name “kangaroos” but the creatures in w2 worlds don’t. If we have that counterpart relation in mind, then (1) will be true. But it seems to me that under different counterpart relations, the creatures in the w2 worlds are deserving of the name. If we have that counterpart relation in mind, then (1) will be false.

    If this is right, then the counterpart relation may well be a source of indeterminacy. And I can see how the story extends to the Caesar case (without, for that matter, the need for time-travelling (at least if we are allowed to be liberal with the counterpart relation)).

  16. Yeah, that does seem like a Lewis-y thing to say… though of course “counterparts” in Lewis are involved in the analysis of names (like “Caesar”), not of predicates like “is a kangaroo”. But I take it the way to express the thought in that framework is that indeterminacy surfaces in indeterminacy over the transworld extension of the predicate.
