One version of Lewis’s worlds-semantics for counterfactuals can be put like this: “If it were that A, then it would be that B” is true at @ iff all the most similar A-worlds to @ are B-worlds. But what notion of similarity is in play? Not all-in overall approximate similarity: otherwise (as Fine pointed out) a world in which Nixon pressed the button, but the pressing was quickly covered up, and things at the macro-level approximately resemble actuality from then on, would count as more similar to @ than worlds where he pressed the button and events took their expected course: international crisis, bombings, etc. Feed that into the clause for conditionals and you get false counterfactuals coming out true, e.g. “If Nixon had pressed the button, everything would be pretty much the way it actually is”.
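For definiteness, here’s the clause in symbols: a minimal sketch in my own notation, assuming (with the “Limit Assumption”, which Lewis’s official clause avoids) that closest A-worlds always exist.

```latex
% Sketch of the truth-conditions, assuming the Limit Assumption.
% \preceq_@ is the comparative similarity ordering centred on @;
% [[A]] is the set of A-worlds.
\[
  A \mathrel{\Box\!\!\rightarrow} B \text{ is true at } @
  \quad\text{iff}\quad
  \min\nolimits_{\preceq_@} [\![A]\!] \;\subseteq\; [\![B]\!]
\]
```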
In “Time’s Arrow”, Lewis proposed a system of weightings for the “standard ordering” of counterfactual closeness. The weights are intended to apply only in cases where the laws of nature of @ are deterministic. Roughly stated, worlds are ordered around @ by the following principles (a toy model of the resulting ordering follows the list):
1. It is of the first importance to avoid big, widespread violations of @’s laws.
2. It is of the second importance to maximize the region of exact intrinsic match to @ in matters of particular fact.
3. It is of the third importance to avoid even small violations of @’s laws.
4. It is of little or no importance to maximize approximate similarity to @.
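Here’s that toy model. Everything about it (the field names, the numbers, the scalar stand-in for “region of match”) is invented for illustration; the one point it encodes is lexicographic priority: weight (1) trumps (2), which trumps (3), and (4) gets no say.

```python
from dataclasses import dataclass

@dataclass
class WorldProfile:
    """Toy stand-in for a world's similarity-relevant features, relative to @.
    All fields are illustrative inventions, not Lewis's own apparatus."""
    big_miracles: int      # count of big, widespread violations of @'s laws
    perfect_match: float   # size of the region of exact intrinsic match to @
    small_miracles: int    # count of small, local violations of @'s laws

def closeness_key(w: WorldProfile):
    """Lexicographic ordering induced by the first three weights.
    Python compares tuples left-to-right, modelling 'of the first/second/
    third importance'; approximate similarity (weight 4) gets no entry."""
    return (w.big_miracles, -w.perfect_match, w.small_miracles)

# NIX: perfect match up to the button-pressing, one small miracle, then legal.
nix = WorldProfile(big_miracles=0, perfect_match=1e9, small_miracles=1)
# A perfect-reconvergence world: more total match, but it needs a big,
# diverse miracle to wipe out the traces of the pressing.
reconverge = WorldProfile(big_miracles=1, perfect_match=2e9, small_miracles=1)

# NIX comes out closer: weight (1) trumps the extra match under weight (2).
assert closeness_key(nix) < closeness_key(reconverge)
```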
These weights, he argued, give the right verdict on the Nixon counterfactuals. For the cover-up worlds secure only approximate match to @, and approximate match counts for little or nothing. The most similar button-pressing worlds by the above lights, said Lewis, would be worlds that perfectly matched @ up to a time shortly before the button-pressing, diverged by a small violation of law, and then let events run on wherever the laws of nature took them: presumably to international crisis, nuclear war, or whatever. Such worlds are optimal as regards (1), ok as regards (2) (because of the past match), and ok as regards (3) (only one violation of law needed). (Let’s suppose that approximate similarity has no weight; it’ll make life easier.) Pick one such world and call it NIX.
If this is to work, it had better be that no “approximate future convergence” world does better by this system of weights than NIX. It’d be pretty easy to beat NIX on weight (3): just pick any nomically possible world. But the key issue is (2), which trumps such considerations. Are there approximate future convergence worlds that match or beat NIX on this front?
Lewis thought there wouldn’t be. NIX already secures perfect match up until the 1970s. So what we’d need is perfect convergence in the future (after the button-pressing). But Lewis thought that to achieve this we’d have to invoke many, many violations of law, to wipe out the traces of the button-pressing (set the light-waves back on their original course, as it were). We’d need a big and diverse miracle to get perfect future match. And such worlds are worse than NIX by weight (1), which is of overriding importance.
Now *some* miracle would be needed if we’re to get perfect match over some future time-segment. Here’s the intuitive thought. Suppose A is a button-pressing world that perfectly matches @ at some future time T. Run the laws of nature backwards from T. If the laws are deterministic, you’ll get exact match at all times prior to T, until you hit some violation of law. But the button-pressing happens in A and not in @, so they can’t be duplicates then. So there must be some miracle that happens between the button-pressing and T.
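A sketch of the argument, assuming @’s laws L are two-way deterministic (the state at one time, plus the laws, fixes the state at all earlier and later times):

```latex
% t_p is the pressing-time; A_t, @_t are the worlds' states at t.
\[
  \bigl(A_T = @_T \;\wedge\; A \text{ obeys } L \text{ on } (t_p, T]\bigr)
  \;\Rightarrow\; A_t = @_t \text{ for all } t \in (t_p, T]
\]
% (by running the two-way deterministic laws backwards from T).
% Then times just after t_p match exactly, contradicting the divergence
% that the pressing requires.  So A violates L somewhere in (t_p, T):
% some reconvergence miracle is needed.
```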
First thought. This doesn’t yet make the case that for the reconvergence to happen we need lots of violations all over the place. Why couldn’t there be worlds where a tiny miracle at a suitable “pressure point” effects global reconvergence?
Rejoinder: one trouble with this idea is that presumably (as Lewis notes) the knock-on effects of the first divergence spread quickly. In the few moments it takes to get Nixon to press the button, the divergences from actuality are presumably already covering a good distance (consider those light-waves!). So how could a single *local* miracle possibly undo all this? If a beam of light that wouldn’t otherwise be there is racing away from the site of the first divergence, then changes resulting from the second (small, local) miracle aren’t going to “catch it up”. There are probably some major assumptions about locality of causation etc. packed in here. But it does seem like Lewis is pretty well justified in the claim that it’d take a big, widespread miracle to reconverge.
Second thought. Consider a world that, like NIX, diverges from actuality just at the button-pressing moment. Let it never perfectly match @ again, and let it contain no more miracles. In that case, it looks like (so far as we’ve said) it *exactly ties* with NIX for closeness. But now: couldn’t one such world have approximate match to @ in the future? That would require some *deterministic* route from the button-pressing to (somehow) the nuclear launch not happening, and a lot of (deterministic) cover-up. A big ask. But to say that there is just no world meeting this description seems an equally big commitment.
Rejoinder. I’m not sure how Lewis should respond to this one. He mentions very plausible ways in which slight differences would add up: slight changes of tone in the biography, influencing readers differently, changing their lives, etc. It’s very plausible that such things happen. But is it *nomically impossible* that approximate similarity be maintained? I just don’t see the case.
(A note on what’s at stake here. Unlike in the perfect reconvergence case, if Lewis allowed such approximate reconvergence worlds, you wouldn’t get “If Nixon had pressed the button, things would be approximately the same” coming out true. For the most we’d get is that these approximate cover-up worlds are as close as NIX. NIX ensures that counterfactuals like the above are false: approximate similarity wouldn’t ensue at *all* the most similar button-pressing worlds. But the approximate convergence world would equally ensure the falsity of ordinary counterfactuals, e.g. “If Nixon had pressed the button, things would be very different”. More generally, the presence of such approximate reconvergence worlds would make lots of ordinary counterfactuals false.)
Third thought. Lewis raises the possibility of entirely legal worlds that resemble @ in the 1970s, but feature Nixon pressing the button. As Lewis emphasizes, such worlds can’t perfectly match temporal slices of @ at any time, given that they involve no violation of deterministic law. Lewis really has two things to say about such worlds. First, he says there’s “no guarantee” that any such world will even approximately resemble @ in the far distant future or past: “it is hard to imagine how two deterministic worlds anything like ours could possibly remain only a little bit different for very long. There are altogether too many opportunities for little differences to give rise to big differences”. But second, given the four-part analysis above, such worlds aren’t going to be good contenders for similarity, since e.g. they’ll never perfectly match @ at any time.
Let’s suppose Lewis is wrong on the first point: that there *are* nomic possibilities approximately like ours throughout history, except for the button-pressing. I’m not sure what exactly the case against these worlds being close is, on the four-part analysis. Sure, NIX has perfect match throughout the whole of history up till the 1970s, and the worlds just described don’t have that. But weight (2) just says that we have to maximize the region of perfect match, and maybe there are other ways to do that.
One idea is that worlds like these could earn credit by the lights of (2) by having large but scattered match with @. Suppose there’s a legal button-pressing world W, with approximate match before the button-pressing, and such that post-pressing there are infinitely many centimetre-cubed-by-one-second regions of spacetime at which the trajectories and properties of particles *within that region* exactly match those in the corresponding region of @. You might well think that in a putative case of approximate match (including approximate match of futures) there’d be lots of opportunities for this kind of short-lived, spatially limited coincidence.
So how does (2) handle these cases? It’s just not clear: it depends on what “maximizing the region of perfect match” means. Maybe we’re supposed to look at the sheer volume of the regions where there is perfect fit. But that’ll do no good if the volumes are each infinite. In a world with infinite past and infinite future, exact match from the 1970s back “all the way” doesn’t have a greater volume than the sum of infinitely many scattered regions, since both volumes are infinite. And in a world with finite past but infinite future, continued sparse scattered future match could have *infinite* volume, as opposed to the finite volume of perfect match secured by NIX.
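The trouble in symbols, as a sketch (the region names and the patch size are my illustrative assumptions):

```latex
% R_NIX: NIX's region of perfect match (all of history before the pressing).
% R_W:   the scattered world's region (infinitely many patches, each of
%        some fixed 4-volume v > 0, e.g. 1 cm^3 x 1 s).
% Infinite past:  vol(R_NIX) = \infty = vol(R_W), so volume can't separate them.
% Finite past, infinite future:
\[
  \mathrm{vol}(R_{\mathrm{NIX}}) < \infty,
  \qquad
  \mathrm{vol}(R_W) \;=\; \sum_{n=1}^{\infty} v \;=\; \infty,
\]
% so the sheer-volume reading ranks the scattered world strictly
% better than NIX on weight (2).
```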
This causes problems even without the reconvergence. We want button-pressing worlds not to diverge too early. Divergence in the 1950s, with things being very different from then on, ultimately ending with a Soviet stooge Nixon pressing the button, is not the kind of most-similar world we want. (2) is naturally thought to help us out: maximizing perfect match is supposed to pressure us to put the divergence event as late as possible. But if we look only at the relative volumes of perfect match, then in cases of an infinite past the volumes of perfect match will be the same. This suggests we look, not at volumes, but at subregionhood: w is closer to @ than u (all else equal) if the region throughout which w perfectly matches @ is a proper superregion of that throughout which u perfectly matches @. But this won’t promote NIX over scattered perfect match worlds, since neither world’s region of perfect match contains the other’s.
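The subregionhood proposal in symbols (my formalization, not Lewis’s):

```latex
% Write M(w) for the region throughout which w perfectly matches @.
% Proposal: other things equal,
\[
  w \prec_@ u \quad\text{if}\quad M(u) \subsetneq M(w).
\]
% This handles early-vs-late divergence: the late diverger's match region
% properly contains the early diverger's.  But M(NIX) and M(W), for a
% scattered-match world W, are \subseteq-incomparable (each contains
% points the other lacks), so the proposal stays silent between them.
```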
Perhaps there are more options. One thought is to look at something like the ratio, at each time, of the volume of regions of perfect match to the volume of regions of non-perfect match. Scattered match clearly goes with a low density of perfect match at times, in this sense, whereas in NIX the density at pre-pressing times will be 1. How to work this into a proposal for understanding the imperative “maximize perfect match!”, I don’t know.
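One way of making the density idea precise, as a sketch (my notation; it needs the usual care when slices have infinite volume, e.g. taking limits over bounded subregions):

```latex
% S_t: the time-t slice of spacetime; M(w): w's region of perfect match.
% Density of perfect match at t, as a ratio of 3-volumes:
\[
  d_w(t) \;=\; \frac{\mathrm{vol}_3\bigl(M(w) \cap S_t\bigr)}{\mathrm{vol}_3(S_t)}
\]
% For NIX, d(t) = 1 at every pre-pressing t; for scattered-match worlds,
% d(t) is small at every t.  What's left open is how to aggregate d_w
% across times into a single "maximize perfect match" ranking.
```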
Unless we say *something* to rule out scattered perfect match worlds, prima facie they could match the extent of perfect match in NIX. But then, because they never violate the laws, while NIX does (albeit once), they beat NIX on (3). So in this case (unlike the approximate future match case above) we’re back to a situation where there’s a danger of declaring the “future similarity” counterfactual true, as well as the ordinary counterfactuals false.
Let’s review the three cases. First, there was the possibility of getting exact reconvergence to @ at future time T, via a single miracle. Second, there was the possibility of approximate future similarity without any perfect similarity. Third, there was the possibility of approximate overall match throughout time, with local, scattered, perfect match.
In effect, Lewis in “Time’s Arrow” doubts whether there are possibilities matching any of these descriptions. I thought we could give some prima facie substance to that doubt in the first case. In the other two, I can’t see what the principled position is, as yet, other than agnosticism. Lewis says, for example, about the third kind of case, that it’s “hard to imagine” how two worlds could approximately resemble each other in this way, and that there’s “no guarantee” that they’ll be like this. But is this good enough? Lots of things about nomic space are hard to imagine. Have we any positive reason to doubt that possibilities of type 3 exist? Personally, in the absence of evidence, I’ll go 50/50 on whether they do. But that’s to go 50/50 on whether Lewis’s favoured account makes most ordinary counterfactuals false. Not a good result.
I do have one positive suggestion that’ll fix up the third case. Again, it comes down to what we’re trying to maximize in maximizing regions of perfect fit. The proposal is that we insist on complete temporal slices perfectly matching @ before we count them towards closeness as outlined in (2). That is, (2) should be understood as saying: maximize the *temporal segment* throughout which you have perfect fit. Now we can appeal to determinism to show that legal button-pressing worlds will *never* perfectly match @ at any time, and so *automatically* flunk (2) to the highest possible degree.
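Spelled out, as a sketch (again assuming two-way determinism):

```latex
% (2'): maximize the set of times at which w's ENTIRE slice matches @'s:
\[
  T(w) \;=\; \{\, t : w_t = @_t \,\}.
\]
% If w obeys @'s deterministic laws throughout and T(w) is non-empty,
% whole-slice match at some t propagates forwards and backwards to give
% w = @ everywhere, contradicting the button-pressing difference.  So for
% legal button-pressing worlds T(w) = \emptyset: they flunk (2') maximally,
% however much scattered sub-slice match they contain.
```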
So the state of play seems to me this. There are plausible grounds for having low credence in the first worry about the account. And precisifying “perfect match” in the way just suggested deals with the third. That leaves only the second worry: perfect past match + small violation + approximate future match.
I do want to emphasize one thing here. It is significant that the remaining problem, unlike the others, doesn’t make the offending “future similarity” counterfactual *true*. Those objections, had they been successful, would have promised the result that *all* the most similar worlds have futures like ours, rather than like NIX’s. But all we get from the residual objection, if it’s successful, is that *some* of the most similar worlds are of the offending type; for all we’ve said, *most* of the most similar worlds would be like NIX.
This brings into play other tweaks to the setting. Some (like Bennett) want, for independent reasons, to change Lewis’s truth-conditions from “B is true at all the closest A-worlds” to “B is true at most/the vast majority of the closest A-worlds”. One could make this move against the current worry, but not against the other two.
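In clause form, as a sketch (the measure μ and threshold are schematic stand-ins of mine, not Bennett’s own formulation):

```latex
% Lewis:          true iff ALL closest A-worlds are B-worlds:
%                 min_{\preceq_@}[[A]] \subseteq [[B]].
% Bennett-style:  true iff MOST closest A-worlds are B-worlds, for some
%                 measure \mu on the closest A-worlds and small \epsilon:
\[
  \mu\bigl([\![B]\!] \;\big|\; \min\nolimits_{\preceq_@}[\![A]\!]\bigr)
  \;\geq\; 1 - \epsilon .
\]
% A few deviant approximate-reconvergence worlds among the closest
% A-worlds then no longer falsify the ordinary counterfactual.
```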
I’m not a particular fan of the revisions to the logic of counterfactuals this suggestion would induce. There’s another thought I’m more sympathetic to. That’s to go Stalnakerian on the truth-conditions, viewing what Lewis thinks of as “ties for closeness” as cases of indeterminacy in a total ordering. If so, what we’d get from the above is at most that counterfactuals like “If Nixon had pressed the button, things would have been very different” are indeterminate (because false on at least one precisification of the ordering).
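In supervaluational dress, as a sketch (P is a set of admissible tie-breaking total orderings; my formalization):

```latex
% Determinate truth = truth on every admissible precisification:
\[
  \mathrm{Det}\bigl(A \mathrel{\Box\!\!\rightarrow} B\bigr)
  \quad\text{iff}\quad
  \forall \preceq \,\in P:\;
  \min\nolimits_{\preceq}[\![A]\!] \;\subseteq\; [\![B]\!] .
\]
% If NIX and an approximate-reconvergence world tie, some orderings in P
% put NIX strictly closest (verifying "things would be very different"),
% others put the reconvergence world closest (falsifying it), so the
% counterfactual comes out indeterminate rather than false.
```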
It’s not clear to me that this is a bad result. It depends very much on the “cognitive role of indeterminacy” that I’ve talked about ad nauseam before on this blog. If one can perfectly rationally be arbitrarily highly confident of indeterminate propositions, then no revision to our ordinary credences in ordinary counterfactuals need be induced by admitting them to be indeterminate. If, on the other hand, you take a “rejectionist” view of indeterminacy, on which it acts a bit like presupposition failure, this option is no more comfortable than admitting that most counterfactuals are false.
Anyway, just to emphasize: if these options are even going to be runners, we’re going to have to do something about the scattered match case.
Mm, I hadn’t considered cases of scattered match. Another modification to the view which might avoid the problems with these cases is just to stipulate that the region of perfect match be continuous rather than scattered. Lewis’ claim that ‘It is of the second importance to maximize the spatio-temporal region throughout which perfect match of particular fact prevails’ could be read either as requiring that the total volume of all regions of exact match be maximized, or just that the largest single continuous region of exact match be maximized. I don’t think it’s too implausible to attribute the latter precisification to him.
Boris Kment has a nice paper on your second kind of problem case, where he gives an alternative theory of closeness which gets the intuitively correct antecedent-worlds to come out closer than any ‘accidental’ approximate reconvergence worlds. We discussed it a few weeks ago in our grad discussion group – have a look here for the paper, a summary of it, and some comments: http://mleseminar.wordpress.com/2009/05/05/week-2-kment-on-counterfactuals/
Thanks Alistair! I agree that a continuity constraint is interesting here. I suppose there’d still be a problem if there were legal worlds with an infinite-volume continuous subregion that matched the actual world (maybe a growing vacuum at some future point, just where a growing vacuum was in the actual world?). One nice thing about the temporal slice restriction is that it allows us to *argue* that legal worlds will never cause trouble. I guess with the continuity constraint the best we can hope for is something like a plausibility argument that legal worlds won’t have stuff like this going on. But if we could make a case for this, maybe that’d be good enough.
Thanks for the link to the Kment paper. Looks cool—I should take a look.
Another option to avoid these problem cases (basically, Analysis 1 in ‘Counterfactual Dependence and Time’s Arrow’) is to take the direction of time as primitive, and just stipulate that past exact match is the feature of second importance in the ranking of worlds. Where times aren’t mentioned in the antecedent (eg ‘If kangaroos had no tails…’) this suggestion just reduces to Lewis’ theory.
If you do this, you lose your explanation for the arrow of time in terms of counterfactual dependence. But how great a cost is this really? Lewis admitted that he couldn’t see how to integrate entropy into his 1979 discussion, and it seems plausible to lots of us that the arrow of time should be identified with the direction of increasing entropy. Then there seem to be good prospects of using the entropic arrow of time to explain why our counterfactual thinking works in the way it does. The opposite order of explanation, using an arrow of time derived from our intuitions about counterfactual dependence to ground time’s direction, as Lewis does, looks less promising.
Re-reading that paper: Lewis considers this retreat to Analysis 1 in postscript D (for other reasons, relating to the might/would duality), but says that it would be excessively aprioristic – building the denial of backwards causation into the theory, for example. He insists ‘the asymmetry of counterfactual dependence should come from a symmetrical analysis and an asymmetrical world.’
I’m not sure how seriously to take this requirement. If our thinking developed in a world where the laws are asymmetric, as they certainly seem to be in ours, then is it surprising that asymmetry pervades our general counterfactual thinking? Perhaps I’ve misunderstood Lewis’ motivation here…
Hi Robbie,
Actually, the future perfect match possible world is a serious problem for Lewis. As far as I know it was first pointed out by David Albert (and discussed a bit in a seminar Lewis attended) and first written about by Adam Elga, “Statistical Mechanics and the Asymmetry of Counterfactual Dependence”, Philosophy of Science 68 (suppl.), 2001: S313–24. I discuss it in a number of places, including “Counterfactuals and the Second Law” (in Causation, Physics, and the Constitution of Reality: Russell’s Republic Revisited, Huw Price and Richard Corry (eds), New York: Oxford University Press, 2007, pp. 293–326). That paper is on my website. Although I think that similarity accounts are the “intended” semantics for the conditionals we actually use, there are features of the physics of our world (the temporal symmetry of the dynamical laws, statistical mechanical probabilities, and other matters) that are in tension with those semantics and with the implicit commitment that the actual laws play a central role in evaluating conditionals. In the paper I develop a conditional based on conditional probability that works better….
Barry
Hi Barry,
You’re absolutely right, and (given I was aware of it from talking to you previously!) I should have worked those cases into the post above. My thought was: even setting aside the cases built from time-reversal, does the Lewisian analysis have problems?
I take it (at least in the Elga presentation) that the worlds built via time-reversal techniques have perfect future similarity, but don’t match in the past, which is slightly different from the cases I was considering (perfect past similarity *plus* approx or perfect future similarity).
One question is whether whatever we have to do to get rid of the perfect-future-match worlds will also rid us of the remaining problems mentioned in the post. One idea I quite like, to deal with the cases you mention, is to have “typicality by the lights of chancy laws” built into the account of what it is to “fit” with the laws of nature of a world, even when the chances aren’t fundamental.
But I’m not sure that maneuver will help with the cases in the blog post. E.g. I’m not sure why scattered perfect match worlds would be statistical-mechanically atypical (if the patches of perfect match were randomly distributed). It’ll depend a lot on how we spell out “typicality”, though.
So I completely agree that all these problems need addressing. I’m hoping that principled patching and tweaking can help out the Lewisian here—maybe different in different cases.
But I do see the attractions of thinking through the whole thing from scratch, if these sorts of cases continue to mount up. There’s a suspicion of “regressive research programme” about the puncture-patch method I’m going in for at the moment.