
Subject-relative safety and nested safety.

The paper I was going to post took off from very interesting recent work by John Hawthorne and Maria Lasonen that creates trouble from the interaction of safety constraints and a plausible-looking principle about chance and close possibilities. The major moving part is a principle that tells you (roughly) that whenever a proposition is high-chance at w,t, then some world compatible with the proposition is a member of the “safety set” relevant to any subject’s knowledge at w,t (the HCCP principle).
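Here’s a rough formalization of the principle; the notation (Ch for the chance function, S for the safety set) is mine rather than H&L’s, and the threshold 1 − ε just stands in for whatever “high-chance” comes to:

```latex
% HCCP, roughly formalized (notation mine, not H&L's):
% $\mathrm{Ch}_{w,t}$ is the chance function at world $w$ and time $t$;
% $S_{w,t}(x)$ is the safety set relevant to subject $x$'s knowledge at $w,t$.
\[
  \mathrm{Ch}_{w,t}(p) \geq 1 - \epsilon
  \;\longrightarrow\;
  \exists w' \bigl( w' \in S_{w,t}(x) \wedge w' \models p \bigr)
  \quad \text{for every subject } x.
\]
```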

It’s definitely worth checking out Williamson’s reply to H&L; there’s lots of good stuff in it. Two considerations are relevant here. First, he formulates a version of safety in the paper that is subject-relativized (one of the “outs” in the argument that H&L identify), and defends this against the criticisms they offer. Second, he rejects the HCCP principle. The basic idea is this: take some high-but-not-1-chance proposition that’s intuitively known, e.g. that the ball is about to hit the floor. Then consider a world in which this scenario is duplicated many times, enough that the generalization “some ball fails to hit the floor” is high-chance (though false). Each individual claim, “ball i will hit the floor”, seems epistemically on a par with the original. But by HCCP, there’s some failure-to-hit world in the safety set, which means at least one of those claims is unsafe and so not known.
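For concreteness, here’s the sort of arithmetic behind the counterexample; the particular numbers are mine, purely for illustration:

```latex
% Illustrative arithmetic (numbers mine). Suppose each ball independently
% has chance $\epsilon = 0.01$ of failing to hit, and the scenario is
% duplicated $n = 1000$ times. Then:
\[
  \mathrm{Ch}(\text{some ball fails to hit})
  \;=\; 1 - (1-\epsilon)^n
  \;=\; 1 - 0.99^{1000}
  \;\approx\; 0.99996,
\]
% so ``some ball fails to hit the floor'' is high-chance, even though each
% individual ``ball $i$ hits the floor'' has chance $0.99$ and is
% intuitively known.
```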

Rejecting HCCP is certainly sufficient to get around the argument as stated. But H&L explicitly mention subject-relativization of safety sets as a different kind of response, *compatible* with retaining HCCP. The idea, I take it, is that if safety sets (at a given time) can vary from subject to subject, *different* failure-to-hit possibilities could be added to the different safety sets (for subject k, say, a world where some *other* ball fails to hit), satisfying HCCP but not necessarily destroying any of the distributed knowledge claims.

I see the formal idea, which is kind of neat. The trouble I have is that I’ve got very little grip on *how* subject-relativization would get us out of the H&L trouble. How can particular facts about subjects change what’s in the safety set?

I’m going to assume the safety set (for a subject, at a given time and place) is always a Lewisian similarity sphere: that is, for some formal similarity ordering of worlds, the safety sphere is closed downwards under “similarity to actuality”. I’ll also assume that *similarity* isn’t subject-relative, though for all I’ll say it could vary with, e.g., time. These assumptions are met by Lewis’s account of counterfactual similarity (in fact, for him similarity isn’t time-relative either), but many other theories can agree with them too.

The assumption that the safety set is always a similarity sphere (in the minimal sense) seems a pretty reasonable requirement, if we’re to justify the gloss of a safety set as the set of “sufficiently close worlds”.

But just given this minimal gloss, we can get some strong results: in particular, that safety sets for different subjects at a single time will be nested in one another. (Think of them as “spheres around actuality”: given the minimal formal constraints Lewis articulates, the “spheres” are nested, as the name suggests.)
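For what it’s worth, here’s a sketch of why the nesting result holds, on the assumption that closeness to actuality is a total preorder:

```latex
% Why similarity spheres nest (sketch). Assume closeness to actuality is a
% total preorder $\preceq$ on worlds, and a sphere is any set closed
% downwards under $\preceq$.
Suppose $S_1, S_2$ are spheres and neither contains the other: pick
$w_1 \in S_1 \setminus S_2$ and $w_2 \in S_2 \setminus S_1$. By totality,
$w_1 \preceq w_2$ or $w_2 \preceq w_1$. In the first case, downward closure
of $S_2$ gives $w_1 \in S_2$; in the second, downward closure of $S_1$
gives $w_2 \in S_1$. Contradiction either way; so $S_1 \subseteq S_2$ or
$S_2 \subseteq S_1$.
```

(Note that totality is doing the work here, which is why allowing incomparability, as discussed below, breaks nesting.)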

Suppose we have n subjects in an H&L putative “distributed knowledge” case as described earlier. Now take the minimal safety set M among those n subjects’ safety sets. By nesting, this exists and is a subset of all the others. And by HCCP, it has to include a failure-to-hit possibility. Say the possibility included in M features ball k failing to hit. But then that possibility is also in the safety set relevant to the kth person’s belief that their ball *will* hit the ground, and so their actual belief is unsafe and can’t count as knowledge: exactly the situation that relativizing to subjects was supposed to save us from!

The trouble is, the sort of rescue of distributed knowledge sketched earlier relies on the thought that safety sets for subjects at a time might be “petal-shaped”: overlapping, but not nested in one another. But thinking of them as similarity spheres, where similarity is not subject-relative, simply doesn’t allow this.

Now, this doesn’t close off this line of inquiry. Perhaps we *should* make similarity itself relative to subjects or locations (if so, then we definitely can’t use Lewis’s “Time’s Arrow” sense of similarity). Or maybe we could relax the formal restrictions on similarity that allow us to derive nesting: if worlds can be incomparable in terms of closeness to actuality, we get failures of nesting, though weakening Lewis’s formal assumptions in this way weakens the associated logic of counterfactuals to Pollock’s SS. But I do think it’s interesting that the kind of subject-relativity of closeness that might be motivated by, e.g., interest-relative invariantism about knowledge (the idea that how “close” the worlds in the safety set need to be depends on the interests etc. of the knower) simply doesn’t do enough to get us out of the H&L worries. We need a much more thorough-going relativization if we’re going to make progress here.


Safety and lawbreaking

One upshot of taking the line on the scattered match case I discussed below is the following: if @ is deterministic, then legal worlds (aside from @) are really far away, on grounds of utterly flunking the “perfect match” criterion. If perfect match, as I suggested, means “perfect match over a temporal segment of the world”, then legal worlds just never score on this ground at all.

Here’s one implication of this. Take a probability distribution compatible with determinism, like the chances of statistical mechanics. I’m thinking of this as a measure over some kind of configuration space: the space of nomically possible worlds. Subsets of this space correspond to propositions which (if we choose them right) have high probability, given the macro-state of the world at the present time. And we can equally consider the conditional probability of those propositions on x pushing the nuclear button. For many choices of P which have high probability conditional on button-pressing, “button-pressing > ~P” will be true. The closest worlds where the button-pressing happens are going to be law-breaking worlds, not legal worlds. So any proposition true only at legal worlds will be false at those closest worlds, making the counterfactual come out true. But sets of such worlds can of course get high conditional probability.
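Schematically, with B for the button-pressing proposition (my labelling) and “>” the counterfactual conditional as above:

```latex
% The situation, schematically: $B$ = the button is pressed, $P$ any
% proposition true only at legal worlds.
\[
  \Pr(P \mid B) \text{ is high}
  \quad\text{and yet}\quad
  B > \lnot P,
\]
% because the closest $B$-worlds are small-miracle (law-breaking) worlds,
% at which $P$ fails.
```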

There’s an analogue of this result that connects to recent work on safety by Hawthorne and Lasonen-Aarnio. First, presume that the safety set at w,t (roughly, the set of worlds at which we mustn’t believe falsely that p, if we are to have knowledge that p) is a similarity sphere in Lewis’s sense. That is: any world counterfactually as close as a world in the set must itself be in the set. If any legal world is in the set, all worlds with at least some perfect match will also be in the set, by the conditions for closeness previously mentioned. But that would be crazy: e.g. there are worlds where I falsely believe that I’m sitting in front of my computer, on the same basis as I do now, which have *some* perfect match with actuality in the far distant past (we can set up mad scientists etc. to achieve this with only a small departure from actuality a few hundred years ago). So if the safety set is a similarity sphere, and the perfect match constraint is taken as I urged, then there had better not be any legal worlds in the safety set.
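Schematically (my notation):

```latex
% Let $\preceq$ be counterfactual closeness, and suppose (per the
% perfect-match criterion as I construe it) that any world with some
% temporal segment of perfect match with @ is closer than any legal world
% other than @. Safety sets are spheres, i.e. downward closed under $\preceq$:
\[
  w_{\mathrm{legal}} \in S_{w,t} \;\wedge\; w' \preceq w_{\mathrm{legal}}
  \;\longrightarrow\; w' \in S_{w,t}.
\]
% So one legal world in $S_{w,t}$ drags every some-perfect-match world
% (mad-scientist worlds included) into $S_{w,t}$: scepticism.
```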

What this means is that a fairly plausible principle has to go: that if, at w and t, P is high-probability, then there must be at least one P-world in the safety set at w and t. For as noted earlier, law-entailing propositions can be high-probability. But massive scepticism results if they’re included in the safety set. (I should note that Hawthorne and Lasonen don’t endorse this principle, but only the analogous one where the “probabilities” are fundamental objective chances in an indeterministic world; but it’s hard to see what could motivate accepting that while rejecting the above.)

What to give up? Lewis’s lawbreaking account of closeness? The safety set as a similarity sphere? The probability–safety connection? The safety constraint on knowledge? Or some kind of reformulation of one of the above to make them all play nicely together? I’m presently undecided…

Logically good inference and the rest

From time to time in my papers, the putative epistemological significance of logically good inference has been cropping up. I’ve recently been trying to think a bit more systematically about the issues involved.

Some terminology. Suppose that the argument “A therefore B” is logically valid. Then I’ll say that reasoning from “A” is true to “B” is true is logically good. Two caveats: (1) the logical goodness of a piece of reasoning from X to Y doesn’t mean that, all things considered, it’s ok to infer Y from X. At best, the case is pro tanto: if Y were a contradiction, for example, all things considered you should give up X rather than come to believe Y. (2) I think the validity of an argument-type won’t in the end be sufficient for the logical goodness of a token inference of that type, partly because we probably need to tie it much more closely to deductive moves, partly because of worries about the different contexts in play with any given token inference. But let me just ignore those issues for now.

I’m going to blur use-mention a bit by classifying material-mode inferences from A to B (rather than from “A” is true to “B” is true) as logically good in the same circumstances. I’ll also call a piece of reasoning from A to B “modally good” if A entails B, and “a priori good” if it’s a priori that if A then B (nb: material conditional). If it’s a priori that A entails B, I’ll call it “a priori modally good”.
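For reference, here’s the taxonomy in one place:

```latex
% The four grades of goodness for reasoning from $A$ to $B$, as just defined:
\begin{itemize}
  \item \textbf{Logically good}: the argument ``$A$ therefore $B$'' is logically valid.
  \item \textbf{Modally good}: $A$ entails $B$.
  \item \textbf{A priori good}: it is a priori that $A \supset B$ (material conditional).
  \item \textbf{A priori modally good}: it is a priori that $A$ entails $B$.
\end{itemize}
```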

Suppose now we perform a competent deduction of B from A. What I’m interested in is whether the fact that the inference is logically good is something that we should pay attention to in our epistemological story about what’s going on. You might think this isn’t forced on us. For (arguably: see below) whenever an inference is logically good, it’s also modally and a priori good. So—the thought runs—for all we’ve said we could have an epistemology that just looks at whether inferences are modally/a priori good, and simply sets aside questions of logical goodness. If so, logical goodness may not be epistemically interesting as such.

(That’s obviously a bit quick: it might be that you can’t just stop with declaring something a priori good; rather, any a priori good inference falls into one of a number of subcases, one of which is the class of logically good inferences, and that the real epistemic story proceeds at the level of the “thickly” described subcases. But let’s set that sort of issue aside).

Are there reasons to think competent deduction/logically good inference is an especially interesting epistemological category of inference?

One obvious reason to refuse to subsume logically good inference within modally good inferences (for example) is if you thought that some logically good inferences aren’t necessarily truth-preserving. There’s a precedent for that thought: Kaplan argues in “Demonstratives” that “I am here now” is a logical validity, but isn’t necessarily true. If that’s the case, then logically good inferences won’t be a subclass of the modally good ones, and so the attempt to talk only about the modally good inferences would just miss some of the cases.

I’m not aware of persuasive examples of logically good inferences that aren’t a priori good. And I’m not persuaded that the Kaplanian classification is the right one. So let’s suppose pro tem that the logically good inferences are always modally, a priori, and a priori modally, good.

We’re left with the following situation: the logically good inferences are a subclass of inferences that also fall under other “good” categories. In a particular case where we come to believe B on the basis of A, where is the need to talk about the inference’s logical goodness, rather than simply about its modal, a priori, or whatever goodness?

To make things a little more concrete: suppose that our story about what makes a modally good inference good is that it’s ultra-reliable. Then, since we’re supposing all logically good inferences are modally good, just from their modal goodness, we’re going to get that they’re ultra-reliable. It’s not so clear that epistemologically, we need say any more. (Of course, their logical goodness might explain *why* they’re reliable: but that’s not clearly an *epistemic* explanation, any more than is the biophysical story about perception’s reliability.)

So long as we’re focusing on cases where we deploy reasoning directly, to move from something we believe to something else we believe, I’m not sure how to get traction on this issue (at least, not in such an abstract setting: I’m sure we could fight over the details if they were filled out). But even in this abstract setting, I do think we can see that the idea just sketched ignores one vital role that logically good reasoning plays: namely, reasoning under a supposition in the course of an indirect proof.

Familiar cases: If reasoning from A to B is logically good, then it’s ok to believe (various) conditional(s) “if A, B”. If reasoning from A1 to B is logically good, and reasoning from A2 to B is logically good, then inferring B from the disjunction A1vA2 is ok. If reasoning from A to a contradiction is logically good, then inferring not-A is good. If reasoning from A to B is logically good, then reasoning from A&C to B is good.
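In rule form, these patterns look as follows; the “⇒” notation for logically good reasoning is mine (amsmath assumed for align*):

```latex
% The indirect-proof patterns in rule form, writing $A \Rightarrow B$ for
% ``reasoning from A to B is logically good'':
\begin{align*}
  &\text{Conditional proof:} && A \Rightarrow B \ \text{ licenses belief in ``if $A$, $B$''}\\
  &\text{Proof by cases:}    && A_1 \Rightarrow B,\; A_2 \Rightarrow B \ \text{ license } \ A_1 \lor A_2 \Rightarrow B\\
  &\text{Reductio:}          && A \Rightarrow \bot \ \text{ licenses } \ \lnot A\\
  &\text{Monotonicity:}      && A \Rightarrow B \ \text{ licenses } \ A \land C \Rightarrow B
\end{align*}
```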

What’s important about these sorts of deployments is that if you replace “logically good” by some wider epistemological category of ok reasoning, you’ll be in danger of losing these patterns.

Suppose, for example, that there are “deeply contingent a priori truths”. One schematic example that John Hawthorne offers is the material conditional “My experiences are of kind H > theory T of the world is true”. The idea here is that the experiences specified should be the kind that lead to T via inference to the best explanation. Of course, this’ll be a case where the a priori goodness doesn’t give us modal goodness: it could be that my experiences are H but the world is such that ~T. Nevertheless, I think there’s a pretty strong case that in suitable settings inferring T from H will be a priori good (albeit defeasibly).

Now suppose that the correct theory of the world isn’t T, and I don’t undergo experiences H. Consider the counterfactual “were my experiences to be H, theory T would be true”. There’s no reason at all to think this counterfactual would be true in the specified circumstances: it may well be that, given the actual world meets description T*, the closest world where my experiences are H is still an approximately-T* world rather than a T-world. E.g. the nearest world where various tests for general relativity come back negative may well be a world where general relativity is still the right theory, but its effects aren’t detectable on the kind of scales initially tested (that’s just a for-instance: I’m sure better cases could be constructed).

Here’s another illustration of the worry. Granted, reasoning from H to T seems a priori good. But reasoning from H+X to T seems terrible, for a variety of X. (So: “my experiences are of kind H” plus “my experiences are misleading in way W” will plausibly a priori support some T’ incompatible with T.) But if we were allowed to use a priori good reasoning in indirect proofs, then we could simply argue from H+X to H, and thence (a priori) to T, overall getting an a priori route from H+X to T. The moral is that we can’t treat a priori good pieces of reasoning as “lemmas” that we can rely on under the scope of whatever suppositions we like. A priori goodness threatens to be “non-monotonic”: which is fine on its own, but I think it does show quite clearly that it can completely crash when we try to make it play a role designed for logical goodness.
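To isolate where things break, here’s the chain schematically; the “⇒_ap” notation is mine (amsmath assumed):

```latex
% The problematic chain, writing $A \Rightarrow_{ap} B$ for ``reasoning
% from A to B is a priori good'':
\begin{align*}
  & H \land X \Rightarrow_{ap} H && \text{(conjunction elimination)}\\
  & H \Rightarrow_{ap} T          && \text{(the abductive step)}\\
  \therefore\;& H \land X \Rightarrow_{ap} T && \text{(chaining under the supposition $H \land X$)}
\end{align*}
% But for $X$ = ``my experiences are misleading in way $W$'', $H \land X$
% plausibly a priori supports some $T'$ incompatible with $T$. The chaining
% step is the culprit: $\Rightarrow_{ap}$ is non-monotonic.
```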

This sort of problem isn’t a surprise: the reliability of indirect proofs is going to get *more problematic* the more inclusive the reasoning in play is. Suppose an indirect rule says that whenever reasoning of type R is good, one can infer C. The more pieces of reasoning count as “good”, the more potential there is for conflict with the rule, because there are simply more cases of reasoning that are potential counterexamples.

Of course, a priori goodness is just one of the inferential virtues mentioned earlier: modal goodness is another, and a priori modal goodness a third. Modal goodness already looks a bit implausible as an attempt to capture the epistemic status of deduction: it doesn’t seem all that plausible to classify the inferential move from A and B to B in the same category as the move from “this is water” to “this is H2O”. Moreover, we’ll again have trouble with conditional proof: this time for indicative conditionals. Intuitively, and (I’m independently convinced) actually, the indicative conditional “if the watery stuff around here is XYZ, then water is H2O” is false. But the inferential move from the antecedent to the consequent is modally good.

Of the options mentioned, this leaves a priori modal goodness. The hope would be that this’ll cut out the cases of modally good inference that cause trouble (those based around a posteriori necessities). Will this help?

I don’t think so: I think the problems for a priori goodness resurface here. If the move from H to T is a priori good, then it seems that the move from Actually(H) to Actually(T) should equally be a priori good. But in a wide variety of cases, this inference will also be modally good (all cases except H&~T ones). But just as before, thinking that this piece of reasoning preserves its status in indirect proofs gives us very bad results: e.g. that there’s an a priori route from Actually(H) and Actually(X) to Actually(T), which for suitably chosen X looks really bad.

Anyway, of course there’s wriggle room here, and I’m sure a suitably ingenious defender of one of these positions could spin a story (and I’d be genuinely interested in hearing it). But my main interest is just to block the dialectical maneuver that says: well, all logically good inferences are X-good ones, so we can get everything we want from a decent epistemology of X-good inferences. The cases of indirect reasoning, I think, show that the *limitations* on which inferences are logically good can be epistemologically central: anyone wanting to ignore logic had better have a story to tell about how their account plays out in these cases.

[NB: one kind of good inference I haven’t talked about is that backed by what 2-dimensionalists might call “1-necessary truth preservation”: i.e. truth preservation at every centred world considered as actual. I’ve got no guarantees to offer that this notion won’t run into problems, but I haven’t as yet constructed cases against it. Happily, for my purposes, logically good inference and this sort of 1-modally good inference give rise to the same issues, so if I had to concede that this was a viable epistemological category for subsuming logically good inference, it wouldn’t substantially affect my wider project.]