From time to time in my papers, the putative epistemological significance of logically good inference has been cropping up. I’ve been recently trying to think a bit more systematically about the issues involved.
Some terminology. Suppose that the argument “A therefore B” is logically valid. Then I’ll say that reasoning from “A” is true, to “B” is true, is logically good. Two caveats: (1) the logical goodness of a piece of reasoning from X to Y doesn’t mean that, all things considered, it’s ok to infer Y from X. At best, the case is pro tanto: if Y were a contradiction, for example, all things considered you should give up X rather than come to believe Y; (2) I think the validity of an argument-type won’t in the end be sufficient for the logical goodness of a token inference of that type—partly because we probably need to tie it much closer to deductive moves, partly because of worries about the different contexts in play with any given token inference. But let me just ignore those issues for now.
I’m going to blur use-mention a bit by classifying material-mode inferences from A to B (rather than: from “A” is true to “B” is true) as logically good in the same circumstances. I’ll also call a piece of reasoning from A to B “modally good” if A entails B, and “a priori good” if it’s a priori that if A then B (nb: material conditional). If it’s a priori that A entails B, I’ll call it “a priori modally good”.
Suppose now we perform a competent deduction of B from A. What I’m interested in is whether the fact that the inference is logically good is something that we should pay attention to in our epistemological story about what’s going on. You might think this isn’t forced on us. For (arguably: see below) whenever an inference is logically good, it’s also modally and a priori good. So—the thought runs—for all we’ve said we could have an epistemology that just looks at whether inferences are modally/a priori good, and simply sets aside questions of logical goodness. If so, logical goodness may not be epistemically interesting as such.
(That’s obviously a bit quick: it might be that you can’t just stop with declaring something a priori good; rather, any a priori good inference falls into one of a number of subcases, one of which is the class of logically good inferences, and the real epistemic story proceeds at the level of the “thickly” described subcases. But let’s set that sort of issue aside).
Are there reasons to think competent deduction/logically good inference is an especially interesting epistemological category of inference?
One obvious reason to refuse to subsume logically good inference within modally good inferences (for example) is if you thought that some logically good inferences aren’t necessarily truth-preserving. There’s a precedent for that thought: Kaplan argues in “Demonstratives” that “I am here now” is a logical validity, but isn’t necessarily true. If that’s the case, then logically good inferences won’t be a subclass of the modally good ones, and so the attempt to talk only about the modally good inferences would just miss some of the cases.
I’m not aware of persuasive examples of logically good inferences that aren’t a priori good. And I’m not persuaded that the Kaplanian classification is the right one. So let’s suppose pro tem that logically good inferences are always modally, a priori, and a priori modally, good.
We’re left with the following situation: the logically good inferences are a subclass of inferences that also fall under other “good” categories. In a particular case where we come to believe B on the basis of A, where is the particular need to talk about its logical “goodness”, rather than simply about its modal, a priori or whatever goodness?
To make things a little more concrete: suppose that our story about what makes a modally good inference good is that it’s ultra-reliable. Then, since we’re supposing all logically good inferences are modally good, just from their modal goodness, we’re going to get that they’re ultra-reliable. It’s not so clear that epistemologically, we need say any more. (Of course, their logical goodness might explain *why* they’re reliable: but that’s not clearly an *epistemic* explanation, any more than is the biophysical story about perception’s reliability.)
So long as we’re focusing on cases where we deploy reasoning directly, to move from something we believe to something else we believe, I’m not sure how to get traction on this issue (at least, not in such an abstract setting: I’m sure we could fight over the details if they were filled out). But even in this abstract setting, I do think we can see that the idea just sketched ignores one vital role that logically good reasoning plays: namely, reasoning under a supposition in the course of an indirect proof.
Familiar cases: If reasoning from A to B is logically good, then it’s ok to believe (various) conditional(s) “if A, B”. If reasoning from A1 to B is logically good, and reasoning from A2 to B is logically good, then inferring B from the disjunction A1vA2 is ok. If reasoning from A to a contradiction is logically good, then inferring not-A is good. If reasoning from A to B is logically good, then reasoning from A&C to B is good.
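These familiar patterns can be put schematically (the turnstile notation is mine, purely for display: read A ⊢_L B as “reasoning from A to B is logically good”):

```latex
% Conditional proof: logically good reasoning licenses the conditional
\dfrac{A \vdash_L B}{\vdash \text{if } A \text{, then } B}
\qquad
% Proof by cases
\dfrac{A_1 \vdash_L B \quad A_2 \vdash_L B}{A_1 \lor A_2 \vdash B}

% Reductio
\dfrac{A \vdash_L \bot}{\vdash \lnot A}
\qquad
% Monotonicity: a side premise may be added
\dfrac{A \vdash_L B}{A \land C \vdash B}
```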
What’s important about these sorts of deployments is that if you replace “logically good” with some wider epistemological category of ok reasoning, you’ll be in danger of losing these patterns.
Suppose, for example, that there are “deeply contingent a priori truths”. One schematic example that John Hawthorne offers is the material conditional “My experiences are of kind H > theory T of the world is true”. The idea here is that the experiences specified should be the kind that lead to T via inference to the best explanation. Of course, this’ll be a case where the a priori goodness doesn’t give us modal goodness: it could be that my experiences are H but the world is such that ~T. Nevertheless, I think there’s a pretty strong case that in suitable settings inferring T from H will be (defeasibly but) a priori good.
Now suppose that the correct theory of the world isn’t T, and I don’t undergo experiences H. Consider the counterfactual “were my experiences to be H, theory T would be true”. There’s no reason at all to think this counterfactual would be true in the specified circumstances: it may well be that, given the actual world meets description T*, the closest world where my experiences are H is still an approximately T*-world rather than a T-world. E.g. the nearest world where various tests for general relativity come back negative may well be a world where general relativity is still the right theory, but its effects aren’t detectable on the kind of scales initially tested (that’s just a for-instance: I’m sure better cases could be constructed).
Here’s another illustration of the worry. Granted, reasoning from H to T seems a priori. But reasoning from H+X to T seems terrible, for a variety of X. (So: “my experiences are of kind H” plus “my experiences are misleading in way W” will plausibly a priori support some T’ incompatible with T.) But if we were allowed to use a priori good reasoning in indirect proofs, then we could simply argue from H+X to H, and thence (a priori) to T, overall getting an a priori route from H+X to T. The moral is that we can’t treat a priori good pieces of reasoning as “lemmas” that we can rely on under the scope of whatever suppositions we like. A priori goodness threatens to be “non-monotonic”: which is fine on its own, but I think it does show quite clearly that it can completely crash when we try to make it play a role designed for logical goodness.
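Schematically (again my notation: ⊢_L for logically good, ⊢_ap for a priori good reasoning), the crash comes from illicitly treating the a priori lemma as monotonic:

```latex
H \land X \vdash_L H
  % conjunction elimination: logically good
H \vdash_{ap} T
  % inference to the best explanation: a priori good
\text{hence (illicitly): } H \land X \vdash_{ap} T
  % bad for suitable X, e.g. ``my experiences are misleading in way W''
```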
This sort of problem isn’t a surprise: the reliability of indirect proofs is going to get *more problematic* the more inclusive the reasoning in play is. Suppose the indirect reasoning says that whenever reasoning of type R is good, one can infer C. The more pieces of reasoning count as “good”, the more potential there is to come into conflict with the rule, because there are simply more cases of reasoning that are potential counterexamples.
Of course, a priori goodness is just one of the inferential virtues mentioned earlier: modal goodness is another; and a priori modal goodness a third. Modal goodness already looks a bit implausible as an attempt to capture the epistemic status of deduction: it doesn’t seem all that plausible to classify the inferential move from A and B to B in the same category as the move from “this is water” to “this is H2O”. Moreover, we’ll again have trouble with conditional proof: this time for indicative conditionals. Intuitively, and (I’m independently convinced) actually, the indicative conditional “if the watery stuff around here is XYZ, then water is H2O” is false. But the inferential move from the antecedent to the consequent is modally good.
Of the options mentioned, this leaves a priori modal goodness. The hope would be that this’ll cut out the cases of modally good inference that cause trouble (those based around a posteriori necessities). Will this help?
I don’t think so: I think the problems for a priori goodness resurface here. If the move from H to T is a priori good, then it seems that the move from Actually(H) to Actually(T) should equally be a priori good. But in a wide variety of cases, this inference will also be modally good (all cases except H&~T ones). But just as before, thinking that this piece of reasoning preserves its status in indirect proofs gives us very bad results: e.g. that there’s an a priori route from Actually(H) and Actually(X) to Actually(T), which for suitably chosen X looks really bad.
Anyway, of course there’s wriggle room here, and I’m sure a suitably ingenious defender of one of these positions could spin a story (and I’d be genuinely interested in hearing it). But my main interest is just to block the dialectical maneuver that says: well, all logically good inferences are X-good ones, so we can get everything we want by having a decent epistemology of X-good inferences. The cases of indirect reasoning, I think, show that the *limitations* on what inferences are logically good can be epistemologically central: and anyone wanting to ignore logic had better have a story to tell about how their story plays in these cases.
[NB: one kind of good inference I haven’t talked about is that backed by what 2-dimensionalists might call “1-necessary truth preservation”: i.e. truth preservation at every centred world considered as actual. I’ve got no guarantees to offer that this notion won’t run into problems, but I haven’t as yet constructed cases against it. Happily, for my purposes, logically good inference and this sort of 1-modally good inference give rise to the same issues, so if I had to concede that this was a viable epistemological category for subsuming logically good inference, it wouldn’t substantially affect my wider project.]
This was quite an interesting post. I want to make a small point. “I am here now” is logically valid but not necessary in Kaplan’s logic of indexicals LD (I think that is the name of it). It is a fairly deviant logic, e.g. necessitation fails. It is a big jump from logically valid in LD to logically valid. I’m inclined to agree with your suspicions of Kaplan’s claim here.
Right… yeah, I was being a bit loose in just using “is logically valid” without qualification. I should hedge it round with “According to Kaplan’s system” or the equivalent.
Actually the failure of Nec in Kaplan’s system is an interesting illustration of the point I was making. The standard modal validities are a subset of Kaplan’s. E.g. you get the usual classical tautologies of predicate logic and modal logic. But, if you go the whole Kaplan route, you also get some more validities. And by strengthening what arguments count (in the system) as logically good, you lose some of the inference patterns: one of which is necessitation.
Another place where something like this happens is supervaluational logics. All classical validities are supervaluationally valid. But if certain other ones are added in (like the argument from p&~Dp to absurdity) then you get counterexamples to conditional proof, reductio and the rest. You get that result if, in effect, you treat “definitely” as a logical constant (which is implicit in most treatments). That’s pretty analogous to the situation in the Kaplan case, where one of the things that underpins the validity-in-his-system of “I am here now” and the rest is that the characters of indexicals are held invariant over admissible models (the other is that only proper contexts are allowed).
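The supervaluational failure can be displayed directly (D for “definitely”; a sketch, not a full semantics):

```latex
p \land \lnot Dp \vdash \bot
  % supervaluationally valid once D is treated as a logical constant
\text{Reductio/conditional proof would then deliver: } \vdash p \rightarrow Dp
  % but this is not supervaluationally valid: borderline p is a counterexample
```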
Both illustrate the main point: that giving up a weaker system in favour of a stronger system, though it won’t reduce the number of valid sentences, may well undermine important inferential patterns.
The interesting thing about logically good inference, as classically conceived, is that it’s a system that is *strong* enough to underpin enough direct reasoning, but also weak enough to give us interesting patterns of suppositional reasoning. I think we need to bear that in mind if we’re considering discarding it in favour of some other notion of good inference.
Another thought. A weakened Nec rule should hold in Kaplan’s system, right? (Maybe he makes this point in the paper). If p is a neo-classical validity, then Nec p will be a neo-classical validity (NC being the other system Kaplan considers). But if Nec p is an NC validity, it’ll be an LD validity.
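Spelled out, the weakened rule just chains the two observations:

```latex
\vDash_{NC} p \;\Rightarrow\; \vDash_{NC} \Box p
  % NC obeys Necessitation
\vDash_{NC} \Box p \;\Rightarrow\; \vDash_{LD} \Box p
  % every NC-validity is an LD-validity
```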
I think that’s important when we try to figure the philosophical and theoretical costs of adopting a system like Kaplan’s. Obviously it’s “odd” in some sense, and “revisionary” in some sense. But one way it’s not revisionary is in taking token pieces of reasoning we thought were good, and saying we now have to doubt them. Because every time you or I deploy Necessitation, the LD-theorist can agree that that instance is an instance of a reliable reasoning pattern. Where they’ll differ is over the theoretical classification: you or I will see it as reasoning from p being a validity to Nec p being a validity. Kaplanians will have to see it as reasoning from p being a validity-relative-to-a-subsystem to Nec p being a validity. Slogan: giving up the general principle of “conditional proof” or “necessitation” doesn’t mean you have to regard *conditional proofs* or *necessitations* with suspicion.
Two spin-offs of this. First, obviously at the metatheoretic level, the system isn’t as neat as it might be. And that might still be a cost.
Second. If to rationalize token patterns of reasoning, we need to mention subsystems, then when we come to do the epistemology of the enterprise, it’s no good to concentrate solely on LD-validity. If we’re going to be reliable users of nec or conditional proof, we have to track validity in the subsystem. So (to speak impressionistically) there’s something “dualistic” about the picture of reasoning that emerges from this kind of story about reasoning under a supposition. That’s some kind of reason to look kindly on a system where you don’t have to do this.
I guess you can express what I wanted to say in the main piece like this: have a priori truth-preservation as the basic epistemic classification of good pieces of reasoning if you like. But (particularly if ampliative inferences can be a priori) when we come to consider the epistemological story underlying suppositional reasoning, we’ll probably need to appeal to a tighter, more demanding notion, and deal with any philosophical puzzles that ensue.
I think there’s a notion of apriority that excludes these cases of the deeply contingent a priori. I call this the “conclusive a priori” in a couple of places. Roughly, the idea is that P is conclusively a priori if one can be non-experientially justified in being certain that P (where certainty is appropriately understood). I think one could appeal to conclusive apriority to handle your cases.
As you say, one could also appeal to 1-necessity. But I think this will work only if, as I believe, 1-necessity coincides with conclusive apriority. If, as some people believe, there are 1-necessary truths that aren’t conclusively a priori (say, ‘An omniscient being exists’) then it looks like what’s relevant is conclusive apriority, not 1-necessity.
Hi Dave—thanks for dropping by!
I’ll have to think about conclusive apriority. It does sound important.
One thing that springs to mind (which I guess is obvious) is that if e.g. “x is water” follows conclusively a priori from “x is the watery stuff”, then conclusive a priori reasoning is not the sort of thing that we can safely use in the suppositional reasoning aimed at establishing counterfactuals, since (by the lights of Kripke/Putnam orthodoxy) “Had XYZ been the watery stuff, XYZ would have been water” is false. But still, when q follows logically from p (without side premises), we can conclude the counterfactual “had it been that p, it would have been that q”. (Well, Daniel Nolan’s on record as denying this due to issues over counterpossibles, but still…)
That’s certainly not as dramatic as the restrictions we get with the deeply contingent a priori cases, where issues crop up with extensional connectives (“and”) not just with intensional ones like counterfactuals and (arguably) indicatives.
But maybe the right reaction is to divide and conquer: demand modally good reasoning if you’re aiming to establish the counterfactual; conclusive a priori good reasoning if you’re aiming to establish the indicative. And so on, case-by-case.
I’m a bit unhappy with the divide and conquer idea. I’d like a single, powerful logic for “if” that was indifferent to whether the “if” was indicative, subjunctive or whatever. To articulate that common core as it relates to suppositional proofs of conditionals, logical consequence looks like what we need. But the methodological principles I’m appealing to here seem fuzzy even to me.
Final thought: you might think the inference from x is watery to x is water is also logically good (Kaplanian LD might give you that result, if the characters of “water” and “the watery stuff” are the same). One thing this illustrates is that if we want to pick out something that’ll play the unifying role in suppositional reasoning to establish “if”, we can’t be cavalier in treating aspects of the meaning of words as invariant over models: we need a more traditional “narrow” conception of logic where the class of logical constants is pretty restricted (I’m thinking, for example, of MacFarlane’s stuff on logicality here).