I’ve just learned that my paper “Vagueness, Conditionals and Probability” has been accepted for the first formal epistemology festival in Konstanz this summer. It looks like the perfect place for me to get feedback on, and generally learn more about, the issues raised in the paper. So I’m really looking forward to it.
I’m presently presenting some of this work as part of a series of talks at Arche in St Andrews. I’m learning lots here too! One thing that I’ve been thinking about today relates directly to the paper above.
One of the main things I’ve been thinking about is how credences, evidential probability and the like should dovetail with supervaluationism. I’ve written about this a couple of times in the past, so I’ll briefly set out one sort of approach that I’ve been interested in, and then sketch something that just occurred to me today.
The basic question is: what attitude should we take to p, if we are certain that p is indeterminate? Here’s one attractive line of thought. First of all, it’s a familiar thought that logic should impose some rationality constraints on belief. Let’s formulate this minimally as the constraint that, for the rational agent, probability (credence or evidential probability) can never decrease across a valid argument:

If A |= B, then P(A) ≤ P(B).
Now take one of the things that supervaluational logics are often taken to imply, where ‘D’ is read as ‘it is determinate that’:

A |= DA
Then we note that this and the logicality constraint on probabilities entail that:

P(A) ≤ P(DA)
So in particular, if we fully reject A being determinate (e.g. if we fully accept that it’s indeterminate), then the probability of the RHS will be zero, and so by the inequality, the probability of the LHS (A itself) will be zero too. (The particular supervaluational consequence I’m appealing to is controversial, since it follows only in settings which seem inappropriate for modelling higher-order indeterminacy, but by adding a couple of extra assumptions we can argue for the same result in other ways. This’ll do us for now though.)
The result is that if we’re fully confident that A is indeterminate, we should have probability zero in both A and in not-A. That’s interesting, since we’re clearly not in Kansas anymore—this result is incompatible with classical probability theory. Hartry Field has argued in the past for the virtues of this result as giving a fix on what indeterminacy is, and I’m inclined to think that it captures something at the heart of at least one way of conceiving of indeterminacy.
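Spelling the derivation out (a quick sketch that just chains the consequence above, applied to both A and not-A, with the logicality constraint):

\[
A \vDash DA, \qquad \neg A \vDash D\neg A
\]
\[
P(A) \le P(DA) = 0, \qquad P(\neg A) \le P(D\neg A) = 0
\]

so $P(A) = P(\neg A) = 0$, while any classical probability function would have $P(A) + P(\neg A) = 1$.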
Rather than thinking about indeterminate propositions as having point-valued probabilities, one might instead favour a view whereby they get interval values. One version of this can be defined in this setting. For any A, let the upper value P+(A) be defined to be 1 − P(~A). This quantity—how little one accepts the negation of a proposition—might be thought of as the upper bound of an interval whose lower bound is the probability of A itself. So rather than describe one’s doxastic attitudes to known indeterminate A as being “zero credence” in A, one might prefer the description of them as themselves indeterminate—in a range between zero and 1.
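For a case where we’re certain that A is indeterminate, the two bounds come apart as far as they can (a quick check, using the zero-probability result above and the definition of P+ just given):

\[
P(A) = 0, \qquad P^{+}(A) = 1 - P(\neg A) = 1 - 0 = 1
\]

so the interval assigned to a known-indeterminate A is the whole of $[0,1]$.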
There’s a different way of thinking about supervaluational probabilities, though, which is in direct tension with the above. Start with the thought that at least for supervaluationism conceived as a theory of semantic indecision, there should be no problem with the idea of perfectly sharp classical probabilities defined over a space of possible worlds. The ways the world can be, for this supervaluationist, are each perfectly determinate, so there’s no grounds as yet for departing from orthodoxy.
But we also want to talk about the probabilities of what is expressed by sentences such as “that man is bald” where the terms involved are vague (pick your favourite example if this one won’t do). The supervaluationist thought is that this sentence picks out a sharp proposition only relative to a precisification. What shall we say of the probability of what this sentence expresses? Well, there’s no fact of the matter about what it expresses, but relative to each precisification, it expresses this or that sharp proposition—and in each case our underlying probability measure assigns it a probability.
Just as before, it looks like we have grounds for assigning to sentences, not point-like probability values, but range-like values. The range in question will be a subset of [0,1], and will consist of all the probability-values which some precisification of the claim acquires. Again, we might gloss this as saying that when A is indeterminate, it’s indeterminate what degree of belief we should have in A.
But the two recipes deliver utterly different results. Suppose, for example, I introduce a predicate into English, “Teads”, which has two precisifications: on one it applies to all and only coins which land Heads, on the other it applies to all and only coins that land Tails (i.e. not Heads). Consider the claim that the fair coin I’ve just flipped will land Teads. Notice that we can be certain that this sentence will be indeterminate—whichever way the coin lands, Heads or Tails, the claim will be true on one precisification and false on the other.
What would the logic-based argument give us? Since we assign probability 1 to indeterminacy, it’ll say that we should assign probability 0, or a [0,1] interval, to the coin landing Teads.
What would the precisification-based argument give us? Think of the two propositions the claim might express: that the coin will land heads, or that the coin will land tails. Either way, it expresses a proposition that has probability 1/2. So the set of probability values associated with the sentence will be point-like, having the single value 1/2.
Of course, one might think that the point-like value stands in an interesting relationship to the [0,1] range—namely being its midpoint. But now consider cases where the coin is biased in one way. For example, if the coin is biased to degree 0.8 towards heads, then the story for the logic-based argument will remain the same. But for the precisification-based person the values will change to {0.8,0.2}. So we can’t just read off the values the precisificationist arrives at from what we get from the logic-based argument. Moral: in cases of indeterminacy, thinking of probabilities in the logic-based way wipes out all information other than that the claim in question is indeterminate.
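Here is a toy sketch of the precisification-based recipe for the Teads example (purely illustrative; the worlds, measures and precisification functions below are made up for the purpose):

```python
def precisification_values(measure, precisifications):
    """Set of probabilities the sentence gets: one value per precisification."""
    return {sum(p for w, p in measure.items() if extension(w))
            for extension in precisifications}

teads = [lambda w: w == "H",   # precisification 1: Teads applies to Heads
         lambda w: w == "T"]   # precisification 2: Teads applies to Tails

fair   = {"H": 0.5, "T": 0.5}
biased = {"H": 0.8, "T": 0.2}  # coin biased to degree 0.8 towards Heads

print(precisification_values(fair, teads))    # {0.5}: point-like
print(precisification_values(biased, teads))  # {0.8, 0.2}
```

The logic-based recipe, by contrast, delivers the same verdict in both cases, which is just the information loss noted in the moral above.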
This last observation can form the basis for criticism of supervaluationism in a range of circumstances in which we want to discriminate between attitudes towards equally indeterminate sentences. And *as an argument* I take it seriously. I do think there should be logical constraints on rational credence, and if the logic for supervaluationism is as it’s standardly taken to be, that enforces the result. If we don’t want the result, we need to argue for some other logic. Doing so isn’t cost free, I think—working within the supervaluational setting, bumps tend to arise elsewhere when one tries to do this. So the moral I’d like to draw from the above discussion is that there must be two very different ways of thinking about indeterminacy that both fall under the semantic indecision model. These two conceptions are manifest in the different attitudes towards indeterminacy described above. (This has convinced me, against my own previous prejudices, that there’s something more-than-merely terminological to the question of “whether truth is supertruth”).
But let’s set that aside for now. What I want to do is just note that *within* the supervaluational setting that goes with the logic-based argument and thinks that all indeterminate claims should be rejected, there shouldn’t be any objection to the underlying probability measure mentioned above, and given this, one shouldn’t object to introducing various object-language operators. In particular, let’s consider the following definition:
“P(S)=n” is true on i, w iff the measure of {u: “S” is true on u,i}=n
But it’s easy to see that the (super)truths about this operator will reflect the precisification-based probabilities described earlier. So even if the logic-based argument means that our degree of belief in indeterminate A should be zero, there will still be object-language claims we could read as “P(the coin will land Teads)=1/2” that are supertrue. (The appropriate moral, from the perspective of the theorist in question, would be that whatever this operator expresses, it isn’t a notion that can be identified with degree of belief.)
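To check this in the Teads case (a sketch, with i1 the precisification on which “Teads” means Heads and i2 the one on which it means Tails):

\[
\{u : \text{“the coin lands Teads” is true on } u, i_1\} = \text{the Heads worlds (measure } 1/2\text{)}
\]
\[
\{u : \text{“the coin lands Teads” is true on } u, i_2\} = \text{the Tails worlds (measure } 1/2\text{)}
\]

So on either precisification the definition delivers “P(the coin will land Teads)=1/2”, which is therefore supertrue, even though the recommended credence in the Teads claim itself is zero.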
If this is right, then arguments that I’m interested in using against certain applications of the “certainty of indeterminacy entails credence zero” position have to be handled with extreme care. So, for example, in the paper mentioned right at the beginning of this post, I appeal to empirical data about folk judgements about the probabilities of conditionals. I was assuming that I could take this data as information on what the folk view about credences of conditionals is.
But if, compatibly with taking the “indeterminacy entails zero credence” view of conditionals, one could have within a language a P-operator which behaves in the ways described above, this isn’t so clear anymore. Explicit probability reports might be reporting on the P-operator, rather than subjective credence. So everything becomes rather delicate and very confusing.
Interesting post.
Am I right in thinking the argument for P(A)=0, when it’s certain that A is indeterminate, relies on a conception of validity as preservation of supertruth? If we instead thought of validity as preservation of truth at a precisification, presumably the deduction theorem would still hold, and A |= DA only when A is supertrue or superfalse. In that case I can’t see why we couldn’t have completely classical probability functions. (At least, from logical considerations.)
Yes, the global consequence framework is necessary for the argument. And actually, arguably even that won’t do—I’ve got a paper where I argue that preservation of supertruth doesn’t automatically get you the A&~DA|=C result (assumptions about logicality and possibly the logic of D have to also be packed in). But on global consequence as standardly conceived, we get the above results.
How things go with local consequence is interesting. It’s entirely classical (the definition can be identical), so there’s no chance of revisionary arguments. But there are reasons to be suspicious of it, I reckon (the “bumps in the carpet” I mention in the post).
In particular, it’s so non-revisionary, that it coincides with classical consequence even for the multi-conclusion setting. So we get e.g. Av~A|=A,~A. This can be an argument with all true premises and all untrue conclusions. Switching from probabilities to all-or-nothing talk for the moment, a natural way of thinking of multi-conclusion settings is that they tell us about when we can’t accept all the premises, and reject all the conclusions. But where the premises are true, and the conclusions are untrue, it looks like we should do exactly this.
I’m not sure how to combine the all-or-nothing thinking which gets local consequence into trouble, with the probabilistic thinking which causes trouble for global consequence. If they’re exclusive options, or if we don’t need a single consequence relation covering both, maybe this is ok.
But I’m inclined to be wary of this.
I think I’m ok with that. In the case you mentioned, I don’t think I can simultaneously reject A and ~A, even in borderline cases (given I accepted Av~A, but that bit seems superfluous). Anyone who did so would strike me as being inconsistent. It seems to me we just have to suspend judgement in borderline cases.
In terms of probability functions, I reckon it wouldn’t be too difficult to prove you can have classical probability functions assigning 1 to ~DA and ~D~A. Or is that not what you meant?
Anyway, thanks for the comments on global consequence. I’ve had similar thoughts myself actually – one reason I have to think that the D operator shouldn’t be treated as a logical constant is that its accessibility relation is not invariant under arbitrary permutations of worlds (assuming it has a sensible logic which allows for higher order vagueness – i.e. not S5.)
Right—I think we’re on the same page.
There are two questions really. One is what attitude we should take to indeterminate cases. The other is whether what we say about this fits with supervaluationism. If we’re to take the idea that truth is supertruth at all seriously, the multi-conclusion result looks bad to me. So it’s really worries about the overall coherence of the package, rather than anything intrinsically wrong with the classical probability functions you describe.
Interesting that you’ve been thinking about the global consequence thing. These permutation results are often sensitive to what structure you demand be held fixed (I remember working through MacFarlane’s tests when I was writing the paper, and it seemed to me that it would come out as a logical constant by his lights. But maybe I had to assume S5 for that.)
Right, so my thought regarding the first question is that we should accept what we know to be supertrue, reject what we know to be superfalse, and suspend judgement when we know it to be a borderline case (even though we know it not to be supertrue). This seems to me to be a natural line for a supervaluationist to take, even if she does equate truth with supertruth (which I personally am not too comfortable with).
Anyway, I’ve actually never come across these MacFarlane tests you’re talking about. Where can I find them? The requirement I was thinking of was quite restrictive, making most operators non-logical, so maybe I should read this.
It sounds like “reject” for you is going to coincide with “accept the negation”. I was assuming a different reading—but the underlying question is which of the senses of “reject” we should care about, and take logic to norm.
My picture of the truth=supertruth supervaluationist take on the sorites paradox is that they can say that each premise of the form “man x is bald, but man x+1 is not bald” should be rejected. The diagnosis of the error that leads to apparent paradox is that we move from rejecting those conjunctions to accepting the negation—which then means accepting the material conditional “man x is bald > man x+1 is bald”; and paradox. If that’s right, then we need to theorize about rejection in a sense strong enough to account for the intuitive repulsion from the conjunctions above. Describing that attitude as agnosticism, for me, doesn’t really capture it. But obviously this will be controversial, even for a supervaluationist (and this does depend on taking your supervaluationism to cover truly vague predicates).
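To make the slide explicit (a sketch, writing Bx for “man x is bald”): the rejected conjunction, once negated, is classically equivalent to a material conditional,

\[
\neg(Bx \wedge \neg B(x{+}1)) \;\equiv\; Bx \supset B(x{+}1)
\]

and chaining those material conditionals along the series takes us from “man 0 is bald” to “man n is bald” for arbitrarily large n, which is the paradoxical conclusion.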
The MacFarlane stuff is spelled out in most detail in his PhD thesis “What does it mean to say that logic is formal?”, which is available through his website. I really like the way he sets up the issues…
Hi, sorry for replying so late. That was very helpful, thank you.
Thanks for the MacFarlane reference too, I will check that out. Another reason for not wanting to treat D as a logical constant, aside from considerations from permuting the accessibility relation, is that it should at least be a necessary condition that the logical constants not be vague. But in the presence of higher order vagueness, D is exactly that: vague. (The standard test for vagueness is hard to apply here, i.e. combine with precise vocabulary and see if the outcome is vague. But I think Sorensen had some sorites argument that showed ‘vague’ to be vague.)
Pingback: Is ‘determinately’ a logical constant? « Possibly Philosophy