One of the things that’s confusing about truth norms for belief is how exactly they apply to real people—people with incomplete information.
Even if we work with “one should: believe p only if p is true”, I guess we can each be pretty confident that we fail to satisfy the truth-norm. I’m confident that at least one of my current beliefs is untrue. I’m in preface-paradox-land, and there doesn’t seem to be any escape. It doesn’t feel like I’m criticizable in any serious way for being in this situation. What would the better option be? (OK, you could say: switch to describing your doxastic state in terms of credences rather than all-or-nothing beliefs, but for now I’m playing the all-or-nothing-belief game.)
So I’m not criticizable just for having beliefs which are untrue. And I’m not criticizable for knowing that I have beliefs which are untrue. Here’s how I’d like to put it. There are lots of very specific norms, which can be schematized as “one should: believe p only if p is true”. It’s when I know, of one particular instance, that I’m violating this “external” norm, that I seem to be criticizable.
Let’s turn to the indeterminate case. Suppose that it’s indeterminate whether p, and I know this. And consider three options.
- Determinately, I believe p.
- Determinately, I believe ~p.
- It’s indeterminate whether I believe p.
I’m going to ignore the “suspension of belief” case. I’ll assume that in (3) we’re considering a case where the indeterminacy in my belief is such that, determinately, I believe p iff p is true.
In cases (1) and (2), for the specific p in question, I can know that it’s indeterminate whether I’m violating the external norm. But for (3), it’s determinate that I’m not violating this norm.
It’s very natural to think that I’m pro tanto criticizable if I get into situation (1) or (2) here, when (3) is open to me (that is, I better have some overriding reason for going this way if I’m to avoid criticism). If this is one way in which criticism gets extracted out of external truth-norms, then it looks like indeterminate belief is the appropriate response to known indeterminacy.
But that isn’t by any means the only option here. We might reason as follows. What’s common ground by this point is that it’s indeterminate whether (1) or (2) violates the norm. So it’s not determinate that (1) or (2) do violate the norm. So it’s not determinate that a necessary condition for my beliefs being criticizable is met. So it’s at worst indeterminate whether I’m criticizable in this situation.
I can’t immediately see anything wrong with this suggestion. But I think that, nevertheless, (3) is the better state to be in than (1) or (2). So here’s a different way of getting at this.
I’m now going to switch to talking in terms of credences *as well as* beliefs. Suppose that I believe, with credence 1, that p is indeterminate. And suppose that I believe p, but have credence less than 1 in it: say, credence 0.9. (This would fit nicely, for example, with a “high threshold” account of the relationship between credence and all-out belief; but all I need is the idea that this sort of thing can happen, rather than any general theory about what goes on here. It couldn’t happen if, e.g., to believe p were to have credence 1 in p.)
In this situation, I have 0.1 credence in ~p, and so 0.1 credence in p not being true (in the situation we’re envisaging, I’m credence 1 in the T-scheme that allows this semantic ascent).
I’m also going to assume that not only do I believe p, but I’m perfectly confident of this—credence 1 that I believe p. So I’m credence 0.1 in “I believe p & p is not true”—so credence 0.1 in the negation of “I believe p only if p is true”. So I’m at least credence 0.1 that I’ve violated the norm.
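To make the arithmetic here explicit, here is a minimal sketch (the function name and bookkeeping are mine, not part of the post). The key assumption is the one just stated: since I have credence 1 that I believe p, my credence in “I believe p & p is not true” collapses to my credence in ~p.

```python
def credence_in_violation(cred_p: float, cred_believe_p: float = 1.0) -> float:
    """Credence in the negation of 'I believe p only if p is true'.

    Treats the belief-fact and the truth of p as independent; with
    cred_believe_p = 1 (the case in the post) this is exact, and
    equals 1 - cred_p, i.e. one's credence in ~p.
    """
    return cred_believe_p * (1 - cred_p)

print(credence_in_violation(0.9))  # roughly 0.1
```

So determinately believing p, while having credence 0.9 in p, leaves one with credence 0.1 of having violated the norm.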
Contrast this with the situation where it’s indeterminate whether I believe p, and p is indeterminate, in such a way that “p is true iff I believe p” comes out determinately true. If I’m fully confident of all the facts here, I will have zero credence that I’ve violated the norm.
That is, if we go for option (1) or (2) above, when you’re certain that p is indeterminate, and are less than absolutely certain of p, then it looks to me that you’ll thereby give some credence to your having violated the aleithic norm (with respect to the particular p in question). If you go for (3), on the other hand, you can be certain that you haven’t violated the alethic norm.
It seems to me that faced with the choice between states which, by their own lights, may violate alethic norms, and states which, by their own lights, definitely don’t violate alethic norms, we’d be criticizable unless we opt for the second rather than the first so long as all else is equal. So I do think this line of thought supports the (anyway plausible) thought that it’s (3), rather than (1) or (2), which is the appropriate response to known indeterminate cases, given a truth-norm for belief.
(As noted in the previous post, this is all much quicker if the truth-norm were: one should: determinately(believe p only if p is true). But I do think the case for (3) would be much more powerful if we could argue for it on the basis of the pure truth-norm rather than this decorated version.)
Robbie, apologies if this should go on the previous post, or on no post at all…
I don’t quite get why the three relevant options are the ones you present. Consider the options in the case where it is contingent that p. Do we think that any of the following are required?
1* Necessarily, I believe p.
2* Necessarily, I believe ~p.
3* It’s contingent whether I believe that p.
I guess it’s clear that 1* and 2* are silly. I’m not sure that will push us in the direction of 3*. An advantage of 3* is that it seems more comprehensible than 3: we have a clearer idea of what contingent belief might look like than indeterminate belief, and certainly of how one might aim at contingent belief vs. how one might aim at indeterminate belief. But the advice given in 3* is confusingly expressed. I think it is better to think about the norm not as mandating some particular attitude towards p in the actual world, but rather as mandating tracking of p across modal space. This is an extra condition in the case of contingency, because in the case of necessity of p or ~p there is nothing to track. So we get:
4* My belief concerning p at w tracks the truth of p at w.
[Nozick gives this as a condition on knowledge, which is wrong; but it looks better as a norm on contingent belief.] Our equivalent for indeterminacy can then be identical, except that the variable ‘w’ now ranges over actual worlds rather than possible worlds.
4 My belief concerning p at w tracks the truth of p at w.
Now I guess that 4 can be seen as an explication of 3 rather than an alternative to it. And I don’t quite see how to follow 4 either (whereas I have some idea how to follow 4*). But if I wanted to explain what mushy credences were, or I wanted to have an alternative to them, the idea of tracking across actual worlds would be the place I’d start.
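For what it’s worth, the tracking conditions 4* and 4 can be regimented in a single schema (the notation is mine, not anything from the comment):

```latex
% Tracking norm, regimented. Bel_w(p): I believe p at w; True_w(p): p is true at w.
% For 4*, w ranges over possible worlds; for 4, over the "actual worlds"
% (precisifications) of the indeterminate situation.
\forall w \in W : \bigl( \mathrm{Bel}_w(p) \leftrightarrow \mathrm{True}_w(p) \bigr)
```

The two norms differ only in the domain W, which is why 4 reads as an explication of option (3): satisfying it just is making it determinately true that you believe p iff p is true.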
Is this the view you’re thinking of: you ought to believe p or you ought to believe ~p, but it’s indeterminate which. Despite this, it’s permissible to determinately believe p or determinately believe ~p, but it’s indeterminate which (though it’s (determinately) impermissible to be agnostic).
If so, it seems that either people are usually being irrationally agnostic, or they’re in fact indeterminately believing p while acting as if they were agnostic. But by determinately acting that way they are indeterminately violating norms connecting belief to action. (Besides, I can’t see how you could be required to do something impossible, like indeterminately perform an action – or perhaps even indeterminately form a belief.)
I quite like a similar view where it’s indeterminate whether it’s p or ~p that you ought to believe, but it’s permissible to: be agnostic about p, (determinately) believe p, (determinately) believe ~p or indeterminately believe p. This view has the benefit of permitting agnosticism in cases of vagueness.
(The view asserts: it’s vague whether we ought to believe p, but it’s permissible to be agnostic. This is at best vague (it’s of the form ), but by the view’s own lights it’s still permissible to believe the theory – perhaps mandatory in this case.)
“But I think that nevertheless, (2) is the better state to be in than (1). ”
I don’t share this intuition at all. Why aren’t they completely symmetrical?
Sorry, I got mixed up in the parentheses beginning with “(Besides”. That should have been directed at the view where I ought to indeterminately believe p.
I think the issue about permissible agnosticism over p (when you’re certain that p is indeterminate) will turn on whether we go for a conditional or biconditional version of the truth-norm. I’d hope that, weakening to the conditional version, I’d get the conclusion: if you’re not going to be agnostic, you should make it indeterminate whether you believe p or ~p.
In terms of the application to actual cases. One thing we might say is that people *are* agnostic, but are criticizable for being so (since there’s a state that’s open to them that’s better, viz. indeterminately believing p).
One thing I quite like about the framework is that *in* circs where indeterminately believing p just isn’t a practical option (and even more plausibly, where indeterminately acting isn’t open to someone) then you won’t be criticizable for determinately believing/disbelieving. The idea would be that though your only option is to believe/do something that makes it knowably indeterminate whether you’ve violated the norms, you have comparatively low credence that you’ve violated the norm, and if there’s no better option available, it’ll be inappropriate to criticize. Basically, this is to exploit the “all else equal” rider in the original post.
The view you mention is interesting. Is this one way to state the strange feature you point to: it’s indeterminate whether it’s permissible to be agnostic, but it’s permissible to be agnostic. And believing this conjunction is *permissible* by the theory’s own lights.
Surely you’ll have to add some kind of rider if you want to leave open that it’s impermissible to deny this conjunction. Since the conjunction is at best indeterminate (and we know this), the original claims say that it’s permissible to believe ~p in these circumstances, i.e. to deny the theory. Maybe you could make the permissions pro tanto, outweighed by considerations of evidence, etc.—and in the particular case of your theory, you could then say that considering only the truth-norm, it’s permissible to deny the theory, but adding in the *evidence*, we’d better believe it (which is also OK by truth-norm lights).
On the last note: that was a typo! Thanks for catching it—the post is now amended.
On the “Besides”. You could fairly make that point against the view as stated, in that I was committed at the end to the view that you’d be criticizable for not indeterminately believing p.
But I do think that the rider in the argument then kicks in if this just isn’t a state you can adopt: you won’t *then* be criticizable for opting for one of the other options that “satisfices” in meeting the truth-norm. This is, I think, an advantage over the narrow-scoped-det truth-norm, where we wouldn’t have this flexibility.
Here’s how I’m thinking of this (in application to the open future, on something like the Barnes-Cameron view where future contingents are indeterminate in a way consistent with bivalence). Ideally, by the lights of the belief norm we’d indeterminately believe p: believe p on exactly those precisifications where p is true. Likewise, we’d ideally bet on heads on exactly those worlds where we believe that it’ll land heads, i.e. where it lands heads. [This last move of course supposes a standard connection between how you ought to act and expected utility.]
Nice work if you can get it!
But pretty clearly, indeterminately acting in the penumbrally connected way isn’t one of our typical options for acting. And you might think that believing this way is also not one of our options (though that’s a little less clear). The view as above allows room for taking “second best” without being criticized for doing so: determinately believing p (and being very confident in p) while noting that it’s indeterminate whether this is violating the norms (while being very confident it’s not violating them).
Yes, that was what I was thinking. Roughly, do enough philosophy and you’ll find it’s the best view – so you ought to accept it.
Of course, while the view seems internally consistent, it’s hard to see how you could rationally *come* to accept it, unless you were already in the frame of mind that it was ok to accept demonstrably vague propositions.
(Actually, I think the way I stated it *was* inconsistent, because I said that on every precisification either you ought to believe p, or you ought to believe ~p, but that on some precisifications it was permissible to be agnostic. I think what I want is what you said: it’s indeterminate whether it’s permissible to be agnostic, but it’s permissible to be agnostic.)
Thanks for this. Yes, I think that may be what we end up with—and I do think of it as an explication of (3). Basically, what it tells you to do is to make-true Det(Bel(p) iff p is true). What I was trying to do in the post was to give a way that it could be a *derived* norm, from a fundamental norm that just tells you to make-true that you believe p iff p is true. The advantage (as I now see it) is that you get the kind of wriggle room we need when we simply don’t have the option of tracking truth across actual worlds.
Just to give an idea of a case where we could track the truth in this way: we have a sentence S in our belief box, and which proposition S expresses is metaphysically indeterminate. Think of e.g. “Robbie is here”, where it’s indeterminate which object “Robbie” picks out (since e.g. it’s indeterminate which macroscopic object exists). It’s indeterminate which proposition you’re believing, but you’re only believing them on the precisifications where they’re true. (It’s of course determinate that the sentence is true, but if the objects of belief are propositions, then it’s indeterminate what we believe *by* having that sentence in our heads).
But it’s very difficult to see how to get ourselves into a state that meets the ideal in other cases (e.g. for open-future cases as above). So then the fall-back options kick in.
I should think more about what we say to the agnostic.
Here’s the thought I was having on the suspended judgement and the biconditional version of the truth norm. You know that if p, then you’ll be violating the truth-norm by not believing p. You know that if ~p, then you’ll be violating the truth-norm by not believing ~p. So reasoning by cases, either way you’re violating the truth-norm.
That’s fine, but is this a “bad” violation? As I think I remarked, surely we’re all sure that we violate the truth-norm somewhere. The real question is: can we pick out a particular feature of our mental state that *constitutes* the violation, and which we can change to rectify the situation?
I think suspension-of-judgement can’t be a bad violation. Notice that it doesn’t rely on anything about indeterminacy: bog-standard suspended judgement would knowably violate the norms by the same argument. Of course, we could get rid of this by endorsing the conditional version (believe p only if p is true) rather than biconditional. But can we say something to stabilize the biconditional version?
Maybe. We can note that you don’t know whether it’s your not believing p that violates the truth norm, or whether it’s your not believing ~p that violates the truth-norm. And without this bit of information, you don’t know which state to change. That’s the sort of known violation of truth-norms that is not blameworthy, because we haven’t got an idea of what to do to rectify it.
Furthermore, if you don’t believe p, you don’t believe that (p & you don’t believe p). So you don’t believe that you’ve violated the norm in not believing p. Likewise for ~p. By suspending judgement over p, you don’t believe, of either of your not-believing states, that it violates the truth-norm. (Of course, you know one or the other does).
So I think it doesn’t follow from the above that suspension of judgement is a criticizable state to be in directly.
However, it doesn’t seem comfortable. Let’s say for argument’s sake that you’re 0.5 confident of p when you suspend judgement. Then you’ll be 0.5 confident that you’re violating the norms in not believing p, and 0.5 confident that you’re violating the truth-norm in not believing ~p. So you’re 50/50 on violating the truth-norms. So even though you don’t believe you’ve violated the truth norm, you don’t believe you haven’t violated it either—you’ve suspended judgement on the question. This needn’t be an all-things-considered bad state to be in, but it’s not great.
(As a side note: notice that you’d have exactly the same verdict on believing p, or on believing ~p. So in the 50/50 confidence in p case, you can just as well defend believing p (suspending judgement on whether that violates the truth-norm) or believing ~p (suspending judgement on whether that violates the truth norm). I suppose on something like a threshold account of the belief-credence link, then being 50/50 in p and believing p is just impossible, however, so maybe we shouldn’t take this so seriously).
What’s weird is this. Basically, as you become more opinionated about p, your credence that you’re violating the norms tends to 0, but as you become less opinionated, your credence tends to 0.5 that you’re violating them (whatever you do). That’s just very strange.
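The pattern just described can be sketched as a simple function of one’s credence c in p (the state labels are mine; “suspend (re p)” means the not-believing-p component of suspended judgement, under the biconditional norm):

```python
def violation_credence(c: float, state: str) -> float:
    """Credence that the named doxastic state violates the biconditional
    truth-norm, given credence c in p."""
    if state == "believe p":          # violated just in case ~p
        return 1 - c
    if state == "believe ~p":         # violated just in case p
        return c
    if state == "suspend (re p)":     # not believing p violates the norm iff p
        return c
    if state == "suspend (re ~p)":    # not believing ~p violates the norm iff ~p
        return 1 - c
    raise ValueError(f"unknown state: {state}")

# Opinionated (c near 1): believing p carries violation-credence near 0.
# Unopinionated (c = 0.5): every option carries violation-credence 0.5.
for c in (0.99, 0.5):
    print(c, violation_credence(c, "believe p"), violation_credence(c, "suspend (re p)"))
```

This is just the strangeness restated: as c moves toward 0.5, no available state gets your credence in violation below 0.5.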
I suppose what I’d like to say about this situation is the following: we know what to do to meet all our obligations perfectly. That’s the indeterminate belief case. If we can’t do that, everything is going to end up second best, and to be more or less uncomfortable. It turns out that some “second choices” are worse than others. But they may each, by their own lights, still be all-things-considered the best option for you.
You could move to the weaker truth norm (one should: believe that p only if p is true) to avoid all this trouble in the 50/50 case. That makes suspension of belief *always* permissible (not indeterminately so). It does seem a little unmotivated to me, once we remember that the truth-norm is an “external” norm.
Thanks Robbie, those examples are really helpful. About the indeterminate sentence case, it’s worth saying that this isn’t a norm that one satisfies automatically: I could have a curious conception of Robbie such that I am determinately thinking about some precise macroscopic object, and that would mean that I was determinately believing some proposition, thus failing to track the truth.
I suspect that in the open future case, where perfect tracking isn’t an option, we have to think about the second best alternative in terms of credences: the point of partial belief being something to do with quantifying the (unavoidable) failure of tracking that would be involved in outright belief that p. (Of course, given that the alternative futures are all actual, it isn’t clear why any measure over the space of them should correspond to probability in the intuitive sense. I suspect that this is just the same problem that Everettians have with linking probability and intensity. Though it may be easier for you because although you think that many futures are actual, only one of them will be actualised, if I understand the view right. Basically we want to be able to employ the Principal Principle or some surrogate.)
In response to your last comment, I think we can say some stuff to make the idea of being uncomfortable when in the 50/50 case more plausible. Suppose that I have to decide what to do in a situation where what the best thing to do is depends on whether or not p. The closer my credence is to 0 or 1, the more confident I can be that the action I’m taking is for the best, but if I just have no idea whether p I also have no idea whether my action is best, and this (legitimately) worries me. Of course, in the cases where 50/50 credence is just the right credence to have, I won’t be criticisable, but this seems to be because of general “ought implies can” thoughts. Maybe it’s better to think of it as an ideal rather than a norm, insofar as norms suggest oughts.
Thanks Daniel. I would like to be able to link this to some kind of “go with the measure” proposal. Sarah Moss has an interesting paper in ms. where she proposes that when we have mushy credences, we attempt a “compromise” between them (the mushy credences she has in mind are as a result of uncertainty, not indeterminacy, but the idea still seems good). I’d like to pursue that thought.
A couple of further thoughts. One is that the whole thing about degrees of belief and full belief (in the light of the truth norm) goes through—I mentioned this for the 50/50 case, but equally, believing p when one isn’t credence 1 that p will lead to giving small credences to your violating the truth-norm. The only distinctive bit is that in the indeterminate case we’ve got another option—indeterminately believing p—which might trump the other options.
Another thing that I really haven’t engaged with in these two posts is how to formulate the truth-norm for credences. For now I’ve worked with allowing people to have what credences they like, and then considered what all-or-nothing beliefs go with that. It’s fairly delicate whether we can really hold the discussion in that context.
One thought is just to adopt Joyce’s gradational accuracy norm, which incorporates (as a limit case) the idea that ideally your credences should be 1 in true p, and 0 in false p. But I’m not sure exactly how to set up the dialectic yet with this. It would be nice to have an argument for mushy credences, in the way that I tried to above for mushy belief.