
More on norms

One of the things that’s confusing about truth norms for belief is how exactly they apply to real people—people with incomplete information.

Suppose we work with “one should: believe p only if p is true”. Even so, I guess we can each be pretty confident that we fail to satisfy the truth-norm. I’m confident that at least one of my current beliefs is untrue. I’m in preface-paradox-land, and there doesn’t seem to be any escape. It doesn’t feel like I’m criticizable in any serious way for being in this situation. What would the better option be? (OK, you could say: switch to describing your doxastic state in terms of credences rather than all-or-nothing beliefs, but for now I’m playing the all-or-nothing-belief game.)

So I’m not criticizable just for having beliefs which are untrue. And I’m not criticizable for knowing that I have beliefs which are untrue. Here’s how I’d like to put it. There are lots of very specific norms, which can be schematized as “one should: believe that p only if p is true”. It’s when I know, of one particular instance, that I’m violating this “external” norm that I seem to be criticizable.

Let’s turn to the indeterminate case. Suppose that it’s indeterminate whether p, and I know this. And consider three options.

  1. Determinately, I believe p.
  2. Determinately, I believe ~p.
  3. It’s indeterminate whether I believe p.

I’m going to ignore the “suspension of belief” case. I’ll assume that in (3) we’re considering a case where the indeterminacy in my belief is such that, determinately, I believe p iff p is true.

In cases (1) and (2), for the specific p in question, I can know that it’s indeterminate whether I’m violating the external norm. But in (3), it’s determinate that I’m not violating this norm.

It’s very natural to think that I’m pro tanto criticizable if I get into situation (1) or (2) here, when (3) is open to me (that is, I’d better have some overriding reason for going this way if I’m to avoid criticism). If this is one way in which criticism gets extracted from external truth-norms, then it looks like indeterminate belief is the appropriate response to known indeterminacy.

But that isn’t by any means the only option here. We might reason as follows. What’s common ground by this point is that it’s indeterminate whether (1) or (2) violates the norm. So it’s not determinate that (1) or (2) do violate the norm. So it’s not determinate that a necessary condition for my beliefs being criticizable is met. So it’s at worst indeterminate whether I’m criticizable in this situation.

I can’t immediately see anything wrong with this suggestion. But I think that, nevertheless, (3) is a better state to be in than (1) or (2). So here’s a different way of getting at this.

I’m now going to switch to talking in terms of credences *as well as* beliefs. Suppose that I believe, and have credence 1, that p is indeterminate. And suppose that I believe that p, but I’m not credence 1 in it: suppose I’m credence 0.9 in p instead. (This’d fit nicely, for example, with a “high threshold” account of the relationship between credence and all-out belief, but all I need is the idea that this sort of thing can happen, rather than any general theory about what goes on here. It couldn’t happen if, e.g., to believe p were to have credence 1 in p.)

In this situation, I have 0.1 credence in ~p, and so 0.1 credence in p not being true (in the situation we’re envisaging, I’m credence 1 in the T-scheme that allows this semantic ascent).

I’m also going to assume that not only do I believe p, but I’m perfectly confident of this—credence 1 that I believe p. So I’m credence 0.1 in “I believe p & p is not true”—so credence 0.1 in the negation of “I believe p only if p is true”. So I’m at least credence 0.1 that I’ve violated the norm.

Contrast this with the situation where it’s indeterminate whether I believe p, and p is indeterminate, in such a way that “p is true iff I believe p” comes out determinately true. If I’m fully confident of all the facts here, I will have zero credence that I’ve violated the norm.

That is, if you go for option (1) or (2) above, when you’re certain that p is indeterminate and less than absolutely certain of p, then it looks to me that you’ll thereby give some credence to your having violated the alethic norm (with respect to the particular p in question). If you go for (3), on the other hand, you can be certain that you haven’t violated the alethic norm.
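To put the arithmetic in one place, here’s a minimal sketch (Python just for concreteness; the figures are the example values from above, and nothing in the code goes beyond the reasoning already given):

```python
# Example values from the post, as exact fractions.
from fractions import Fraction

cr_p = Fraction(9, 10)      # credence in p (option (1): determinately believing p)
cr_believe_p = Fraction(1)  # perfectly confident that I believe p

# Since cr(I believe p) = 1, coherence forces
# cr(I believe p & p is not true) = cr(p is not true).
cr_violation = 1 - cr_p
print(cr_violation)         # 1/10: credence that the truth-norm is violated

# In option (3), by contrast, I'm certain that, determinately,
# "I believe p iff p is true", so the corresponding credence is 0.
```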

It seems to me that, faced with the choice between states which, by their own lights, may violate alethic norms, and states which, by their own lights, definitely don’t violate alethic norms, we’d be criticizable, so long as all else is equal, unless we opted for the second rather than the first. So I do think this line of thought supports the (anyway plausible) thought that it’s (3), rather than (1) or (2), which is the appropriate response to known indeterminate cases, given a truth-norm for belief.

(As noted in the previous post, this is all much quicker if the truth-norm were: one should: determinately(believe p only if p is true). But I think the case for (3) is much more powerful if we can argue for it on the basis of the pure truth-norm rather than this decorated version.)

Partial emotions?

If you think emotional states have representational content, it seems reasonable to think that there are rational constraints between the having of a certain emotion (say, feeling regretful that one has dropped something on one’s foot) and the having of a certain belief (say, believing that one has dropped something on one’s foot). Now, I imagine that some would want to question such a connection, but it seems at least a decent position to consider something like:

  • it is rational to regret that p only if it is rational to believe that p.

But now suppose that we think that for theoretical purposes (say in characterizing instrumentally rational action) we should really be talking in terms of partial beliefs rather than all-or-nothing beliefs. In the official idiom, it seems, we dispense with talking about “believing that one has dropped something on one’s foot” and instead talk of things like “believing to degree d that one has dropped something on one’s foot”. (I’ll come back later to the question about whether we just ditch the all-or-nothing belief talk).

What then should we say about the rational connections between doxastic and emotional states? How are emotions rationally constrained by belief? Here’s a very natural thing to write down:

  • it is rational to regret that p to degree d only if it is rational to believe that p to degree d.

The trouble with this is that I’m not sure I really understand the notion of “partial regret” that is now being talked about. Of course, I understand the idea of intensity of regret: I might regret insulting someone with a greater intensity than I regret forgetting my umbrella this morning. But “degrees of regret” in the intensity sense aren’t obviously what we want in this connection (I’m tempted to say they’re obviously not what we want). But do we really have any other grip on the notion of a degreed emotion?

Of course, some people are likely to have similar sceptical thoughts about the notion of partial belief, finding all-or-nothing belief talk familiar home turf and talk of partial belief rather mysterious. But the cases, to me, seem only superficially similar. To begin with, I think I had some pre-theoretic grip on the notion of degree of belief/confidence (though even here I think that there is a phenomenological intensity sense of “degree of confidence” that needs to be cleared out of the way). And even if I were to give up the pre-theoretic grip, I’ve at least got a theoretical/operational grip on the notion of degree of belief through decision-theoretic connections to action.

With partial emotions, I’m all at sea—things like regret seem to me, pre-theoretically, all-or-nothing (setting aside differences of felt intensity). And neither do I have a natural operational/theoretical grip on such partial emotions to reach for.  I’d be very glad if someone could convince me that I do understand the notion, or point to literature where such issues are discussed!

Another strategy, I suppose, is to think of all-or-nothing belief as distinct from the degreed notion. If that’s the case, then we could formulate the connection between beliefs and regret just as originally stated. This’d be interesting to me, since previously I’ve never really been clear what’s lost if we ditch all-or-nothing belief-talk (and ensuing puzzles like the lottery paradox) and only appeal in our theories of mind to the partial beliefs. But if other emotional states with intentional content have rational connections to all-or-nothing beliefs, then it seems we’ve got a real theoretical role for such states.

Of course, this line of thought gives urgency to puzzles about how to relate partial beliefs and all-or-nothing beliefs (e.g. treating all-or-nothing belief as partial belief above a certain threshold). That’s a whole literature in itself.

What do people think? Am I being really naive in thinking there are rational connections like the above to get worried about? Do they require reformulation when we introduce partial beliefs, or (as suggested at the end) is this a way of arguing the importance of retaining all-or-nothing belief talk as well as partial belief talk? Can anyone make sense of the notion of a partial emotion (when distinguished from the phenomenological-intensity reading)?

Thresholds for belief

I’m greatly enjoying reading David Christensen’s *Putting Logic in its Place* at the moment. Some remarks he makes about threshold accounts of the relationship between binary and graded beliefs seemed particularly suggestive. I want to use them here to suggest a certain picture of the relationship between binary and graded belief. No claim to novelty here, of course, but I’d be interested to hear worries about this specific formulation (Christensen himself argues against the threshold account).

One worry about threshold accounts is that they’ll make constraints on binary beliefs look very weird. Consider, for example, the lottery paradox. I am certain that someone will win, but for each individual ticket, I’m almost certain that it’s a loser. Suppose that having a degree of belief above some threshold (less than 1) sufficed for binary belief. Then, by choosing a big enough lottery, we can make it the case that I believe a generalization (there will be a winner) while believing the negation of each of its instances. So I believe each member of a logically inconsistent set.

This sort of situation is very natural from the graded belief perspective: the beliefs in question meet constraints of probabilistic coherence. But there’s a strong natural thought that binary beliefs should be constrained to be logically consistent. And of course, the threshold account doesn’t deliver this.

What Christensen points to are some observations by Kyburg about limited consistency results that can be derived from the threshold account. Minimally, binary beliefs are required to be weakly consistent: for any threshold above zero, one cannot believe a single contradictory proposition. But there are stronger results too. For example, for any threshold above 0.5, one cannot believe a pair of mutually contradictory propositions. One can see why this is if one remembers the following result: in a logically valid argument, the improbability of the conclusion cannot be greater than the sum of the improbabilities of the premises. For the case where the conclusion is absurd (i.e. the premises are jointly contradictory), we get that the sum of the improbabilities of the premises must be at least 1. Believing each of a mutually contradictory pair at a threshold above 0.5 would make each improbability less than 0.5, and so the sum less than 1, violating this bound.

In general, then, what we get is the following: if the threshold for binary belief is above 1-1/n, then one cannot believe each of an inconsistent set of n propositions.
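Here’s a quick numerical illustration of that bound (a sketch in Python; the inconsistent triple {A, B, not-(A & B)} and the credence of 2/3 in each member are just example values):

```python
# The triple {A, B, not-(A & B)} is jointly inconsistent, and 2/3 in
# each member is a coherent credence assignment (exact fractions used
# to keep the arithmetic clean).
from fractions import Fraction

credences = [Fraction(2, 3)] * 3
print(sum(1 - c for c in credences))          # 1: improbabilities sum to at least 1

threshold = 1 - Fraction(1, 3)                # 1 - 1/n with n = 3
print(all(c > threshold for c in credences))  # False: can't believe all three
```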

Here’s one thought. Let’s suppose that the threshold for binary belief is context dependent in some way (I mean to use this broadly, rather than committing to some particular, potentially controversial semantic analysis of belief attributions). The threshold that marks the shift to binary belief can vary depending on aspects of the context. The thought, crudely put, is that there’ll be the following constraint on what thresholds can be set: in a context where n propositions are being entertained, the threshold for binary belief must be at least 1-1/n.

There is, of course, lots to clarify about this. But notice that now, relative to every context, we’ll get logical consistency as a constraint on the pattern of binary belief (assuming that to believe that p is in part to entertain that p).

[As Christensen emphasises, this is not the same thing as getting closure holding in every context. Suppose we consider the three propositions A, B, and A&B. Consistency means that we cannot accept the first two and accept the negation of the last. And indeed, with the threshold set at 2/3, we get this result. But closure would tell us that in every situation in which we believe the first two, we should believe the last. And it’s quite consistent to believe A and B (say, by having credence 2/3 in each) and to fail to believe A&B (say, by having credence 1/3 in this proposition); a witnessing model is sketched just after this note. Probabilistic coherence isn’t going to save the extendability of beliefs by deduction, for any reasonable choice of threshold.

Of course, if we allow a strong notion of disbelief or rejection, such that someone disbelieves that p iff their uncertainty of p is past the threshold (the same threshold as for belief), then we’ll be able to read off from the consistency constraint that in a valid argument, if one believes the premises, one should abandon disbelief in the conclusion. This is not closure, but perhaps it might sweeten the pill of giving up on closure.]
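To check that those credences really are coherent, here’s a minimal witnessing model (a Python sketch; the particular world-probabilities are just one assignment that does the job):

```python
# Four (A, B) worlds with P(A) = P(B) = 2/3 but P(A & B) = 1/3.
from fractions import Fraction

third = Fraction(1, 3)
worlds = {                     # (A, B) -> probability
    (True, True): third,
    (True, False): third,
    (False, True): third,
    (False, False): Fraction(0),
}

def prob(test):
    """Probability of the set of worlds satisfying `test`."""
    return sum(pr for w, pr in worlds.items() if test(w))

print(prob(lambda w: w[0]))            # P(A)     = 2/3
print(prob(lambda w: w[1]))            # P(B)     = 2/3
print(prob(lambda w: w[0] and w[1]))   # P(A & B) = 1/3
```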

Without logical consistency being a pro tanto normative constraint on believing, I’m sceptical that we’re really dealing with a notion of binary belief at all. Suppose this is accepted. Then we can use the considerations above to argue (1) that if the threshold account of binary belief is right, then thresholds (if not extreme) must be context dependent, since for no fixed choice of threshold less than 1 will consistency be upheld; and (2) that there’s a natural constraint on thresholds in terms of the number of propositions entertained.

The minimal conclusion, for this threshold theorist, is that the more propositions one entertains, the harder it will be for one’s attitudes to count as beliefs. Consider the lottery paradox construed this way:


1 loses

2 loses

…

N loses

So: everyone loses

Present this as the following puzzle: We can believe all the premises, and disbelieve the conclusion, yet the latter is entailed by the former.

We can answer this version of the lottery paradox using the resources described above. In a context where we’re contemplating this many propositions, the threshold for belief is so high that we won’t count as believing the individual propositions. But we can explain why it seems so compelling: entertain each individually, and we will believe it (our credences remain fixed throughout).
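As a sanity check on the numbers, here’s a toy sketch (Python; the lottery size N = 1000 is arbitrary, and the threshold rule is just the 1-1/n constraint from above):

```python
# In a context entertaining all N premises plus the conclusion, the
# threshold climbs above each premise's credence.
from fractions import Fraction

N = 1000
premise = 1 - Fraction(1, N)        # credence that any given ticket loses
threshold = 1 - Fraction(1, N + 1)  # at least 1 - 1/n, with n = N + 1

print(premise >= threshold)         # False: no premise is believed here
# Entertained one at a time, the constraint is far weaker, and a
# credence of 1 - 1/N comfortably counts as belief; hence the pull.
```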

Of course, there are other versions of the lottery paradox that we can formulate, e.g. relying on closure, for which we have no answer. Or at least, our answer is just to reject closure as a constraint on rational binary beliefs. But with a contextually variable threshold account, as opposed to a fixed threshold account, we don’t have to retreat any further.