Taking things for granted

If you believe p, do you believe you believe p?

Here’s one model for thinking about this. You have the first order belief—you believe p. On top of that, you have some extra mechanism (call it doxastic introspection) that monitors your internal state and extracts information about what beliefs you have. If that extra mechanism is working reliably in this instance, you’ll end up with the (true) belief that you believe p.

On the introspective model, it’s easy to see how first and second order beliefs can get out of alignment. One forms the first order belief, and the extra mechanism for some reason doesn’t work (maybe it’s unreliable in edge cases, and this is an edge case of having a first order belief). So you end up with the false belief that you don’t believe p, or (more modestly) suspending judgement on the issue. Only if we had a magical 100% reliable introspective mechanism should we expect the original conditional always to be true.

There’s a rival way of thinking about this: the entailment model. On this model, the basic doxastic attitude is not belief, but a propositional attitude we can call “taking for granted”. Whenever you take p for granted in the relevant sense, it automatically follows that you believe p; and it also follows that you believe that you believe p, and so on. So long as the only way humans get to believe p is by taking p for granted, it’ll follow that whenever you believe p, you believe that you believe p. So the original conditional is always true, and not because of any magical flawless introspective mechanism, but because of a “common cause” psychological structure that ensures the first order and higher order beliefs are formed together.
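(A quick formalization may help; the notation here is mine, not anything the models themselves dictate. Write Bp for “you believe p” and Tp for “you take p for granted”. Then the two models contrast as follows:

\[
\begin{aligned}
\textbf{Introspective:}\quad & Bp \wedge \text{monitor fires correctly} \;\Rightarrow\; BBp\\
\textbf{Entailment:}\quad & Tp \Rightarrow Bp \quad\text{and}\quad Tp \Rightarrow BBp\\
\text{Bridge assumption:}\quad & Bp \Rightarrow Tp \quad\text{(taking for granted is the only route to belief)}\\
\text{Hence:}\quad & Bp \Rightarrow BBp
\end{aligned}
\]

On the introspective model the move from Bp to BBp depends on a mechanism that can fail; on the entailment model it is secured by the bridge assumption together with what Tp entails.)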

(Compatibly with this, it might be the case that sometimes you believe you believe p, even though you don’t believe p. After all, there’s nothing in the entailment model that guarantees that when you don’t believe p, you believe you don’t believe p. You’d get that additional result if you added to the assumptions above that the only way humans have of forming the higher order belief that they believe p is by taking p for granted. But as things stand, the entailment model allows that your most basic attitude can be: taking for granted that one believes that p. And that doesn’t itself require you to believe that p).

What might “taking for granted” be, such that the entailment relations mentioned above hold? Here I’m taking a leaf out of work I’ve been doing on common knowledge and public information. There, I’ve been starting from a psychological attitude I’ve called “taking for granted among a group G” (or “treating as public among G”). The idea is that things we take for granted among a group are things we hold fixed in deliberation even when simulating other group members’ perspectives. So, for example, I might see a car about to pull out in front of you from a side street, but also see that you can’t see the car. In one sense, I hold fixed in my own deliberation about what to do next that the car is about to pull out. But I do not hold that proposition fixed in the stronger sense, because in simulating your perspective (and so expected choices) in the next few seconds, most of the scenarios involve no car pulling out. On the other hand, that drivers will slam the brakes when they see a new obstacle in their way, that things fall downward when dropped, that every driver wants to avoid crashes: these are all things I hold fixed in simulating any relevant perspective. They are things that I take for granted among the group consisting of me and you. What I take for granted between us has an important role in rationalizing my actions in situations of interdependent decision.

It’s plausible that x taking-p-for-granted among G entails (i) x believes p (since they hold p fixed in their own deliberation); (ii) x believes that all Gs believe p (since they hold p fixed in their simulations of other group members’ deliberations). Further iterations also follow. I’ve got work elsewhere that lays out a bunch of minimal conditions on this attitude which deliver the result: for x to take-p-for-granted among G is for x to believe that it is commonly believed that p (where common belief is understood as the standard infinite conjunction of iterated higher order belief conditions).
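(In the same home-made notation, with B_x for x’s belief and T^G_x p for x taking p for granted among G, my reconstruction of the claimed entailments runs:

\[
\begin{aligned}
T^G_x p \;&\Rightarrow\; B_x p && \text{(i)}\\
T^G_x p \;&\Rightarrow\; B_x \textstyle\bigwedge_{y \in G} B_y p && \text{(ii)}\\
T^G_x p \;&\Rightarrow\; B_x \textstyle\bigwedge_{y,z \in G} B_y B_z p, \;\ldots && \text{(further iterations)}
\end{aligned}
\]

In the limit this gives T^G_x p ⇒ B_x(CB_G p), where CB_G p is the infinite conjunction of iterated “everyone in G believes” conditions.)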

But consider now the limiting singleton case, where x takes-p-for-granted among the group {x}. Following the pattern above, that requires inter alia that (i) x believes p; (ii) x believes that everyone in {x} believes p. The latter is equivalent to: x believes that x believes p. So this primitive attitude of taking for granted, in the strong sense relevant to group deliberation, has as its limiting singleton case an attitude which satisfies the conditions of the entailment model.
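(Setting G = {x} in the schema above:

\[
\begin{aligned}
T^{\{x\}}_x p \;&\Rightarrow\; B_x p && \text{(i)}\\
T^{\{x\}}_x p \;&\Rightarrow\; B_x \textstyle\bigwedge_{y \in \{x\}} B_y p \;=\; B_x B_x p && \text{(ii)}
\end{aligned}
\]

which are exactly the two entailments the entailment model requires.)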

Now, it’s a contingent matter whether human psychology contains an attitude like taking-p-for-granted-among-G. But suppose it does. Then it would seem otiose for it to contain an additional primitive attitude of first-order belief, when the limiting de se singleton case of taking-for-granted-among-{x} could do the job. Now, it does the job by way of an attitude that is more committal than belief, in one sense. Taking-p-for-granted is correctly held only when p is true and the world meets some further, logically independent condition q (which includes that one believes that p). But crucially, these extra conditions on taking-for-granted are self-vindicating. When one takes-for-granted among {oneself} that p, then one can go wrong if not-p. But one cannot go wrong through its failing to be the case that one believes p, because ex hypothesi, taking p for granted entails believing p. And this goes for all the extra conditions on the correctness of taking-for-granted that go beyond what it takes for believing-p to be correct. So even though “taking for granted” is stronger than belief, it’s no riskier.
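(Here is one way of making the “stronger but no riskier” point precise, again a sketch in my own notation. Suppose the correctness condition for taking-for-granted conjoins the truth of p with the extra condition q, where q includes B_x p and its iterations:

\[
\begin{aligned}
\text{Correct}(T_x p) \;&=\; p \wedge q\\
T_x p \;&\Rightarrow\; q && \text{(the extra conditions are entailed by being in the state)}\\
\text{Hence:}\quad T_x p \wedge \neg\text{Correct}(T_x p) \;&\Rightarrow\; \neg p
\end{aligned}
\]

While one is in the state, the only way it can be incorrect is for p to be false, and that is exactly the risk already carried by believing p.)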

On this model of human psychology, rather than having to deal with an array of primitive attitudes with metacognitive contents (I believe that p, I believe that I believe that p, etc.), we work with attitudes with simple first-order content, but which have an internal functional role that does the work for which you’d otherwise need metacognitive content. There can then be, in addition, genuine cases of basic attitudes with true metacognitive content (as when I take for granted that I believe p, but do not take for granted p). And there may be specialized situations where that true metacognitive thinking is necessary or helpful. But for the general run of things, we’ll get by with first-order content alone.

Why might we hesitate to go with the entailment model? Well, if we had clear counterinstances to the original conditional, we’d need to be able to describe how they arise. And counterinstances do seem possible. Someone might, for example, accept bets about how they will act in the future (e.g. bet that they’d pick up the right-hand box in an attempt to get cake) but, when the moment comes, act in another way (e.g. choose the left-hand box). The final behaviour is in line with a belief that there’s cake in the left-hand box; the earlier betting behaviour is in line with the agent failing to believe that they believe there’s cake in that box (it is in line, instead, with a belief that they believe there’s cake in the right-hand box).

Now these kinds of cases are easily explained by the introspection model as cases where the introspective mechanism misfires. Indeed, that model essentially posits a special purpose mechanism plugging away in all the normal ways, just so we can say this about the recherché cases where first and higher order beliefs seem to come apart. What can the friend of the entailment model say about this?

There are two strategies here.

One is to appeal to “fragmentation”. The person concerned has a fragmented mind, one fragment of which includes a taking-for-granted-p, and another of which doesn’t (and instead includes a taking-for-granted-I-believe-p, or perhaps even a taking-for-granted-not-p). The fragments are dominant in different practical situations. If one already thinks that fragments indexed to distinct practical situations are part of what we need to model minds, then it’s no new cost to deploy those resources to account for the kind of case just sketched. In contrast to the introspective model, we don’t have any special machinery functioning in the normal run of cases, but rather a special (but independently motivated) phenomenon arising which accounts for what happens in the rare cases where first and higher order beliefs come apart.

Another strategy that’s available is to loosen the earlier assumption that the only way that humans believe p is via taking p for granted. One insists that this is the typical way that humans believe p (and so, typically, when one believes p that’s because one takes p for granted, and hence believes one believes p too). But one allows that there are also states of “pure” believing-that-p, which match only the first-order functional role of taking for granted. (Compare: most of us think there are acceptance-states other than belief, pretense, say, which are like belief in many ways, but where acceptance-that-p is tied to acting-as-if-p only for a restricted range of contexts. Just so, on this account pure belief will be an artificially restricted version of taking-for-granted: not the usual stock in trade of our cognitive processing, but something one can get into if the need demands, or lapse into as the result of unusual circumstances.)

(I don’t want to pin anybody else with either of these models. But I should say that when I’m thinking about the entailment model, I have in mind certain things that Stalnaker says in defence of principles like the conditional from which I start: the idea that believing you believe p when you believe p is the default case, and that it is failures of that tie that require justification, not the other way around.)