Monthly Archives: April 2008

Partial emotions?

If you think emotional states have representational content, it seems reasonable to think that there are rational constraints between the having of a certain emotion (say, feeling regretful that one has dropped something on one’s foot) and the having of a certain belief (say, believing that one has dropped something on one’s foot). Now, I imagine that some would want to question such a connection, but it seems at least a decent position to consider something like:

  • it is rational to regret that p only if it is rational to believe that p.

But now suppose we think that for theoretical purposes (say, in characterizing instrumentally rational action) we should really be talking in terms of partial beliefs rather than all-or-nothing beliefs. In the official idiom, it seems, we dispense with talk of “believing that one has dropped something on one’s foot” and instead talk of things like “believing to degree d that one has dropped something on one’s foot”. (I’ll come back later to the question of whether we should just ditch the all-or-nothing belief talk.)

What then should we say about the rational connections between doxastic and emotional states? How are emotions rationally constrained by belief? Here’s a very natural thing to write down:

  • it is rational to regret that p to degree d only if it is rational to believe that p to degree d.

The trouble with this is that I’m not sure I really understand the notion of “partial regret” that is now being talked about. Of course, I understand the idea of intensity of regret: I might regret insulting someone with a greater intensity than I regret forgetting my umbrella this morning. But “degrees of regret” in the intensity sense aren’t obviously what we want in this connection (I’m tempted to say they’re obviously not what we want). But do we really have any other grip on the notion of a degreed emotion?

Of course, some people are likely to have similar sceptical thoughts about the notion of partial belief, finding all-or-nothing belief talk familiar home turf and talk of partial belief rather mysterious. But the cases, to me, seem only superficially similar. To begin with, I think I had some pre-theoretic grip on the notion of degree of belief/confidence (though even here I think there is a phenomenological intensity sense of “degree of confidence” that needs to be cleared out of the way). And even if I were to give up the pre-theoretic grip, I’ve at least got a theoretical/operational grip on the notion of degree of belief through decision-theoretic connections to action.

With partial emotions, I’m all at sea: things like regret seem to me, pre-theoretically, to be all-or-nothing (setting aside differences of felt intensity). Nor do I have a natural operational/theoretical grip on such partial emotions to reach for. I’d be very glad if someone could convince me that I do understand the notion, or point me to literature where such issues are discussed!

Another strategy, I suppose, is to treat all-or-nothing belief as distinct from the degreed notion. If the two are distinct, then we could formulate the connection between belief and regret just as originally stated. This’d be interesting to me, since I’ve never previously been clear about what’s lost if we ditch all-or-nothing belief talk (and the ensuing puzzles, like the lottery paradox) and appeal only to partial beliefs in our theories of mind. But if other emotional states with intentional content have rational connections to all-or-nothing beliefs, then it seems we’ve got a real theoretical role for such states.

Of course, this line of thought gives urgency to puzzles about how to relate partial beliefs and all-or-nothing beliefs (e.g. all-or-nothing belief as partial belief above a certain threshold). That’s a whole literature in itself.

What do people think? Am I being really naive in thinking there are rational connections like the above to get worried about? Do they require reformulation when we introduce partial beliefs, or (as suggested at the end) is this a way of arguing the importance of retaining all-or-nothing belief talk as well as partial belief talk? Can anyone make sense of the notion of a partial emotion (when distinguished from the phenomenological-intensity reading)?

Defending conditional excluded middle

So things have been a little quiet on this blog lately. This is a combination of (a) trips away, (b) doing administration-stuff for the Analysis Trust, and (c) the fact that I’m entering the “writing up” phase of my current research leave.

I’ve got a whole heap of papers in various stages of completion that I want to get finished up. As I post drafts online, the blogging should become more regular. So here’s the first installment: a new version of an older paper that discusses conditional excluded middle, and in particular a certain style of argument that Lewis deploys against it, which Bennett endorses (in an interestingly varied form) in his survey book.

What I try to do in the present version, apart from setting out some reasons for being interested in conditional excluded middle for counterfactuals that I think deserve more attention, is to disentangle two elements of Bennett’s discussion. One element is a certain narrow-scope analysis of “might”-counterfactuals (roughly: “if it were that P it might be that Q” has the form P\rightarrow \Diamond Q, where the modal expresses an idealized ignorance). The second is an interesting epistemic constraint on true counterfactuals that I call “Bennett’s Hypothesis”.

One thing I argue is that Bennett’s Hypothesis all on its own conflicts with conditional excluded middle. And without Bennett’s Hypothesis, there’s really no argument from the narrow-scope analysis alone against conditional excluded middle. So really, if counterfactuals work the way Bennett thinks they do, we can forget about the fine details of analyzing epistemic modals when arguing against conditional excluded middle. All the action is with whether or not we’ve got grounds to endorse the epistemic constraint on counterfactual truth.

The second thing I argue is that there are reasons to be severely worried about Bennett’s Hypothesis—it threatens to lead us straight into an error theory about ordinary counterfactual judgements.

If people are interested, the current version of the paper is available here. Any thoughts gratefully received!

CFP: CMM Graduate Conference at Leeds

The Centre for Metaphysics and Mind at the University of Leeds is hosting the 3rd Annual CMM Graduate Conference on Thursday 4th September. This will run immediately before the metaphysics conference, Perspectives on Ontology, which is being held at the University of Leeds from Friday 5th to Sunday 7th September.

Submissions are welcome on any area of metaphysics. Metaphysics should be broadly construed to include not only traditional metaphysical topics, but also the metaphysical aspects of e.g. philosophy of mind, philosophy of physics, philosophy of religion, and aesthetics.

Submissions of any length up to 5,000 words will be considered.

Each paper presented at the conference will be followed by a response from a member of academic staff from the University of Leeds Department of Philosophy.

As with last year’s conference we hope to be able to pay some or all of the travel and accommodation costs for those people whose papers are accepted. (This is dependent on successful funding applications.)

Please submit complete papers, preferably by e-mail, to Sarah Grant, phl2skg@leeds.ac.uk. Please mark your submission clearly as such. Receipt will be acknowledged asap. Submissions will also be accepted by mail:

Sarah Grant
School of Philosophy
University of Leeds
Woodhouse Lane
LS2 9JT

All papers should be suitable for blind review (we cannot guarantee anonymised refereeing if your paper is not suitably anonymised). Please include a cover page with title, abstract and contact details. Mailed submissions should include two copies.

Deadline for receipt of submissions is Friday 18th July 2008.

Decisions will be made by Friday 8th August 2008.

For more general details on the conference please consult:

http://www.personal.leeds.ac.uk/~phsk/cmmgc08/index.htm

or e-mail Duncan Watson at phl5dw@leeds.ac.uk

Metaphysics at Leeds: Perspectives on ontology conference

Registration details are now available for Perspectives on Ontology. Please see the website here.
Attendance at the conference is limited, so early registration is urged.

Details are also available for the graduate bursaries.

Perspectives on Ontology

A major international conference on metaphysics to be held at the University of Leeds, Sep 5th-7th 2008.

Speakers:
Karen Bennett (Cornell)
John Hawthorne (Oxford)
Jill North (Yale)
Helen Steward (Leeds)
Gabriel Uzquiano (Oxford)
Jessica Wilson (Toronto)

Commentators:
Benj Hellie (Toronto)
Kris McDaniel (Syracuse)
Juha Saatsi (Leeds)
Ted Sider (NYU)
Jason Turner (Leeds)
Robbie Williams (Leeds)

There’s also going to be a graduate conference directly prior to this. Details, including a call for papers, are available here.

Fafblog is back!

Hooray!

[Ht: Crooked Timber]

Probabilities and indeterminacy

I’ve just learned that my paper “Vagueness, Conditionals and Probability” has been accepted for the first formal epistemology festival in Konstanz this summer. It looks like the perfect place for me to get feedback on, and generally learn more about, the issues raised in the paper. So I’m really looking forward to it.

I’m presently presenting some of this work as part of a series of talks at Arche in St Andrews. I’m learning lots here too! One thing that I’ve been thinking about today relates directly to the paper above.

One of the main things I’ve been thinking about is how credences, evidential probability and the like should dovetail with supervaluationism. I’ve written about this a couple of times in the past, so I’ll briefly set out one sort of approach that I’ve been interested in, and then sketch something that just occurred to me today.

The basic question is: what attitude should we take to p, if we are certain that p is indeterminate? Here’s one attractive line of thought. First of all, it’s a familiar thought that logic should impose some rationality constraints on belief. Let’s formulate this minimally as the constraint that, for the rational agent, probability (credence or evidential probability) can never decrease across a valid argument:

A\models B \Rightarrow p(A)\leq p(B)

Now take one of the things that supervaluational logics are often taken to imply, where ‘D’ is read as ‘it is determinate that’:

A\models DA

Then we note that this, together with the logicality constraint on probabilities, entails that

p(A)\leq p(DA)

So in particular, if we fully reject A’s being determinate (e.g. if we fully accept that it’s indeterminate), then the probability of the RHS will be zero, and so, by the inequality, the probability of the LHS is zero too. (The particular supervaluational consequence I’m appealing to is controversial, since it follows only in settings that seem inappropriate for modelling higher-order indeterminacy; but we can argue for the same result in other ways by adding a couple of extra assumptions. This’ll do us for now, though.)

The result is that if we’re fully confident that A is indeterminate, we should have probability zero in both A and in not-A. That’s interesting, since we’re clearly not in Kansas anymore—this result is incompatible with classical probability theory. Hartry Field has argued in the past for the virtues of this result as giving a fix on what indeterminacy is, and I’m inclined to think that it captures something at the heart of at least one way of conceiving of indeterminacy.
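To make the shape of the argument vivid, here is a minimal sketch in Python (the names and numbers are mine, purely for illustration): given certainty that A is indeterminate, the logicality constraint leaves credence zero as the only admissible value for both A and its negation, which is exactly where classical probability theory gives out.

```python
# A minimal gloss on the logic-based argument (illustrative only).
# Logicality constraint: if A entails B, then p(A) <= p(B).
def respects_logicality(p_premise: float, p_conclusion: float) -> bool:
    return p_premise <= p_conclusion

# Certainty that A is indeterminate: p(DA) = p(D~A) = 0.
p_DA = p_DnotA = 0.0

# Since A |= DA and ~A |= D~A, the constraint forces both credences to zero.
p_A = p_notA = 0.0
assert respects_logicality(p_A, p_DA) and respects_logicality(p_notA, p_DnotA)

# Classical probability would demand p(A) + p(~A) = 1; here the sum is 0.
print(p_A + p_notA)  # 0.0
```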

Rather than thinking about indeterminate propositions as having point-valued probabilities, one might instead favour a view whereby they get interval values. One version of this can be defined in this setting. For any A, let u(A) be defined to be 1-p(\neg A). This quantity—how little one accepts the negation of a proposition—might be thought of as the upper bound of an interval whose lower bound is the probability of A itself. So rather than describe one’s doxastic attitudes to known indeterminate A as being “zero credence” in A, one might prefer the description of them as themselves indeterminate—in a range between zero and 1.
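As a toy numerical gloss on that interval picture (a sketch only; the helper name is mine):

```python
def doxastic_interval(p_A: float, p_not_A: float) -> tuple[float, float]:
    """Interval [p(A), u(A)], where the upper bound is u(A) = 1 - p(not-A)."""
    return (p_A, 1 - p_not_A)

# Classically, p(not-A) = 1 - p(A), so the interval collapses to a point.
print(doxastic_interval(0.25, 0.75))  # (0.25, 0.25)

# For a claim one is certain is indeterminate, the logic-based picture gives
# p(A) = p(not-A) = 0, so the interval is the whole of [0, 1].
print(doxastic_interval(0.0, 0.0))    # (0.0, 1.0)
```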

There’s a different way of thinking about supervaluational probabilities, though, which is in direct tension with the above. Start with the thought that, at least for supervaluationism conceived as a theory of semantic indecision, there should be no problem with the idea of perfectly sharp classical probabilities defined over a space of possible worlds. The ways the world can be are, for this supervaluationist, each perfectly determinate, so there are no grounds as yet for departing from orthodoxy.

But we also want to talk about the probabilities of what is expressed by sentences such as “that man is bald” where the terms involved are vague (pick your favourite example if this one won’t do). The supervaluationist thought is that this sentence picks out a sharp proposition only relative to a precisification. What shall we say of the probability of what this sentence expresses? Well, there’s no fact of the matter about what it expresses, but relative to each precisification, it expresses this or that sharp proposition—and in each case our underlying probability measure assigns it a probability.

Just as before, it looks like we have grounds for assigning to sentences not point-like probability values but range-like values. The range in question will be a subset of [0,1], and will consist of all the probability values that the claim receives on some precisification. Again, we might gloss this as saying that when A is indeterminate, it’s indeterminate what degree of belief we should have in A.
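Here’s a minimal sketch of that recipe, assuming a toy model: a sharp measure over two possible worlds and two precisifications of the vague sentence (all the labels are made up for illustration).

```python
from fractions import Fraction

# A sharp classical measure over possible worlds (assumed for illustration).
worlds = {"w1": Fraction(1, 2), "w2": Fraction(1, 2)}

# Each precisification maps the vague sentence to a sharp proposition,
# i.e. a set of worlds at which it is true.
precisifications = {
    "p1": {"w1"},        # on p1 the sentence is true only at w1
    "p2": {"w1", "w2"},  # on p2 it is true at both worlds
}

def value_set(precs: dict, measure: dict) -> set:
    """All the probability values the sentence receives on some precisification."""
    return {sum(measure[w] for w in prop) for prop in precs.values()}

print(value_set(precisifications, worlds))  # the values 1/2 and 1
```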

But the two recipes deliver utterly different results. Suppose, for example, I introduce a predicate into English, “Teads”, which has two precisifications: on one it applies to all and only coins that land Heads, on the other it applies to all and only coins that land Tails (i.e. not Heads). Consider the claim that the fair coin I’ve just flipped will land Teads. Notice that we can be certain that this sentence will be indeterminate: whichever way the coin lands, Heads or Tails, the claim will be true on one precisification and false on the other.

What would the logic-based argument give us? Since we assign probability 1 to indeterminacy, it’ll say that we should assign probability 0, or a [0,1] interval, to the coin landing Teads.

What would the precisification-based argument give us? Think of the two propositions the claim might express: that the coin will land heads, or that the coin will land tails. Either way, it expresses a proposition with probability 1/2. So the set of probability values associated with the sentence will be point-like, containing only the value 1/2.

Of course, one might think that the point-like value stands in an interesting relationship to the [0,1] range, namely being its midpoint. But now consider cases where the coin is biased. For example, if the coin is biased to degree 0.8 towards heads, then the story for the logic-based argument remains the same, but for the precisification-based theorist the values change to {0.8, 0.2}. So we can’t just read off the values the precisificationist arrives at from what we get from the logic-based argument. Moral: in cases of indeterminacy, thinking of probabilities in the logic-based way wipes out all information other than that the claim in question is indeterminate.
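Here’s a quick sketch of the Teads calculation for both the fair and the biased coin (the function name is mine, just for illustration):

```python
from fractions import Fraction

def teads_value_set(p_heads: Fraction) -> set:
    """Precisification-based values for 'the coin lands Teads': one
    precisification expresses 'lands Heads', the other 'lands Tails',
    so the value set collects both probabilities."""
    return {p_heads, 1 - p_heads}

# Fair coin: both precisifications get probability 1/2, so the set is point-like.
print(teads_value_set(Fraction(1, 2)))  # just the single value 1/2

# Coin biased 0.8 towards heads: the precisification-based values track the bias,
# whereas the logic-based recipe delivers the same verdict (credence 0, or the
# whole [0,1] interval) in both cases.
print(teads_value_set(Fraction(4, 5)))  # the two values 4/5 and 1/5
```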

This last observation can form the basis for a criticism of supervaluationism in a range of circumstances in which we want to discriminate between attitudes towards equally indeterminate sentences. And *as an argument* I take it seriously. I do think there should be logical constraints on rational credence, and if the logic for supervaluationism is as it’s standardly taken to be, that enforces the result. If we don’t want the result, we need to argue for some other logic. Doing so isn’t cost-free, I think: working within the supervaluational setting, bumps tend to arise elsewhere when one tries to do this. So the moral I’d like to draw from the above discussion is that there must be two very different ways of thinking about indeterminacy that both fall under the semantic indecision model. These two conceptions are manifest in the different attitudes towards indeterminacy described above. (This has convinced me, against my own previous prejudices, that there’s something more than merely terminological to the question of “whether truth is supertruth”.)

But let’s set that aside for now. What I want to do is just note that *within* the supervaluational setting that goes with the logic-based argument, on which all indeterminate claims should be rejected, there shouldn’t be any objection to the underlying probability measure mentioned above; and given this, one shouldn’t object to introducing various object-language operators. In particular, consider the following definition:

“P(S) = n” is true on i, w iff the measure of {u : “S” is true on u, i} = n (where w and u range over worlds, and i over precisifications).

But it’s pretty easy to see that the (super)truths about this operator will reflect the precisification-based probabilities described earlier. So even if the logic-based argument means that our degree of belief in indeterminate A should be zero, there will still be object-language claims we could read as “P(the coin will land Teads) = 1/2” that are supertrue. (The appropriate moral, from the perspective of the theorist in question, would be that whatever this operator expresses, it isn’t a notion that can be identified with degree of belief.)
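For concreteness, here is a toy sketch of that supertruth claim, reusing the made-up Teads model from above (the world and precisification names are mine, not anything from the paper):

```python
from fractions import Fraction

# Toy model: a sharp measure over worlds, and two precisifications of "Teads".
worlds = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
precisifications = ["teads-as-heads", "teads-as-tails"]

def teads_true(world: str, prec: str) -> bool:
    """Truth of 'the coin lands Teads' at a world, relative to a precisification."""
    return world == ("heads" if prec == "teads-as-heads" else "tails")

def p_claim_true(n: Fraction, prec: str) -> bool:
    """'P(Teads) = n' is true on a precisification iff the measure of the set of
    worlds at which 'Teads' is true on that precisification equals n."""
    return sum(m for w, m in worlds.items() if teads_true(w, prec)) == n

# Supertruth: truth on every precisification.
print(all(p_claim_true(Fraction(1, 2), i) for i in precisifications))  # True
```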

If this is right, then arguments that I’m interested in using against certain applications of the “certainty of indeterminacy entails credence zero” position have to be handled with extreme care. For example, in the paper mentioned right at the beginning of this post, I appeal to empirical data about folk judgements of the probabilities of conditionals. I was assuming that I could take these data as evidence about what credences the folk have in conditionals.

But if, compatibly with taking the “indeterminacy entails zero credence” view of conditionals, one could have within a language a P-operator which behaves in the ways described above, this isn’t so clear anymore. Explicit probability reports might be reporting on the P-operator, rather than subjective credence. So everything becomes rather delicate and very confusing.