Psychology without semantics or psychologically loaded semantics?

Here’s a naive view of classical semantics, but one worth investigating. According to this view, semantics is a theory that describes a function from sentences to semantic properties (truth and falsity) relative to a given possible circumstance (“world”). Let’s suppose it does this via a two-step method. First, it assigns to each sentence a proposition. Second, it assigns to each proposition a function from worlds to {True, False} (“truth conditions”).

Let’s focus on the bit where propositions (at a world) are assigned truth-values. One thing that leaps out is that the “truth values” assigned have significance beyond semantics. For propositions, we may assume, are the objects of attitudes like belief. It’s natural to think that in some sense, one should believe what’s true, and reject what’s false. So the statuses attributed to propositions as part of the semantic theory (the part that describes the truth-conditions of propositions) are psychologically loaded, in that propositions that have one of the statuses are “to be believed” and those that have the other are “to be rejected”. The sort of normativity involved here is extremely externalistic, of course — it’s not a very interesting criticism of me that I happen to believe p, just on the basis that p is false, if my evidence overwhelmingly supported p. But the idea of an external ought here is familiar and popular. It is often reported, somewhat metaphorically, as the idea that belief aims at truth (for discussion, see e.g. Ralph Wedgwood on the aim of belief).

Suppose we are interested in one of the host of non-classical semantic theories that are thrown around when discussing vagueness. Let’s pick a three-valued Kleene theory, for example. On this view, we have three different semantic statuses that propositions (relative to a circumstance) are mapped to. Call them neutrally A, B and C (much of the semantic theory is then spent telling us how these abstract “statuses” are distributed over the propositions, or the sentences which express the propositions). But what, if any, attitude is it appropriate to take to a proposition that has one of these statuses? If we have an answer to this question, we can say that the semantic theory is psychologically loaded (just as the familiar classical setting was).
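For concreteness, here is a minimal sketch of how the strong Kleene tables distribute the three statuses over compound sentences, encoding A, B, C as 1, 0.5, 0. (Both the numerical encoding and the choice of the *strong* Kleene scheme are assumptions for illustration; nothing in the post fixes them.)

```python
# Strong Kleene tables with statuses encoded numerically:
# A ("true") = 1.0, B (gap) = 0.5, C ("false") = 0.0.
A, B, C = 1.0, 0.5, 0.0

def neg(p):       # negation swaps A and C, and fixes B
    return 1.0 - p

def conj(p, q):   # conjunction takes the "worst" status of the pair
    return min(p, q)

def disj(p, q):   # disjunction takes the "best" status of the pair
    return max(p, q)

# A borderline sentence, its negation, and even "p or not-p"
# all receive the gappy status B:
p = B
print(disj(p, neg(p)))  # 0.5
```

Note that nothing in the code settles the loading question: the number 0.5 attached to “p or not-p” tells us which status the sentence has, not what attitude to take to it.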

Rarely do non-classical theorists tell us explicitly what the psychological loading of the various statuses is. But you might think an answer is implicit in the names they are given. Suppose that status A is called “truth”, and status C is called “falsity”. Then, surely, propositions that are A are to be believed, and propositions with C are to be rejected. But what of the “gaps”, the propositions that have status B, the ones that are neither true nor false? It’s rather unclear what to say; and without explicit guidance about what the theorist intends, we’re left searching for a principled generalization. One thought is that they’re at least untrue, and so are intended to have the normative role that all untrue propositions had in the classical setting—they’re to be rejected. But of course, we could equally have reasoned that, as propositions that are not false, they might be intended to have the status that all unfalse propositions have in the classical setting—they are to be accepted. Or perhaps they’re to have some intermediate status—maybe a proposition that has B is to be half-believed (and we’d need some further details about what half-belief amounts to). One might even think (as Maudlin has recently explicitly urged) that in leaving a gap between truth and falsity, the propositions are devoid of psychological loading—that there’s nothing general to say about what attitude is appropriate to the gappy cases.

But notice that these kinds of questions are, at heart, exegetical—that we face them just reflects the fact that the theorist hasn’t told us enough to fix what theory is intended. The real insight here is to recognize that differences in psychological loading give rise to very different theories (at least as regards what attitudes to take to propositions), which should each be considered on their own merits.

Now, Stephen Schiffer has argued for some distinctive views about what the psychology of borderline cases should be like. As John Macfarlane and Nick Smith have recently urged, there’s a natural way of using Schiffer’s descriptions to fill out in detail one fully “psychologically loaded” degree-theoretic semantics. To recap, Schiffer distinguishes between “standard” partial beliefs (SPBs), which we can assume behave in familiar (probabilistic) ways and have their familiar functional role when there’s no vagueness or indeterminacy at issue, and special “vagueness-related” partial beliefs (VPBs), which come into play for borderline cases. Intermediate standard partial beliefs allow for uncertainty, but are “unambivalent” in the sense that when we are 50/50 over the result of a fair coin flip, we have no temptation to all-out judge that the coin will land heads. By contrast, VPBs exclude uncertainty, but generate ambivalence: when we say that Harry is smack-bang borderline bald, we are pulled to judge that he is bald, but also (conflictingly) pulled to judge that he is not bald.

Let’s suppose this gives us enough for an initial fix on the two kinds of state. The next issue is to associate them with the numbers a degree-theoretic semantics assigns to propositions (with Edgington, let’s call these numbers “verities”). Here is the proposal: a verity of 1 for p is ideally associated with (standard) certainty that p—an SPB of 1. A verity of 0 for p is ideally associated with (standard) utter rejection of p—an SPB of 0. Intermediate verities are associated with VPBs. Generally, a verity of k for p is associated with a VPB of degree k in p. [Probably, we should say for each verity both what the ideal VPB and the ideal SPB are. This is easy enough: one should have a VPB of zero when the verity is 1 or 0, and an SPB of zero for any verity other than 1.]
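As a sketch, the proposal (including the bracketed amendment) can be transcribed directly; the function name and the dictionary representation here are mine, not Schiffer’s or Edgington’s:

```python
# For each verity v in [0, 1], the ideal pair of attitudes (SPB, VPB)
# toward a proposition with that verity, per the proposal above.
def ideal_attitudes(verity):
    if verity == 1.0:
        return {"SPB": 1.0, "VPB": 0.0}   # standard certainty, no ambivalence
    if verity == 0.0:
        return {"SPB": 0.0, "VPB": 0.0}   # standard utter rejection
    return {"SPB": 0.0, "VPB": verity}    # vagueness-related partial belief of degree v
```

So e.g. a smack-bang borderline case, verity 0.5, calls for a VPB of 0.5 and an SPB of 0.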

Now, Schiffer’s own theory doesn’t make play with all these “verities” and “ideal psychological states”. He does use various counterfactual idealizations to describe a range of “VPB*s”—so that e.g. relative to a given circumstance, we can talk about which VPB an idealized agent would take to a given proposition (though it shouldn’t be assumed that the idealization gives definitive verdicts in any but a small range of paradigmatic cases). But his main focus is not on the norms that the world imposes on psychological attitudes, but norms that concern what combinations of attitudes we may properly adopt—requirements of “formal coherence” on partial belief.

How might a degree theory psychologically loaded with Schifferian attitudes relate to formal coherence requirements? Macfarlane and Smith, in effect, observe that something approximating Schiffer’s coherence constraints arises if we insist that the total partial belief in p (SPB+VPB) is always representable as an expectation of verity (relative to a classical credence distribution over possible situations). We might also observe that the component corresponding to Schifferian SPB within this is always representable as the expectation of having verity 1 (relative to the same credence). That’s suggestive, but it doesn’t do much to explain the connection between the external norms that we fed into the psychological loading, and the formal coherence norms that we’re now getting out. And what’s the “underlying credence over worlds” doing? If all that the psychological loading of the semantics is doing is enabling a neat description of the coherence norms, that may have some interest, but it’s not terribly exciting—what we’d like is some kind of explanation of the norms from facts about psychological loading.
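A toy illustration of the representation claim may help; the situations, credence, and verities below are invented for illustration:

```python
# Toy Macfarlane-Smith representation: total partial belief in p is the
# expectation of p's verity under a classical credence over situations;
# the SPB component is the probability that p has verity exactly 1.
credence = {"w1": 0.5, "w2": 0.3, "w3": 0.2}   # classical credence over situations
verity_p = {"w1": 1.0, "w2": 0.4, "w3": 0.0}   # verity of p in each situation

total = sum(credence[w] * verity_p[w] for w in credence)          # SPB + VPB
spb   = sum(credence[w] for w in credence if verity_p[w] == 1.0)  # expectation of having verity 1
vpb   = total - spb

print(round(total, 2), round(spb, 2), round(vpb, 2))  # 0.62 0.5 0.12
```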

There’s a much more profound way of making the connection: a way of deriving coherence norms from psychologically loaded semantics. Start with the classical case. Truth (truth value 1) is associated with certainty (credence 1). Falsity (truth value 0) is associated with utter rejection (credence 0). Think of inaccuracy as a way of measuring how far a given partial belief is from the actual truth value; and interpret the “external norm” as telling you to minimize overall inaccuracy in this sense.

If we make suitable (but elegant and arguably well-motivated) assumptions about how “accuracy” is to be measured, then it turns out that probabilistic belief states emerge as a special class in this setting. Every improbabilistic belief state can be shown to be accuracy-dominated by a probabilistic one—there’s some particular probabilistic state that’ll be necessarily more accurate than the improbabilistic one you started with. No probabilistic belief state is dominated in this sense.
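Here is a minimal numerical instance of the dominance claim (not the general theorem; the dominating state is simply picked by hand):

```python
# An improbabilistic belief state over {p, not-p} is accuracy-dominated,
# under the Brier score, by a probabilistic one.
def brier(belief, truth):
    # total squared distance of the belief state from the truth values
    return sum((b - t) ** 2 for b, t in zip(belief, truth))

worlds = [(1.0, 0.0), (0.0, 1.0)]   # truth values of (p, not-p) in each world

incoherent = (0.8, 0.8)             # credences sum to 1.6: not a probability
coherent   = (0.5, 0.5)             # a probability, and closer to truth everywhere

for w in worlds:
    assert brier(coherent, w) < brier(incoherent, w)
```

In the first world the Brier scores are 0.5 against 0.68; by symmetry the second world is the same, so the incoherent state is needlessly inaccurate however things turn out.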

Any violation of the formal coherence norms thus turns out to leave you needlessly far from the ideal aim. And this moral generalizes. Taking the same accuracy measures, but applying them to verities as the ideals, we can prove exactly the same theorem. Anything other than the Smith-Macfarlane belief states will be needlessly distant from the ideal aim. (This is generated by an adaptation of Joyce’s 1998 work on accuracy and classical probabilism—see here for the generalization).
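The verity-based analogue can be illustrated the same way. Below, a single proposition p has verity 0.5 in one situation and 1.0 in another; any belief in p outside the interval [0.5, 1] of expectations of verity is dominated. (Again, a hand-picked instance, not the theorem.)

```python
# One proposition p, with verities standing in for truth values as the
# ideal points. Situations and numbers are hand-picked for illustration.
def inaccuracy(belief, verity):
    # squared distance from the verity: the degree-theoretic analogue
    # of the Brier score for a single proposition
    return (belief - verity) ** 2

verities = [0.5, 1.0]   # verity of p in situations w1 and w2

outside = 0.3           # below every expectation of verity over w1, w2
inside  = 0.5           # the expectation of verity under credence 1 on w1

for v in verities:
    assert inaccuracy(inside, v) < inaccuracy(outside, v)
```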

There’s an awful lot of philosophy to be done to spell out the connection in the classical case, let alone its non-classical generalization. But I think even the above sketch gives a view on how we might not only psychologically load a non-classical semantics, but also use that loading to give a semantically-driven rationale for requirements of formal coherence on belief states—and with the Schiffer loading, we get the Macfarlane-Smith approximation to Schifferian coherence constraints.

Suppose we endorsed the psychologically-loaded, semantically-driven theory just sketched. Compare our stance to that of a theorist who endorsed the psychology without semantics—that is, who endorsed the same formal coherence constraints, but disclaimed commitment to verities and their accompanying ideal states. They thus give up on the prospect of giving the explanation of the coherence constraints sketched above. We and they would agree on what kinds of psychological states are rational to hold together—including what kind of VPB one could rationally take to p when one judges p to be borderline. So both parties could agree on the doxastic role of the concept of “borderlineness”, and in that sense give a psychological specification of the concept of indeterminacy. We and they would be allied against rival approaches—say, the claims of the epistemicists (thinking that borderlineness generates uncertainty) and Field (thinking that borderlineness merits nothing more than straight rejection). The fan of psychology-without-semantics might worry about the metaphysical commitments of his friend’s postulation of a vast range of fine-grained verities (attaching to propositions in circumstances)—metasemantic explanatory demands and higher-order-vagueness puzzles are two familiar ways in which this disquiet might be made manifest. In turn, the fan of the psychologically loaded, semantically driven theory might question his friend’s refusal to give any underlying explanation of the source of the requirements of formal coherence he postulates. Can explanatory bedrock really be certain formal patterns amongst attitudes? Don’t we owe an illuminating explanation of why those patterns are sensible ones to adopt? (Kolodny, in recent work, mocks this kind of attitude as picturing coherence norms as a mere “fetish for a certain kind of mental neatness”.) That explanation needn’t take a semantically-driven form—but it feels like we need something.

To repeat the basic moral. Classical semantics, as traditionally conceived, is already psychologically loaded. If we go in for non-classical semantics at all (with more than instrumentalist ambitions in mind), we underspecify the theory until we’re told what the psychological loading of the new semantic values is to be. That’s one kind of complaint against non-classical semantics. It’s always possible to kick away the ladder—to take the formal coherence constraints motivated by a particular elaboration of this semantics, and endorse only these, without giving a semantically-driven explanation of why these constraints in particular are in force. Repeating this ladder-kicking move, we can find pairs of views that, while distinct, are importantly allied on many fronts. I think in particular this casts doubt on the kind of argument that Schiffer often sounds like he’s giving—i.e. arguing from facts about appropriate psychological attitudes to borderline cases, to the desirability of a “psychology without semantics” view.

17 responses to “Psychology without semantics or psychologically loaded semantics?”

  1. “Can explanatory bedrock really be certain formal patterns amongst attitudes? Don’t we owe an illuminating explanation of why those patterns are sensible ones to adopt?”

    Bayesians usually argue that if your beliefs don’t obey the formal coherence constraints then you could be offered a bet which your credences recommend but which can be seen a priori to lose money (where what’s a priori obviously depends on the background logic).

    Do you think there’s a reason this couldn’t be applied in the non-classical case?

    I guess it partly depends what the coherence constraints you’re thinking of are. The Dutch book arguments work well enough to show that all theorems of your logic should get credence 1, anti-theorems credence 0, and so on, but if you’re also constraining credences to be the expected truth value in a degree theory I think that also guarantees things like Cr(p or ~p) >= 1/2, which don’t follow from those constraints alone. (I believe it’s compatible with, e.g., Lukasiewicz logic that every non-theorem has credence 0, even if you stipulate that Cr(p or q) = Cr(p) + Cr(q) for logically incompatible p and q, Cr(p)=1 (0) for theorems (anti-theorems), and Cr(p) \leq Cr(q) whenever \vdash p \rightarrow q.)

  2. But maybe being the expected truth value of a degree valuation isn’t a good constraint on credences anyway, since it involves having Cr(p and ~p) > 0 whenever p is vague.

    By the way the last constraint got eaten by HTML tags, it should have been Cr(p) \leq Cr(q) whenever \vdash p \rightarrow q.

  3. Hi Andrew,

    Sorry—I missed these comments!

    There’s a result by Jeff Paris, extending the Dutch book arguments to a whole variety of non-classical settings. It’s worth thinking of these things in the following way: for an arbitrary “belief state” (which can be crazy), we show that (A) its being immune from Dutch books is equivalent to (B) its being an expectation of truth values, which is in turn equivalent to (C) its meeting a certain set of constraints formulated using an appropriate nonclassical logic. Paris has a general result that (A) and (B) are equivalent. And showing that (B) entails (C) is often pretty straightforward. In the classical case, the intuitionist case, and certain other cases we have results that get us from (C) to (B) (with a small group of “local” constraints fed into (C)). That’s where the question of Lukasiewicz logic and credences arises. I know there are results for the Lukasiewicz 1-preservation logic, linking it up to “expectation of getting value 1” rather than “expectation of degree of truth”. But I don’t know of a comparable result for the Lukasiewicz degree-preservation logic, with the full “expectation of truth value” constraint.

    The accuracy domination results give you an alternative to (A)—(A*) is “not being accuracy dominated”. Joyce-style results give you that (A*) entails (B), and depending how we set things up, we can get the converse. The question of the relation between the global and local (B-style or C-style) representations of these “generalized probabilities” is a problem common to both approaches.

    Given the Paris result, a Dutch-book-based rationale is in just as good technical shape as the accuracy-style rationale. But with both of these, the key thing is the philosophical interpretation of the results. In particular, you’ve got to assume a particular kind of formula for calculating the returns you get on your bet (related to the truth value of the outcome) in order to get the Dutch book theorem. And it’s a bit murky to me what you have to assume (e.g. in terms of realistic attitudes to intended models) to get this up and running. But it’s certainly an interesting thing to investigate.

    My suspicion is that we won’t get a proper understanding of the Dutch book arguments in the non-classical setting until we’ve got a proper grip on decision theory in a non-classical setting, and in particular, rational constraints on desirability when it’s indeterminate whether you get what you want.

    On the second post—yeah, I’m not a great fan of having intermediate credences in explicit contradictions. But I like the degree versions of supervaluationism, which don’t have that feature, but which can be interpreted in Schifferian ways if we like.

    [I’ve edited out the orphaned paragraph here.]

  4. Hi Robbie,

    Thanks for this. I had never heard of the Paris paper. I had a brief look at “a note on the dutchbook method” – is this the one you meant?

    I’m interested that some versions of (A), (B) and (C) can be shown to be equivalent. But I think the devil really is in the details here, as I can think of a couple of ways of formulating (A) and (C) which leave them compatible with (or even mandating) Field-style rejectionism, as opposed to the expected-truth-value constraint.

    For example if your constraints are:
    (1) Cr(p)=1 (0) if |- p (p |-)
    (2) Cr(p or q) + Cr(p and q) = Cr(p) + Cr(q)
    (3) Cr(p) \leq Cr(q) if |- p\rightarrow q (or weaker: if p |- q)
    then the function that assigns 1 to all theorems and 0 to all non-theorems satisfies these constraints for any logic with the disjunction property (plus some basic stuff like and/or intro/elim), so this includes intuitionistic logic and Kleene logic.

    An important case, Lukasiewicz logic, doesn’t have the disjunction property: “(p -> q) or (q -> p)” is a theorem while neither disjunct is, and you can verify that the above function would violate (2). However if you modified (2) to:
    (2′) Cr(p or q) = Cr(p) + Cr(q) whenever p, q |-
    the “rejector” function once again satisfies the constraints.

    (Argument sketch: clearly the rejector function satisfies (1) and (3). If there were phi and psi which failed (2′) then (a) no Lukasiewicz valuation can assign both phi and psi value 1, (b) every valuation assigns at least one of them value 1 and (c) there are valuations for which both are less than 1. We can think of phi and psi as *continuous* functions f and g from [0,1]^n into [0,1] where n is the number of prop letters in phi and psi jointly. Given (c) there must be \bar{x} and \bar{y} such that f(\bar{x}) < 1 and g(\bar{y}) < 1, so by (b) g(\bar{x}) = 1 and f(\bar{y}) = 1. Let h = g - f. Then h(\bar{x}) > 0 and h(\bar{y}) < 0, so by the intermediate value theorem on the interval [\bar{x}, \bar{y}] there is a \bar{z} such that h(\bar{z}) = 0, i.e. f(\bar{z}) = g(\bar{z}) \not= 1 by (a), which contradicts (b).)

    Regarding non-classical decision theory: I think that's definitely an important thing to think about. But don't the Dutch book arguments rely on cases where you can show by logic alone (whatever the logic in question is) that you'll lose money, and are thus mainly concerned with cases where it's determinate whether you get what you want (although obviously the DB arguments only put very weak constraints on credences, a full decision theory would have to think about the general question.)

  5. Just so you know: something has disappeared from the third line from the bottom of the penultimate paragraph, which is why it won’t make sense (maybe it’ll still be viewable if you use edit comment?)

  6. Hi Andrew,

    Yep, that’s the Paris paper I was thinking of!

    Note that the interpretation of the Dutch Book must be the following. Suppose you buy a bet on P, promising a “return” of y dollars. We need to know what happens to the bet in all possible situations, including ones where the truth value of P is intermediate (call this V(P)). For the Paris theorem to be interpreted as a guaranteed loss in a series of fair bets, we need to say that what you get is y multiplied by V(P). At least, that’s how I read it…

    And the question I was worried about was: why think that this is the way things work? If we’re really talking about bookies, presumably we can come up with any rules we like. For example, within a Lukasiewicz continuum-valued setting, suppose bookie A offers us a bet on “P” under the following rules: a return of y if the Luk-value of P is 1; and 0 otherwise. Suppose bookie B offers a bet on “P” under the rule that gives a return of y multiplied by the Luk-value of P. In the first case, by Paris’s theorem we’ll be Dutch-bookable unless our betting odds are expectations of value 1 (since in effect the V to which the theorem applies isn’t the Luk-value, but the indicator function for the designatedness of the Luk-value). In the second case, by the same theorem we’ll be Dutch-bookable unless our betting odds are expectations of Luk-value.

    Which set of betting rules elicits credences? The patterns of betting odds are clearly different (e.g. think about the fair odds for betting on A&~A); so without deciding on the privileged scheme the dutch book theorem doesn’t give guidance. (I think Richard Dietz has a paper on this point, in the context of supervaluationism.)

    So I think we need some further argument about what the “right” scheme of returns is for a given non-classical setting. And I hope that we get some principled guidance once we have a more general story about utilities in P, and how that translates to utilities for worlds where V(P) takes various values.

    (I’m afraid I couldn’t figure out from the text what was going on with the line in the penultimate para—it had codes for inequalities, I think—if you want to try to repaste I could edit it in!)

  7. Hi Andrew,

    About the main point you raise. I agree we need to take care in the statement of results—I try to set things up carefully in “Gradational accuracy and non-classical semantics” (in the context of accuracy domination rather than Dutch books, but the point is the same). There are some subtle issues over what you mean by “truth values” (compare the discussion of Dutch books for Lukasiewicz above) and what characterization of logic is then appropriate.

    I have to say it is mostly rejectionist-friendly settings that I know of completeness results for (i.e. proofs that all functions satisfying a small set of axioms are expectations of truth values, of the relevant kind). Paris’s results in the paper, and the followup discussion of infinitary Lukasiewicz 1-preservation logic are of that type.

    On how far we can get with axiomatizing anti-rejectionist settings—I don’t really know. The Lukasiewicz degree-preservation logic + expectation of Luk value, with the full version of (2) in the axioms, is a natural place to start. Things get even more tricky once you add in extra resources like degree-determinacy operators. I’ve written a bit about one non-rejectionist case, and looked at a few things you might need/want to do (e.g. appeal to a family of degreed consequence relations). This is in “Degree supervaluational logic” (forthcoming in RSL). There’s also the thought of doing something with multi-premise/conclusion logics constraining credences (we’ve talked about that before—some of the resulting material is in the latest draft of “Gradational accuracy”). But basically, it’s underexplored as far as I know (if you find relevant results, do let me know!) The holy grail would be some general recipe for constructing axiomatizations, but that’s probably too much to hope for.

  8. I’m not sure how one’s betting behaviour on either of those bets would reflect one’s credal state *in p*. They are more likely to reflect something like your credal state in p having truth value 1 (in the latter case, I’m not sure what the former rule is measuring.) The proposition that p and the proposition that p has value 1 are very different: according to a naïve truth degree theory the former can be vague while the latter is sharp, but even if you allow for higher order vagueness you’re presumably not going to have “p if and only if p has value 1” (e.g. if you add a “has value 1” operator to Lukasiewicz this biconditional always has the same value as p.) But importantly in this case, I might be uncertain whether p but certain p doesn’t have value 1.

    Surely what you want is a bet that pays out *if p*, and doesn’t pay out if ~p. That seems to be the usual way of spelling out Dutch book arguments. (Although admittedly this is equivalent to your two rules in the classical case with the T-schema for “has value 1”.) Obviously if it’s indeterminate whether p then it’s indeterminate whether the bet will be paid out – I’m not sure if this is a problem in itself, it just means we’ll have to make the payoffs something you intrinsically care about which can vaguely obtain (i.e. not money.) But at any rate, you can see how a Dutch book could motivate (1)-(3) (we might need to weaken (1) to: Cr(p)=0 if |- ~p); for example, if you can prove p, then rejecting a bet of anything less than 1 on p with a payoff of 1 if p and 0 if ~p will be suboptimal.

  9. Thanks for those references. Is the latest draft of “gradational accuracy” online?

    Here’s the argument that got mangled, I’ve swapped the less than signs:

    “Clearly the rejector function satisfies (1) and (3). If there were phi and psi which failed (2′) then (a) no Lukasiewicz valuation can assign both phi and psi value 1, (b) every valuation assigns at least one of them value 1 and (c) there are valuations for which both are less than 1. We can think of phi and psi as *continuous* functions f and g from [0,1]^n into [0,1] where n is the number of prop letters in phi and psi jointly. Given (c) there must be \bar{x} and \bar{y} such that f(\bar{x}) \prec 1 and g(\bar{y}) \prec 1, and by (b) g(\bar{x}) = 1 and f(\bar{y}) = 1. Let h = g – f. Then h(\bar{x}) \succ 0 and h(\bar{y}) \prec 0, so by the intermediate value theorem on the interval [\bar{x}, \bar{y}] there is a \bar{z} such that h(\bar{z}) = 0, i.e. f(\bar{z}) = g(\bar{z}) \not= 1 by (a), which contradicts (b).”

  10. Hi Andrew,

    I totally agree with a bunch of stuff you say. In particular—the idea that we need to think of bets that return something you intrinsically care about but which vaguely obtains. Question is: how should you value a quite definite outcome, such that it’s indeterminate whether it instantiates the sole thing you care about? Here’s one answer: you don’t value it at all. Here’s another: you value it at a scaled version of the amount you care about a determinate case of getting that thing. Those two, I think, end up being modelled by the two versions of the DB argument earlier (and yes—I think the obvious description of the first is “betting on p being value 1”; but nevertheless, this might deserve the name “betting on p” if we have the right kind of philosophical interpretation of the non-classical semantics—in the same way as for rejectionists “credence in p” and “credence in p having value 1” look the same, even though the biconditional between p and p having value 1 fails). Thinking about the relation between intrinsic care for p and utility attached to indeterminate cases is, I think, the central question for formulating the decision theory (probably we can formulate the question independently of that issue, but it’s where it’s come up for me).

    I’m quite attracted to the following alternative to the above two recipes. Recall Schiffer’s account of judgements in the original post. Think of an indeterminate case of p, say V(p)=0.5, as generating a 50/50 inclination to judge that the case is p (otherwise judging that it’s ~p). Now suppose you intrinsically desire that p, attaching utility 1 to outcomes that are clear cases of p. What utility attaches to an outcome where it’s indeterminate? Let this depend on whether, in your judgement, the outcome is such that p—there’s a 50/50 shot of this. That gives a quite different view of rational action.

    The view you describe is interesting. Paris’ strategy is to prove the DB for expectations of truth value, which requires we be able to describe a bet in a particular way, and the first two views I sketched are examples that fit the bill. There’s then a technical question about how to axiomatize that set. That’s why I claimed originally that we need to get into all this stuff about indeterminate cases, in order to run the Dutch Book argument—without it, I really can’t interpret Paris’ theorem as a Dutch Book. You’re suggesting, if I get you right, that we can give DBs directly against particular violations of axioms, without assuming anything in particular about the intermediate cases. I’d be interested to see that—of course, it might well give us a much weaker structure than the Paris ones, if only because in some cases we don’t (I think) know which axiomatizations to target.

  11. “Question is: how should you value a quite definite outcome, such that it’s indeterminate whether it instantiates the sole thing you care about? Here’s one answer: you don’t value it at all. Here’s another: you value it at a scaled version of the amount you care about a determinate case of getting that thing.”

    What I’m not getting here is the distinction between the “value” of p and how much you care about p. I’m used to thinking of utility in decision theory as just a measure of how much you care about something, so if it’s indeterminate whether what you care about is satisfied it’s also indeterminate whether that’s a high value scenario or not. (If utility isn’t connected to caring in this way, then I no longer see the normative force in the expected utility equation.) So regarding, e.g., the first answer, if you don’t value it at all, on my understanding of these technical terms, I claim you never really cared about it either. Maybe you could give me an independent grasp on the notion of utility?

    I think my point was just: if you want to measure someone’s credence in p you have to offer them a bet on p, not on some other proposition which isn’t even determinately materially equivalent to p. It may *turn out*, as a rejectionist claims, that these credences are the same, but I don’t think it’s safe to assume that. Even if the rejectionist was right, I might irrationally assign different credences to “p” and “p has value 1”, so it wouldn’t do when measuring my credences to offer me bets only on the latter proposition.

  12. I’d like to have a further think about the Schiffer case you mentioned. Regarding running Dutch books against violations of particular axioms; I was a bit careless in thinking it would be straightforward to show violations of (1)-(3) would be Dutch-bookable (I already had to weaken (1).) For example, I realise now that the standard argument for (2), as currently stated, appeals to excluded middle (and a weak form of reasoning by cases). Given the setup: if p&~q then payoff = 2-x, if ~p&q then payoff = 2-x, if p&q then payoff = 4-x and if ~p&~q then payoff = -x, where x = Cr(p)+Cr(q)+Cr(p&q)+Cr(p or q), you can run the Dutch book by manipulating the payoffs on p, q, p&q and (p or q), showing that no matter which of the four antecedents holds you lose money. But of course that’s just excluded middle! Maybe it’s possible to still run the argument a different way?

  13. On the second post—I’d like to see this explored. As things stand, I think Paris’ is the best resource we have, but it forces us in a particular direction, and raises questions we might want to avoid.

    On the value/care—let’s agree to use them interchangeably. Here’s how I was thinking about things. Suppose the only thing I care about is whether I get goodies. Suppose there’s a situation where it’s indeterminate whether I survive some episode, but determinate that whoever survives gets goodies. So it’s indeterminate whether the situation exhibits the feature that I’m fetishizing.

    Suppose that I’m certain this situation has arisen. What credence should I have that the feature I care about is instantiated? As is familiar, there’s a variety of answers: I should be credence 0 in this; I should be credence 1/2 in it; it’s indeterminate what I should believe; I should be in an intermediate Schifferian state, etc.

    Suppose our answer is that I should have credence 0 that I get goodies. Then I utterly reject the proposition that the outcome has the one and only feature I care about. So, I think, it’d be irrational of me to find the outcome desirable. On the other hand, suppose I should have credence 1/2 that I get goodies. Then I am 50/50 on whether the outcome has the one and only feature I care about. So, I think, I should find the outcome desirable to a scaled extent.

    What I’m doing is using a desire-belief connection to get traction on the desirability of outcomes. Maybe our differences come down to two different readings of what it is to intrinsically care (only) about whether an outcome has F. I’m reading that as imposing an internal rational constraint between beliefs about the instantiation of F in w and the desirability of w. I see there may be other ways to think about what it takes to care intrinsically about F—but it seems to me to be a legitimate reading.

  14. Yes, I think it definitely comes down to how you interpret “care intrinsically about”. I was taking it that to care intrinsically about goodies meant that your desires are fulfilled if and only if you have goodies. What the rejectionist is saying is that it’s not rational to care intrinsically about having goodies in this sense (you can only care about determinately having goodies – although this raises some difficult questions when you consider higher order vagueness.)

  15. Regarding the argument for finite additivity, I wonder if you do need LEM. The principle used in Dutch books is something like this:

    (DB) No coherent credence function is such that: there is a set of bets such that (a) your credence recommends accepting them and (b) you can see a priori that if you accept them you’ll lose money.

    Now (DB) won’t do because there are cases where you accept, but it’s vague whether you lose money, so the conditional in (a) is vague which presumably means you couldn’t see it a priori. But maybe the full strength of (DB) isn’t needed to motivate the Dutch book. Perhaps this would do:

    (DB’) No coherent credence function is such that: there is a set of bets such that (a) your credence recommends accepting them and (b) you can’t see a priori that if you accept them you won’t lose money.

    the thought being that for a rational person, betting by your credences is surely something you ought to know won’t lead you into trouble. But this principle allows you to run the Dutch book for finite additivity as I was suggesting above. (Given the set up you can argue in a Lukasiewicz-acceptable way that it’s not determinately not the case that if you accept the bets, you’ll lose money.)

  16. What was I thinking! (DB’) doesn’t say what I wanted to say at all. At any rate, the point is you can see a priori that if you accept the bets, you at best vaguely lose money, and at worst determinately lose money.

  17. This blog doesn’t render correctly on my android – you may want to try and repair that
