Category Archives: Metaphysics

Events and processes

I’ve been reading up on Lewis on causation, and in particular on the account of events he uses. The big thing that his metaphysics of events delivers is a way of getting rid of spurious causal dependence. I say hello, in the course of saying hello abruptly and loudly. I go for a walk, in the course of myself and my girlfriend going for a walk. These patterns of events arise not because one event causes the other, but because one event is part of the other. Pairs of events like these can stand in relations of counterfactual dependence. Suppose I say hello abruptly and loudly. Had I not said hello, then I wouldn’t have said hello abruptly and loudly. So Lewis says: causal dependence between events is not just a matter of counterfactual dependence: it’s counterfactual dependence between distinct events (events that don’t share a part).
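For reference, here is the shape of that definition in symbols, writing O(e) for “e occurs” and □→ for the counterfactual conditional (the regimentation is mine, restricted to pairs of events that both actually occur):

\[
e \text{ causally depends on } c \iff c \text{ and } e \text{ are distinct, and } \neg O(c) \mathbin{\Box\!\!\to} \neg O(e).
\]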

When you dig into his metaphysics of events, you see that two notions of parthood are in play. One is broadly logical: event E is a logical part (l-part) of event F if E’s occurring in region R entails that F occurs in region R.

A second notion of parthood he uses is spatio-temporal: event E is an st-part of event F if, necessarily, if F occurs in region R, then E occurs in some subregion of R. Saying hello abruptly is an l-part of saying hello; my walking is an st-part of myself and my girlfriend walking.

But this still doesn’t cover all cases. Consider Trafalgar and the Napoleonic wars. Intuitively, that battle is part of the wars (and not caused by the wars, though caused by their earlier parts). But it’s not an l-part, since the region in which the wars occur is more extensive than the region in which the battle occurs. And it’s not an st-part, since the wars could have been completed before Trafalgar happened. So Lewis defines up an “accidental” variant of spatio-temporal parthood between occurrent events: E is an a-part of F iff E and F are occurrent, and there’s an occurrent l-part x of E, which is an st-part of some occurrent y which is an l-part of F. I take it the idea is that as well as the Napoleonic wars, there’s another event, the Napoleonic-wars-as-they-happened, that is an l-part of the Napoleonic wars; and there’s also Trafalgar-as-it-happened, which is an l-part of Trafalgar. And the latter is an st-part of the former; hence, derivatively, Trafalgar is an a-part of the wars.
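It may help to collect the three notions in one place, writing O(E,R) for “E occurs in region R” (the regimentation is mine rather than Lewis’s):

\[
\begin{aligned}
E \text{ is an l-part of } F &\iff \Box\,\forall R\,\big(O(E,R) \to O(F,R)\big)\\
E \text{ is an st-part of } F &\iff \Box\,\forall R\,\big(O(F,R) \to \exists R' \subseteq R:\ O(E,R')\big)\\
E \text{ is an a-part of } F &\iff E, F \text{ are occurrent, and } \exists\,\text{occurrent } x, y\ \big(x \text{ is an l-part of } E \wedge x \text{ is an st-part of } y \wedge y \text{ is an l-part of } F\big)
\end{aligned}
\]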

(Some notes on the interrelation of these notions: if E and F are occurrent, and E is an l-part of F, then it’s an a-part of F (take x=E, y=E). And if E is an st-part of F, then it’s an a-part of F (take x=E, y=F). Rather weirdly, note that when E is an l-part of F, then wherever E occurs, F occurs in an (improper) subregion. Hence F is an st-part of E. And so by the above, if they’re occurrent, we’ll have F an a-part of E. That is, when E is an l-part of F, and both are occurrent, then E and F are a-parts of each other (though of course they may still be non-identical).)

Lewis’s requirement that events be “distinct” in order to be candidates for causing one another is that they don’t share a common part in any of these senses.

Lewis notes several times that this would be way too strong a constraint if we allowed events with very rich essences—I’m interested in what this tells us about what sorts of events we can think are hanging around.

Ok: so here is my puzzle. Here’s a first shot—an objection which is plausible but mistaken. Right now, a ball drops, and hits the floor. Consider the conjunctive event or “process” of the ball dropping and hitting the floor. Now (here comes the fallacy) doesn’t this event imply that the ball drops? And so doesn’t that mean the process is an l-part of the ball dropping, and likewise of the ball hitting the floor? But if so, then these two events wouldn’t be distinct, and so couldn’t stand in causal relations. It would be impossible to have a conjunctive process, whose constituents were causally interrelated.

That worried me for a bit, but I reckon it’s not a problem. Necessarily, the region *in* which the dropping-and-hitting-the-floor occurs is a region *within* which the dropping occurs; but it’s not a region *in* which the dropping occurs. “In” is like exact location; an event then occurs *within* any region that has a subregion the event is “in”. But it’s only when every region in which the first occurs is a region *in* which the second occurs that we have implication or l-parthood. What we have here is just st-parthood, running in the direction you’d have imagined—from constituents to process rather than vice versa.

So that exact puzzle isn’t an objection to Lewis; but I suspect he’s escaped on a technicality, and the underlying trouble with processes will re-arise if we tweak the example. Lewis allows for colocated events—and allows that they may stand in causal relations. He contemplates a battle of invisible goblins having causal influence on the progress of the AAP conference with which it’s colocated. More seriously, he thinks the presence of an electron in an electric field might cause its acceleration. But the location of the electron, and its acceleration, are colocated events. And in examples of this kind, we really are in trouble if we allow for the conjunctive “process”—the electron-being-so-located-and-accelerating. For necessarily, wherever we have that process in a given region, we have the acceleration *in that region*. So the process is an l-part of the acceleration. Likewise for the locatedness of the electron. But then the two events share a part, and are not distinct—so they couldn’t cause one another!
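Spelled out, with L for the electron’s being so located, A for its acceleration, and P for the conjunctive process (the lettering is mine), the problem is that

\[
\Box\,\forall R\,\big(O(P,R) \to O(A,R)\big) \quad \text{and} \quad \Box\,\forall R\,\big(O(P,R) \to O(L,R)\big),
\]

so P is an l-part of both A and L, which therefore share a part and fail the distinctness requirement.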

The trouble for Lewis will arise if we both allow (i) cause and effect to be located in the same region; and (ii) the existence of a “process” encompassing both cause and effect. Lewis says he wants to allow (i); and denying the existence of conjunctive events/processes in (ii) looks unprincipled if we allow them in parallel cases (where the ball drops to the floor). So I conclude there’s pressure on Lewis to rule out conjunctive events/processes across the board.

Nominalizing statistical mechanics

Frank Arntzenius gave the departmental seminar here at Leeds the other day. Given I’ve been spending quite a bit of time just recently thinking about the Fieldian nominalist project, it was really interesting to hear about his updating and extension of the technical side of the nominalist programme (he’s working on extending it to differential geometry, gauge theories and the like).

One thing I’ve been wondering about is how theories like statistical mechanics fit into the nominalist programme. Such theories were raised as a problem for Field in one of the early reviews (Malament’s). There are a couple of interesting papers recently out in Philosophia Mathematica on this topic, by Glen Meyer and by Mark Colyvan and Aidan Lyon. Now, one of the assumptions of these discussions, as far as I can tell, is that even sticking with the classical, Newtonian framework, the Field programme is incomplete, because it fails to “nominalize” statistical mechanical reasoning (in particular, the reasoning naturally represented by measures over phase space).

Now one thing that I’ll mention just to set aside is that some of this discussion would look rather different if we increased our nominalistic ontology. Suppose that reality, Lewis-style, contains a plurality of concrete, nominalistic space-times—at least one for each point in phase space (that’ll work as an interpretation of phase space, right?). Then the project of postulating synthetic qualitative-probability structure over such worlds, from which a representation theorem for the quantitative probabilities of statistical mechanics could be derived, looks far easier. Maybe it’s still technically or philosophically problematic. Just a couple of thoughts on this. From the technical side, it’s probably not enough to show that the probabilities can be represented nominalistically—we want to show how to capture the relevant laws. And it’s not clear to me what a nominalistic formulation of something like the past hypothesis looks like (BTW, I’m working with something like the David Albert picture of stat mechanics here). Philosophically, what I’ve described looks like a nominalistic version of primitive propensities, and there are various worries about treating probability in this primitive way (e.g. why should information about such facts constrain credences in the distinctive way information about chance seems to?). I doubt Field would want to go in for this sort of ontological inflation in any case, but it’d be worth working through it as a case study.
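Schematically, the hoped-for result would have the shape of a familiar qualitative-probability representation theorem (this is just the shape of the claim; the synthetic axioms on the relation would need spelling out): given a synthetic relation ⪰ (“at least as probable as”) over suitable collections A, B of the concrete worlds, satisfying appropriate axioms, there is a (suitably unique) measure μ with

\[
A \succeq B \iff \mu(A) \ge \mu(B),
\]

where μ recovers the quantitative probabilities of statistical mechanics.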

Another idea I won’t pursue is the following: Field in the 80s was perfectly happy to take a (logical) modality as primitive. From this, and nominalistic formulations of Newtonian laws, presumably a nomic modality could be defined. Now, it’s one thing to have a modality, another thing to earn the right to talk of possible worlds (or physical relations between them). But given that phase space looks so much like the space of nomically possible worlds (or time-slices thereof), it would be odd not to look carefully at whether we can use nomic modalities to help us out.

But even setting these kind of resources aside, I wonder what the rules of the game are here. Field’s programme really has two aspects. The first is the idea that there’s some “core” nominalistic science, C. And the second claim is that mathematics, and standard mathematized science, is conservative over C. Now, if the core was null, the conservativeness claim would be trivial, but nobody would be impressed by the project! But Field emphasizes on a number of occasions that the conservativeness claim is not terribly hard to establish, for a powerful block of applied mathematics (things that can be modelled in ZFCU, essentially).

(Actually, things are more delicate than you’d think from Science without Numbers, as emerged in the JPhil exchange between Shapiro and Field. The upshot, I take it, is that if (a) we’re allowed second-order logic in the nominalistic core, or (b) we can argue that the best justified mathematized theories aren’t quite the usual versions, but systematically weakened versions, then the conservativeness results go through.)

As far as I can tell, we can have the conservativeness result without a representation theorem. Indeed, for the case of arithmetic (as opposed to geometry and Newtonian gravitational theory) Field relies on conservativeness without giving anything like a representation theorem. I think, therefore, that there’s a heel-digging response to all this open to Field. He could say that phase-space theories are all very fine, but they’re just part of the mathematized superstructure—there’s nothing in the core which they “represent”, nor do we need there to be.

Now, maybe this is deeply misguided. But I’d like to figure out exactly why. I can think of two worries: one based on loss of explanatory power; the other on the constraint to explain applicability.

Explanations. One possibility is that nominalistic science without statistical mechanics is a worse theory than mathematized science including phase space formulations—in a sense relevant to the indispensability argument. But we have to treat this carefully. Clearly, there are all sorts of ways in which mathematized science is more tractable than nominalized science—that’s Field’s explanation for why we indulge in the former in the first place. One objective of the Colyvan and Lyon article cited earlier is to give examples of the explanatory power of stat mechanical explanations, so that’s one place to start looking.

Here’s one thought about that. It’s not clear that the sort of explanations we get from statistical mechanics, cool though they may be, are of a relevantly similar kind to the “explanations” given in classical mechanics. So one idea would be to try to pin down this difference (if there is one) and figure out how it relates to the “goodness” relevant to indispensability arguments.

Applicability. The second thought is that the “mere conservativeness” line is appropriate either where the applicability of the relevant area of mathematics is unproblematic (as perhaps in arithmetic) or where there aren’t any applications to explain (the higher reaches of pure set theory). In other cases—like geometry—there is a prima facie challenge to tell a story about how claims about abstracta can tell us stuff about the world we live in. And representation theorems scratch this itch, since they show in detail how particular this-worldly structures can exactly call for a representation in terms of abstracta (so in some sense the abstracta are “coding for” purely nominalistic processes—“intrinsic processes” in Field’s terminology). Lots of people unsympathetic to nominalism are sympathetic to representation theorems as an account of the application of mathematics—or so the folklore says.

But, on the one hand, statistical mechanics does appear to feature in explanations of macro-phenomena; and, on the other, the reason that talking about measures over some abstract “space” can be relevant to explaining facts about ripples on a pond is at least as unobvious as in the applications of geometry.

I don’t have a very incisive way to end this post. But here’s one thought I have if the real worry is one of accounting for applicability, rather than explanatory power. Why think in these cases that applicability should be explained via representation theorems? In the case of geometry, Newtonian mechanics etc, it’s intuitively appealing to think there are nominalistic relations that our mathematized theories are encoding. Even if one is a platonist, that seems like an attractive part of a story about the applicability of the relevant theories. But when one looks at statistical mechanics, is there any sense that its applicability would be explained if we found a way to “code” within Newtonian space-time all the various points of phase space (and then postulate relations between the codings)? It seems like this is the wrong sort of story to be giving here. That thought goes back, I guess, to the point raised earlier in the “modal realist” version: even if we had the resources, would primitive nominalistic structure over some reconstruction of the points of phase space really give us an attractive story about the applicability of statistical mechanical probabilities?

But if representation theorems don’t look like the right kind of story, what is? Can the Lewis-style “best theory theory” of chance, applied to the stat mechanical case (as Barry Loewer has suggested), be wheeled in here? Can the Fieldian nominalist just appeal to (i) conservativeness and (ii) the Lewisian account of how the probability-invoking theory and laws get fixed by the patterns of nominalistic facts in a single classical space? Questions, questions…

Error theories and Revolutions

I’ve been thinking about Hartry Field’s nominalist programme recently. In connection with this (and a draft of a paper I’ve been preparing for the Nottingham metaphysics conference) I’ve been thinking about parallels between the error theories that threaten if ontology is sparse (e.g. nominalistic, or van Inwagenian) and scientific revolutions.

One (Moorean) thought is that we are better justified in our commonsense beliefs (e.g. “I have hands”) than we could be in any philosophical premises incompatible with them. So we should always regard “arguments against the existence of hands” as reductios of the premises that entail that one has no hands. This thought, I take it, extends to commonsense claims about the number of hands I possess. Something similar might be formulated in terms of the comparative strength of justification for (mathematicized) science as against the philosophical premises that motivate its replacement.

So presented, Field (for one) has a response: he argues in several places that we lack good justification for the existence of numbers. He simply rejects the premise of this argument.

A better presentation of the worry focuses, not on the relative justification for one’s beliefs, but on the conditions under which it is rational to change one’s beliefs. I presently have a vast array of beliefs that, according to Field, are simply false.

Forget issues of relative justification. It’s simply that the belief state I would have to be in to consistently accept Field’s view is very distant from my own—it’s not clear whether I’m even psychologically capable of genuinely disbelieving that if there are exactly two things in front of me, then the number of things in front of me is two. (If you don’t feel the pressure in this particular case, consider the suggestion that no macroscopic objects exist—then pretty much all of your existing substantive beliefs are false). Given my starting set of beliefs, it’s hard to see how speculative philosophical considerations could make it rational to change my views so much.

Here’s one way of trying to put some flesh on this general worry. In order to assess an empirical theory, we need to measure it against the relevant phenomena to establish the theory’s predictive and explanatory power. But what do we take these phenomena to be? A very natural thought is that they include platitudinous statements about pointer readings, statements about how experiments were conducted, and whatever is described by records of careful observation. But Field’s theory says that the content of numerical records of experimental data will be false; as will be claims such as “the data points approximate an exponential function”. On a van Inwagenian ontology, there are no pointers, and experimental reports will be pretty much universally false (at least on an error-theoretic reading of his position). Sure, each theorist has a view on how to reinterpret what’s going on. But why should we allow them to skew the evidence to suit their theory? Surely, given what we reasonably take the evidence to be, we should count their theories as disastrously unsuccessful?

But this criticism is based on certain epistemological presuppositions, and these can be disputed. Indeed, Field in the introduction to Realism, Mathematics and Modality (preemptively) argues that the specific worries just outlined are misguided. He points to cases he thinks analogous, where scientific evidence has forced a radical change in view. He argues that when a serious alternative to our existing system of beliefs (and rules for belief-formation) is suggested to us, it is rational to (a) bracket relevant existing beliefs and (b) consider the two rival theories on their individual merits, adopting whichever one regards as the better theory. The revolutionary theory is not necessarily measured against what we believe the data to be, but against what the revolutionary theory says the data is. Field thinks, for example, that in the grip of a geocentric model of the universe, we should treat “the sun moves in absolute upward motion in the morning” as data. However, even for those within the grip of that model, when the heliocentric model is proposed, it’s rational to measure its success against the heliocentric take on what the proper data is (which, of course, will not describe sunrises in terms of absolute upward motion). Notice that on this model, there is effectively no “conservative influence” constraining belief-change—since when evaluating new theories, one’s prior opinions on relevant matters are bracketed.

If this is the right account of (one form of) belief change, then the version of the Moorean challenge sketched above falls flat (maybe others would do better). Note that for this strategy to work, it doesn’t matter that philosophical evidence is more shaky than scientific evidence which induces revolutionary changes in view—Field can agree that the cases are disanalogous in terms of the weight of evidence supporting revolution. The case of scientific revolutions is meant to motivate the adoption of a certain epistemology of belief revision. This general epistemology, in application to the philosophy of mathematics, tells us we need not worry about the massive conflicts with existing beliefs that so concerned the Mooreans.

On the other hand, the epistemology that Field sketches is contentious. It’s certainly not obvious that the responsible thing to do is to measure revisionary theory T against T’s take on the data, rather than against one’s best judgement about what the data is. Why bracket what one takes to be true, when assessing new theories? Even if we do want to make room for such bracketing, it is questionable whether it is responsible to pitch us into such a contest whenever someone suggests some prima facie coherent revolutionary alternative. A moderated form of the proposal would require there to be extant reasons for dissatisfaction with current theory (a “crisis in normal science”) in order to make the kind of radical reappraisal appropriate. If that’s right, it’s certainly not clear whether distinctively philosophical worries of the kind Field raises should count as creating crisis conditions in the relevant sense. Scientific revolutions and philosophical error theories might reasonably be thought to be epistemically disanalogous in a way unhelpful to Field.

Two final notes. It is important to note what kind of objection a Moorean would put forward. It doesn’t engage in any way with the first-order case that Field constructs for his error-theoretic conclusion. If substantiated, the result will be that it would not be rational for me (and people like me) to come to believe the error-theoretic position.

The second note is that we might save the Fieldian ontology without having to say contentious stuff in epistemology, by pursuing reconciliation strategies. Hermeneutic fictionalism—for example in Steve Yablo’s figuralist version—is one such. If we never really believed that the number of peeps was twelve, but only pretended this to be so, then there’s no prima facie barrier from “belief revision” considerations that prevents us from explicitly adopting a nominalist ontology. Another reconciliation strategy is to do some work in the philosophy of language to make the case that “there are numbers” can be literally true, even if Field is right about the constituents of reality. (There are a number of ways of cashing out that thought, from traditional Quinean strategies, to the sort of stuff on varying ontological commitments I’ve been working on recently).

In any case, I’d be really interested in people’s take on the initial tension here—and particularly on how to think about rational belief change when confronted with radically revisionary theories—pointers to the literature/state of the art on this stuff would be gratefully received!

Counting delineations

I presented my paper on indeterminacy and conditionals in Konstanz a few days ago. The basic question that paper poses is: if we are highly confident that a conditional is indeterminate, what sorts of confidence in the conditional itself are open to us?

Now, one treatment I’ve been interested in for a while is “degree supervaluationism”. The idea, from the point of view of the semantics, is to replace appeal to a single intended interpretation (with truth=truth at that interpretation) or set of “intended interpretations” (with truth=truth at all of them) with a measure over the set of interpretations (with truth to degree d = being true at exactly measure d of the interpretations). A natural suggestion, given that setting, is that if you know (/are certain) S is true to measure d, then your confidence in S should be d.
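In symbols, with I the set of interpretations and μ a normalized measure over I (the notation is mine):

\[
\mathrm{Tr}(S) = \mu\big(\{\,i \in I : S \text{ is true at } i\,\}\big),
\]

and the suggested norm is that certainty that Tr(S) = d requires confidence d in S.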

I’d been thinking of degree-supervaluationism in this sense, and the more standard set-of-intended-interpretations supervaluationism, as distinct options. But (thanks to Tim Williamson) I realize now that there may be an intermediate option.

Suppose that S = “the number 6 is bleh”. And we know that linguistic conventions settle that numbers <5 are bleh, and numbers >7 are not bleh. The available delineations of “bleh”, among the integers, are ones where the first non-bleh number is 5, 6, 7 or 8. These will count as the “intended interpretations” for a standard supervaluational treatment, so “6 is bleh” will be indeterminate—in this context, neither true nor false.

I’ve discussed in the past several things we could say about rational confidence in this supervaluational setting. But one (descriptive) option I haven’t thought much about is to say that you should proportion your confidence to the number of delineations on which “6 is bleh” comes out true. In the present case, our confidence that 6 is bleh should be 0.5, our confidence that 5 is bleh should come out 0.75, and our confidence that 7 is bleh should come out 0.25.
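To make the counting explicit: write c for the first non-bleh number on a delineation, so that c ∈ {5, 6, 7, 8}, and “n is bleh” is true on a delineation just in case n < c. Then:

\[
\begin{aligned}
Cr(\text{6 is bleh}) &= \tfrac{|\{c : 6 < c\}|}{4} = \tfrac{|\{7,8\}|}{4} = 0.5\\
Cr(\text{5 is bleh}) &= \tfrac{|\{6,7,8\}|}{4} = 0.75\\
Cr(\text{7 is bleh}) &= \tfrac{|\{8\}|}{4} = 0.25
\end{aligned}
\]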

Notice that this *isn’t* the same as degree-supervaluationism. For that just required some measure or other over the space of interpretations. And even if that measure was zero everywhere apart from the interpretations which place the first non-bleh number in 5–8, there are many options available. E.g. we might have a measure that assigns 0.9 to the interpretation which makes 5 the first non-bleh number, and distributes the remaining 0.1 over the others. In other words, the degree-supervaluationist needn’t think that the measure is a measure *of the number of delineations*. I usually think of it (in the finite case), intuitively, as a measure of the “degree of intendedness” of each interpretation. In a sense, the degree-supervaluationists I was thinking of conceive of the measure as telling us to what extent usage and eligibility and other subvening facts favour one interpretation or another. But the kind of supervaluationists we’re now considering won’t buy into that at all.

I should mention that even if, descriptively, it’s clear what the proposal here is, it’s less clear how the count-the-delineations supervaluationists would go about justifying the rule for assigning credences that I’m suggesting for them. Maybe the idea is that we should seek some kind of compromise between the credences that would be rational if we took D to be the unique intended interpretation, for each D in our set of “intended interpretations” (see this really interesting discussion of compromise for a model of what we might say—the bits at the end on mushy credence are particularly relevant). And there’ll be some oddities that this kind of theorist will have to adopt—e.g. for a range of cases, they’ll be assigning significant credence to sentences of the form “S and S isn’t true”. I find that odd, but I don’t think it blows the proposal out of the water.

Where might this be useful? Well, suppose you believe in B-theoretic branching time, and are going to “supervaluate” over the various future-branches (so “there will be a sea-battle” will have a truth-value gap, since it is true on some but not all). (This approach originates with Thomason, and is still present, with tweaks, in recent relativistic semantics for branching time). “Branches” play the role of “interpretations” in this setting. I’ve argued in previous work that this kind of indeterminacy about branching futures leads to trouble on certain natural “rejectionist” readings of what our attitudes to known indeterminate p should be. But a count-the-branches proposal seems pretty promising here. The idea is that we should proportion our credences in p to the *number* of branches on which p is true.

Of course, there are complicated issues here. Maybe there are just two qualitative possibilities for the future, R and S. We know R has a 2/3 chance of obtaining, and S a 1/3 chance of obtaining. In the B-theoretic branching setting, an R-branch will exist, and an S-branch will exist. Now, one model of the metaphysics at this point is that we don’t allow qualitatively duplicate future branches: so there are just two future-branches in existence, the R one and the S one. On a count-the-branches recipe, we’ll get the result that we should have 1/2 credence that R will obtain. But that conflicts with what the instruction to proportion our credences to the known chances would give us. Maybe R is primitively attached to a “weight” of 2/3—but our count-the-branches recipe didn’t say anything about that.

An alternative is that we multiply indiscernible futures. Maybe there are two indiscernible R futures, and only one S future. Then apportioning the credences in the way mentioned won’t get us into trouble. And in general, if we think that whenever the chance (at moment m) that p is k, the proportion of p-futures among all the futures is k, then we’ll have a recipe that coheres nicely with the principal principle.
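Stated baldly, the recipe is (notation mine):

\[
Cr_m(p) = \frac{\#\{\text{futures at } m \text{ in which } p\}}{\#\{\text{futures at } m\}},
\]

so if the chance at m that p is k exactly when the proportion of p-futures at m is k, the count-the-branches credence and the chance coincide.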

Let me be clear that I’m not suggesting that we identify chances with numbers-of-branches. Nor am I suggesting that we’ve got some easy route here for justifying the principal principle. The only thing I want to say is that *if* we’ve got a certain match between chances and numbers of future branches, then the two recipes for assigning credences won’t conflict.

(I emphasized earlier that count-the-delineations supervaluationism had less flexibility than degree-supervaluationism, where the relevant measure was unconstrained by counting considerations. In a sense, what the above little discussion highlights is that when we move from “interpretations” to “branches” as the locus of supervaluational indeterminacy, this difference in flexibility evaporates. For in the case where that role is played by actually existing futures, there’s at least the possibility of multiplying qualitatively indiscernible futures. That sort of maneuver has little place in the original, intended-interpretations setting, since presumably we’ve got an independent fix on what the interpretations are, and we can’t simply postulate that the world gives us intended interpretations in proportions that exactly match the credences we independently want to assign to the cases.)

Branching worlds

I’ve recently discovered some really interesting papers on how to think about belief in a future with branching time. Folks are interested in branching time as it (putatively) emerges out of “decoherence” in the Everett interpretation of standard quantum mechanics.

The first paper linked to above is forthcoming in BJPS, by Simon Saunders and David Wallace. In it, they argue for a certain kind of parallel between the semantics for personal fission cases and the semantics most charitably applied to language users in branching time, and argue that this sheds light on the way that beliefs should behave.

Now, lots of clever people are obviously thinking about this, and I haven’t absorbed all the discussion yet. But since it’s really cool stuff, and since I’ve been thinking about related material recently (charity-based metasemantics, fission cases, semantics in branching time) I thought I’d sit down and figure out how things look from my point of view.

I’m sceptical, in fact, whether personal fission itself (and the associated de se uncertainty about who one will be) will really help us out here in the way that Saunders and Wallace think. Set aside for now the question of whether, faced with a fission case, you should feel uncertain about which fission-product you will end up as (for discussion of that question, on the assumption that it’s indeterminate which of the Lewisian continuing persons is me, see the indeterminate survival paper I just posted up). But suppose that we do get some sense in which, when you’re about to fission, you have de se uncertainty about where you’ll be, even granted full knowledge of the de dicto facts.

The Saunders-Wallace idea is to try to generalize this de se ignorance as an explanation of the ignorance we’d have if we were placed in a branching universe, and knew what was to happen on every branch. We’d know all the de dicto truths about multiple futures—and we would literally be about to undergo fission, since I’d be causally related in the right kind of ways to multiple person stages in the different futures. So—they claim—ignorance of who I am maps onto ignorance of what I’m about to see next (whether I’m about to see the stuff in the left branch, or in the right). And that explains how we can get ignorance in a branching world, and so lays the groundwork for explaining how we can get a genuine notion of uncertainty/probability/degree of belief off the ground.

I’m a bit worried about the generality of the purported explanation. The basic thought is that to get a complete story about beliefs in branching universes, we’re going to need to justify degrees of belief in matters that happen, if at all, long after we would go out of existence. And so it just doesn’t seem likely that we’re going to get a complete story about uncertainty from consideration of uncertainty about which branch I myself am located within.

To dramatize, consider an instantaneous, omniscient agent. She knows all the de dicto truths about the world (in every future branch) and also exactly where she is located—so no de se ignorance either. But still, this agent might care about other things, and have a certain degree of belief as to whether, e.g., the sea-battle will happen in the future. The kind of degree of belief she has (and any associated “ignorance”) can’t, I think, be a matter of de se ignorance. And I think, for events that happen if at all in the far future, we’re relevantly like the instantaneous omniscient agent.

What else can we do? Well—very speculatively—I think there’s some prospect for using the sort of charity-based considerations David Wallace has pointed to in the literature for getting a direct, epistemic account of why we should adopt this or that degree of belief in borderline cases. The idea would be that we *minimize inaccuracy of our beliefs* by holding true sentences to exactly the right degrees.

A first caveat: this hangs on having the *right* kind of semantic theory in the background. A Thomason-style supervaluationist semantics for the branching future just won’t cut it, nor will MacFarlane-style relativistic tweaks. I think one way of generalizing the “multiple utterances” idea of Saunders and Wallace holds out some prospect of doing better—but best of all would be a degree-theoretic semantics.

A second caveat: what I’ve got (if anything) is epistemic reason for adopting certain kinds of graded attitude. It’s not clear to me that we have to think of these graded attitudes as a kind of uncertainty. And it’s not so clear why expected utility, as calculated from these attitudes, should be a guide to action. On the other hand, I don’t see clearly the argument that they *don’t* or *shouldn’t* have this pragmatic significance.

So I’ve written up a little note on some of these issues—the treatment of fission that Saunders-Wallace use, the worries about limitations to the de se defence, and some of the ideas about accuracy-based defences of graded beliefs in a branching world. It’s very drafty (far more so than anything I usually put up as work in progress). To some extent it seems like a big blog post, so I thought I’d link to it from here in that spirit. Comments very welcome!

Indeterminate survival: in draft

So, finally, I’ve got another draft prepared. This is a paper focussing on Bernard Williams’ concerns about how to think and feel about indeterminacy in questions of one’s own survival.

Suppose that you know there’s an individual in the future who’s going to get harmed. Should you invest a small amount of money to alleviate the harm? Should you feel anxious about the harm?

Well, obviously if you care about the guy (or just have a modicum of humanity) you probably should. But if it was *you* that was going to suffer the harm, there’d be a particularly distinctive frisson. From a prudential point of view, you’d be compelled to invest minor funds for great benefit. And you really should have that distinctive first-personal phenomenology associated with anxiety on one’s own behalf. Both of these de se attitudes seem important features of our mental life and evaluations.

The puzzle I take from Williams is: are the distinctively first-personal feelings and expectations appropriate in a case where you know that it’s indeterminate whether you survive as the individual who’s going to suffer?

Williams thought that by reflecting on such questions, we could get an argument against accounts of personal identity that land us with indeterminate cases of survival. I’d like to play the case in a different direction. It seems to me pretty unavoidable that we’ll end up favouring accounts of personal identity that allow for indeterminate cases. So if, when you combine such cases with this or that theory of indeterminacy, you end up saying silly things, I want to take that as a blow to that account of indeterminacy.

It’s not knock-down (what is in philosophy?) but I do think that we can get leverage in this way against rejectionist treatments of indeterminacy, at least as applied to these kinds of cases. Rejectionist treatments include those of folks who think that the characteristic attitude to borderline cases includes primarily a rejection of the law of excluded middle; and (probably) those of folks who think that in such cases we should reject bivalence, even if LEM itself is retained.

In any case, this is definitely something I’m looking for feedback/comments on (particularly on the material on how to think about rational constraints on emotions, which is rather new territory for me). So thoughts very welcome!

Primitivism about indeterminacy: a worry

I’m quite tempted by the view that “it is indeterminate whether” might be one of those fundamental, brute bits of machinery that go into constructing the world. Imagine, for example, you’re tempted by the thought that in a strong sense the future is “open”, or “unfixed”. Now, maybe one could parlay that into something epistemic (lack of knowledge of what the future is to be), or semantic (indecision over which of the existing branching futures is “the future”), or maybe mere non-existence of the future would capture some of this unfixity thought. But I doubt it. (For discussion of what the openness of the future looks like from this perspective, see Ross and Elizabeth’s forthcoming Phil Studies piece.)

The open future is far from the only case you might consider—I go through a range of possible arenas in which one might be friendly to a distinctively metaphysical kind of indeterminacy in this paper—and I think treating “indeterminacy” as a perfectly natural bit of kit is an attractive way to develop that. And, if you’re interested in some further elaboration and defence of this primitivist conception see this piece by Elizabeth and myself—and see also Dave Barnett’s rather different take on a similar idea in a forthcoming piece in AJP (watch out for the terminological clashes–Barnett wants to contrast his view with that of “indeterminists”. I think this is just a different way of deploying the terminology.)

I think everyone should pay more attention to primitivism. It’s a kind of “null” response to the request for an account of indeterminacy—and it’s always interesting to see why the null response is unavailable. I think we’ll learn a lot about the compulsory questions that a theory of indeterminacy must answer from seeing what goes wrong when the theory of indeterminacy is as minimal as you can get.

But here I want to try to formulate a certain kind of objection to primitivism about indeterminacy. Something like this has been floating around in the literature—and in conversations!—for a while (Williamson and Field, in particular, are obvious sources for it). I also think the objection, properly formulated, would get at something important that lies behind the reaction of people who claim *just not to understand* what a metaphysical conception of indeterminacy would be. (If people know of references where this kind of idea is dealt with explicitly, then I’d be really glad to know about them.)

The starting assumption is: saying “it’s an indeterminate case” is a legitimate answer to the query “is that thing red?”. Contrast the following. If someone asks “is that thing red?” and I say “it’s contingent whether it’s red”, then I haven’t made a legitimate conversational move. The information I’ve given is simply irrelevant to its actual redness.

So it’s a datum that indeterminacy-answers are in some way relevant to redness (or whatever) questions. And it’s not just that “it is indeterminate whether it is red” has “it is red” buried within it – so does the contingency “answer”, but it is patently irrelevant.

So what sort of relevance does it have? Here’s a brief survey of some answers:

(1) Epistemicist. “It’s indeterminate whether p” has the sort of relevance that answering “I don’t know whether p” has. Obviously it’s not directly relevant to the question of whether p, but at least expresses the inability to give a definitive answer.

(2) Rejectionist (like truth-value gap-ers, inc. certain supervaluationists, and LEM-deniers like Field, intuitionists). Answering “it’s indeterminate” communicates information which, if accepted, should lead you to reject both p, and not-p. So it’s clearly relevant, since it tells the inquirer what their attitudes to p itself should be.

(3) Degree theorist (whether degree-supervaluationist like Lewis, Edgington, or degree-functional folk like Smith, Machina, etc). Answering “it’s indeterminate” communicates something like the information that p is half-true. And, at least on suitable elaborations of degree theory, we’ll then know how to shape our credences in p itself: we should have credence 0.5 in p if we have credence 1 that p is half true (one way of making this precise is sketched after this list).

(4) Clarification request. (maybe some contextualists?) “it’s indeterminate that p” conveys that somehow the question is ill-posed, or inappropriate. It’s a way of responding whereby we refuse to answer the question as posed, but invite a reformulation. So we’re asking the person who asked “is it red?” to refine their question to something like “is it scarlet?” or “is it reddish?” or “is it at least not blue?” or “does it have wavelength less than such-and-such?”.
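Here is one natural way of making option (3)’s credence rule precise, as an expected-truth-value norm (a sketch only; degree theorists needn’t be committed to exactly this form):

\[
Cr(p) = \sum_{d} d \cdot Cr\big(\mathrm{Tr}(p) = d\big),
\]

which delivers credence 0.5 in p when one is certain that p is half-true.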

(For a while, I think, it was assumed that every serious account of indeterminacy would say that if p was indeterminate, one couldn’t know p (think of parallel discussion of “minimal” conceptions of vagueness—see Patrick Greenough’s Mind paper). If that was right, then (1) would be available to everybody. But I don’t think that that’s at all obvious—and in particular, I don’t think it’s obvious that the primitivist would endorse it, and if they did, what grounds they would have for saying so.)

There are two readings of the challenge we should pull apart. One is purely descriptive. What kind of relevance does indeterminacy have, on the primitivist view? The second is justificatory: why does it have that relevance? Both are relevant here, but the first is the most important. Consider the parallel case of chance. There we know what, descriptively, we want the relevance of “there’s a 20% chance that p” to be: someone learning this information should, ceteris paribus, fix their credence in p to 0.2. And there’s a real question about whether a metaphysical primitive account of chance can justify that story (that’s Lewis’s objection to a putative primitivist treatment of chance facts).
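In the chance case, the descriptive answer is (roughly) Lewis’s principal principle: where ch is the chance function and E is admissible evidence,

\[
Cr\big(p \mid ch(p) = x \wedge E\big) = x.
\]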

The justification challenge is important, and how exactly to formulate a reasonable challenge here will be a controversial matter. E.g. maybe route (4), above, might appeal to the primitivist. Fine—but why is that response the thing that indeterminacy-information should prompt? I can see the outlines of a story if e.g. we were contextualists. But I don’t see what the primitivist should say.

But the more pressing concern right now is that for the primitivist about indeterminacy, we don’t as yet have a helpful answer to the descriptive question. So we’re not even yet in a position to start engaging with the justificatory project. This is what I see as the source of some dissatisfaction with primitivism—the sense that as an account it somehow leaves something important unexplained. Until the theorist has told me something more, I’m at a loss about what to do with the information that p is indeterminate.

Furthermore, at least in certain applications, one’s options on the descriptive question are constrained. Suppose, for example, that you want to say that the future is indeterminate. But you want to allow that one can rationally have different credences for different future events. So I can be 50/50 on whether the sea battle is going to happen tomorrow, and almost certain I’m not about to quantum tunnel through the floor. Clearly, then, nothing like (2) or (3) is going on, where one can read off strong constraints on strength of belief in p from the information that p is indeterminate. (1) doesn’t look like a terribly good model either—especially if you think we can sometimes have knowledge of future facts.

So if you think that the future is primitively unfixed, indeterminate, etc—and friends of mine do—I think (a) you owe a response to the descriptive challenge; (b) then we can start asking about possible justifications for what you say; (c) your choices for (a) are very constrained.

I want to finish up by addressing one response to the kind of questions I’ve been pressing. I ask: what is the relevance of answering “it’s indeterminate” to first-order questions? How should I alter my beliefs in receipt of the information, what does it tell me about the world or the epistemic state of my informant?

You might be tempted to say that your informant communicates, minimally, that it’s at best indeterminate whether she knows that p. Or you might try claiming that in such circumstances it’s indeterminate whether you *should* believe p (i.e. there’s no fact of the matter as to how you should shape your credences on the question of whether p). Arguably, you can derive these from the determinate truth of certain principles (determinacy, truth as the norm of belief, etc) plus a bit of logic. Now, that sort of thing sounds like progress at first glance—even if it doesn’t lay down a recipe for shaping my beliefs, it does sound like it says something relevant to the question of what to do with the information. But I’m not sure it really helps. After all, we could say exactly parallel things with the “contingency answer” to the redness question with which we began. Saying “it’s contingent that p” does entail that it’s at best contingent whether one knows that p, and at best contingent whether one should believe p. But that obviously doesn’t help vindicate contingency-answers to questions of whether p. So it seems that the kind of indeterminacy-involving elaborations just given, while they may be *true*, don’t really say all that much.

Regimentation (x-post).

Here’s something you frequently hear said about ontological commitment. First, that to determine the ontological commitments of some sentence S, one must look not at S, but at a regimentation or paraphrase of S, S*. Second (very roughly), you determine the ontological commitments of S by looking at what existential claims follow from S*.

Leave aside the second step of this. What I’m perplexed about is how people are thinking about the first step. Here’s one way to express the confusion. We’re asked about the sentence S, but to determine the ontological commitments we look at features of some quite different sentence S*. But what makes us think that looking at S* is a good way of finding out about what’s required of the world for S to be true?

Reaction (1). The regimentation may be constrained so as to make the relevance of S* transparent. Silly example: regimentation could be required to be null, i.e. every sentence has to be “regimented” as itself. No mystery there. Less silly example: the regimentation might be required to preserve meaning, or truth-conditions, or something similar. If that’s the case then one could plausibly argue that the OC’s of S and S* coincide, and looking at the OC’s of S* is a good way of figuring out what the OC’s of S is.

(The famous “symmetry” objections are likely to kick in here; i.e. if certain existential statements follow from S but not from S*, and what we know is that S and S* have the same OC’s, why take it that S* reveals those OC’s better than S?—so for example if S is “prime numbers exist” and S* is a nominalistic paraphrase, we have to say something about whether S* shows that S is innocent of OC to prime numbers, or whether S shows that S* is in a hidden way committed to prime numbers).

Obviously this isn’t plausibly taken as Quine’s view—the appeal to synonymy is totally unQuinean (moreover, in Word and Object, he’s pretty explicit that the regimentation relationship is constrained by whether S* can play the same theoretical role as we initially thought S played—and that’ll allow for lots of paraphrases where the sentences don’t even have the appearance of being truth-conditionally equivalent).

Reaction (2). Adopt a certain general account of the nature of language. In particular, adopt a deflationism about truth and reference. Roughly: T- and R-schemes are in effect introduced into the object language as defining a disquotational truth-predicate. Then note that a truth-predicate so introduced will struggle to explain the predications of truth for sentences not in one’s home language. So appeal to translation, and let the word “true” apply to a sentence in a non-home language iff that sentence translates to some sentence of the home language that is true in the disquotational sense. Truth for non-home languages is then the product of translation and disquotational truth. (We can take the “home language” for present purposes to be each person’s idiolect).
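Regimenting the two-step story (the formulation is mine): where True_dq is the disquotational predicate defined over the home language, the extended predicate is given by

\[
\mathrm{True}(s) \iff \exists s'\,\big(\mathrm{Home}(s') \wedge \mathrm{Translates}(s,s') \wedge \mathrm{True}_{dq}(s')\big).
\]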

I think from this perspective the regimentation steps in the Quinean characterization of ontological commitment have an obvious place. Suppose I’m a nominalist, and refuse to speak of numbers. But the mathematicians go around saying things like “prime numbers exist”. Do I have to say that what they say is untrue (am I going to go up to them and tell them this?)? Well, they’re not speaking my idiolect; so according to the deflationary conception under consideration, what I need to do is figure out whether their sentences translate to something that’s deflationarily true in my idiolect. And if I translate them according to a paraphrase on which their sentences pair with something that is “nominalistically acceptable”, then it’ll turn out that I can call what they say true.

This way of construing the regimentation step of ontological commitment identifies it with the translation step of the translation-disquotation treatment of truth sketched above. So obviously what sorts of constraints we have on translation will transfer directly to constraints on regimentation. One *could* appeal to a notion of truth-conditional equivalence to ground the notion of translatability—and so get back to a conception whereby synonymy (or something close to it) was central to our analysis of language.

It’s in the Quinean spirit to take translatability to stand free of such notions (to make an intuitive case for separation here, one might note, for example, that synonymy should be an equivalence relation, whereas translatability is plausibly non-transitive). There are several options. Quine I guess focuses on preservation of patterns of assent and dissent to translated pairs; Field appeals to his projectivist treatment of norms and takes “good translation” as something to be explained in projective terms. No doubt there are other ways to go.

This way of defending the regimentation step in treatments of ontological commitment turns essentially on deflationism about truth; and more than that, on a non-universal part of the deflationary project: the appeal to translation as a way to extend usage of the truth-predicate to non-home languages. If one has some non-translation story about how this should go (and there are some reasons for wanting one, to do with applying “true” to languages whose expressive power outstrips that of one’s own) then the grounding for the regimentation step falls away.

So the Quinean regimentation-involving treatment of ontological commitment makes perfect sense within a Quinean translation-involving treatment of language in general. But I can’t imagine that people who buy into the received view of ontological commitment really mean to be taking a stance on deflationism vs. its rivals, or on the exact implementation of deflationism.

Of course, regimentation or translatability (in a more Quinean, preservation-of-theoretical-role sense, rather than a synonymy-sense) can still be significant for debates about ontological commitments. One might think that arithmetic was ontologically committing, but the existence of some nominalistic paraphrase suited to play the same theoretical role gives one some reassurance that one doesn’t *have* to use the committing language, and maybe overall these kinds of relationships will undermine the case for believing in dubious entities—not because ordinary talk isn’t committed to them, but because for theoretical purposes talk needn’t be committed to them. But unlike the earlier role for regimentation, this isn’t a “hermeneutic” result. E.g. on the Quinean way of doing things, some non-home sentence “there are prime numbers” can be true, despite there being no numbers—just because the best translation of the quoted sentence translates it to something other than the home sentence “there are prime numbers”. This kind of flexibility is apparently lost if you ditch the Quinean use of regimentation.

Aristotelian indeterminacy and partial beliefs

I’ve just finished a first draft of the second paper of my research leave—title the same as this post. There’s a few different ways to think about this material, but since I hadn’t posted for a while I thought I’d write up something about how it connects with/arises from some earlier concerns of mine.

The paper I’m working on ends up with arguments against standard “Aristotelian” accounts of the open future, and standard supervaluational accounts of vague survival. But one starting point was an abstract question in the philosophy of logic: in what sense is standard supervaluationism supposed to be revisionary? So let’s start there.

The basic result—allegedly—is that while all classical tautologies are supervaluational tautologies, certain classical rules of inference (such as reductio, proof by cases, conditional proof, etc) fail in the supervaluational setting.

Now I’ve argued previously that one might plausibly evade even this basic form of revisionism (while sticking to the “global” consequence relation, which preserves traditional connections between logical consequence and truth-preservation). But I don’t think it’s crazy to think that global supervaluational consequence is in this sense revisionary. I just think that it requires an often-unacknowledged premise about what should count as a logical constant (in particular, whether “Definitely” counts as one). So for now let’s suppose that there are genuine counterexamples to conditional proof and the rest.
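The stock illustration, granting that “Definitely” (Δ) counts as a logical constant and taking consequence globally (preservation of supertruth):

\[
p \models \Delta p \qquad \text{but} \qquad \not\models\ p \to \Delta p,
\]

since supertruth of p guarantees supertruth of Δp, while for borderline p the conditional p → Δp is false on any precisification on which p is true. So conditional proof fails; reductio and proof by cases fail for parallel reasons.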

The standard move at this point is to declare this revisionism a problem for supervaluationists. Conditional proof, argument by cases: all these are theoretical descriptions of widespread, sensible and entrenched modes of reasoning. It is objectionably revisionary to give them up.

Of course some philosophers quite like logical revisionism, and would want to face down the accusation that there’s anything wrong with such revisionism directly. But there’s a more subtle response available. One can admit that the letter of conditional proof, etc. is given up, but insist that the pieces of reasoning we normally call “instances of conditional proof” are all covered by supervaluationally valid inference principles. So there’s no piece of inferential practice that’s thrown into doubt by the revisionism of supervaluational consequence: it seems that all that happens is that the theoretical representation of that practice has to take a slightly more subtle form than one might expect (but still quite a neat and elegant one).

One thing I mention in that earlier paper but don’t go into is a different way of drawing out consequences of logical revisionism. Forget inferential practice and the like. Another way in which logic connects with the rest of philosophy is in connection to probability (in the sense of rational credences, or Williamson’s epistemic probabilities, or whatever). As I sketched in a previous post, so long as you accept a basic probability-logic constraint, which says that the probability of a tautology should be 1, and the probability of a contradiction should be 0, then the revisionary supervaluational setting quickly forces you to a non-classical theory of probability: one that allows disjunctions to have probability 1 where each disjunct has probability 0. (Maybe we shouldn’t call such a thing “probability”: I take it that’s terminological).
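Concretely: where p is borderline, this setting delivers

\[
P(p \vee \neg p) = 1, \qquad P(p) = P(\neg p) = 0,
\]

so finite additivity fails: a disjunction of incompatible disjuncts gets probability 1 while each disjunct gets 0 (the kind of behaviour characteristic of Dempster-Shafer belief functions, which come up again below, rather than of classical probabilities).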

Folk like Hartry Field have argued, completely independently of this connection to supervaluationism, that this is the right and necessary way to handle probabilities in the context of indeterminacy. I’ve heard others say, and argue, that we want something closer to classicism (maybe tweaked to allow sets of probability functions, etc). And there are Dutch Book arguments to consider in favour of the classical setting (though I think the responses to these from the perspective of non-classical probabilities are quite convincing).

I’ve got the feeling the debate is at a stand-off, at least at this level of generality. I’m particularly unmoved by people swapping intuitions about the degrees of belief it is appropriate to have in borderline cases of vague predicates, and the like (NB: I don’t think that Field ever argues from intuition like this, but others do). Sometimes introspection suggests intriguing things (for example, Schiffer makes the interesting suggestion that one’s degree of belief in a conjunction of two vague propositions typically matches one’s degree of belief in the conjuncts themselves). But I can’t see any real dialectical force here. In my own case, I don’t have robust intuitions about these cases. And if I’m to go on testimonial evidence about others’ intuitions, it’s just too unclear what people are reporting on for me to feel comfortable taking their word for it. I’m worried, for example, that they might just be reporting the phenomenological level of confidence they have in the proposition in question: surely that needn’t coincide with one’s degree of belief in the proposition (think of an exam you are highly nervous about, but are fairly certain you will pass: your behaviour may well manifest a high degree of belief, even in the absence of the phenomenological trappings of confidence). In paradigm cases of indeterminacy, it’s hard to see how to do better than this.

However, I think in application to particular debates we might be able to make much more progress. Let us suppose that the topic for the day is the open future, construed, minimally, as the claim that while there are definite facts about the past and present, the future is indefinite.

Might we model this indefiniteness supervaluationally? Something like this idea (with possible futures playing the role of precisifications) is pretty widespread, perhaps orthodoxy (among friends of the open future). It’s a feature of MacFarlane’s relativistic take on the open future, for example. Even though he’s not a straightforward supervaluationist, he still has truth-value gaps, and he still treats them in a recognizably supervaluational-style way.

The link between supervaluational consequence and the revisionary behaviour of partial beliefs should now kick in. For if you know with certainty that some P is neither true nor false, we can argue that you should invest no credence at all in P (or in its negation). Likewise, in a framework of evidential probabilities, P gets no evidential probability at all (nor does its negation).

But think what this says in the context of the open future. It’s open which way this fair coin lands: it could be heads, it could be tails. On the “Aristotelian” truth-value conception of this openness, we can know that “the coin will land heads” is gappy. So we should have credence 0 in it, and none of our evidence supports it.

But that’s just silly. This is pretty much a paradigmatic case where we know what partial belief we have and should have in the coin landing heads: one half. And our evidence gives exactly that too. No amount of fancy footwork and messing around with the technicalities of Dempster-Shafer theory leads to a sensible story here, as far as I can see. It’s just plainly the wrong result. (One doesn’t improve matters very much by relaxing the assumptions, e.g. taking the degree of belief in a failure of bivalence in such cases to fall short of one: you can still argue for a clearly incorrect degree of belief in the heads-proposition).
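
For concreteness, here is a minimal sketch of the Dempster-Shafer structure just gestured at; the mass assignment is my own illustrative choice, putting all mass on the unresolved set of outcomes:

```python
# A minimal sketch of a Dempster-Shafer belief function for the coin case.
# The mass function is an illustrative choice of mine: since the future is
# "open", all mass sits on the unresolved set {heads, tails}.

mass = {frozenset({"heads", "tails"}): 1.0}

def bel(event):
    """Bel(A) = total mass of the sets that settle A (subsets of A)."""
    return sum(m for B, m in mass.items() if B <= frozenset(event))

print(bel({"heads"}))           # 0.0 -- no credence that it lands heads...
print(bel({"tails"}))           # 0.0 -- ...nor that it lands tails,
print(bel({"heads", "tails"}))  # 1.0 -- though certainty it's one or the other

# The verdict the text insists is plainly right: credence 1/2 in heads.
```

However the mass is shuffled among gappy outcomes, Bel(heads) never comes out at the intuitively mandatory one half, which is the complaint above.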

Where does that leave us? Well, you might reject the logic-probability link (I think that’d be a bad idea). Or you might try to argue that supervaluational consequence isn’t revisionary in any sense (I sketched one line of thought in support of this in the paper cited). You might give up on it being indeterminate which way the coin will land—i.e. deny the open future, a reasonably popular option. My own favoured reaction, in moods when I’m feeling sympathetic to the open future, is to go for a treatment of metaphysical indeterminacy where bivalence can continue to hold—my colleague Elizabeth Barnes has been advocating such a framework for a while, and it’s taken a long time for me to come round.

All of these reactions will concede the broader point—that at least in this case, we’ve got an independent grip on what the probabilities should be, and that gives us traction against the Supervaluationist.

I think there are other cases where we can find similar grounds for rejecting the structure of partial beliefs/evidential probabilities that supervaluational logic forces upon us. One is simply a case where empirical data on folk judgements has been collected—in connection with indicative conditionals. I talk about this in some other work in progress here. Another, which I talk about in the current paper and which I’m particularly interested in, concerns cases of indeterminate survival. The considerations here are much more involved than in the indeterminacy we find in connection with the open future or conditionals. But I think the case against the sort of partial beliefs supervaluationism induces can be made out.

All these results turn on very local issues. None, so far as I can see, generalizes to the case of paradigmatic borderline cases of baldness and the rest. I think that makes the arguments even more interesting: potentially, they can serve as a kind of diagnostic—this style of theory of indeterminacy is suitable over here; that style over there. That’s a useful thing to have in one’s toolkit.

Structured propositions and metasemantics

Here is the final post (for the time being) on structured propositions. As promised, this is to be an account of the truth-conditions of structured propositions, presupposing a certain reasonably contentious take on the metaphysics of linguistic representation (metasemantics). It’s going to be compatible with the view that structured propositions are nothing but certain n-tuples: lists of their components. (See earlier posts if you’re getting concerned about other factors, e.g. the potential arbitrariness in the choice of which n-tuples are to be identified with the structured proposition that Dummett is a philosopher.)

Here’s a very natural way of thinking of what the relation between *sentences* and truth-conditions is, on a structured propositions picture. It’s that metaphysically, the relation of “S having truth-conditions C” breaks down into two more fundamental relations: “S denoting struc prop p” and “struc prop p having truth-conditions C”. The thought is something like: primarily, sentences express thoughts (=struc propositions), and thoughts themselves are the sorts of things that have intrinsic/essential representational properties. Derivatively, sentences are true or false of situations, by expressing thoughts that are true or false of those situations. As I say, it’s a natural picture.

In the previous posting, I’ve been talking as though this direction-of-explanation was ok, and that the truth-conditions of structured propositions should have explanatory priority over the truth-conditions of sentences, so we get the neat separation into the contingent feature of linguistic representation (which struc prop a sentence latches onto) and the necessary feature (what the TCs are, given the struc prop expressed).

The way I want to think of things, something like the reverse holds. Here’s the way I think of the metaphysics of linguistic representation. In the beginning, there were patterns of assent and dissent. Assent to certain sentences is systematically associated with certain states of the world (coarse-grained propositions, if you like) perhaps by conventions of truthfulness and trust (cf. Lewis’s “Language and Languages”). What it is for expressions E in a communal language to have semantic value V is for E to be paired with V under the optimally eligible semantic theory fitting with that association of sentences with coarse-grained propositions.

That’s a lot to take in all at one go, but it’s basically the picture of linguistic representation as fixed by considerations of charity/usage and eligibility/naturalness that lots of people at the moment seem to find appealing. The most striking feature—which it shares with other members of the “radical interpretation” approach to metasemantics—is that rather than starting from the referential properties of lexical items like names and predicates, it depicts linguistic content as fixed holistically by how well it meshes with patterns of usage. (There’s lots to say here to unpack these metaphors, and work out what sort of metaphysical story of representation is being appealed to: that’s something I went into quite a bit in my thesis—my take on it is that it’s something close to a fictionalist proposal).

This metasemantics, I think, should be neutral between various semantic frameworks for generating the truth-conditions. With a bit of tweaking, you can fit a Davidsonian T-theoretic semantic theory into this picture (as suggested by, um… Davidson). Someone who likes interpretational semantics but isn’t a fan of structured propositions might take the semantic values of names to be objects, and the semantic values of sentences to be coarse-grained propositions, and say that it’s these properties that get fixed via the best semantic theory of the patterns of assent and dissent (that’s Lewis’s take).

However, if you think that to adequately account for the complexities of natural language you need a more sophisticated, structured-proposition theory, this story also allows for it. The meaning-fixing semantic theory assigns objects to names, and structured propositions to sentences, together with a clause specifying how the structured propositions are to be paired up with coarse-grained propositions. Without the second part of the story, we’d end up with an association between sentences and structured propositions, but we wouldn’t make contact with the patterns of assent and dissent, if these take the form of associations of sentences with *coarse-grained* propositions (as on Lewis’s convention-based story). So on this radical interpretation story, where the targeted semantic theories take a struc prop form, we get a simultaneous fix on *both* the denotation relation between sentences and struc props, and the relation between struc props and coarse-grained truth-conditions.
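
Here’s a minimal sketch of the shape such a two-part semantic theory might take. The toy worlds, lexicon and pairing clause are my own hypothetical illustration, not anything from the literature:

```python
# A minimal sketch (toy worlds and lexicon are hypothetical) of the two-part
# theory: one clause pairs sentences with structured propositions (here,
# bare tuples), another pairs structured propositions with coarse-grained
# propositions (here, sets of possible worlds).

worlds = {
    "w1": {"philosopher": {"Dummett", "Lewis"}},
    "w2": {"philosopher": {"Lewis"}},
}

denotation = {"Dummett": "Dummett", "Lewis": "Lewis"}  # names -> objects

def struc_prop(name, predicate):
    """Clause one: the sentence 'a is F' denotes the pair [a, F]."""
    return (denotation[name], predicate)

def coarse_grained(prop):
    """Clause two: pair [a, F] with the set of worlds where a falls in
    the extension of F."""
    obj, pred = prop
    return {w for w, facts in worlds.items() if obj in facts[pred]}

p = struc_prop("Dummett", "philosopher")  # ('Dummett', 'philosopher')
print(coarse_grained(p))                  # {'w1'} -- its truth-conditions
```

Without the second clause, the tuples would hang free of the patterns of assent and dissent; with it, fixing the optimal theory fixes both relations at once, which is the simultaneous fix just described.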

Let’s indulge in a bit of “big-picture” metaphor-ing. It’d be misleading to think of this overall story as the analysis of sentential truth-conditions into a prior, and independently understood, notion of the truth-conditions of structured propositions, just as it’s wrong on the radical interpretation picture to think of sentential content as “analyzed in terms of” a prior, and independently understood, notion of subsentential reference. Relative to the position sketched, it’s more illuminating to think of the pairing of structured and coarse-grained propositions as playing a purely instrumental role in smoothing the theory of the representational features of language. It’s language which is the “genuine” representational phenomenon in the vicinity: the truth-conditional features attributed to struc propositions are a mere byproduct.

Again speaking metaphorically, it’s not that sentences get to have truth-conditions in a merely derivative sense. Rather, structured propositions have truth-conditions in a merely derivative sense: the structured proposition has truth-conditions C if it is paired with C under the optimal overall theory of linguistic representation.

For all we’ve said, it may turn out that the same assignment of truth-conditions to set-theoretic expressions will always be optimal, no matter which language is in play. If so, then it might be that there’s a sense in which structured propositions have “absolute” truth-conditions, not relative to this or that language. But, realistically, one would expect some indeterminacy in which struc props play the role (recall the Benacerraf point King makes, and the equal fitness of [a,F] and [F,a] to play the “that a is F” role). And it’s not immediately clear why the choice to go one way for one natural language should constrain the way this element is deployed in another language. So it’s at least prima facie open that it’s not definitely the case that the same structured propositions, with the same TCs, are used in the semantics of both French and English.
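
The Benacerraf-style slack is easy to exhibit in the same toy setting: two rival conventions, [a,F] and [F,a], generate exactly the same coarse-grained truth-conditions, so nothing in usage chooses between them. (Again, a hypothetical illustration of my own.)

```python
# A minimal sketch of the Benacerraf point: the [a, F] and [F, a] conventions
# deliver the same coarse-grained truth-conditions, so the usage facts that
# fix the optimal theory cannot discriminate between them.

worlds = {"w1": {"philosopher": {"Dummett"}}, "w2": {"philosopher": set()}}

def tc_object_first(prop):     # clause suited to the [a, F] convention
    obj, pred = prop
    return {w for w, facts in worlds.items() if obj in facts[pred]}

def tc_predicate_first(prop):  # clause suited to the [F, a] convention
    pred, obj = prop
    return {w for w, facts in worlds.items() if obj in facts[pred]}

print(tc_object_first(("Dummett", "philosopher")))     # {'w1'}
print(tc_predicate_first(("philosopher", "Dummett")))  # {'w1'} -- same TCs
```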

It’s entirely in the spirit of the current proposal to say that we identify [a,F] with the structured proposition that a is F only relative to a given natural language, and that this creature only has the truth-conditions it does relative to that language. This is all of a piece with the thought that the structured proposition’s role is instrumental to the theory of linguistic representation, and not self-standing.

Ok. So with all this on the table, I’m going to return to read the book that prompted all this, and try to figure out whether there’s a theoretical need for structured propositions with representational properties richer than those attributed by the view just sketched.

[update: interestingly, it turns out that King’s book doesn’t give the representational properties of propositions explanatory priority over the representational properties of sentences. His view is that the proposition that Dummett thinks is (very crudely, and suppressing details) the fact that in some actual language there is a sentence of thus-and-such a structure, of which the first element is a word referring to Dummett and the second element is a predicate expressing thinking. So clearly the semantic properties of words are going to be prior to the representational properties of propositions, since those semantic properties are components of the proposition. But more than this, from what I can make out, King’s thought is that if there was a time when humans spoke a language without attitude-ascriptions and the like, then sentences would have truth-conditions, and the proposition-like facts would be “hanging around” them, but the proposition-like facts wouldn’t have any representational role. Once we start making attitude ascriptions, we implicitly treat the proposition-like structure as if it had the same TCs as the sentences, and (by something like a charity/eligibility story) the “propositional relation” element acquires semantic significance and the proposition-like structure gets to have truth-conditions for the first time.

That’s very close to the overall package I’m sketching above. What’s significant dialectically, perhaps, is that this story can explain TCs for all sorts of apparently non-semantic entities, like sets. So I’m thinking it really might be the Benacerraf point that’s bearing the weight in ruling out set-theoretic entities as struc propns—as explained previously, I don’t go along with *that*.]