Gavagai again again

A new version of my discussion of Quine’s “argument from below” is now up online (shorter! punchier! better!). Turns out it was all to do with counterpart theory all along.

Here’s the blurb: Gavagai gets discussed all the time. But (unless I’m missing something in the literature) I’ve never seen an advocate of gavagai-style indeterminacy spell out in detail what exactly the deviant interpretations or translations are that incorporate the different ways of dividing reference (over rabbits, rabbit-stages or undetached rabbit-parts). And without this it is, to say the least, a bit hard to evaluate the supposed counterexamples to such interpretations! So the main job of the paper is to spell out, for a significant fragment of language, what the rival accounts of reference-division amount to.

One audience for the paper (who might not realize they are an audience for it initially) is folks interested in the stage theory/worm theory debate in the philosophy of persistence. The neo-Gavagai guy, according to me, is claiming that there’s no fact of the matter whether our semantics is stage-theoretic or worm-theoretic. I think there’s a reasonable chance that he’s right.

Stronger than this: so long as there are both 4D worms and instantaneous temporal parts thereof around (even if they’re “dependent entities” or “rabbit histories” or “mere sums” as opposed to Real Objects), the Gavagai guy asks you to explain why our words don’t refer to those worms or stages rather than to whatever entities you think *really are* rabbits (say, enduring objects wholly present at each time).

By the way, even if these semantic indeterminacy results were right, I don’t think that this forecloses the metaphysical debate about which of endurance, perdurance or exdurance is the right account of *persistence*. But I do think that it forces us to think hard about what the difference is between semantic and metaphysical claims, and what sort of reasons we might offer for either.

Parsimony and the fundamental (x-posted from metaphysical values)

A bit of cross-posting, this one…

In his APA comments on Jonathan Schaffer, Ross asks about some of Jonathan’s ideas about the applicability of Ockham’s razor. The question arises if you buy into some robust distinction between “fundamental” and “derivative” existents. Candidate fundamental existents: quarks, electrons, maybe organisms (or maybe just THE WORLD). Candidate derivative existents: weirdo fusions, impure sets, maybe tables and chairs (or maybe everything except THE WORLD).

Let’s call the idea that “derivative” as well as “fundamental” entities are (thump table) existing things the expansivist interpretation of the fundamental/derivative distinction. Call the idea that only the fundamental (thump table) exists the restrictivist interpretation of that distinction.

Jonathan’s position is that Ockham’s razor, rightly understood, tells us to minimize the number of fundamental entities. Ross’s idea (I think?) is that this is right iff one has a restrictivist understanding of the fundamental/derivative distinction. But Jonathan, pretty clearly, has an expansivist understanding of that distinction: he doesn’t want to say that the only thing that (thump table) exists is the world, just that the world is ontologically prior to everything else. So if Ross is right, his application of parsimony is in trouble.

I can see what the idea is here: after all, understanding parsimony as the instruction to minimize (thump table) existents, or to minimize the (thump table) kinds of existents, is surely close to the traditional understanding. Whereas the idea that we need only minimize (kinds of) existents of such-and-such a type seems to come a bit out of the blue, and at minimum we need some more explanation before we could accept that revision to our theoretical maxims.

However… One thing that seems important is to consider what sorts of principles of parsimony might be present in more ordinary theorizing (e.g. in the special sciences). The attraction of appealing to parsimony in metaphysics is in large part that it’s a general theoretical virtue, applicable in all sorts of areas that are paradigms of good, productive fields of inquiry. Now, theoretical virtues in the sciences are not a topic I’m in a position to speak on with authority. But one thing seems to me important in this connection: if you think that the entities of the special sciences aren’t fundamental entities, then principles of parsimony restricted to the fundamentals aren’t going to have much bite there. (NB: I think that this was raised by someone in comments on Jonathan’s paper in Boise, but I can’t remember who it was…).

If that’s right, then whether you’re an expansivist or a restrictivist about the fundamental/derivative distinction seems beside the point. Any theorist who gives a story about what the fundamentals are that’s unconstrained by what the special sciences say is going to be in trouble with the idea that principles of parsimony should be restricted to constraints on fundamental existents: for such principles of parsimony won’t then be able to get much bite on theorizing in the special sciences. I’d like to think that quarks, leptons etc. are going to populate the fundamental, rather than Jonathan’s WORLD, so this point bites me as much as it does Jonathan.

There’s plenty of room for further discussion here, particularly the interaction of the above with what you take to be evidence for some entities being fundamental. E.g. if you thought that various types of emergentism in the special sciences would be evidence for “higher level” fundamental entities, then maybe the above parsimony principle would still have application to the special sciences: it’d tell you to reduce the number of emergent entities you postulate (i.e. it’d be a methodological imperative towards reductionism).

Also, it seems to me that there is something to the thought that some entities are simply “don’t cares” when applying parsimony principles. If I’m concerned with theorizing about the behaviour of various beetles in front of me, I care about how many kinds of beetles my theory is giving me, but not about how many kinds of mathematical entities I need to invoke in formulating that theory. Now, maybe that differential attitude can be explained away by pointing to the generality of the mathematica involved (e.g. that total science is “already committed to them”). But one natural take would be to look for restrictions to principles of parsimony/Ockham’s razor, making them sensitive to the subject-matter under investigation.

To speculate wildly: If principles of parsimony do need to be sensitized in this way, and if the study of what fundamentally exists is a genuine investigation, maybe the principle of parsimony, in application to that study, really would tell us to minimize the number of, and kinds of, fundamental entities we posit.

APA return

Back in Atlanta waiting to reboard a flight to the UK. Trying not to miss the flight this time (interestingly, the plane from SF was an hour out on the “local time” it displayed on board, which might explain the previous problems).

The APA was really fun. Highlights for me included the Hudson-fest, featuring comments from Josh Parsons, Mark Heller and Michael Rae, and interesting replies to each from Hud. Also the author-meets-critics session on dialetheism which Brit mentions here. I’ve been thinking a lot about open futures following Brit’s talk on sea battle semantics, and may have some thoughts to post soon (on the plane over to Atlanta, my frantic drawing of dots and arrows trying to figure out how counterfactuals interact with open future semantics convinced my neighbour I was an astrophysicist. Must be the big axes with “time” and “reality” on them…). Andy Egan gave two really interesting papers, on fragmented minds and aesthetic disagreement, and I really enjoyed Alyssa Ney’s talk on how different theories of causation fit together (or not). And lots more nice people met and good stuff talked about!

It was fun also meeting various bloggers for the first time in the flesh.

The tale of the 14 philosophers and the limousine is already legendary, I gather (I wasn’t there).

San Francisco

San Francisco! I’m staying at a hotel with a very posh lobby, the Sir Francis Drake, just down the street from the APA venue. I’ve enjoyed an hour-long double-decker train journey, and am struck once more by the strangeness of being in a different country.

I think food may be in order, then recovery before the hard philosophical slog restarts…

To the APA

The Boise metametaphysics conference finished today. A really fun event! I gave quick versions of my comments on Ted’s naturalness paper this morning.

One thing that was kind of surprising to me is that there weren’t many people defending the sort of “realist Quinean” view that I (along with a lot of people) took to be the orthodoxy. Carnapians (of various flavours), Aristotelians, and the like were more in evidence.

I found the framework and ideas in Dave Chalmers’ “Ontological anti-realism” paper particularly stimulating. It suggests to me some nice ways of extending some of the views I have on ontic vagueness. Lots to think about.

Anyway, I’m now about to get on a plane for San Francisco, for the Pacific APA. It was very exciting seeing the Pacific for the first time as I flew in to SF on the way to Boise; I’m really looking forward to seeing the city and attending the conference.

West coast journeying

I’m currently in Atlanta airport.

I didn’t mean to be still here. A combination of tiredness, lack of care with a watch, and (I suspect) there being different timezones in different terminals, meant that I missed my connecting flight.

On the positive side, I was happily making notes on excellent metametaphysics papers while missing my flight. Still, an all-things-considered bad, I think.

But the nice people at Delta rebooked me, and (modulo a taxi journey and quite possibly sleeping at San Jose airport) my travel plans are back in the swing.

So long as I don’t miss another flight through blogging…

Probabilistic multi-conclusion validity

I’ve been thinking a bit recently about how to generalize standard results relating probability to validity to a multi-conclusion setting.

The standard result is the following (where the uncertainty of p is 1-probability of p):

An argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises is at least as great as the uncertainty of the conclusion.

It’ll help if we restate this as follows:

An argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises + the probability of the conclusion is at least 1.

Stated this way, there’s a natural generalization available:

A multi-conclusion argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises + the probabilities of the conclusions is greater than or equal to 1.

And once we’ve got it stated, it’s a corollary of the standard result (I believe).
It’s pretty easy to see directly that this works in the “if” direction, just by considering classical probability functions which only assign 1 or 0 to propositions: if the argument is invalid, take a valuation making all the premises true and all the conclusions false; the corresponding 0/1 probability function then makes the relevant sum 0 rather than at least 1.

In the “only if” direction (writing u for uncertainty and p for probability):

Consider A,B|=C,D. This holds iff A,B,~C,~D|= holds, by a standard premise/conclusion swap result; and an argument with an empty conclusion-set holds iff its premises entail the absurdity, whose uncertainty is 1. We also know that u(~C)=p(C) and u(~D)=p(D). By the standard result, then, the swapped argument holds iff, for all classical probability functions, u(A)+u(B)+u(~C)+u(~D) is greater than or equal to 1. By the identities just noted, this holds iff u(A)+u(B)+p(C)+p(D) is greater than or equal to 1, which is exactly the multi-conclusion criterion for A,B|=C,D. The argument generalizes to arbitrary cases. QED.
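For those who like to see the inequality in action, here’s a quick numerical sanity check (a sketch of my own in Python; the particular arguments A∨B |= A, B and A |= A∧B are just illustrations I’ve chosen, not anything drawn from the result itself):

```python
import itertools
import random

# Worlds: truth-value assignments to two atoms, A and B.
WORLDS = list(itertools.product([True, False], repeat=2))

def random_probability_function():
    """A classical probability function, represented as a random distribution over worlds."""
    weights = [random.random() for _ in WORLDS]
    total = sum(weights)
    return {w: wt / total for w, wt in zip(WORLDS, weights)}

def prob(pr, prop):
    """Probability of a proposition (a function from worlds to truth values)."""
    return sum(pr[w] for w in WORLDS if prop(w))

def score(pr, premises, conclusions):
    """Sum of premise uncertainties plus conclusion probabilities."""
    return (sum(1 - prob(pr, p) for p in premises)
            + sum(prob(pr, c) for c in conclusions))

# A valid multi-conclusion argument: A or B |= A, B.
valid_premises = [lambda w: w[0] or w[1]]
valid_conclusions = [lambda w: w[0], lambda w: w[1]]

# An invalid argument for contrast: A |= A and B.
invalid_premises = [lambda w: w[0]]
invalid_conclusions = [lambda w: w[0] and w[1]]

violations = 0
for _ in range(10000):
    pr = random_probability_function()
    # For the valid argument, the score never dips below 1 (up to rounding error).
    assert score(pr, valid_premises, valid_conclusions) >= 1 - 1e-9
    # For the invalid argument, it frequently does.
    if score(pr, invalid_premises, invalid_conclusions) < 1 - 1e-9:
        violations += 1

print(f"Invalid argument fell below the bound in {violations} of 10000 random trials")
```

The valid argument respects the bound for every probability function, as the general result predicts, while the invalid one drops below it whenever the probability function gives any weight to A-and-not-B.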

Naturalness in Idaho (x-post from MV)

I’m off very soon to the INPC metametaphysics conference in Boise. Many other fun people will be there (not least fellow CMM-er Andy McGonigal, fresh from a spell at Cornell).

Together with Iris Einheuser, I’m going to be responding to Ted Sider’s paper “Which disputes are substantive?”. It’s been great to have a serious think about the way that Ted thinks of this stuff, and how it relates to the Kit Fine inspired setting that I’ve been working on lately.

Anyway, the whole writing-a-response thing got way out of hand, and I’ve ended up with a 7,500 word first draft. I do think there’s a couple of substantive issues raised therein for the kind of framework (otherwise really really attractive) that he’s been pushing here and in recent work. The worry centres around quantification into the scope of Ted’s “naturalness” operator. For any who are interested, I’ve put the draft response up online.

After the INPC, I’ll be in San Fran for the Pacific APA, along with many other CMM and Leeds folks.

Fundamental and derivative truths

After a bit of to-ing and fro-ing, I’ve decided to post a first draft of “Fundamental and derivative truths” on my work in progress page.

I’ve been thinking about this material a lot lately, but I’ve found it surprisingly difficult to formulate and explain. I can see how everything fits together: I’m just not sure how best to go about explaining it to people. Different people react to it in such different ways!

The paper does a bunch of things:

  • offering an interpretation of Kit Fine’s distinction between things that are really true, and things that are merely true. (So, e.g. tables might exist, but not really exist).
  • using Agustin Rayo’s recent proposal for formulating a theory of requirements/ontological commitments in that explication.
  • putting forward a general strategy for formulating nihilist-friendly theories of requirements (set theoretic nihilism and mereological nihilisms being the illustrative cases used in the paper).
  • using this to give an account of “postulating” things into existence (e.g. sets, weirdo fusions).
  • sketching a general answer to the question: in virtue of what do our sentences have the ontological commitments they do (i.e. what makes a theory of requirements *the correct one* for this or that language?)

This is exploratory stuff: there’s lots more to be said about each of these, and plenty more issues (e.g. how does this relate to fictionalist proposals?) But I’m at a stage where feedback and discussion are perhaps the most important things, so making it public seems a natural strategy…

I’m going to be talking in more detail about the case of mereological nihilism at the CMM structure in metaphysics workshop.

Thresholds for belief

I’m greatly enjoying reading David Christensen’s Putting logic in its place at the moment. Some remarks he makes about threshold accounts of the relationship between binary and graded beliefs seemed particularly suggestive. I want to use them here to suggest a certain picture of the relationship between binary and graded belief. No claim to novelty here, of course, but I’d be interested to hear about worries about this specific formulation (Christensen himself argues against the threshold account).

One worry about threshold accounts is that they’ll make constraints on binary beliefs look very weird. Consider, for example, the lottery paradox. I am certain that someone will win, but for each individual ticket, I’m almost certain that it’s a loser. Suppose that having a degree of belief above some threshold t, with t less than 1, sufficed for binary belief. Then, by choosing a big enough lottery, we can make it that I believe the generalization (there will be a winner) while also believing, of each individual ticket, that it loses. So I believe each member of a logically inconsistent set.

This sort of situation is very natural from the graded belief perspective: the beliefs in question meet constraints of probabilistic coherence. But there’s a strong natural thought that binary beliefs should be constrained to be logically consistent. And of course, the threshold account doesn’t deliver this.

What Christensen points to are some observations by Kyburg about limited consistency results that can be derived from the threshold account. Minimally, binary beliefs are required to be weakly consistent: for any threshold above zero, one cannot believe a single contradictory proposition. But there are stronger results too. For example, for any threshold above 0.5, one cannot believe a pair of mutually contradictory propositions. One can see why this is if one remembers the following result: in a logically valid argument, the improbability of the conclusion cannot be greater than the sum of the improbabilities of the premises. For the case where the conclusion is absurd (i.e. the premises are jointly contradictory), the conclusion has improbability 1, so the sum of the improbabilities of the premises must be at least 1. Hence, when there are just two premises, at least one of them has probability no greater than 0.5.

In general, then, what we get is the following: if the threshold for binary belief is above 1-1/n, then one cannot believe each member of an inconsistent set of n propositions.
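To make the arithmetic vivid, here is a quick numerical check of that bound (a sketch of my own in Python, not anything from Christensen or Kyburg; the particular four-membered inconsistent set is just an illustration): for any coherent credence function, at least one member of a jointly inconsistent set of n propositions has credence at most 1-1/n, so a threshold above that level blocks believing them all.

```python
import itertools
import random

# Worlds: truth-value assignments to three atoms.
WORLDS = list(itertools.product([True, False], repeat=3))

def random_coherent_credence():
    """A probabilistically coherent credence function: a distribution over worlds."""
    weights = [random.random() for _ in WORLDS]
    total = sum(weights)
    return {w: wt / total for w, wt in zip(WORLDS, weights)}

def credence(cr, prop):
    return sum(cr[w] for w in WORLDS if prop(w))

# A jointly inconsistent set of n = 4 propositions:
# "at least one atom is true" together with "atom i is false" for each atom.
inconsistent_set = [lambda w: any(w)] + [lambda w, i=i: not w[i] for i in range(3)]
n = len(inconsistent_set)
bound = 1 - 1 / n  # = 0.75

for _ in range(10000):
    cr = random_coherent_credence()
    # Some member of the inconsistent set always falls at or below the bound.
    assert min(credence(cr, p) for p in inconsistent_set) <= bound + 1e-9

print(f"With n = {n}, the least credible member never exceeded {bound}")
```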

Here’s one thought. Let’s suppose that the threshold for binary belief is context dependent in some way (I mean this broadly, rather than committing to some particular, potentially controversial, semantic analysis of belief attributions). The threshold that marks the shift to binary belief can vary depending on aspects of the context. The thought, crudely put, is that there’ll be the following constraint on what thresholds can be set: in a context where n propositions are being entertained, the threshold for binary belief must be at least 1-1/n.

There is, of course, lots to clarify about this. But notice that now, relative to every context, we’ll get logical consistency as a constraint on the pattern of binary belief (assuming that to believe that p is in part to entertain that p).

[As Christensen emphasises, this is not the same thing as getting closure to hold in every context. Suppose we consider the three propositions A, B, and A&B. Consistency means that we cannot accept the first two and accept the negation of the last. And indeed, with the threshold set at 2/3, we get this result. But closure would tell us that in every situation in which we believe the first two, we should believe the last. And it’s quite consistent to believe A and B (say, by having credence 2/3 in each) and to fail to believe A&B (say, by having credence 1/3 in this proposition). Probabilistic coherence isn’t going to save the extendability of beliefs by deduction, for any reasonable choice of threshold.

Of course, if we allow a strong notion of disbelief or rejection, such that someone disbelieves that p iff their uncertainty of p is past the threshold (the same threshold as for belief), then we’ll be able to read off from the consistency constraint that in a valid argument, if one believes the premises, one should abandon disbelief in the conclusion. This is not closure, but perhaps it might sweeten the pill of giving up on closure.]
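For concreteness, here is one coherent credence function realizing the numbers in the example above (my own illustration, in Python; the world-weights are just one way of getting those credences):

```python
from fractions import Fraction

# A coherent credence function over the four A/B worlds.
worlds = {
    ("A", "B"): Fraction(1, 3),       # A true, B true
    ("A", "not-B"): Fraction(1, 3),   # A true, B false
    ("not-A", "B"): Fraction(1, 3),   # A false, B true
    ("not-A", "not-B"): Fraction(0),  # both false
}

cred_A = worlds[("A", "B")] + worlds[("A", "not-B")]   # 2/3
cred_B = worlds[("A", "B")] + worlds[("not-A", "B")]   # 2/3
cred_A_and_B = worlds[("A", "B")]                      # 1/3

threshold = Fraction(2, 3)
print(cred_A >= threshold, cred_B >= threshold, cred_A_and_B >= threshold)
# True True False: A and B clear the threshold, but their conjunction does not.
```

So the failure of closure under conjunction coexists with perfectly coherent credences, just as the bracketed point says.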

Without logical consistency being a pro tanto normative constraint on believing, I’m sceptical that we’re really dealing with a notion of binary belief at all. Suppose this is accepted. Then we can use the considerations above to argue (1) that if the threshold account of binary belief is right, then thresholds (if not extreme) must be context dependent, since for no fixed choice of threshold less than 1 will consistency be upheld; and (2) that there’s a natural constraint on thresholds in terms of the number of propositions entertained.

The minimal conclusion, for this threshold theorist, is that the more propositions are entertained, the harder it is for the corresponding credences to count as binary beliefs. Consider the lottery paradox construed this way:


1 loses

2 loses

…

N loses

So: everyone loses

Present this as the following puzzle: We can believe all the premises, and disbelieve the conclusion, yet the latter is entailed by the former.

We can answer this version of the lottery paradox using the resources described above. In a context where we’re contemplating this many propositions, the threshold for belief is so high that we won’t count as believing the individual premises. But we can explain why the argument seems so compelling: entertain each premise individually, and we will believe it (our credences remaining fixed throughout).

Of course, there are other versions of the lottery paradox that we can formulate, e.g. versions relying on closure, for which we have no answer. Or at least, our answer is just to reject closure as a constraint on rational binary beliefs. But with a contextually variable threshold account, as opposed to a fixed threshold account, we don’t have to retreat any further.