Nominalizing statistical mechanics

Frank Arntzenius gave the departmental seminar here at Leeds the other day. Given that I’ve been spending quite a bit of time recently thinking about the Fieldian nominalist project, it was really interesting to hear about his updating and extension of the technical side of the nominalist programme (he’s working on extending it to differential geometry, gauge theories and the like).

One thing I’ve been wondering about is how theories like statistical mechanics fit into the nominalist programme. These were raised as a problem for Field in one of the early reviews (by Malament). There are a couple of interesting papers recently out in Philosophia Mathematica on this topic, by Glen Meyer and by Mark Colyvan and Aidan Lyon. Now, one of the assumptions of that discussion, as far as I can tell, is that even sticking with the classical, Newtonian framework, the Field programme is incomplete, because it fails to “nominalize” statistical mechanical reasoning (in particular, the stuff naturally represented by measures over phase space).

Now, one thing that I’ll mention just to set aside is that some of this discussion would look rather different if we increased our nominalistic ontology. Suppose that reality, Lewis-style, contains a plurality of concrete, nominalistic space-times—at least one for each point in phase space (that’ll work as an interpretation of phase space, right?). Then the project of postulating synthetic qualitative probability structure over such worlds, from which a representation theorem for the quantitative probabilities of statistical mechanics could be proved, looks far easier. Maybe it’s still technically or philosophically problematic; just a couple of thoughts on this. From the technical side, it’s probably not enough to show that the probabilities can be represented nominalistically—we want to show how to capture the relevant laws. And it’s not clear to me what a nominalistic formulation of something like the past hypothesis looks like (BTW, I’m working with something like the David Albert picture of statistical mechanics here).

Philosophically, what I’ve described looks like a nominalistic version of primitive propensities, and there are various worries about treating probability in this primitive way (e.g. why should information about such facts constrain credences in the distinctive way information about chance seems to?). I doubt Field would want to go in for this sort of ontological inflation in any case, but it’d be worth working through it as a case study.
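For concreteness, the sort of representation theorem in play here would be in the style of de Finetti, Savage and Scott for comparative probability. Schematically (this is my gloss, not anything from Field):

```latex
% Schematic: suppose the nominalist can ground a qualitative
% "at least as likely as" ordering $\succeq$ over (suitable classes of)
% the concrete worlds, satisfying Scott-style axioms (totality,
% non-triviality, additivity conditions). The representation theorem
% then delivers a probability measure $P$ such that
\[
  A \succeq B \iff P(A) \ge P(B) \quad \text{for all } A, B.
\]
% The quantitative probabilities of statistical mechanics would be
% read off $P$; only the qualitative $\succeq$ is primitive.
```

The point in the main text is that even granting this, one still owes a nominalistic formulation of the laws (and of the past hypothesis), not just a representation of the probabilities.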

Another idea I won’t pursue is the following: Field in the 80s was perfectly happy to take a (logical) modality as primitive. From this, and from nominalistic formulations of Newtonian laws, presumably a nomic modality could be defined. Now, it’s one thing to have a modality, another thing to earn the right to talk of possible worlds (or physical relations between them). But given that phase space looks so much like the space of nomically possible worlds (or time-slices thereof), it would be odd not to look carefully at whether we can use nomic modalities to help us out.

But even setting these kinds of resources aside, I wonder what the rules of the game are here. Field’s programme really has two aspects. The first is the idea that there’s some “core” nominalistic science, C. The second is the claim that mathematics, and standardly mathematized science, is conservative over C. Now, if the core were null, the conservativeness claim would be trivial, but nobody would be impressed by the project! But Field emphasizes on a number of occasions that the conservativeness claim is not terribly hard to establish for a powerful block of applied mathematics (things that can be modelled in ZFCU, essentially).
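For readers who want the conservativeness claim spelled out, here is the usual schematic statement (my paraphrase, ignoring the refinements Field needs about restricting quantifiers to non-mathematical entities):

```latex
% A mathematical theory $S$ is conservative over nominalistic premises
% just in case adding it yields no new nominalistic consequences:
% for any nominalistically stated assertion $A$ and body of
% nominalistic assertions $N$,
\[
  N \cup S \models A \quad\Longrightarrow\quad N \models A .
\]
% Taking $N$ to be the nominalistic core $C$: the mathematized
% superstructure proves nothing about the core that the core could not
% already deliver. Mathematics is useful (shorter proofs, tractability)
% but not indispensable in principle.
```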

(Actually, things are more delicate than you’d think from Science without Numbers, as emerged in the JPhil exchange between Shapiro and Field. The upshot, I take it, is that if (a) we’re allowed second-order logic in the nominalistic core, or (b) we can argue that the best justified mathematized theories aren’t quite the usual versions but systematically weakened versions, then the conservativeness results go through.)

As far as I can tell, we can have the conservativeness result without a representation theorem. Indeed, for the case of arithmetic (as opposed to geometry and Newtonian gravitational theory), Field relies on conservativeness without giving anything like a representation theorem. I think, therefore, that there’s a heel-digging response to all this open to Field. He could say that phase-space theories are all very fine, but they’re just part of the mathematized superstructure—there’s nothing in the core which they “represent”, nor do we need there to be.

Now, maybe this is deeply misguided. But I’d like to figure out exactly why. I can think of two worries: one based on loss of explanatory power; the other on the constraint to explain applicability.

Explanations. One possibility is that nominalistic science without statistical mechanics is a worse theory than mathematized science including phase-space formulations—in a sense relevant to the indispensability argument. But we have to treat this carefully. Clearly, there are all sorts of ways in which mathematized science is more tractable than nominalized science—that’s Field’s explanation for why we indulge in the former in the first place. One objective of the Colyvan and Lyon article cited earlier is to give examples of the explanatory power of statistical mechanical explanations, so that’s one place to start looking.

Here’s one thought about that. It’s not clear that the sort of explanations we get from statistical mechanics, cool though they may be, are of a relevantly similar kind to the “explanations” given in classical mechanics. So one idea would be to try to pin down this difference (if there is one) and figure out how it relates to the “goodness” relevant to indispensability arguments.

Applicability. The second thought is that the “mere conservativeness” line is appropriate either where the applicability of the relevant area of mathematics is unproblematic (as perhaps in arithmetic) or where there aren’t any applications to explain (the higher reaches of pure set theory). In other cases, like geometry, there is a prima facie challenge to tell a story about how claims about abstracta can tell us stuff about the world we live in. And representation theorems scratch this itch, since they show in detail how particular this-worldly structures can exactly call for a representation in terms of abstracta (so in some sense the abstracta are “coding for” purely nominalistic processes—“intrinsic processes” in Field’s terminology). Lots of people unsympathetic to nominalism are sympathetic to representation theorems as an account of the application of mathematics—or so the folklore says.
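The geometric case supplies the model here. Schematically (again my gloss of the Hilbert/Tarski-style results Field draws on, not his exact formulation):

```latex
% Given qualitative betweenness ($\mathrm{Bet}$) and congruence
% ($\mathrm{Cong}$) relations on space-time points satisfying suitable
% synthetic axioms, there is a coordinate map
\[
  \varphi : \text{space-time} \to \mathbb{R}^4 ,
\]
% unique up to the relevant transformations, under which qualitative
% relations hold iff their numerical images do, e.g.
\[
  \mathrm{Cong}(a,b,c,d) \iff
    \lVert \varphi(a)-\varphi(b) \rVert = \lVert \varphi(c)-\varphi(d) \rVert .
\]
% This is the sense in which the abstracta "code for" intrinsic,
% nominalistic structure.
```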

But, on the one hand, statistical mechanics does appear to feature in explanations of macro-phenomena; and, on the other, the reason that talking about measures over some abstract “space” can be relevant to explaining facts about ripples on a pond is at least as unobvious as the applications of geometry.

I don’t have a very incisive way to end this post. But here’s one thought, supposing the real worry is one of accounting for applicability rather than explanatory power. Why think that in these cases applicability should be explained via representation theorems? In the case of geometry, Newtonian mechanics etc., it’s intuitively appealing to think there are nominalistic relations that our mathematized theories are encoding. Even if one is a platonist, that seems like an attractive part of a story about the applicability of the relevant theories. But when one looks at statistical mechanics, is there any sense that its applicability would be explained if we found a way to “code” within Newtonian space-time all the various points of phase space (and then postulate relations between the codings)? It seems like this is the wrong sort of story to be giving here. That thought goes back, I guess, to the point raised earlier in the “modal realist” version: even if we had the resources, would primitive nominalistic structure over some reconstruction of phase space really give us an attractive story about the applicability of statistical mechanical probabilities?

But if representation theorems don’t look like the right kind of story, what is? Can the Lewis-style “best theory” theory of chance, applied to the statistical mechanical case (as Barry Loewer has suggested), be wheeled in here? Can the Fieldian nominalist just appeal to (i) conservativeness and (ii) the Lewisian account of how the probability-invoking theory and laws get fixed by the patterns of nominalistic facts in a single classical space? Questions, questions…

