Loewer on laws

In “Laws and Natural Properties” (Philosophical Topics 2007—I can’t find an online copy to link to) Barry Loewer argues we should divorce Lewis’s Humean account of laws from its appeal to natural properties.

The basic Lewisian idea is something like this. Take all the truths about world w describable in a language NL whose basic predicates pick out perfectly natural properties. There are various deductive systems with true theorems, formulated in this language. Some are simpler than others, some are more informative. The best system optimizes simplicity and strength. The laws are the generalizations, equations, or whatever, entailed by this best system. (This is the basic case—his distinctive treatment of chance requires some tweaks to the setup).

Why the focus on NL? Why not look at any old system in whatever language you like, and pick the simplest/most informative? Lewis worries that the account would then trivialize. Consider a language with a basic predicate F that is interpreted as “being such that T is true”. The single axiom “(Ex)Fx” is then, thinks Lewis, maximally simple, and since its entailments are the same as those of T, it’s just as informative as T. So simplicity would be no constraint at all, given an appropriate choice of language. What NL does is provide a level playing field: we force the theories to be presented in a common base language, which allows us to compare their complexity fairly.
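
To spell the trivialization out a little (a quick gloss, on the assumption that F is stipulated to apply to a thing just in case T is true): at any world w with a non-empty domain,

    “(Ex)Fx” is true at w   iff   something is F at w   iff   T is true at w.

So the one-axiom system and T are true at exactly the same worlds, and hence equally informative on the purely modal measure, while the former is about as syntactically simple as a theory gets.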

Loewer notes that the above argument seems pretty questionable. Sure, “informativeness” might be understood just as the modal entailments of the theory—roughly, a theory is more informative the smaller the region of logical space it is true at. But is that the right way to understand informativeness? After all, a sensible-seeming physical theory could be applied to some description of a physical situation and produce specific predictions—we can extract a whole range of syntactic consequences of the deductive system relevant to individual situations. Isn’t something like this what we’re after?

Loewer thinks that the right way to extend the Humean project is to take Lewis’s “simplicity and strength” as placeholders for whatever those virtues are that the scientific tradition does in fact value. So he thinks that minimally, if we’re evaluating theories for informativeness, “the information in a theory needs to be extractable in a way that connects with the problems and matters that are of scientific interest”.

I’m not quite sure I understand the next move in the paper. Loewer moves on to say: “Lewis’s argument does show that [Humeanism about laws] requires a preferred language”. That’s a bit surprising, given the above! He goes on to identify the preferred language as scientific English, SL, or its proper successors, SL+. Now, one way to read this is that Loewer is here restricting the languages in which the competing theories can be formulated, not to NL as Lewis did, but to SL or any of the SL+. If we take this line, we can stick with Lewis’s original modal understanding of informativeness, I guess: trivialization is ruled out by the same basic Lewisian strategy.

There’s a different way of understanding what’s going on, though (and maybe it’s what Loewer intends). This is to think that the way we should evaluate the informativeness of T is in terms of “truths” that are extractable (logically entailed, for example) from T—the truths that constitute the answers to “problems and matters of scientific interest”. But these truths have to be formulated in a particular language—that’s the cost of the shift from modal characterizations of informativeness to broadly linguistic ones. So as well as the question of what language the theory is in, there’s also the question of the language for presenting the data against which the theory’s virtues are evaluated. Nothing requires the two languages to coincide, and we could insist on a particular data-language while leaving the theory-language open (of course, if the data is to be extractable from the theory in a syntactic sense, then we probably need to add a bunch of coordinative definitions to the theory to link the two vocabularies).

One nice thing about the second way of going is that we don’t have to build in the assumption that the One True system of laws is humanly understandable, or that scientific English or its successors will be adequate to formulate the laws. The first way (where laws are to be formulated in SL+) requires a certain kind of optimism about the cognitive tractability of the underlying explanatory patterns in a world. Lewis’s original theory didn’t require this optimism—NL immediately picks out the fundamental structure of whatever world we’re concerned with, whether or not inhabitants of that world are in a position to figure out what those fundamentals are. Maybe we feel entitled to be optimistic about the actual world—but the Humean account is supposed to apply to arbitrary possible worlds, and surely there are some possible situations out there where SL+ won’t cut it, and some other vocabulary would be called for.

So I prefer the second interpretation of Loewer’s proposal, on which SL+ is the data-language, but the language of theory could be quite different. This suffices, I think, to rebut Lewis’s worry about trivialization. But it allows that in some scenarios the best system explaining homely facts is itself quite alien.

A halfway-house between this version of Humeanism and Lewis’s would have the data-language be NL rather than SL+, but allow the language of the final theory to vary. The obvious advantage of this is that it removes the dependence on the contingencies of our scientific language in fixing the laws of arbitrary worlds—strange alien possibilities filled with protoplasm or whatever just might not have a very interesting description in the terms of a language developed in response to our actual situation. Appealing to NL for the data-language tailors informativeness to a description of the world appropriate to the basic features of that world, rather than using one developed in response to the world we happen to find ourselves in.

Let’s consider an example. Suppose that the natural properties are Fieldian, rather than Lewisian. The fundamental features of the world are relations like congruence and betweenness (and similar) that fix the spatio-temporal structure of the world and the mass distribution across it. Now, Field’s “nominalized physics” aims to articulate versions of the standard Newtonian equations in this setting—without appeal to standard resources such as the relation “having mass of x kg”, which brings in an appeal to abstracta. Field thinks this “synthetic” formulation should appeal even to those who do not share his qualms about the existence of numbers. Let’s suppose we take his proposal in this spirit, so that whatever other problems there may be with the mathematized physics, the worry isn’t that it’s false.

Are the usual mathematized Lagrangian formulations of Newtonian mechanics laws in this Fieldian world? On the original Lewisian proposal about laws, the best system should be formulated in perfectly natural terms—which here means the Fieldian synthetic relations. The natural thought is that the Fieldian nominalistic formulation wins this competition, and its deductive consequences won’t include the usual mathematized equations. So, presumably, the mathematized Lagrangian equation won’t be a law. On the other hand, if we go for either of the tweaked versions above, our candidates for “best theory” needn’t be given in this metaphysically privileged vocabulary. Given appropriate coordinating links between the two vocabularies, the standard mathematized formulations will entail all the data about mass-congruence and the rest, and so count as informative about the Fieldian data (whether that data is formulated in the Fieldian NL or in SL). And (you might argue) going this way enables gains in simplicity, making it the winner in the fight for best theory. So the usual, mathematically laden, Lagrangian may yet be a law. Likewise, a Hamiltonian formulation of mechanics could still be the winner in the race for best theory, and the Hamiltonian equation a law, without us having to claim that it is the simplest around when formulated in the perfectly natural, synthetic terms. More generally, we’re liberated to argue that the basic principles of statistical mechanics should feature in the winning theory, even if its terms are a long way from perfectly natural—so long as they add enough information about (for example) the synthetic perfectly natural truths to justify the extra complexity of adding them in.
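
To get a feel for what such a coordinating link might look like (this is only an illustrative sketch in the spirit of Field’s representation theorems, not his actual axioms), one clause might tie the synthetic mass-congruence relation to a numerical mass function m:

    Mass-Cong(x, y, z, w)   iff   |m(x) − m(y)| = |m(z) − m(w)|

together with an existence-and-uniqueness claim for m of the usual representation-theorem sort. With clauses like this in place, the mathematized theory deductively yields the synthetic facts about mass-congruence (and similarly for betweenness and the spatio-temporal relations).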

Some of the use that the Lewisian account of laws is put to goes over more smoothly, I think, if the data-language is NL rather than SL. Lewis famously wanted to use the Humean framework to help understand chance. His underlying metaphysics had no primitive chances—simply a distribution of particular outcomes (e.g. there’s an atom at one location, and the results of atomic decay at the next, and a particular statistical distribution among events of this type across space-time, but no primitive “propensity” relating the tokens). On the original account, Lewis liberalized his requirements for the vocabulary of candidate theories, allowing an initially uninterpreted chance operator. Given an appropriate understanding of the “fit” between a chancy theory and a non-chancy world, he thought that chancy theories would win the battle of simplicity and informativeness, grounding chancy laws and thereby the truth of chance talk.
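
(Lewis’s own rough gloss of “fit”, from memory: a chancy system S fits world w better the higher the probability its chances assign to w’s total course of history H_w, so schematically

    fit(S, w) is measured by Ch_S(H_w)

and fit plays for chancy systems something like the role that truth of the theorems plays for non-chancy ones.)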

It becomes somewhat tricky to replicate this idea if the data-language is construed to be SL+, as Loewer suggests. Take a world that’s set up with GRW quantum mechanics, with primitive chancy evolution of the wave function. Now, presumably SL+ contains chance talk, and so the data against which theories are to be measured for informativeness includes truths about chance. The original idea was that we could characterize, non-circularly, what made a chance-invoking scientific theory “selected”. But now it turns out that one of the ingredients of selection—informativeness—requires appeal to chance. If the data-language in question were NL rather than SL, we wouldn’t face this obstacle.

Overall, I’m not attracted to the version of Humeanism where competitors for best theory must be formulated in SL or SL+—it seems excessively optimistic to think that the laws of a wide enough range of worlds can be formulated in these terms. The version where we appeal to SL+ only in evaluating theories for informativeness looks much more promising. Even so, I’m not sure what we gain from appealing to SL+ rather than NL in the evaluation. Sure, if you were sceptical about appeal to the perfectly natural in the first place, you might be attracted to this as a decent fallback. But otherwise I don’t see what speaks in favour of it.

6 responses to “Loewer on laws”

  1. Do you think the general method validates the idea that the Sheffer stroke is of logical significance? When we restrict to NL, formulating regular first order logic in terms of the Sheffer stroke (one might argue) enables gains in simplicity, without corresponding costs in informativeness. For fans of logical laws being analogous to natural laws (Schaffer?), wouldn’t this put Shefferian laws on a par with Lagrangian ones?

  2. Interesting question. I guess it really depends on what you take to contribute to simplicity. My own fix on it in these contexts is usually something like: the syntactic complexity of the theory, formulated in the relevant language. By that measure, a Sheffer-stroke formulation is going to be pretty bad, I’d’ve thought (the Bostock intermediate logic book has a sample axiomatization of propositional logic in terms of the Sheffer stroke—it’s horrendous). But if we thought of simplicity as including “parsimony” as well as “elegance”, then restricting to the Sheffer stroke would be a contribution to ideological parsimony. Loewer’s idea is that we read off the theoretical virtues from “the scientific tradition”, so on his view, whether this sort of parsimony matters would be settled by figuring out whether it’s something the best science cares about. But even if it does, surely complexity of the theory matters too, and so the question would be about what theory/language strikes the best balance—I wouldn’t have thought we’d have a straight route to concluding that “1 connective good, 2 connectives bad”.

  3. Yeah, I was actually thinking of adding the natural deduction type rules for the Sheffer stroke in a zero-axiom system, which don’t seem as horrendous. I was thinking that the fundamental features of the world didn’t include the reference of logical constants, so that logical rules didn’t come out well on the NL-only scoring, but that adding the intro, elim, substitution, etc. rules to the system enables gains in simplicity without costs in informativeness – for example, by allowing us to in effect say that every F is G, rather than saying Fa, Ga, Fb, Gb… for all the objects there are. I wasn’t really thinking of the strikes-us-as-mathematically-elegant-and-easy-to-use-for-pen-and-paper-proofs kind of simplicity, but more of parsimony crossed with an easiest-to-do-string-theory-on-super-computers-only-using-NAND-gates kind of simplicity. But there are obviously issues here.

  4. Back of the envelope, it looks to me like we can fix the truth-table for P|Q via:

    (Intro) From P => F(Q) infer P|Q
    (Elim) From P|Q infer P => F(Q)

    where => is a metalinguistic inference ticket that acts like a material conditional and F is falsehood. (This obviously uses talk of good inference, truth, etc., but I take it that the linguistic conception will need these as part of best theory in any case). If, following Peacocke, we stipulate that the semantic value has to be the strongest binary truth-function that validates the intro rule (or the weakest that validates elim), we can say: if P is F, then the premise is T, so the output must be T; if Q is F, then the premise is T, and so the output must be T; and if both P and Q are T, then the output should be F, on pain of not being the strongest such function. (There’s a brute-force check of this at the end of this comment.)

    Say we start off with NL as the data language, and the logical constants don’t refer. We want to be able to describe the (perhaps infinitary) data simply without losing informational potential. Can’t we then just (i) add the two rules above, and (ii) use the standard logical treatments of &, v, etc. in terms of P|Q as abbreviatory definitions, and use the standard semantic equivalences (e.g. not P = P|P) to in effect underwrite the relevant intro and elim rules for those other operators? Maybe it could be argued that these latter rules should be treated as derived laws, but I don’t see why they couldn’t reasonably be taken to lack nomic standing, given the simplicity and elegance of the two basic rules. The way I was thinking of it is that they would stand to the Sheffer rules like special science ‘laws’ stand to the Lagrangian laws, for Humeans who think only the physical is suitably lawlike. Neither the Sheffer rules nor the Lagrangian laws would be NL-stateable, but they would both be a class above the useful predictive regularities.

    I see this more as a bug than a feature, btw, in case you thought this was my taking the neo-Tractarianism too far!
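
    For what it’s worth, here is the brute-force check mentioned above, as a little Python sketch (the setup and helper names are just mine): it enumerates the sixteen binary truth-functions, keeps those that validate (Intro) when => is read materially, and confirms that the logically strongest survivor is NAND; it then checks the standard Sheffer definitions of not, & and v against the classical truth-tables.

    from itertools import combinations, product

    valuations = list(product([False, True], repeat=2))   # the four (P, Q) assignments

    def premise(P, Q):
        # Premise of (Intro): "P => F(Q)", read materially, i.e. "if P then Q is false".
        return (not P) or (not Q)

    # Represent a binary truth-function by the set of assignments at which it is true.
    all_functions = [set(s) for r in range(5) for s in combinations(valuations, r)]

    # A function validates (Intro) iff it is true at every assignment where the premise is.
    validating = [f for f in all_functions
                  if all(v in f for v in valuations if premise(*v))]

    # The logically strongest validator is the one true at the fewest assignments.
    strongest = min(validating, key=len)
    nand = {v for v in valuations if not (v[0] and v[1])}
    print(strongest == nand)   # True: the strongest validator is NAND

    # The standard Sheffer definitions recover the other connectives:
    def stroke(P, Q): return not (P and Q)
    assert all(stroke(P, P) == (not P) for P, Q in valuations)                          # not P = P|P
    assert all(stroke(stroke(P, Q), stroke(P, Q)) == (P and Q) for P, Q in valuations)  # P & Q
    assert all(stroke(stroke(P, P), stroke(Q, Q)) == (P or Q) for P, Q in valuations)   # P v Q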

  5. Well, even if the logic can be formulated neatly, expressing claims in that notation will explode in length (I vaguely recall a recent Kripke paper on definite descriptions and the Sheffer stroke that may be relevant here). So it seems to me there are plenty of things in the region of “simplicity” that we could say against a Sheffer-stroke based formulation of the theory. I do see that informativeness + ideological parsimony alone would create pressure towards it (and it’s worth bearing in mind how excited Russell was with the discovery of the Sheffer stroke—he thought it some kind of theoretical advance, though I never really got why). But that it’d be an unattractive result maybe tells us that those two virtues alone can’t be the end of the story, and we need something more like elegance. (For the Humean about laws, I don’t see any problem with appeal to elegance—even if we think, Melia-style, that it’s not the best guide to ontology.)

  6. Yeah, I was thinking that the explanation in basic (NL) physical terms + (non-NL) mathematical vocabulary of why the square peg didn’t go through the round hole might be pretty explosive too, without that driving the Humean to hold that it was a law that square pegs don’t go into round holes. I guess the question is where the elegance comes in. I was thinking that we had three levels of notational standing – notation that expresses NL values, notation that helps us formulate simple and powerful laws (at this stage we add Intro and Elim and math to the physical principles), and notation that is merely notationally convenient (so we write down the claims using v, &, deflationary truth, parentheses, description operators, etc.). Saying that the theory language can be richer than the data language doesn’t seem to commit us to treating all expressive aspects of the theory language that go beyond NL as on a par.
