One role this blog is playing is allowing me to put down thoughts before I lose them.
So here’s another idea I’ve been playing with. If you think about the literature on vagueness, it’s remarkable that each of the main players seems to be broadly reductionist about vagueness. The key term here is “definitely”. The Williamsonian epistemicist reduces “definitely” to a concept constructed out of knowability. The supervaluationist typically appeals to semantic indecision: on one reading, this reduces vagueness to semantic facts; on another, to metasemantic facts concerning the link between semantic facts and their subvening base. Things are a little less clear with the degree theorist, but if “definite truth” is identified with “truth to degree 1”, then what they’re doing is again reducing vagueness to semantic facts.
If you think of the structure of the debate like this, then it makes sense of some of the dialectic on higher-order vagueness. For example, if vagueness is nothing but semantics, then the question immediately arises: what about those cases where semantic facts themselves appear to be vague? The parallel question for the epistemicist is: what about cases where it’s vague whether such-and-such is knowable? The epistemicists look like they’ve got a more stable position at this point, though exactly why this is so is hard to spell out.
Consider other debates, e.g. in the philosophy of modality. Sure, there are reductionist views: Lewis wanting to reduce modality to what goes on in other concrete space-times; people who want to reduce it to a priori consistency; and so on. But a big player in that debate is the modalist, who just takes “possibility” and “necessity” as primitive, and refuses to offer a reductive story.
It seems to me pretty clear that a position analogous to modalism should be a central part of the vagueness literature; but I’m not aware of any self-conscious proponents of this position. Let me call it “primitivism” about vagueness. I think that perhaps some self-described semantic theorists would be better classified as primitivists.
At the end of ch. 5 of the “Vagueness” book, Tim Williamson has just finished beating up on traditional supervaluationism, which equates truth with supertruth. He then briefly considers people who drop that identification. Here’s my take on this position. Proponents say that there’s a single precisification of our language which is the intended one, but it is (semantically) vague which one it is. Truth is truth on the intended precisification; definite truth is defined to be truth on all the precisifications which aren’t determinately unintended. Definite truth (supertruth) and truth come apart. From a logical point of view, this position is entirely classical: it satisfies bivalence, and so looks like it thereby avoids many of Williamson’s objections to supervaluationism.
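The definitions just sketched can be put semi-formally. This is just my own rough notation, not the position’s official formulation; the symbols \(p^{*}\) and \(A\) are mine:

```latex
% p^* : the (vaguely specified) intended precisification.
% A   : the set of precisifications that are not determinately unintended.
\begin{align*}
  \mathrm{True}(\varphi) &\;\Longleftrightarrow\; \varphi \text{ is true on } p^{*} \\
  \mathrm{Definitely}(\varphi) &\;\Longleftrightarrow\; \varphi \text{ is true on every } p \in A
\end{align*}
% Since p^* is in A, definite truth entails truth; but when A has more
% than one member, \varphi can be true on p^* while false on some other
% p in A, so truth and definite truth come apart.
```

Note that bivalence is preserved: every sentence is either true or false on \(p^{*}\), even when it is vague which precisification \(p^{*}\) is.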
I think Williamson puts exactly the right challenge to this line. In what sense is this a semantic theory of vagueness? After all, this theorist hasn’t characterized “Definitely” in semantic terms: rather, she has characterized “Definitely” using that very notion again in the metalanguage. One might resist this, claiming that “Definitely” should be defined using the term “admissible precisification” or some such. But then one wonders what account could be given of “admissible”: it plays no role in defining semantic notions such as “true” or “consequence” for this theorist. What sense can be made of it?
I think the challenge can be met by metasemantic versions of supervaluationism, which give a substantive theory of what makes a precisification admissible. I take that to be something like the McGee/McLaughlin line, and I spent a chapter of my thesis trying to lay out precisely what was involved. But that’s another story.
What I want to suggest now is that primitivism about vagueness gives us a genuinely distinct option. This accepts Williamson’s contention that when we drop the identification of supertruth with truth, “nothing articulate” remains of the semantic theory of vagueness. But it questions the idea that this should lead us towards epistemicism. Let’s just take determinacy (or the lack of it) as a fundamental part of reality, and then use it in constructing theories that make sense of the phenomenon of vagueness. Of course, there’s nothing positive this theorist has to say that distinguishes her from reductive rivals such as the epistemicist; but she has plenty of negative things to say, disclaiming various reductive theses.
I have to say I tried to follow this, but you completely lost me somewhere in there…
On re-reading, I did kind of jump in the deep end in this post. I’m going to write this stuff up, so there might be something a little more first-principles orientated coming along soon.