I’ve been asked to write a survey paper on vagueness. It can’t be longer than 6000 words. And that’s a pretty big topic.
I’ve been wondering how best to get some feedback on whether I’m covering the right issues, and, of course, whether what I’m doing is fair and accurate. So what I thought is that I’d post up chunks of my first draft of the paper here—maybe 1000 words at a time, and see what happens. So comments *very much* welcomed, whether substantive or picky.
Basically, the plan for the paper is that it be divided into five sections (it’s mostly organized around the model theory/semantics, for reasons to do with the venue). So first I have a general introduction to the puzzles of vagueness, of which I identify two: effectively, soriticality, and borderline cases. Then I go on to say something about giving a semantics for vagueness in a vague metalanguage. The next three sections give three representative approaches to “semantics”, and their take on the original puzzles. First, classical approaches (taking Williamsonian epistemicism as representative). Second, something based on Fine’s supervaluations. And third, many valued theories. I chose to focus on Field’s recent stuff, even though it’s perhaps not the most prominent, since it allows me to have some discussion of how things look when we have a translate-and-deflate philosophy of language rather than the sort of interpretational approach you often find. Finally, I have a wrap up that mentions some additional issues (e.g. contextualism), and alternative methodologies/approaches.
So that’s how I’ve chosen to cut the cake. Without more ado, here’s the first section.
Vagueness Survey Paper. Part I.
The puzzles of vagueness
Take away grains, one by one, from a heap of rice. At what point is there no longer a heap in front of you? It seems hard to believe that there’s a sharp boundary – a single grain of rice whose removal turns a heap into a non-heap. But if removing one grain can’t achieve this, how can removing a hundred do so? It seems small changes can’t make a difference to whether or not something is a heap; but big changes obviously do. How can this be, since big changes are nothing but small changes chained together?
Call this the “little by little” puzzle.
Pausing midway through removing grains from the original heap, ask yourself: “is what I have at this moment a heap?” At the initial stages, the answer will clearly be “yes”. At the late stages, the answer will clearly be “no”. But at intermediate stages, the question will generate perplexity: it’s not clearly right to say “yes”, nor is it clearly right to say “no”. A hedged response seems better: “it sorta is and sorta isn’t”, or “it’s a borderline case of a heap”. Those are fine things to say, but they’re not a direct answer to the original question: is this a heap? So what is the answer to that question when confronted with (what we can all agree to be) a borderline case of a heap?
Call this the “borderlineness” puzzle.
Versions of the “little by little” and “borderlineness” puzzles are ubiquitous. As hairs fall, one by one, from the head of an aging man, at what point is he bald? How could losing a single hair turn a non-bald man bald? What should one say of intermediate “borderline” cases? Likewise for colour terms: adjacent patches in a series of colour patches may be indiscriminable from one another (put two side by side and you couldn’t tell them apart). Yet if the patches vary slightly in the wavelength of light they reflect, chaining enough such steps might take one from pure red to pure yellow.
As Peter Unger (cite) emphasized, we can extend the idea further. Imagine an angel annihilating the molecules that make up a table, one by one. Certainly at the start of the process annihilating one molecule would still leave us with a table; at the end of the process we have a single molecule—no table that! But how could annihilating a single molecule destroy a table? It’s hard to see what terms (outside mathematics) do not give rise to these phenomena.
The little by little puzzle leads to the sorites paradox (from the Greek “soros”, meaning “heap”). Take a line of 10001 adjacent men, the first with no hairs, the last with 10000 hairs, with each successive man differing from the previous by the addition of a single hair (we call this a sorites series for “bald”). Let “Man N” name the man with N hairs.
Obviously, Man 0 is bald, and Man 10000 is not bald.
Now consider the following collection of horrible-sounding claims:
(1): Man 0 is bald, and Man 1 is not bald.
(2): Man 1 is bald, and Man 2 is not bald.
…
(10000): Man 9999 is bald, and Man 10000 is not bald.
It seems that each of these must be rejected, if anything in the vicinity of the “little differences can’t make a difference” principle is right. But if we reject the above, surely we must accept their negations (which, in fact, capture the thought that a single hair can’t make a difference to baldness):
(1*): it is not the case that: Man 0 is bald, and Man 1 is not bald.
(2*): it is not the case that: Man 1 is bald, and Man 2 is not bald.
…
(10000*): it is not the case that: Man 9999 is bald, and Man 10000 is not bald.
But given the various (N*), and the two obvious truths about the extreme cases, a contradiction follows.
One way you can see this is by noting that each (N*) is (classically) equivalent to the material conditional reading of:
(N**) if Man N-1 is bald, then Man N is bald
Since Man 0 is bald, a series of Modus Ponens inferences allows us to derive that Man 10000 is bald, contrary to our assumptions.
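Writing B(n) to abbreviate “Man n is bald” (an abbreviation of my own, purely for display), the Modus Ponens chain can be set out explicitly:

```latex
% B(n) abbreviates "Man n is bald".
% Each conditional (N**) is classically equivalent to (N*):
%   \neg(B(n-1) \wedge \neg B(n)) \equiv B(n-1) \rightarrow B(n)
\begin{align*}
&B(0)                  && \text{premise}\\
&B(0) \rightarrow B(1) && \text{(1**)}\\
&B(1)                  && \text{modus ponens}\\
&B(1) \rightarrow B(2) && \text{(2**)}\\
&B(2)                  && \text{modus ponens}\\
&\;\;\vdots\\
&B(10000)              && \text{contradicting the premise that Man 10000 is not bald}
\end{align*}
```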
Alternatively, one can reason with the (N*) directly. Suppose that Man 9999 is bald. As we already know that Man 10000 is not bald, this contradicts (10000*). So, by reductio, Man 9999 is not bald. Repeat, and one eventually derives that Man 0 is not bald, contrary to our assumptions.
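Writing B(n) for “Man n is bald” (again, just my abbreviation for display), the reductio chain runs from the other end of the series:

```latex
% B(n) abbreviates "Man n is bald".
\begin{align*}
&\neg B(10000)                      && \text{premise}\\
&\neg(B(9999) \wedge \neg B(10000)) && \text{(10000*)}\\
&\neg B(9999)                       && \text{reductio: } B(9999) \text{ would contradict (10000*)}\\
&\neg(B(9998) \wedge \neg B(9999))  && \text{(9999*)}\\
&\neg B(9998)                       && \text{reductio}\\
&\;\;\vdots\\
&\neg B(0)                          && \text{contradicting the premise that Man 0 is bald}
\end{align*}
```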
Whichever way we go, a contradiction follows from our premises. So we must either find some way of rejecting the seemingly compelling premises, or find a flaw in reasoning that seems obviously valid.
We turn next to the puzzle of borderlineness: given that Harry is intermediate between clear cases and clear non-cases of baldness, the question “Is Harry bald?” seems to have no good, direct, answer.
There are familiar cases where we cannot answer such questions: if I’ve never seen Jimmy I might be in no position to say whether he’s bald, simply because I don’t know one way or the other. And indeed, ignorance is one model one could appeal to in the case of borderlineness. If we simply don’t know whether or not Harry is bald, that’d explain why we can’t answer the question directly!
This simply moves the challenge one stage back, however. Why do we lack knowledge? After all, it seems we can know all the relevant underlying facts (the number and distribution of hairs on a man’s head). Nor does there seem to be any way of mounting a sensible inquiry into the question, to resolve the ignorance, unlike in the case of Jimmy. What kind of status is this, where the question of baldness is not only something we’re in no position to answer, but where we can’t even conceive of how to go about getting in a position to answer? What explains this seemingly inevitable absence of knowledge?
A final note on borderline cases. It’s often said that if Harry is a borderline case of baldness, then it’s indefinite or there’s no fact of the matter or it is indeterminate whether Harry is bald; I’ll talk this way myself. Now, indeterminacy may be a more general phenomenon than vagueness (it’s appealed to in cases that seem to have little to do with sorites series: future contingents, conditionals, quantum physics, set theory); but needless to say, labeling borderline cases as “indeterminate” doesn’t explain what’s going on with them unless one has a general account of indeterminacy to appeal to.