This is one of a series of posts setting out my work on the Nature of Representation. You can view the whole series by following this link.
The previous sections have had a common pattern. I’ve picked out a certain concept, laid out some assumptions about the way it is deployed, and shown how particular theses about its denotation fall out. Crucial premises in the derivation (in addition to the assumptions about “cognitive architecture”) are radical interpretation and specific normative (epistemological, practical) theses. We could continue this theme for descriptive concepts—starting, perhaps, from the concepts of primary and secondary observable properties presupposed in the account of perceptual demonstrative concepts.
However, in this section I’m aiming for something a little different. Rather than arguing for a specific denotation for a specific concept for a specific conceptual role, I’ll be discussing a feature that characterizes our deployment of many concepts, and showing how (given radical interpretation) this constrains what they denote. I’ll call these concepts inductive, and the assumption about cognitive architecture I’ll be making (in addition to the usual generic ones) is that we’re disposed to indulge in inductive generalizations using such concepts. Observable (primary and secondary) concepts such as green and square are within the class, as are natural kind concepts such as being an emerald, being a tree or being positively charged. The class isn’t restricted to descriptive concepts: normative concepts (immoral, just, imprudent) are deployed in induction.
But there are concepts that don’t feature in inductive generalization. Famous foils are concepts like grue (=green and first observed before 2050, or blue and not first observed before 2050), or observed by me. Since so many concepts of interest plausibly are deployed in induction, any conclusions about their denotation we can draw from that feature will have wide application.
My focus in this post is to develop a positive view of the relevance of inductive generalization to fixing the denotation of general concepts.
The starting point is a particular view of inductive generalization: a view on which it is a special case of “inference to the best explanation”. For Sally, a highly reflective thinker, the formation of a justified general belief might go as follows:
- (1) All observed emeralds have been green (and those observations were carried out in thus-and-such a manner).
- (2) All emeralds are green best explains (1).
- (3) So: All emeralds are green.
“Observed” here should be read “observed by Sally”. Premise (1) includes the note about the manner in which observations were carried out because the fact that all observed Fs are green may require a very different explanation if the observations were an unbiased and controlled sampling, from the explanation that suggests itself if the observations were conducted in the museum of green things. The grounds on which Sally may endorse (1) can be various, but in the most basic case they will be based on memories of individual episodes in which she has observed a green emerald, together with her failure to recollect any countervailing instance.
Premise (2) appeals to facts about best explanation. What determines whether an explanation is best will be very important, but for the comparisons that follow it will help to follow a certain tradition and assume that this is a matter of a trade-off between features like: consistency with (1), strength (entailing as much of what (1) entails as possible), and simplicity of the hypothesis. The grounds on which Sally may endorse (2) are again various, but presumably she casts her mind over a range of salient rival hypotheses consistent with (1), evaluates them for relative simplicity and strength, and judges the generalization in (3) the winner.
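To make the trade-off vivid, here is a toy sketch in code. The scoring scheme, the numbers, and the helper names are all my own illustrative stand-ins, not part of the post's official account: a candidate explanation must be consistent with the data, and among the consistent candidates the "best" one optimizes a crude strength-minus-complexity score.

```python
# Toy sketch of "best explanation" as a trade-off between consistency,
# strength and simplicity. The scoring function and numeric weights are
# hypothetical illustrations, not a serious account of IBE.

def best_explanation(candidates, data):
    """Pick the candidate consistent with the data that optimizes a
    (toy) strength-minus-complexity score."""
    consistent = [h for h in candidates if all(h["predicts"](d) for d in data)]
    return max(consistent, key=lambda h: h["strength"] - h["complexity"])

# Toy data: observed emeralds, all green, all first observed before 2050.
data = [{"colour": "green", "year": y} for y in (2001, 2005, 2010)]

green = {
    "name": "All emeralds are green",
    "predicts": lambda d: d["colour"] == "green",
    "strength": 10,    # entails the colour of every emerald
    "complexity": 1,   # a single primitive predicate
}
grue = {
    "name": "All emeralds are grue",
    # grue: green if first observed before 2050, otherwise blue
    "predicts": lambda d: d["colour"] == ("green" if d["year"] < 2050 else "blue"),
    "strength": 10,    # equally strong over the observed data
    "complexity": 3,   # disjunctive, date-indexed definition
}

print(best_explanation([green, grue], data)["name"])
# → All emeralds are green
```

Both hypotheses fit the data equally well (every observation predates 2050), so the verdict turns entirely on the simplicity penalty—which is exactly the structure of the Goodman contrast discussed below.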
The assumption about cognitive architecture that we make, then, is that Sally finds the transition from (1) and (2) to (3) primitively compelling.
This is all highly reflective. And surely we inductively generalize on the basis of experience without running through this whole story. So perhaps what goes on is something like the following (this is not an inference that Sally carries out, but a description of her psychology as she forms a general belief):
- (A1.1) Sally remembers seeing emerald 1 in circumstances C1, and it was green.
- (A1.2) Sally remembers seeing emerald 2 in circumstances C2, and it was green.
- …
- (A1.n) Sally remembers seeing emerald n in circumstances Cn, and it was green.
- (A1.n+1) Sally tries and fails to remember seeing any non-green emerald.
On that basis:
- (A3) Sally forms the general belief that all emeralds are green.
So far as the psychology goes, this looks much more like a classic case of “enumerative induction”. And the A1.x facts are exactly the grounds on which, on more reflective occasions, Sally might endorse the original (1). But this formulation is not the whole epistemological story, since it doesn’t capture the epistemologically significant difference between Sally’s good reasoning and Goodman’s famous variant, where grue = either green and first observed before 2050, or blue and not first observed before 2050:
- (G1.1) Sally remembers seeing emerald 1 in circumstances C1, and it was grue.
- (G1.2) Sally remembers seeing emerald 2 in circumstances C2, and it was grue.
- …
- (G1.n) Sally remembers seeing emerald n in circumstances Cn, and it was grue.
- (G1.n+1) Sally tries and fails to remember seeing any non-grue emerald.
On that basis:
- (G3) Sally forms the general belief that all emeralds are grue.
The belief formed in (G3) is inconsistent with the belief formed in (A3), while (so long as the circumstances Ci entail that the observation took place before 2050) the contents of the memories reported in each pair (A1.x) and (G1.x) are equivalent. Looking back to the original reflective case, what suggests itself is the following contrast:
- A2: All emeralds are green best explains (A1.1)-(A1.n+1).
- not-G2: All emeralds are grue does not best explain (G1.1)-(G1.n+1).
On the epistemology I consider, IBE-dogmatism, Sally is by default (i.e. in the absence of defeaters and undercutters) justified in believing a generalization such as (A3) when it is in fact the best explanation of (A1.1)-(A1.n+1). The sort of thing that would undercut this justification would be a not-obviously-worse candidate explanation being salient to Sally. So it’s because A2 obtains that the A-inference produces a justified belief. Because G2 does not hold, the G-inference does not.
Moving from epistemology to features of cognitive architecture, what I’ll be assuming is that Sally is default-disposed to find the inference from (A1.1)-(A1.n+1) to (A3) primitively compelling. She is similarly default-disposed to find other instances of the same form primitively compelling, so long as the concepts in the “green” and “emerald” positions are taken from a certain stock of concepts (which includes the usual observational concepts, natural kind concepts, normative concepts, etc.). Let’s call that our stock of inductive concepts.
So just as in previous cases, we have assumptions about cognitive architecture and normative theory. We turn now to draw out their significance for reference-fixing, given radical interpretation.
One thing to note immediately is that interpreting Sally’s concept “green” as picking out the property grue will make her default disposition to induct on green unjustified, since on that interpretation, in generalizing using the concept “green”, she will be making the bad G-inference rather than the good A-inference. And the moral generalizes: for every pair of inductive concepts c, d, the best interpretation of them as denoting F and G will, all else equal, be one which makes all Fs are G the best explanation of the fact that all observed Fs have been G.
If there are general features common to the properties that figure in best explanations, then we can conclude at this point: all else equal, inductive concepts will denote properties with those features. What might those features be?
Well, consider what makes for something being the best explanation of data. Among those rivals consistent with the data, the best explanation needs to be optimally simple and strong. All else equal, it needs to be the simplest. So here’s a feature that properties featuring in best explanations will have: all else equal, they will be no less simple than those that feature in rival candidate explanations.
(1) I’m assuming that it makes sense to talk of a property being more or less simple, as well as of the propositions that ascribe that property.
(2) What’s important is not the absolute level of simplicity/complexity of a property, but its relative simplicity compared to rivals.
A treatment of simplicity that underwrites (1) is to be found in Lewis’s work on laws of nature. There, he suggests we treat simplicity (of an interpreted theory, which we can think of as a set of structured propositions) as a matter of what some would call its elegance: how compactly we can express the theory in language. But compactness of expression is sensitive to expressive resources, and so could vary across different languages, so to secure objectivity Lewis posited a “canonical” language in which theories are to be expressed for the purposes of measuring their compactness. Notice that this measure of simplicity applies just as much to properties as to sets of propositions. Simpler properties will be those that are more compactly definable in the canonical language. And the simplicity of an interpreted theory directly depends on the simplicity of the properties it contains—the longer it takes to express the properties, the longer it takes to express the theory that ascribes them.
The upshot for us is the following: all else equal, the referent of an inductive concept will be the simplest of the candidates. To finish off this post, here are some ways this result matters.
Consider the permuted interpretation of observational concepts introduced earlier. *Being the image under p of something green* is less compactly expressible, for any sensible choice of canonical language, than the property of *being green*. Explanations framed in terms of the former will be less simple, and so less good, than those framed in terms of the latter. This suggests a diagnosis of the challenge from permuted interpretations left open at the end of the post on demonstratives. The permuted interpretations depict the agent’s inductive dispositions as unjustified, and hence the agent as less rational overall, than the alternatives do.
Consider the Kripkensteinian property of being green within region R or blue and outside region R. Again, like permuted-green and grue, this is a less simple property than green, and so interpreting an agent’s green concept as denoting it will make the agent less rational than otherwise.