Maximizing similarity and charity: redux

This is a quick post (because it’s the last beautiful day of the year). In the last post, I was excited by the thought that a principle of epistemic charity that told you to maximize self-similarity in interpretation would correspond to a principle of metaphysical charity in which the correct belief/desire interpretation of an individual maximized knowledge, morality, and other ideal characteristics.

That seemed nice, because similarity-maximization seemed easier to defend as a reliable practical interpretative principle than maximizing morality/knowledge directly. Similarity-maximization seems to presuppose only that interpreter and interpretee are (with high enough objective probability) cut from the same cloth. A practical knowledge/morality-maximization version of charity, on the other hand, looks like it has to get into far more contentious background issues.

But I think this line of thought has a big problem. It’s based on the thought that the facts about belief and desire are those that the ideal interpreter would attribute. If the ideal interpreter is an omniscient saint (and let’s grant that this is built into the way we understand the idealization), then similarity-maximization will make the ideal interpreter choose theories of any target that make the target as close to an omniscient saint as possible, i.e. theories that maximize the target’s knowledge and morality.

Alright. But the thing is that similarity-maximization as practiced by ordinary human beings is reliable, if it is, because (with high enough probability) we resemble each other in our flaws as well as our perfections. My maximization of Sally’s psychological similarity to myself may produce warranted beliefs because I’m a decent sample of human psychology. But a hypothetical omniscient saint is not even hypothetically a decent sample of human psychology. The ideal interpreter shouldn’t be maximizing Sally’s psychological similarity to themself, but rather her similarity to some representative individual (like me).

Now, you might still get an interesting principle of metaphysical charity out of similarity-maximization, even if you have to make it agent-relative by having the ideal interpreter maximize similarity to x, for some concrete individual x (if you like, this ideal interpreter is x’s ideal interpretive advisor). If you have this relativization built into metaphysical charity, you will have to do something about the resulting dangling parameter: maybe go for a kind of perspectival relativism about psychological facts, or try to generalize this away as a source of indeterminacy. But it’s not the morality-and-knowledge maximization I originally thought resulted.

I need to think about this dialectic some more: it’s a little complicated. Here’s another angle from which to approach the issue. You could just stick with characterizing “ideal interpreters” as I originally did, as omniscient saints going through the same de se process as we ourselves do in interpreting others, and stipulate that belief/desire facts are whatever those particular ideal interpreters say they are. A question, if we do this, is whether this would undercut a practice of flesh-and-blood human beings (FAB) interpreting others by maximizing similarity to themselves. Suppose FAB recognizes two candidate interpretations of a target, and similarity-to-FAB ranks interpretation A over B, whereas similarity-to-an-omniscient-saint ranks B over A. In that situation, won’t the stipulation about what fixes the belief/desire facts mean that FAB should go for B, rather than A? But similarity-maximization charity would require the opposite.

One issue here is whether we could ever find a case instantiating this pattern which doesn’t have a pathological character. For example, if cases of this kind required FAB to identify a specific thing that the omniscient saint knows but that they themselves do not know, then they’d be committed to the Moorean proposition “the omniscient saint knows p, but I do not know p”. So perhaps there’s some more room to explore whether the combination of similarity-maximization and metaphysical charity I originally put forward could be sustained as a package deal. But for now I think the more natural pairing with similarity-maximization is the disappointingly relativistic kind of metaphysics given above.
