
The Basics: The Is/Ought Distinction (plus a bit about cultural relativism) June 1, 2009

Posted by johnallengay in ethics, research.

The major problem for science to overcome, if it’s to explain morality away, is what’s known as the is/ought distinction. It was first pointed out by the great British empiricist David Hume, and it goes like this:

There are two sorts of facts about the world, which I’ll call standard facts and moral facts. Standard facts are characterized by the use of the verb “is,” as in “The sky is blue” or “Two plus two is four.” Moral facts are characterized by the use of some form of “ought,” which indicates the feeling of duty or obligatoriness that characterizes moral sentiment. Examples of this include “One ought not murder” or “The poor ought to have enough money to live comfortably.”

Science deals directly only with standard facts. In order to explain morality away, it needs to be able to get moral facts from sets of standard facts–it needs to be able to derive an “ought” from an “is.” Here’s an example of how we’d try to do this in order to arrive at the alleged moral fact I mentioned above, “The poor ought to have enough money to live comfortably.” We start with two observations: first, that there is some amount of money y such that it is enough to live comfortably, and there is no way to live comfortably without having at least y; second, that there are many people who have less than y and therefore do not live comfortably. In order to get to the moral fact we’re after, we need an additional claim: people ought to live comfortably. We might argue for this in any number of ways: perhaps people have a right to live comfortably, or people are happier when they are comfortable than when they are not, and greater happiness is better.

However, terms like rights and better still require a notion of obligation–an ought–in order to carry any weight. After all, it seems nonsensical to say, for instance, “It is better that you be allowed to speak freely, but you ought not be allowed to speak freely”; this seems to be just an assertion that it isn’t really better for you to be allowed to speak freely.

No matter what, we have to explain how to get that ought, and it’s just not clear what sort of empirical phenomenon there is that contains an ought. What standard fact can we point to as an ought-bearer? We might point to a baby, for instance, and observe that it’s crying because it lacks something that its parents would need y amount of money to obtain. Now, we’d agree that this is bad, and on seeing the baby crying we’d have unpleasant feelings and strong desires to help it and stop its crying, but we tend to regard feelings and attitudes as subjective matters, while science deals with objective matters. It seems there’s a fundamental, formal problem that arises when one tries to derive ought-claims from a set containing only is-claims.
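To make the structure of that derivation explicit, here is one way to schematize it (the labels are mine):

    (P1, is) There is an amount of money y such that having at least y is necessary and sufficient for living comfortably.
    (P2, is) Many people have less than y.
    (P3, ought) People ought to live comfortably.
    (C, ought) Therefore, the poor ought to have at least y.

Drop P3 and the conclusion no longer follows, no matter how much we strengthen P1 and P2; that gap is exactly what the is/ought distinction marks.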

If this is so, then scientific efforts to explain morality can only go so far. Suppose we’re anthropologists studying a previously undiscovered tribe. We can of course make plenty of is-claims–”the tribe’s members all shave their heads” or “the tribe’s males all carry large wooden sticks wherever they go.” We can also make is-claims about ought-claims: “In the eyes of the tribe, one ought to shave one’s head” and “The tribe members think that females ought never carry large wooden sticks.” Is-claims about ought-claims are objective, despite the apparently subjective component: we can’t find objective facts that indicate that shaving one’s head is morally obligatory, but we can find objective facts that indicate that the members of the tribe hold that opinion (for example, they state that they regard it as morally obligatory, they praise those who obey and ostracize those who disobey, etc.). We can also make a special sort of is-claim about ought-claims that indicates well-adjustedness of the ought-claims, and perhaps indicates their origin (this is far more relevant to evolutionary explanations of morality than to this particular case), as in “The area where the tribe lives is infested with a parasite that dwells in hair, and the parasite carries a serious disease, so the tribe owes its continued survival and flourishing to making head-shaving morally obligatory.”

Straightforward ought-claims, however, seem to elude us. We might say, for instance, that the tribe should flourish and thus they should shave their heads, or that the tribe should be more egalitarian and let women carry large wooden sticks if they so choose, and we might even say these things truthfully, but we would be hard-pressed to remain scientific in doing so. We’d need to give an objectively verifiable reason for assenting to our ought-claims, and it’s just not clear there is one–can we point, for instance, to the goodness of shaved heads, or to the obligation to egalitarian sociality? It would seem that these things are too abstract to be directly examined.

As I wrote the last part, I began to wonder whether the is/ought problem is at the root of the widespread promotion of cultural relativism in some social sciences (like sociology and anthropology). In the quest for objectivity, the scientist becomes used to making is-claims about ought-claims and applies this standard to their own ought-claims. Thus, it is objectively true that certain cultures in East Africa regard female genital mutilation as morally praiseworthy, a sentiment few outside East Africa share. The scientist is among those who disagree with that sentiment, but realizes that its only objective expression is the claim that most cultures outside East Africa regard female genital mutilation as morally blameworthy. To assert that this is the best account we can give is to be a cultural relativist. People tend to find the relativist account of morals rather flaccid, and this makes sense–it consists more in is-claims about ought-claims than in actual ought-claims, and we tend to regard morality as a set of ought-claims.

There is also an error in systematically qualifying every moral judgment as merely the subjective feeling of a particular judge, so that morality becomes an entirely subjective phenomenon lacking the broad, cross-cultural, intersubjective force we typically take it to have. The scientifically-motivated cultural relativist conflates scientific epistemology (that is, the ways through which we can arrive at knowledge) with truth, so that the only moral facts are objectively verifiable (i.e. scientific) facts. However, we need not conflate epistemology with truth. Surely there can be facts that we cannot verify at all, let alone verify with scientific methods. Suppose, for instance, that there is some sort of subatomic particle which is so small that it has no effect on the sorts of subatomic particles we can observe. If you were to write an article on the particle and its properties, no self-respecting scientific journal would publish it (or so we’d hope!), yet the particle may of course still exist. What’s more, the existence of the particle is objective–it isn’t true for you that the particle exists but not true for mainstream science; it’s true for everyone, though for various reasons not everyone accepts it to be so.

Thus, I need to clarify what I said earlier about science. Science deals in objective, verifiable facts. Ought-claims can’t be directly investigated by science because they are either objective, unverifiable facts or subjective attitude-reports. Of course, when I say “verifiable,” I mean verifiable by methods that yield the rigorous sorts of results demanded by science; it is still possible that moral facts might be verifiable by some less demanding method, though it isn’t clear what that would be.

Let’s get back to the is/ought distinction. Lots of people don’t buy it. I’d like to take a brief look at two of them.

One is John R. Searle. Searle suggests that there are institutions we participate in that, by their very nature, contain obligations. Take promising. Suppose I say to you, “I promise to give you $5,” and I’m not saying it because I’m under duress, drugged, or acting. The very utterance of these words is an act of promising, and it is in the nature of promising that I place myself under an obligation–that I make it so I ought to give you $5. Thus, given that you observe my saying “I promise to give you $5” under the right conditions, you can derive that I ought to pay you $5–you can get an “ought” from an “is.” If Searle is correct, science might be able to provide an exhaustive account of the obligations implicit in institutions simply by noting that they are institutions, and that one fails to participate in an institution if one does not obey its rules, including taking on its obligations. Thus, if I promise you $5 but say I’m not obligated to give you $5, I’ve failed to authentically promise; if I play soccer but don’t feel I ought to score goals, prevent the opposition from scoring, and refrain from touching the ball with my hands while it’s in play, I’m not really playing soccer.
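As a loose illustration (my own sketch, not anything Searle offers), we can model the constitutive rule of promising in a few lines of code: the bare fact of the utterance, made under the right conditions, is what creates the obligation. The Ledger class and all of its details are hypothetical.

    class Ledger:
        """Tracks obligations created by institutional acts."""
        def __init__(self):
            self.obligations = []

        def promise(self, promisor, promisee, amount,
                    under_duress=False, acting=False):
            # The utterance only counts as a promise under the right
            # conditions (no duress, not on stage, etc.).
            if under_duress or acting:
                return  # no institutional act, hence no obligation
            # An is-fact (the utterance occurred) yields an ought-fact.
            self.obligations.append((promisor, "ought to pay", promisee, amount))

    ledger = Ledger()
    ledger.promise("me", "you", 5)
    print(ledger.obligations)  # [('me', 'ought to pay', 'you', 5)]

Of course, the code merely records an obligation once the institutional conditions are met; whether recording it in this way amounts to deriving it is exactly what’s at issue below.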

I see a few problems in Searle’s account, but the largest is this: perhaps Searle has only accounted for the derivation of an ought from an is in the context of institutions. OK, we say to Searle, the scientific approach to ethics is explanatorily complete in the institutional context, so you can account for all sorts of promises (loans, marriages, etc.) and rule-governed consensual group activities (playing games, trading on the stock market, etc.). So what? Institutional obligations are just a subset, and a very small subset, of the set of obligations. Surely murder isn’t wrong just because to murder is to fail to be a properly social human! Isn’t there something extra that’s bad about murdering, and isn’t that the thing that bears the real moral weight? What’s more, we seem to need an ought-claim about institutions in order for them to be morally efficacious; we need to claim that one ought to obey all of an institution’s rules if one attempts to participate in it or goes through the motions of participating. Thus, when I say “I promise to give you $5” and then don’t give you the $5, I can’t excuse myself by saying I wasn’t fully participating in the institution of promising; when I play soccer and carry the ball around in my hands, I can’t claim to obey all the rules of soccer except the ones about carrying the ball in my hands.

A second objection to the is/ought distinction comes from the strict naturalists, especially those of a scientific mind, like Patricia Churchland. Churchland points out that biologists and other scientists are perfectly comfortable looking for the origins of behaviors, whether as products of evolutionary, environmental, psychological, or genetic factors. They don’t see the need for the is/ought distinction–for them, our felt “oughts” are just products of “is’s” in our formative past. For example, they might note that we have notions of theft, and of property, because our ancestors fed on scarce, nutrient-dense foods: taking that food meant a substantial, perhaps even lethal, loss to the primate stolen from, and because primates as a class tend to use high sociality as a survival and reproduction strategy, enough thieving deprived the thief of groupmates, to the point that widespread stealing meant nobody could survive (see this article for a discussion of this). Our strong feelings about stealing, then, persist into the present for two reasons: first, we have a long evolutionary history of opposing stealing; second, many of our possessions are still marked by high value and scarcity and thus still cause significant loss to victims of thievery.

There are problems with this account, too. First, it doesn’t seem to have much moral pull, as we saw above with cultural relativism. The scientific account of our morals tells us how we came to have the morals that we have, and how they exert such a strong pull on us, but it doesn’t seem to be a moral system itself–it says why we feel murdering is wrong, not that murdering is wrong. We can save the scientific account if we add the claim “one ought to align with one’s nature.” Thus, as a human I ought not murder because humans as a species have historically found murder to be evil, and they feel this way because murder is a behavior that has negative social and selective consequences, and so on, and it is best for me to align with this nature. But it’s not clear why I should align with human nature–after all, optimal survival strategies can change over time as circumstances change, and we certainly don’t live in the environment in which humans evolved. Additionally, most of our appeals to why I should align with human nature seem to be of the form “You’d be happier aligning with human nature,” or something of the like, and this just pushes the notion of obligation away without actually resolving it, as I mentioned at the beginning of the post.

Second, the scientific account still seems to presuppose ought-claims rather than deriving them strictly from is-claims. As I mentioned above, there’s an apparent need for an ought-claim saying one should align with nature.

Finally, the scientific account would seem to suggest that what is natural to feel about morals is also what is correct to feel about morals. This seems to be at best contingently true and more likely just false–for example, prejudicial attitudes have sometimes been explained as an evolutionarily useful strategy of promoting the genetic patterns of one’s own group over visibly dissimilar patterns.

Comments

1. Pat Sheehan - June 1, 2009

Interesting.

2. meshul - June 7, 2009

It sounds like you really know what you’re talking about. I’ll be interested to see where you end up on this topic.

3. acmcca - June 14, 2009

I still feel like both Searle and Churchland’s arguments are worthwhile, especially when taken together. The institution you’re participating in is ‘living as a human,’ and (at the risk of a ‘no-true-Scotsman’ problem) living as a human in its rightest form would be living with the moral rules which evolution has created for us, wouldn’t it? Yes, environments change after a while, but modifying old strategies to meet newer circumstances is the point of evolution in the first place.

I think a bigger problem is that we’ve been trying to find a set of moral obligations which are true in 100% of cases, and evolution has not provided for that kind of morality. Shouldn’t it be enough to say, “If it feels wrong in this case, don’t do it; if it feels okay, then okay”?

I think I need to do more thinking about this.

4. johnallengay - June 15, 2009

I’ve thought about your idea a bit, and it is interesting. First, I have to point out that using evolutionary imperatives (if there are any) as morals is a risky strategy–as I noted in the post, some suggest that racism and similar prejudicial behaviors may be evolutionarily rooted (though I’ll need to do some digging to make sure that’s in the literature and not just speculation), and evolution doesn’t necessarily produce optimal outcomes.

For example, the human retina has its nerve structures located above the optical receptors, meaning the optic nerve has to pass through the retina at some point–that’s why we have a blind spot. The eyes of cephalopods, by contrast, have the nerve structures located behind the retina and thus have no blind spot. The improvement in vision we’d get by reversing our retinas would be only minor (you will almost never notice your blind spot unless you deliberately try to detect it, especially since the other eye can fill in information), but it would be an improvement. We could say, then, that there’s a very minuscule selective pressure in favor of eliminating the blind spot. However, that pressure is cancelled out because reversing the retina would seem to require many intermediate generations of partially-reversed-retina humans with bad eyesight–a big negative selective pressure. Basically, what happened in the distant evolutionary past is that (if I’m remembering this correctly) the cephalopod and non-cephalopod eyes evolved independently of one another and ended up with results that are functionally almost identical, but their early evolutionary paths were different enough that for cephalopods, the evolutionarily best path put the nerves behind the retina, and for non-cephalopods, the evolutionarily best path put them in front.

It’s a bit like both cephalopods and non-cephalopods were climbing hills (hills of complexity? hills of adaptedness?) by following a simple, “selective” rule–go in a random direction, and if your new location is higher than your previous location, repeat this process; if not, go back and pick a new direction. With a little modification, such a rule will get both us and the cephalopods to the top of a “hill.” However, one could say that the two groups are on top of different hills, and the cephalopod hill is a bit higher. We non-cephalopods can’t get to the top of the cephalopod hill without breaking the rule, because we’re on top of our own hill, and every direction we turn, we go downward.
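To make the hill metaphor concrete, here’s a minimal sketch of that “selective” rule as stochastic hill climbing on an invented two-peak landscape (the landscape, the numbers, and the function names are all illustrative; this isn’t a model of actual eye evolution):

    import random

    def height(x):
        # A made-up fitness landscape: a lower peak at x = 2 and a
        # higher peak at x = 8, separated by a valley.
        return max(3 - abs(x - 2), 5 - abs(x - 8), 0)

    def climb(x, steps=10000, step_size=0.1):
        for _ in range(steps):
            candidate = x + random.choice([-1, 1]) * step_size
            if height(candidate) > height(x):  # move only if strictly uphill
                x = candidate
        return x

    random.seed(0)
    print(round(climb(0.0), 1))  # settles near 2.0, the lower peak

Starting near the lower hill, the climber tops out there and stays put: every first step toward the higher hill goes downhill, so the rule forbids it.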

Check out this page (http://en.wikipedia.org/wiki/Evolution_of_the_eye#Evolutionary_baggage) for a bit more on what I’m talking about.

Sorry about the long-winded example, but the point is this: We might have evolved morally nonoptimal systems, so following their dictates might not be morally optimal.

Additionally, it’s just not clear how we can justify making the leap from claims about our biological past to claims about right and wrong (that’s the is/ought problem)–sure, such-and-such behavior might be biologically right, but that doesn’t seem to make it morally right. As above, some suspect that prejudice is a biologically good strategy, while it’s obviously morally bad; and many good biological strategies would seem to have no moral import at all–what’s the moral element in the gag reflex, for example?

Second, I’m not sure that “living as a human” is an institution in a strict enough sense for us to get moral imperatives from it. For one thing, we don’t choose to participate in the institution, at least not in any traditional sense, and we tend to link choice very closely to morality. For another, it seems to me that if living as a human is indeed an institution, the way we’d find the moral obligations it brings with it would be by cross-cultural research in morality, yet it’s just not clear that there are any moral rules that all cultures accept (some suggest a minimal rule against murder, at least, is universal).

Your last point, however, about moral certainty, is spot-on. A lot of the argument in moral philosophy uses appeals to intuitions about right and wrong, and there’s been some research that’s suggested our intuitions are generally good but inconsistent in extreme situations. I’m thinking there’s a way we can expand on that with an examination of neural structures, but I’m going to save that for a much later post.

5. acmcca - June 23, 2009

Responding to your comments in reverse order, here goes:

Using Donald E. Brown’s “List of Human Universals,” here are the ones which pertain to morality (as near as I can tell):

Actions under self-control distinguished from those not under control
Conflict, means of dealing with
Conflict, Mediation of
cooperation
cooperative labor
Distinguishing right and wrong
empathy
etiquette
generosity admired
gift giving
good and bad distinguished
hospitality
incest taboo (or unthinkable) with respect to mother/child
incest, prevention or avoidance
in-group, biases in favor of
insulting
Law (rights and obligations)
Law (rules of membership)
males more: aggressive, prone to theft/violence
murder proscribed
rape proscribed
reciprocal exchanges of labor, goods, services
Reciprocity, negative (revenge, retaliation)
reciprocity, positive
sanctions
sanctions for crimes against the collectivity
sanctions include removal from the social unit
self is responsible
sexual regulation (?)
taboos
tabooed foods (Say it five times aloud!!!)
tabooed utterances
turn-taking
violence, some forms of proscribed
(additions since 1989)
judging others
males engage in more coalitional violence
moral sentiments
moral sentiments, limited effective range of
self-control
shame
stinginess, disapproval of

There were a few more that some people might have put on a morality list (e.g. ideas about status/prestige and sexual mores) which I don’t consider relevant to morality (who cares if you have sex? honestly). You might consider looking at the whole list anyway; it’s pretty cool.

Definitely not to name-drop, but I think Sartre might agree with me that we make an active choice daily to participate–or not–in life. Perhaps not in the traditional sense of the word, but if one were properly motivated to not participate in life, it’s certainly within one’s power (insofar as we are able to ‘choose’ anything) to choose to not live anymore.

I guess I’ve backed myself into the kind-of untenable position that our biological drives include plausible morals. I guess I’m going to run with it. Here goes:

It seems that although certain definitely-immoral behaviors (murder, rape) are manifestly beneficial in other species (e.g. orangutans; ask me about their rape practices some other time), there’s a major caveat on these kinds of behaviors in humans. Since we evolved to live in groups which are largely populated by kin, there’s a strong in-group/out-group moderation of these kinds of behaviors. It’s probably beneficial on the short scale for human males to rape, but it’s definitely detrimental (lovely alliteration, agreed?) to do so within one’s own kin group. Now here’s where it gets interesting. The measure for what counts as part of our kin(ish)-group is something like “people whom I see on a regular basis,” or maybe even “someone whom I can identify as a member of my in-group (whatever that may be).”

Peter Singer wrote a (now out of print?) book called “The Expanding Circle: Ethics and Sociobiology” that I’m sure you’ve heard of if you haven’t read. In case you haven’t read it, the argument goes something like this: as human society has progressed, what counts as the in-group for most people (their circle) has expanded. If this trend continues, people will start to see all of humanity, and then perhaps all life on Earth, as part of their own group. If this goal is realized, then the same morals that apply to kin (don’t rape, don’t kill, you’re hurting your genes!) will apply to all people or perhaps all species (but don’t count on it).

I think that’s what I was invoking in my previous comment. If we trust that the people we deal with on a regular basis are included in our modular idea of a kin group, then we’re set for trusting our instincts as to what’s right and what’s wrong.

The only problem is when you’re dealing with people you don’t know or aren’t likely to deal with again. The only defense here, I guess, would be the “practice random acts of kindness” folks, gosh-bless-them. Still, I can’t help but be hopeful that if enough people keep writing FMLs about how they got screwed over, we’ll all start feeling embarrassed enough to stop doing crazy things like that. Cross your fingers.

Hey–thanks for taking the time to respond to my comments thoughtfully,

Andrew

6. johnallengay - June 24, 2009

While I can’t address everything on that list, it’s my understanding that there’s some doubt in the scholarship as to the universality of certain taboos–with incest, for instance, there are numerous cultures that practice endogamy to an almost-incestuous degree, and a few that encourage full-scale incest (I forget exactly where I read this, but I believe there’s a culture in Africa or New Guinea in which mother-son intercourse is mandatory). What may well be the case with incest is that all cultures have some form of incest prohibition, but there is no particular incestuous behavior that is universally prohibited; hence, the “follow our universal drives” rule would merely tell us that we need to have a prohibition on incest, but not what the content of that prohibition should be. I’m thinking some of the other universals on the list might be similarly nebulous (particularly etiquette, food taboos, hospitality, and shame).

Others on the list I would say are just necessary conditions for morality, or for socially-mediated morality: a distinction between good and bad, sanctioning, cooperation, and a concept of self-controlled vs. uncontrolled actions. Anything that lacks these simply fails to be a moral system.

The fundamental problem with nature-derived systems like the one attributed to Singer is that they conflate selective advantage with moral good, and it’s not clear, to say the least, that there’s anything morally good about having a selective advantage (though there is a selective advantage to making us feel good about doing things that confer selective advantages, and bad about things that confer disadvantages). Take the example of the rape taboo. Suppose orangutans were sentient, intelligent, conscious, or met whatever other criterion of moral personhood you might favor. Their behavior would still convey a selective benefit, yet we would likely regard the rape of an entity with moral personhood as wrong. On the nature-derived account, then, our assertion that rape is wrong is grounded in contingencies of biology, which seems to be rather shaky ground.