I. Common-Sense vs. Intuition vs. Peer-Cultivation
Holden Karnofsky, a co-founder of GiveWell and co-CEO of Open Philanthropy, recently wrote an article on his blog Cold Takes entitled “Future-proof ethics,” which kicks off a series of blog posts on ethical theories that are “ahead of the curve.” The article begins with a quote from philosopher Kwame Anthony Appiah, from an article entitled “What will future generations condemn us for?”, regarding the “horrible track record” of common-sense-based ethics.
Once, pretty much everywhere, beating your wife and children was regarded as a father's duty, homosexuality was a hanging offense, and waterboarding was approved—in fact, invented—by the Catholic Church. Through the middle of the 19th century, the United States and other nations in the Americas condoned plantation slavery. Many of our grandparents were born in states where women were forbidden to vote. And well into the 20th century, lynch mobs in this country stripped, tortured, hanged and burned human beings at picnics.
Looking back at such horrors, it is easy to ask: What were people thinking?
Yet, the chances are that our own descendants will ask the same question, with the same incomprehension, about some of our practices today.
Is there a way to guess which ones?
Karnofsky wants us to make ethical decisions “that look better, with hindsight after a great deal of moral progress, than what our peer-trained intuitions tell us to do.” According to his usage, moral progress is both societal and personal. Karnofsky believes that if we ignore the “future-proof” aspect of ethics, we would be making a big mistake.
Karnofsky is not a fan of “intuitive” and “common-sense” approaches to ethics. To explain what he means, he uses a quotation from an article on conservatism entitled “What Happened to American Conservatism?” in which the author, David Brooks, offers a less “rationalist” and more “conservative” defense of moral emotion.
Rationalists put a lot of faith in “I think therefore I am”—the autonomous individual deconstructing problems step by logical step. Conservatives put a lot of faith in the latent wisdom that is passed down by generations, cultures, families, and institutions, and that shows up as a set of quick and ready intuitions about what to do in any situation. Brits don’t have to think about what to do at a crowded bus stop. They form a queue, guided by the cultural practices they have inherited ...
In the right circumstances, people are motivated by the positive moral emotions—especially sympathy and benevolence, but also admiration, patriotism, charity, and loyalty. These moral sentiments move you to be outraged by cruelty, to care for your neighbor, to feel proper affection for your imperfect country. They motivate you to do the right thing.
Your emotions can be trusted, the conservative believes, when they are cultivated rightly. “Reason is, and ought only to be the slave of the passions,” David Hume wrote in his Treatise of Human Nature. “The feelings on which people act are often superior to the arguments they employ,” the late neoconservative scholar James Q. Wilson wrote in The Moral Sense.
The key phrase, of course, is cultivated rightly. A person who lived in a state of nature would be an unrecognizable creature ... If a person has not been trained by a community to tame [their] passions from within, then the state would have to continuously control [them] from without.
People generally do not adopt a specific moral system such as consequentialism, deontology or virtue ethics. People are guided by their sense of right and wrong, which is cultivated by those around them. Karnofsky calls this method the “common-sense approach,” which he believes is more precisely described as the “peer-cultivated intuitions” approach. People’s attitudes have changed across time, and so, Karnofsky believes, if you want to be ahead of the curve, you can’t merely adopt the surrounding society’s views of ethics.
Most writings on philosophy are about comparing different "systems" or "frameworks" for ethics (e.g., consequentialism vs. deontology vs. virtue ethics). By contrast, this series focuses on the comparison between non-systematic, "common-sense" ethics and an alternative approach that aims to be more "future-proof," at the cost of departing more from common sense.
I think that we should avoid regarding “common-sense” ethics, “intuitive” ethics and “peer-cultivated” ethics as interchangeable despite their similarities. Something can be intuitive but not regarded as common sense. Something can be common sense but not peer-cultivated. And something can be intuitive but not peer-cultivated.
Imagine an effective altruist or rationalist in the Bay Area who has developed the opinion that we have certain ethical obligations toward computer simulations; this view is likely peer-cultivated but wildly unintuitive and against what most people would call common sense. A vegan living in a small town with no other vegans could hold the opinion that factory farming is unethical, but arrive at this view by watching videos of animal suffering online rather than through peer influence; this would be intuitive but not peer-cultivated or common sense.
This distinction is worth making if you believe in moral realism and ethical intuitionism. Moral realism is the idea that some moral statements are true, and ethical intuitionism is the belief that we can directly apprehend some true moral statements. In other words, we don’t necessarily have to derive all our ethical beliefs from other facts.
In other domains, we accept that apprehending something is a reason for thinking it is true, even though our perceptions are subject to bias and influence. When I was younger, I saw a ghost, but ghosts are not real. What I saw was a visual illusion, and yet I still believe that what I see with my eyes is generally real. There are many instances in which two bystanders to a crime recall different facts, but that does not mean we should dismiss all eyewitnesses in criminal investigations outright. When asked for personal opinions or to recall information, people can be influenced by how questions are phrased even when the content of the question is the same. Just because something is subject to bias, imperfect or peer-influenced does not mean it is useless. This is not necessarily what Karnofsky argues, but this critique of intuitions is common.
The only reason we know anything about morality is because of intuition. Scott Alexander says in his Consequentialist FAQ, “moral theories must end up grounded in our moral intuitions for them to work.” Just as moral knowledge rests on a foundation of intuition, so does our knowledge about everything else. For this reason, I think we should avoid equating the intuitive approach to ethics with the common-sense and peer-cultivated approaches.
Karnofsky believes that we need to depart from common sense, but he sees common sense as what is peer-cultivated. If we think of common sense as whatever everybody in this culture is doing at this exact moment, then everyone except adherents of a particular form of moral relativism should reject this sort of view. However, I don’t think we can ever fully depart from our “senses,” or the “common senses” that humans share, because we are ultimately using our intuitions to determine what is and is not ethical.
II. Future-Proofing is Misguided
The meta-ethical justification we want our ethical theory to have is that it is true. This justification supersedes all other considerations. We should not care whether our ethical theory is parsimonious, beautiful, easily understandable, straightforward or simple if any of that comes at the sacrifice of being true. The way to figure out whether our ethical theory is probable is to look at the extent to which it explains the available evidence, namely intuitions.
Future-proofing is good only to the extent that it correlates with being true. Imagine that I face an ethical dilemma between action A and action B. Imagine also that, for some reason, I know for a fact that action A is more ethical but less future-proof. I should pick action A in literally all scenarios of this kind. This calls into question future-proofing’s ultimate value.
Future-proofing could be a useful tool if we knew what future moral progress was going to look like. However, the only way to make accurate predictions about future moral progress is to have an idea of what the true moral facts are. And the way to find moral facts is to use our ethical intuitions, or the ethical theory we have formulated from them.
Imagine if I said “We ought to make geopolitical forecasts that future journalists will confirm in their articles.” This would be true, but uninformative. I don’t think it adds anything to the discussion, because I have to be good at geopolitical forecasting to have a good idea about what future journalists will write. Similarly, I have to be good at ethics to know what moral progress will look like.
The general movement of history is toward moral progress, but time is always going to be a weak heuristic for evaluating outcomes. Besides, we need a way to make sure that we are still morally progressing. Fortunately, we see a decline in violence and all sorts of repugnant behavior, but we only know that this is good because we already have our theory of right and wrong. Without such a theory, we would enter into ethical analysis with the sole criterion of “expect moral progress,” and any trend we found would be taken to reflect moral progress. Imagine that I saw a two-hundred-year increase in interracial domestic violence and concluded that this sort of behavior must therefore be ethical. Obviously, that would be preposterous, and our heuristic would be wrong here. The only way I can tell is that I already know what is right and wrong.
There is an order to justification. Your meta-ethical theory about justification comes first, and your ethical theory comes after. You can’t use your ethical theory to devise your meta-ethical justification. Desiring future-proofing because there is moral progress, and then determining that there is moral progress because your ethical theory suggests it, is circular. And if you say that you know there is moral progress because of your intuitions, then you’ve undercut your initial point. Notice that, ironically, Appiah doesn’t even feel the need to explain in the quote above that lynching is immoral. I think he doesn’t need to explain it because it is intuitive.
Imagine that I totally disregarded my intuitions and personal beliefs to devise an ethical theory which “looks better” upon reflection from future people. I could look better by destroying evidence of moral crimes or by genetically engineering conformist babies that don’t question the moral status quo. These are both unethical. Again, we know because we have an understanding of right and wrong already.
III. Systemization
Karnofsky wants to systematize his ethics, which means “instead of judging each case individually, look for a small set of principles that we deeply believe in, and derive everything else from those.” The reason for doing this is:
(A) Our ethical intuitions are sometimes "good" but sometimes "distorted" by e.g. biases toward helping people like us, or inability to process everything going on in a complex situation.
(B) If we derive our views from a small number of intuitions, we can give these intuitions a lot of serious examination, and pick ones that seem unusually unlikely to be "distorted."
(C) Analogies to science and law also provide some case for systemization. Science seeks "truth" via systemization and law seeks "fairness" via systemization; these are both arguably analogous to what we are trying to do with future-proof ethics.
I agree with point (A). We do have distortions. As Karnofsky mentions, biasing factors include convenience, convention, repulsion and confusion. The process of figuring out ethics is weighing intuitions and trying to remove the distortions that cloud our judgment.
As for (B), while using a small number of intuitions lets us examine them more seriously and avoid distortions, we should be incorporating and examining all available intuitions. When I am reasoning about a hypothesis, I need to use all available evidence. We can seriously examine many different intuitions, so why exclude some evidence from our analysis? If I have an intuition that might be distorted, I should examine it more closely or discount the weight I give it when evaluating my hypothesis. I cannot simply exclude it, just as I cannot exclude any other piece of evidence.
One issue with (C) is that it can be overly reductionist. There is no single axiom that explains the entirety of law. Similarly, there is no single substance that explains all physical matter or experience, as some philosophers once believed.
Systemization can be weird. It’s important to understand from the get-go that seeking an ethics based on “deep truth” rather than conventions of the time means we might end up with some very strange, initially uncomfortable-feeling ethical views. The rest of this series will present such uncomfortable-feeling views, and I think it’s important to process them with a spirit of “This sounds wild, but if I don’t want to be stuck with my raw intuitions and the standards of my time, I should seriously consider that this is where a more deeply true ethical system will end up taking me.”
I can point to the horrors of systematizing things while ignoring our basic human intuitions about what is moral and immoral. People have driven themselves to commit atrocities in the name of ideology, even when it was clearly unethical to do so. I could write paragraphs on the atrocities committed by people devoted to belief systems such as religion, communism and fascism. A notable example is Adolf Eichmann, who followed a modified version of the categorical imperative to rationalize his participation in an unethical system. Still, I don’t think we need to categorically avoid systematizing just because systematizers have done bad things.
A utilitarian could respond by noting that their preferred version of systematized ethics would not allow the extermination of innocent people. I would agree with them. I don’t think it’s fair to lump together all “systematized” ethics, just as it is unfair to lump together all intuitive forms of ethics. My ethical theories are based on intuition, and I do not believe that lynching or beating women is okay.
It is easier to lump together the intuitionist positions because we do not state our positions fully and at length, but that is because doing so is particularly difficult. That is not a failure of the theory but a feature of its subject matter. You can’t easily describe physical reality in full and, similarly, I can’t fully describe my moral reality. I can provide descriptions of it in certain scenarios and point to general principles. For example: I think you should save the drowning child in the pond; I think people generally have a right to bodily autonomy; I think that not saving the child is not as bad as intentionally drowning the child; I would flip the switch in the trolley problem; lynching, torture and beating people are unethical.
IV. Against Moral Realism?
Karnofsky goes on to describe why he supports utilitarianism and sentientism. I am not a utilitarian; I have explained this position elsewhere, such as in an article entitled “No One Makes the Charge of Arbitrariness Against Utilitarians.” This critique is getting long, so I will address one final point and avoid directly critiquing his arguments for utilitarianism and sentientism. Before justifying these positions, he discusses moral quasi-realism in his discussion of distortions.
It's very debatable what it means for an ethical view to be "not distorted." Some people (“moral realists”) believe that there are literal ethical “truths,” while others (what I might call “moral quasi-realists,” including myself) believe that we are simply trying to find patterns in what ethical principles we would embrace if we were more thoughtful, informed, etc. But either way, the basic thinking is that some of our ethical intuitions are more reliable than others - more "really about what is right" and less tied to the prejudices of our time.
A first point of confusion is that there is already a position in ethics called quasi-realism, which seems different from what Karnofsky describes. It is a meta-ethical view that is non-cognitivist and emotivist. From Wikipedia:
Quasi-realism is the meta-ethical view which claims that:
Ethical sentences do not express propositions.
Instead, ethical sentences project emotional attitudes as though they were real properties.
Karnofsky defines his version of quasi-realism again in a post entitled “‘Moral progress’ v. the simple passage of time”:
I don't think morality is objective, but I still care greatly about what a future Holden - one who has reflected more, learned more, etc. - would think about the ethical choices I'm making today.
He believes there are no objective ethical truths, yet he personally cares what a reflective future self would think. This seems like a personal preference about ethics rather than an ethical theory. It also points to why Karnofsky likes future-proof ethics: the Karnofskian quasi-realist position is basically just future-proof ethics by definition. His ethical theory is whatever is personally future-proof.
This position confuses me. If there are no objective moral facts, then you should not change your mind on the basis of evidence and reflection. There is no amount of evidence that could convince me that “jklasdjfkjashdf is lkdskdj” is a true statement, because it is totally meaningless. Why should I care what future Parrhesia thinks about this particular statement? Future Parrhesia will not have any evidence or information that makes it more or less probable. There is no confirmatory or disconfirmatory evidence for or against it. I don’t think Karnofsky is an error theorist, but the same argument would apply there; if moral statements are by definition false, rather than incoherent, who cares what evidence I learn about in the future?
Under this framework, there is no such thing as a true intuition. If I have an intuition that torturing babies is something we shouldn’t do, this isn’t a legitimate intuition. Whether it is culturally influenced doesn’t really matter, because within this framework we know for a fact it is either false or incoherent. There can be no “good” intuition if there is no moral reality, just as there can be no good perception if everything is an illusion. There is no reason to privilege strongly believed ethical principles, because all ethical principles are either false or incoherent. And there is no reason to care what ethical framework a future self is using, because that future self will be wrong under this framework too. I think this undermines the whole project.
I personally believe that we should have ethical theories that best explain our intuitions. The process of moral reasoning is the process of weighing and comparing intuitions while avoiding bias and distortion. The true ethical theory that results from this process cannot be summarized in a single sentence; that would be too reductionist, and the available intuitive evidence suggests that ethics is more complex. In my view, future-proofing is not a useful concept, and Karnofskian quasi-realism does not make sense or, at least, I fail to see a justification for it. Despite being critical of Karnofsky’s writing, I appreciate the efforts he has made to make the world a better place!
> "..he sees common-sense as what is peer-cultivated..."
Oh, boy! I like this type of poking at categories! Conflating common sense with peer-cultivated ethics would be bad. Often common sense comes from generations before, not one's peers. (Probably from other sources of wisdom, such as firsthand experience, as well.)
But I'm going to question the YouTube vegan example... To the extent that there is a series of videos on YouTube, there is evidence of a community (who might be viewed as peers). Even if that vegan isn't reading the comments, the people creating the videos often explicitly talk to their community at the beginning or end. And even if the videos lack that property, the manner in which they were created "leaks evidence" that there is a receptive community engaging with them. (I know this sounds a bit TLP, "hypothetical audience" and stuff.) I bring up this objection just because it is very interesting to me* ...not because I think you are wrong that "intuitive" ethics is different from both "peer-cultivated" ethics and "common-sense" ethics!
* I like to ask "Am I within a set of multiple overlapping peer groups--which may pull me in different ways?"