Contra Alexander on Moral Offsetting
Utilitarianism orders all actions by utility, with no regard for blame, obligation, or supererogation. This clashes with moral intuitions, so offsetting is introduced, but offsetting creates further issues.
I. Moral Offsetting
I recently learned about the concept of moral offsetting from the Substack Astral Codex Ten, in a post entitled Moral Costs Of Chicken Vs. Beef. Apparently, the author, Scott Alexander, has spoken about this before here and here. Scott explains the concept:
Offsetting is where you compensate for a bad thing by doing a good thing, then consider yourself even. For example, an environmentalist takes a carbon-belching plane flight, then pays to clean up the same amount of carbon she released.
The immediate “problem” that comes to mind is that you could offset really heinous things. Scott explains:
Current estimates suggest that $3340 worth of donations to global health causes saves, on average, one life.
Let us be excruciatingly cautious and include a two-order-of-magnitude margin of error. At $334,000, we are super duper sure we are saving at least one life.
So. Say I’m a millionaire with a spare $334,000, and there’s a guy I really don’t like…
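Scott's two-order-of-magnitude margin can be written out as a line of arithmetic. A minimal sketch (the figures come from his quoted post; the variable names are mine):

```python
# Estimated cost to save one life via global health donations (Scott's figure).
cost_per_life = 3340

# A two-order-of-magnitude margin of error means multiplying by 100.
margin_factor = 100

conservative_cost = cost_per_life * margin_factor
print(conservative_cost)  # 334000
```

This is just the multiplication behind the "$334,000, we are super duper sure" figure in the quote above.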
Utilitarians will describe this sort of thing as "troublesome," something that needs to be solved. But if you believe in both the offsetting theory and utilitarianism, this is simply the logical conclusion. I believe utilitarians too frequently create workarounds to avoid believing or doing anything that would typically be recognized as absurd or that violates their intuitions.
II. Attempting to Resolve the Issue with Axiology
Here is Scott Alexander’s attempt at resolving this issue from his article Axiology, Morality, Law:
Axiology is the study of what’s good. If you want to get all reductive, think of it as comparing the values of world-states. A world-state where everybody is happy seems better than a world-state where everybody is sad. A world-state with lots of beautiful art is better than a world-state containing only featureless concrete cubes. Maybe some people think a world-state full of people living in harmony with nature is better than a world-state full of gleaming domed cities, and other people believe the opposite; when they debate the point, they’re debating axiology.
Morality is the study of what the right thing to do is. If someone says “don’t murder”, they’re making a moral commandment. If someone says “Pirating music is wrong”, they’re making a moral claim. Maybe some people believe you should pull the lever on the trolley problem, and other people believe you shouldn’t; when they debate the point, they’re debating morality.
Utilitarians believe that the correct moral choice is the one that maximizes utility. The Stanford Encyclopedia of Philosophy agrees with my interpretation: “utilitarianism is generally held to be the view that the morally right action is the action that produces the most good.” He continues:
Law is – oh, come on, you know this one. If someone says “Don’t go above the speed limit, there’s a cop car behind that corner”, that’s law. If someone says “my state doesn’t allow recreational marijuana, but it will next year”, that’s law too. Maybe some people believe that zoning restrictions should ban skyscrapers in historic areas, and other people believe they shouldn’t; when they debate the point, they’re debating law.
These three concepts are pretty similar; they’re all about some vague sense of what is or isn’t desirable. But most societies stop short of making them exactly the same. Only the purest act-utilitarianesque consequentialists say that axiology exactly equals morality, and I’m not sure there is anybody quite that pure. And only the harshest of Puritans try to legislate the state law to be exactly identical to the moral one. To bridge the whole distance – to directly connect axiology to law and make it illegal to do anything other than the most utility-maximizing action at any given time – is such a mind-bogglingly bad idea that I don’t think anyone’s even considered it in all of human history.
Under utilitarian thinking, the axiologically “best” society has to be exactly identical to the most moral society. If you introduce other considerations into a choice, or if you want to create laws that don’t maximize utility, then you are no longer doing utilitarian moral calculation.
The only way something could be a “bad idea” under a utilitarian framework is if it does not actually maximize utility, in which case utilitarian law has failed at its goal and should be modified until it does. As a utilitarian, you cannot reject the idea that we should create laws that maximize utility. You can perhaps reject Scott’s formulation of punishing people for failing to do so, but only for reasons like the disutility of punishment or the disutility of bad incentives.
You can object that you don’t always have to maximize utility, but there is no good utilitarian reason for thinking this. Utilitarians end up drawing a line between the supererogatory and the obligatory on the scale of good and bad actions, despite there being no justifiable place to draw it.
I think utilitarians recognize this inability to justify not maximizing utility, and they feel blameworthy. They know that the preference ordering of actions is (don’t do bad action + make offsetting charitable contribution) > (don’t do bad action) = (do bad action + offset) > (do bad action). They feel guilty about being kind of bad and better about being not so bad, but they won’t go so far as to do the best they can.
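That preference ordering can be made concrete with hypothetical utility numbers. A minimal sketch, where the specific values are purely illustrative assumptions (only their ordering matters):

```python
# Hypothetical utilities for illustration; no utilitarian is committed to these numbers.
BAD_ACTION = -10   # disutility of the bad action
OFFSET = 10        # utility of the offsetting charitable contribution

options = {
    "don't do bad action + offset donation": 0 + OFFSET,
    "don't do bad action": 0,
    "do bad action + offset": BAD_ACTION + OFFSET,
    "do bad action": BAD_ACTION,
}

# Rank the options from best to worst by total utility.
for name, utility in sorted(options.items(), key=lambda kv: -kv[1]):
    print(utility, name)
```

On any scale where the offset exactly cancels the bad action, the middle two options tie at zero, which is the equality in the ordering above: offsetting gets you back to neutral, but never past the person who abstains and donates anyway.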
I believe what might be going on is that the intuition not to cause harm creates a feeling of blameworthiness and guilt. The offset fixes this. Doing more than offsetting feels like extra credit, even though doing the most you can is obligatory, so the feelings of guilt don’t arise. I think moral offsetting reflects feelings of guilt more than any coherent moral thinking. Perhaps it is a pragmatic tool to shame people into being a bit less bad, but if that is the case, then the moral justification is still just utilitarian in nature. No need for this separate concept of “morality.”
III. The Abstract and The Pragmatic
These concepts stay separate because they each make different compromises between goodness, implementation, and coordination.
One example: axiology can’t distinguish between murdering your annoying neighbor vs. not donating money to save a child dying of parasitic worms in Uganda. To axiology, they’re both just one life snuffed out of the world before its time. If you forced it to draw some distinction, it would probably decide that saving the child dying of parasitic worms was more important, since they have a longer potential future lifespan.
In utilitarianism, everything maps onto a utility scale. If, incorporating second-order effects, the child’s net lost utility is greater than your annoying neighbor’s, then saving the child is more important in utilitarianism.
But morality absolutely draws this distinction: it says not-murdering is obligatory, but donating money to Uganda is supererogatory. Even utilitarians who deny this distinction in principle will use it in everyday life: if their friend was considering not donating money, they would be a little upset; if their friend was considering murder, they would be horrified. If they themselves forgot to donate money, they’d feel a little bad; if they committed murder in the heat of passion, they’d feel awful.
Scott recognizes the intuition that murder and letting die are very different morally. But how did he come to know this? I believe Scott is using ethical intuitionism to lay out a form of common-sense morality, as I do. What he says about utilitarians’ practice differing from their principles is totally true. I think it’s inconsistency. I think if we started discussing an abstract problem, they would switch back into utilitarian-calculation mode.
If I asked a utilitarian about the trolley problem, he would probably say that of course we should switch to save the five lives even if it kills the one. The reason is that we maximize utility by saving lives, and the distinction between killing and letting die is not morally relevant. This is the correct position for a utilitarian, but it leads to wacky results when generalized, so there seems to be some dancing around those results.
If you press him and say “Okay, a life can be saved for $3340 and I don’t donate $3340. Am I morally equivalent to a killer?”, you get a really long essay about the difference between axiology, morality, and legality, and the suggestion that almost no consequentialist would be so pure as to believe we should actually behave in this utility-maximizing way, which is the implication of making morality equivalent to axiology.
If I said “Should we have organ donation, income redistribution, and an end to the War in Afghanistan?”, the utilitarian would likely say yes. If I objected that they are ignoring the ethics of bodily purity, natural rights, and national pride, they would probably say that the only thing the law should concern itself with is the well-being of the people of the world. If I then say “Okay, so law should maximize well-being and disregard all other considerations and moral foundations, including natural rights?”, it seems the utilitarian might say “that’s not just a really bad idea, that’s inconceivably mind-bogglingly bad” (unless he just means that it would fail, or that punishment would make it not actually utility-maximizing, or something more nuanced).
What’s going on here? If not 100%, what percent of the time should we be making people behave in a utility maximizing way and why? How often should we ourselves be acting in a utility maximizing way?
Further on in the essay:
So fundamentally, what is the difference between axiology, morality, and law?
Axiology is just our beliefs about what is good. If you defy axiology, you make the world worse.
When can utilitarians make the world worse in their decisions? Never. That is unethical in their framework.
At least from a rule-utilitarianesque perspective, morality is an attempt to triage the infinite demands of axiology, in order to make them implementable by specific people living in specific communities.
You should not triage the demands of axiology. What you are doing is adopting a common-sense approach to morality and then using it to ease your tension about failing to maximize utility. Where do you set the obligation, and why does reducing the obligation make it more implementable?
It also admits that it’s important that everyone living in a community is on at least kind of the same page morally, both in order to create social pressure to follow the rules, and in order to build the social trust that allows the community to keep functioning. If you defy morality, you still make the world worse. And you feel guilty. And you betray the social trust that lets your community function smoothly. And you get ostracized as a bad person.
Perhaps the correct approach would be to maintain the infinite obligation and inflict as much social pressure and ostracism as possible on people. My question is: “Do you actually believe you don’t have an infinite obligation, or are you only acting like you don’t so people will comply?” Why not just say “You should give all your money away, but 10% is better than nothing”? This tension between morality and axiology does not seem coherent to me.
What even are these unmentioned laws of morality? “If you defy morality, you still make the world worse”? Doesn’t this make morality = axiology again? I’m confused. “And you feel guilty.” People feel guilty for the wrong reasons, like being homosexual. “And you betray the social trust that lets your community function smoothly.” Making sure your Afghan bride keeps wearing her burqa lets the community function smoothly but is abhorrent. Is this idea of morality society-specific? I don’t understand what’s going on here.
IV. Scott’s Rule: You Can Offset Axiology, but Not Morality
With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality.
Emitting carbon doesn’t violate any moral law at all (in the stricter sense of morality used above). It does make the world a worse place. But there’s no unspoken social agreement not to do it, it doesn’t violate any codes, nobody’s going to lose trust in you because of it, you’re not making the community any less cohesive. If you make the world a worse place, it’s perfectly fine to compensate by making the world a better place. So pay to clean up some carbon, or donate to help children in Uganda with parasitic worms, or whatever.
Eating meat doesn’t violate any moral laws either. Again, it makes the world a worse place. But there aren’t any bonds of trust between humans and animals, nobody’s expecting you not to eat meat, there aren’t any written or unwritten codes saying you shouldn’t. So eat the meat and offset it by making the world better in some other way.
Once again, a utilitarian is supposed to maximize utility in every choice, because for a true utilitarian, axiology is morality. It’s not clear that there is a coherent separate definition of morality. Many people think eating meat does violate moral laws, and they have plenty of reasonable grounds for thinking this (like animals suffering for your taste buds’ sake). Let’s try applying this logic to a different concept in a backward society: “But there aren’t any bonds of trust between [men] and [women], nobody’s expecting you not to [abuse your spouse], there aren’t any written or unwritten codes saying you shouldn’t.” Could someone in this society offset beating their spouse by donating to domestic-abuse-prevention charities? Again, is your morality society-specific?
Murdering someone does violate a moral law. The problem with murder isn’t just that it creates a world in which one extra person is dead. If that’s all we cared about, murdering would be no worse than failing to donate money to cure tropical diseases, which also kills people.
Scott is drawing a sphere around what is moral and not moral, regardless of utility, and then saying that we cannot offset the things inside it. There is no utilitarian justification for where to draw this line, because you can’t justify it on the basis of anything but utility. It seems that other ethical intuitions are being incorporated into ethical decision-making, which utilitarianism does not allow.
This is more precise than Askell’s claim that we can offset “trivial immoral actions” but not “more serious” ones.
The line is arbitrary from a utilitarian point of view; there is no line. All actions, including their secondary and tertiary effects, map onto a utility scale. We then order the choices, and we must pick the highest-ranked one. That is what we should do under utilitarianism.
If you believe there is some other idea of morality, then how is it determined? Again, it’s somewhere nested in this:
At least from a rule-utilitarianesque perspective, morality is an attempt to triage the infinite demands of axiology, in order to make them implementable by specific people living in specific communities. It makes assumptions like “people have limited ability to predict the outcome of their actions”, “people are only going to do a certain amount and then get tired”, and “people do better with bright-line rules than with vague gradients of goodness”. It also admits that it’s important that everyone living in a community is on at least kind of the same page morally, both in order to create social pressure to follow the rules, and in order to build the social trust that allows the community to keep functioning. If you defy morality, you still make the world worse. And you feel guilty. And you betray the social trust that lets your community function smoothly. And you get ostracized as a bad person.
Emphasis is mine. Isn’t the whole point of this essay that defying morality and making the world worse are different things? Is morality social cohesion and smooth functioning? Telling your wife she doesn’t have to wear her burqa makes a ruckus, causes things not to function smoothly, and could get both of you ostracized. Is morality what makes you feel guilty? Some people feel guilt for being homosexual. Is there value to “societal function” outside of utility?
V. Conclusion
Here is what I think is going on: people have a common-sense perception of morality that includes intuitions like “murdering and letting die are not equivalent.” Utilitarianism is an appealing moral system that rationalists like. However, when pressed, rationalists see that many of the logical conclusions of utilitarianism are absurd or morally revolting. Other conclusions indicate that the rationalists are committing massive moral failings by not donating all their income, and that this failing is equivalent to mass murder. So they bundle their intuitions and call the bundle “morality.” Morality is then used as a pragmatic tool for social cohesion and for triaging the demands of utilitarian axiology. What the moral principles and rules actually are is left unsaid. But morality says they don’t have to give all their income away, and morality says they’re not equivalent to murderers. The guilt and anxiety are partially relieved, though not entirely.
They want to feel better about their smaller moral transgressions, like eating meat and polluting, so they use offsetting to reduce the feeling of moral blameworthiness. Is eating animals more like murder or like carbon emissions? They will say it’s more like carbon emissions, with a justification about social bonds and expectations that would be wildly insufficient for other transgressions. The only way this could work under utilitarianism is if they assigned a utility measure to “social bonds” and factored it in. Some argue that anything that seems really bad is “serious” and everything else is “trivial,” although trivially bad and seriously bad are just points on a scale with no lines separating them. In utilitarianism, everything sits on one utility scale, so you cannot make the categorical distinctions that allow for blame, praise, obligation, and supererogation, no matter how much your intuitions and feelings of guilt tell you that you need them. Abandon utilitarianism and accept these intuitions: become an ethical intuitionist.
Questions for Utilitarians:
I am deciding whether I should do moral choice A or B. A is utility-maximizing and B is not, all things considered. Under what circumstances is B the correct choice?
What is a moral consideration that overrides utility maximization?
If you believe in this idea of morality, can you explain how you come to know moral facts, or the nature of this morality? For example, why is it worse to kill one person than to fail to save two, even absent moral-hazard issues?
Only skimmed it, but I found it interesting: https://journals.openedition.org/etudes-benthamiennes/192
Once you accept that we're evolved creatures in an unplanned universe, I think it becomes obvious that looking for logically "correct" moral foundations is unlikely to bear fruit - there's no reason to think the universe is fair in any sense (other than maybe thermodynamically).
So that leaves us with utilitarianism. You have to start from there and work backward toward a "simple-enough" theory (maybe including rights and duties, totally made-up ones that lead to functional results) that human brains can handle. It's never going to be perfect or "correct", nor should we expect it to be. "Pretty good, works pretty well mostly, doesn't produce too many absurd-seeming outcomes" is probably the most we can hope for.
Bentham, Mill, and their successors have taken a stab at it, but I don't know many who try who will concede that it's probably impossible in principle and that "good enough" may have to do.
Um. If you take utilitarianism and logically AND it with a notion of "rights", maybe you're getting somewhere. It's not simple.