Utilitarian Action Coupling
Coupling supererogatory and obligatory actions and rank-ordering the couplings leads to counter-intuitive conclusions in a utilitarian framework
I reject utilitarianism. I believe we have a variety of moral intuitions, and maximizing utility conflicts with so many of them, in many cases so strongly, that it is not a plausible ethical system. You cannot reduce morality to single-sentence statements. To demonstrate this, there is no better way than to give counter-intuitive examples of utilitarianism consistently applied.
One way of doing this that I conceived of is "Utilitarian Action Coupling." I imagine someone else has thought of this before, but I am taking credit for it even if I was not the first. For simplicity's sake, imagine we weight all human lives equally in the utilitarian calculus.
I am going to couple two groups of actions together. The first coupling is killing an innocent person whom I hate and donating enough money to an effective charity to prevent two deaths. The second is not killing anyone and not donating the money. In a utilitarian calculus, I should pick the first coupling. However, this feels wrong. I think the reason is an asymmetry between the obligatory and the supererogatory: it is very bad to kill someone, and saving someone is good, but not quite as good as killing someone is bad.
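To make the arithmetic behind this ranking explicit, here is a minimal sketch in Python, assuming (as above) that all lives are weighted equally, so each death caused counts as one unit of negative utility and each death prevented as one unit of positive utility. The function and variable names are mine, for illustration only.

    def utility(deaths_caused, deaths_prevented):
        # Net utility in lives, under the equal-weighting assumption:
        # each death caused counts -1, each death prevented counts +1.
        return deaths_prevented - deaths_caused

    kill_and_donate = utility(deaths_caused=1, deaths_prevented=2)  # net +1
    do_neither = utility(deaths_caused=0, deaths_prevented=0)       # net 0

    # A strict act-utilitarian ranking prefers the higher net utility,
    # so it ranks the first coupling above doing nothing.
    assert kill_and_donate > do_neither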
Utilitarians often like to appeal to moral hazard. I usually see this as an ex post facto rationalization of the strange feeling they have about a consistent application of their own principle. No disrespect intended. However, let us entertain the argument. Imagine that everyone commits a murder and saves two lives, until there is no one left to save; the last person would be unable to commit a murder because there would not be enough people left to save. We would be left in a world in which no one unnecessarily dies. Why is that outcome so bad that moral hazard should rule the coupling out?
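To see what this end state looks like in numbers, here is a toy simulation under loudly artificial assumptions of my own: a fixed pool of otherwise-preventable deaths, each person's coupling committing one murder and preventing two of those deaths, and coupling stopping once fewer than two preventable deaths remain. The pool size and the names are mine, purely for illustration.

    # Toy model of the "everyone couples" scenario, under made-up assumptions.
    PREVENTABLE = 100  # illustrative number of otherwise-preventable deaths

    preventable = PREVENTABLE
    murders = 0
    while preventable >= 2:  # the last would-be coupler has no one left to save
        murders += 1         # each coupling commits one murder...
        preventable -= 2     # ...and prevents two preventable deaths

    baseline_deaths = PREVENTABLE           # everyone does nothing
    coupled_deaths = murders + preventable  # murders plus unprevented deaths

    print(baseline_deaths, coupled_deaths)  # 100 vs 50

Under these assumptions, each coupling trades two preventable deaths for one murder, so total deaths fall by one per coupling, and the coupled world ends with half the deaths of the do-nothing world.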
I cannot see a good solution to this problem other than to accept that certain seemingly unsavory clusterings of actions are preferable to more normal clusterings, such as doing nothing.
I think many utilitarians would agree to kill the person and donate the money, rather than doing neither, given the premise that they had to choose between only these two options; this example is isomorphic to the trolley problem. That the premise is true (that they cannot donate the money without killing the person) is more questionable, but if we accept it as a given, then someone who supports act-utilitarianism rather than rule-utilitarianism would have to accept the conclusion.
(As far as I can tell, much modern rule-utilitarianism (what https://archive.today/2017.04.05-083200/https://plato.stanford.edu/entries/consequentialism-rule/#FulVerParRulCon calls "partial rule-consequentialism"; its "full rule-consequentialism" is not utilitarian) is based on the assumption that a person cannot be an optimal act-utilitarian: their thinking goes wrong often enough that, in some classes of cases, they would make worse decisions by trying to do the optimal thing than by following some rule, so the best choice they can make is to follow the rule all the time instead; https://archive.today/2017.04.01-223647/http://lesswrong.com/lw/v1/ethical_injunctions/ gives a fuller version of this argument. The "moral hazard" argument is basically a special case of this problem.)