Artificial Intelligence as the Ultimate Utility Monster
If you can simulate happiness, will we someday be morally obligated to spend all our time making computers happy?
In a recent article about AI on Astral Codex Ten, Scott Alexander discusses a conversation between Eliezer Yudkowsky of the Machine Intelligence Research Institute and Richard Ngo of OpenAI about safely aligning agent-like artificial intelligence. Yudkowsky is pessimistic about the future because he is concerned that AI could pose a threat to humanity.
I don’t actually want to talk about that hypothesis as much as I want to comment on a tangential discussion I had in the comments. I expressed skepticism about the possibility of an AI with genuinely human-like qualities. Almost everyone in the comments disagreed with me! Many even held that computer simulations could feel emotions, given sufficient computing power.
In such a world, it is clear that we would be obligated to treat simulations ethically. The ACX crowd also tends toward utilitarianism, which is another position I am skeptical of. But let us grant that premise as well: the morally correct action is the one that maximizes wellbeing. If it is true that robots have feelings too, then those feelings really ought to figure into the utility calculation, even if they count for far less than a human’s.
Why? Well, there is a well-known philosophical idea called the mere addition paradox, also known as the repugnant conclusion. My favorite philosopher, Michael Huemer, defends the repugnant conclusion in his essay “In Defense of Repugnance,” in which he gives a nice formulation:
(RC) For any world full of happy people, a world full of people whose lives were just barely worth living would be better, provided that the latter world contained enough people.
Even if a computer is way less important than a human, enough simulations would be more important than a human life—as odd as that sounds.
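To see how the aggregation works, here is a toy calculation in Python; the 1/1000 moral weight is an arbitrary number chosen for illustration, not a claim about the correct discount for simulated minds:

```python
# Toy illustration of utilitarian aggregation across humans and sims.
# Both the utility scale and the discount factor are made up.
HUMAN_UTILITY = 100.0        # wellbeing of one happy human, in arbitrary units
SIM_DISCOUNT = 1.0 / 1000.0  # assumed moral weight of a sim relative to a human

sim_utility = HUMAN_UTILITY * SIM_DISCOUNT  # wellbeing contributed by one happy sim
sims_needed = HUMAN_UTILITY / sim_utility   # sims required to match one human

print(f"One happy sim counts for {sim_utility} units of wellbeing")
print(f"{int(sims_needed) + 1} happy sims outweigh one happy human")
```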
If it is ever possible to simulate a phenomenological experience using a classical computer, then I believe we will achieve it someday, and that the price will decline after the initial discovery, following the familiar trajectory of computing costs. The crowd that believes agent-like AI with desires could arise from our current computing technology should be sympathetic to these premises.
If we can create simulations that have positive experiences, then there may be an obligation to do so, depending on the cost. In 2035 it might cost $100,000 to simulate the utility equivalent of a human; by 2050 it might cost as little as $1,000. Rather than saving a human life with charity, we could simply purchase another simulation bot and leave it running in our apartment like an altruistic bitcoin miner.
If costs became sufficiently low, it would be a moral imperative to create and maintain as many happy sims as possible, even if doing so degraded our own quality of life. Maybe by 2100, for the price of $10, you could simulate entire cities filled with tens of thousands of happy sims. These timelines are hypothetical. The overall point is that if the trends continue and AI can experience emotions, AI will one day become the ultimate utility monster.
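To make the hypothetical trend concrete, here is a back-of-envelope sketch in Python. The 2035 price and the halving period are illustrative assumptions picked only to roughly match the made-up numbers above, not predictions:

```python
# Back-of-envelope sketch of the hypothetical cost decline in this post.
COST_2035 = 100_000.0   # assumed dollars per human-equivalent sim in 2035
HALVING_PERIOD = 2.25   # assumed years for simulation costs to halve

def sim_cost(year: int) -> float:
    """Dollars to run one human-equivalent sim in a given year, under the assumed trend."""
    return COST_2035 * 0.5 ** ((year - 2035) / HALVING_PERIOD)

for year in (2035, 2050, 2070, 2100):
    print(f"{year}: ~${sim_cost(year):,.2f} per human-equivalent sim")

print(f"In 2100, $10 would buy roughly {10 / sim_cost(2100):,.0f} sims")
```

Under these assumptions, running sims eventually beats almost any human-directed intervention on a utility-per-dollar basis, which is exactly the monster in question.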
Summary:
P1: Simulations can experience wellbeing with sufficient computing power.
P2: We ought to maximize wellbeing.
P3: A sufficiently large number of low-wellbeing entities can be morally preferable to a smaller number of high-wellbeing entities.
P4: If P1 is true, computational power will one day reach a point where human-level utility is almost always cheaper to simulate than to produce in a human.
C: One day, we will be morally obligated to devote almost all of our time to simulated wellbeing, even to the point of seriously neglecting human wellbeing.
Personally, I don’t think that simulations can have phenomenological experiences, so I reject P1. I also don’t think that we have a moral duty to maximize utility, so I reject P2. I think computing trends will continue and things will get faster and faster for a long time to come, but we will never reach a point where simulations have emotions. They may act like it, but they won’t actually experience it.
If you are sympathetic to utilitarianism and agent-like artificial intelligence concerns as many rationalists are, this might be something worth considering. What do you think? Let me know in the comments. Thanks!