An Anthropic Argument for Person-Affecting Longtermism
If people have multiple existences across time, an adherent to person-affecting utilitarianism should probably still be a longtermist.
Longtermism is a popular philosophical belief among effective altruists, due in large part to philosopher William MacAskill’s book What We Owe The Future (2022). Simply put, longtermism is the belief that future people have moral worth and that their interests should be strongly considered, especially given that our future descendants could be highly numerous. The growing adoption of longtermist priorities has created something of a rift among effective altruists, with some prioritizing existential risk reduction while others prefer to focus on more immediate interventions like preventing premature deaths from malaria. How much we should prioritize currently existing sentient beings relative to future ones is extraordinarily important for organizations intent on maximizing welfare.
A case can be made that utilitarians should treat future people as an overwhelming moral priority because they have the potential to be incredibly numerous. In his article “Astronomical Waste,” Bostrom argues that the utilitarian imperative of “Maximize expected aggregate utility!” can be reduced to “Minimize existential risk!” His argument rests on the potential number of lives lost from delays in colonizing our local supercluster, which he estimates at roughly 10^29 lives per second of delay. The precise number is not particularly important: if the potential lives lost are remotely close to it, global extinction events are so harmful that they should probably be an overwhelming moral priority.
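To get a feel for the scale involved, Bostrom's per-second figure can be converted to a per-century figure with simple arithmetic. The 10^29 estimate is from the article; everything else below is just unit conversion.

```python
# Sanity check on the scale of Bostrom's "Astronomical Waste" estimate.
# Only the 1e29 lives-per-second figure comes from the article; the rest
# is plain unit conversion.
LIVES_LOST_PER_SECOND = 1e29  # Bostrom's estimate for delayed colonization

SECONDS_PER_CENTURY = 60 * 60 * 24 * 365.25 * 100  # ~3.16e9 seconds
lives_lost_per_century = LIVES_LOST_PER_SECOND * SECONDS_PER_CENTURY
print(f"{lives_lost_per_century:.1e}")  # prints 3.2e+38
```

On this arithmetic, each century of delay forfeits on the order of 10^38 potential lives, which is why the precise figure matters far less than its rough magnitude.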
This radical longtermist conclusion can be averted by embracing the person-affecting view, which is the idea that possible people do not matter, only already existing people. This view has many repugnant implications. One example, from Savulescu (2001), is that a mother infected with rubella would have no moral reason to postpone conception for a few months until the infection passes, even though waiting would avoid having a blind and deaf child. For those unmoved by these sorts of implications, Bostrom points out that potential technological advances could extend lifetimes considerably, preserving the far future as a priority even for adherents of person-affecting views.
Another counter I propose is that longtermism remains a moral priority if people have multiple existences across time and humanity’s future is incredibly long. If there is reincarnation, people existing now are some of the same people who will be affected by future events. Although this sounds farfetched, even a tiny probability applied to such an astronomically large amount of welfare must be taken seriously by utilitarians concerned with maximizing expected welfare, and it may come to dominate other concerns.
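The expected-value logic here can be made concrete with a toy calculation. All three numbers below are illustrative assumptions of mine, not estimates from this article or any source: a minimal sketch of how a tiny credence times an astronomical stake can swamp a large but finite present-day stake.

```python
# Toy expected-welfare comparison. Every number is an illustrative
# assumption, chosen only to show the structure of the argument.
p_reincarnation = 1e-12          # tiny credence in multiple existences
future_welfare_at_stake = 1e38   # Bostrom-scale far-future welfare units
present_welfare_at_stake = 1e10  # welfare units for currently living people

# Expected far-future welfare that matters even on a person-affecting view,
# conditional on the reincarnation hypothesis.
ev_future = p_reincarnation * future_welfare_at_stake
print(ev_future > present_welfare_at_stake)  # prints True
```

With these (made-up) numbers the far-future term is 10^26, sixteen orders of magnitude larger than the present-day term, despite the one-in-a-trillion credence.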
Taking small-probability events with large payoffs seriously is a well-known problem for utilitarianism called fanaticism. Related issues for expected utility maximization include Pascal’s mugging and the puzzles of infinite ethics. This argument will not rely on embracing supernatural beings or infinite payoffs. It will rely on the assumption that a person at a different place and time in the future is still a “person” in the person-affecting sense and that intermittent periods of unconsciousness or non-existence do not discount that person’s interests.
In an earlier newsletter on anthropic reasoning, I provided some articles that argue we have some reason to suspect multiple existences, given some possible anthropic assumptions, namely “Existence Is Evidence of Immortality” (2021) by Michael Huemer and “Immortal Beauty: Does Existence Confirm Reincarnation?” (2021) by Jens Jäger. Huemer argues that given time is infinite and you exist now, the probability of repeated existence is one, whereas the probability of a single existence is zero. Although skeptical of the conclusion of multiple human existences, Jäger argues that if answering one-third is correct in the Sleeping Beauty problem, then assigning probability zero to being incarnated only once is correct when that hypothesis is compared with being incarnated infinitely many times. See Newsletter #003: Anthropic Reasoning for more details.
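The structure of this thirder-style reasoning can be sketched numerically. The sketch below is my own construction, not Jäger's formalism: it weights each hypothesis by the number of incarnations it predicts (the move that yields one-third in Sleeping Beauty) and shows the posterior for a single incarnation vanishing as the rival hypothesis posits more incarnations.

```python
# Illustrative sketch (my construction, not Jäger's own formalism) of
# thirder-style updating: weight each hypothesis by the number of
# incarnations it predicts, then renormalize.
def posterior_once(prior_once: float, prior_many: float, n: int) -> float:
    """Posterior probability of 'incarnated once' versus 'incarnated n times'."""
    w_once = prior_once * 1   # one incarnation under the first hypothesis
    w_many = prior_many * n   # n incarnations under the rival hypothesis
    return w_once / (w_once + w_many)

for n in (2, 1_000, 10**9):
    print(n, posterior_once(0.5, 0.5, n))
```

With n = 2 and equal priors this recovers the familiar one-third answer to Sleeping Beauty, and as n grows the single-incarnation posterior tends toward zero, mirroring Jäger's limiting claim.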
Once again, the argument should work even if we assign a very low probability to multiple existences. This anthropic argument cannot be wholly dismissed just because it is weird. Many possible anthropic assumptions have been proposed, and many are very weird. The self-indication assumption (SIA) points toward reincarnation and the presumptuous philosopher. One of the more popular anthropic assumptions—the self-sampling assumption (SSA)—points toward telekinesis and the possibility of an impending doomsday. Other assumptions are even weirder. This is not to say that no simple solution exists that is free of strange or bizarre implications; it is to say that you should not dismiss the multiple-existences argument merely because of its strangeness.
If currently existing people existed previously, then they should matter from the person-affecting view. In this case, future welfare may be much more important than present welfare because there is no relevant moral distinction between currently incarnated people and future incarnations. We would want to concern ourselves strongly with making future people comfortable and happy. Although this argument is likely unpersuasive, it felt worth sharing. Even if it fails, our ethical conclusions will depend on our anthropic assumptions, which makes anthropic moral reasoning worthwhile as a subdiscipline of ethics. Since the implications of our anthropic assumptions concern all living beings, different assumptions may create radically different obligations toward future and currently existing people.