5 Comments

Hi. My view on Bostrom’s propositions is that (1) is true. Maybe I’m a pessimist, but I see a history of rising and falling civilizations, and I see cracks in the foundation of ours that don’t give me confidence in an infinite future.

As you know from ACX comments, I am a religious guy (though I like what rationalism has to offer). Your point 6 is analogous to the problem of evil for theists, and I don’t agree with it for the same reason I don’t think the Problem of Evil disproves God. What if the maximal utility for simulated beings is achieved by giving them challenges to overcome for the sake of personal agency and growth? Otherwise you have a simulated, boring Garden of Eden with zero impetus for personal or civilizational development.

Your blog gives a lot of fun food for thought. I’ll look through it some more.


In philosophy, there is a dispute about whether you must act on a moral truth once you have discovered it, i.e. whether moral truths are intrinsically motivating. (You phrase it somewhat weakly as "...should have some motivation to act morally...")

Intrinsic motivation is an assumption in addition to moral realism, and it's not clear that rationalists generally believe it. Without intrinsic motivation, nothing much follows from utilitarianism.


Hi. You seem to have made a mistake in how you use the word "objective". Eliezer Yudkowsky quite clearly states that he doesn't believe in arguments that are compelling to all possible minds.

https://www.readthesequences.com/No-Universally-Compelling-Arguments

"What is 2+2?" is an objective question (at least once 2 and + are defined). If you ask a superintelligence this question, you can be sure they will know. You can't be sure they will answer correctly, they might lie. They might ignore you. They might tell you about something else.

But if put in an exam and properly incentivised, the AI would pass.

"What is the morally best action?" is also objective, once you write a long formal definition of morality. If you put the AI in an exam and ask it morality questions and reward it for passing, it will pass.

This doesn't mean the AI will choose to act morally.

Arithmetic is objective; that doesn't make every circuit a calculator.
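
A minimal sketch of that last point (the function names and the second "circuit" are purely illustrative, not anything from the post): arithmetic fixes what the correct answer to 2+2 is, but nothing about that fact forces any particular circuit to compute it, let alone act on it.

```python
# Illustrative only: the objective truth "2 + 2 = 4" doesn't constrain
# what an arbitrary circuit actually computes.

def calculator(a: int, b: int) -> int:
    """A circuit that happens to implement addition."""
    return a + b

def some_other_circuit(a: int, b: int) -> int:
    """A perfectly valid circuit that is not a calculator."""
    return a * b - 1

if __name__ == "__main__":
    print(calculator(2, 2))          # 4 -- this circuit tracks the objective answer
    print(some_other_circuit(2, 2))  # 3 -- this one computes something else entirely
```

Both circuits run just fine; only one of them is a calculator, and the objectivity of arithmetic is silent on which one you build.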
