Links for January 2022
I am going to start doing a links post toward the end of each month: a collection of things that I find interesting. Let me know if you like it.
1. Why I Am Not a Utilitarian by Michael Huemer. My favorite philosopher explains why he isn’t a fan of utilitarianism. Critiquing utilitarianism is one of my favorite topics, and Huemer is my favorite philosopher, so this was very exciting to see. I’ll likely draw on it in future posts. Some of the arguments I had thought of before, and some were new. Here is an interesting one:
b. Maybe non-utilitarian intuitions are approximations to utilitarian results in normal circumstances.
I’ve heard something like this suggestion. I guess (?) the idea is that maybe on some deep level, we’re really utilitarians, and we have the intuitions cited in section 1 because those sorts of intuitions usually result in maximizing utility, in normal circumstances (e.g., usually killing healthy patients lowers total utility). We just get confused when someone describes a weird case in which the thing that usually lowers utility would raise it.
Responses:
i) Why is it more plausible to say we are subconscious utilitarians who easily get confused than to say that we are subconscious deontologists who don’t get so easily confused?
ii) Also, why is this more plausible than the ethical egoist’s hypothesis that we are really egoists deep down, and that our altruistic intuitions result from the fact that helping other people usually, in normal circumstances, redounds to your own benefit? Then we just get confused when someone raises an unusual case in which the thing that would normally help you doesn’t?
2. Did Daddy write about IQ again yet?: According to Razib Khan’s blog statistics, writing about IQ might be a good way to draw attention to your blog. Razib writes: “The response suggests to me how rare it is for many readers to find anyone addressing the most basic history of IQ and psychometrics, imperial exams in China, etc. When it comes to education or intelligence, our mainstream discourse has veered so sharply away from anything factual lately; but perhaps my experience will convince others who crave traffic spikes as much as my daughter does:…readers are actually hungry for fact-based coverage.” Maybe this is more relevant for me than for you. In other words, I should write another article about IQ. Here is the first one: Why Intelligence Matters.
3. Decoupling as a Moral Decision: I’ve started reading a Substack called atis, whose author has interesting things to say about topics that interest me but seems to take a more progressive stance. The whole Sam Harris–Charles Murray–Ezra Klein controversy was very fascinating to me. This post discusses decoupling: separating a scientific claim from its historical and societal context in order to evaluate it on its own, versus insisting on that context. Was Ezra Klein intentionally refusing to decouple?
4. How to make your own SARS-CoV-2 variants: From Metacelsus: “Are you an evil genius? Mad scientist? Or maybe you just read my Deltacron post and want to interfere with my predictions? Well, good news for you! Some researchers last year published a highly detailed, step-by-step guide to engineering SARS-CoV-2.”
5. There’s A Time For Everyone: Scott Alexander of Astral Codex Ten got married. Hurray! Of course, he brings analogies and an analytical perspective to the whole thing.
6. Unsolicited Advice on the Institute of Marriage: Written by a friend in response to Scott’s marriage. If you care about ACX, this might interest you.
7. What Is a Religion, and Is Woke Progressivism One?: Michael Huemer disagrees with John McWhorter that woke progressivism is like a religion. Huemer acknowledges the similarities between woke ideology and religion but does not believe it meets enough of the criteria to count as one. Why the woke behave like very religious conservatives is a question I’ve addressed in my post Does Viewing Racism as a Disease Make Woke Progressives Think like Conservatives?
8. Fewer interviews, more debates: A good argument for having fewer interviews and more debates on podcasts. I can’t help but agree! Debates are more exciting.
When a popular writer publishes a new book, they might end up doing several podcast interviews – a dozen if they’re a big name. And yet after you’ve listened to one, you’ve kind of got the message. There’s little to be gained by listening to others, since the same points just come up again and again. “Why did you write the book? What’s the main argument? How does this change the way we think about X?”
However, I’d personally like to see more disagreement – more podcasts where two or more people with opposing ideas actually attempt to persuade each other, and the audience, of their point of view.
9. Inflation is Still Too Low: Bryan Caplan is being contrarian again, but in a reasonable way, of course.
Inflation just hit 7%. But in an important sense, that’s still too low. Prices need to rise more – and the sooner, the better.
I know that sounds crazy, but hear me out. I’m not saying that we need more monetary or fiscal stimulus. Quite the opposite. Aggregate Demand policy has been absurdly expansionary for over a year.
The reason why we need more inflation is simple: ubiquitous shortages. This problem isn’t merely on the news; at this point, something I want to buy is unavailable practically every day. Pre-Covid, that would have happened roughly once a month.
10. Labor Econ Versus the World: Essays on the World's Greatest Market: Speaking of Bryan Caplan, he is coming out with a book composed of his blog posts. I ordered a copy. It should be fun.
11. Practically-A-Book Review: Yudkowsky Contra Ngo On Agents: Scott Alexander summarizes a conversation between Eliezer Yudkowsky and Richard Ngo about the idea of AI posing a threat to humanity by becoming an agent and taking actions that are not in humanity’s interest. I was a bit confused by the idea of agent AI and commented. I got a great many responses and criticisms. I found the whole thing pretty interesting. Here was my comment:
Can someone explain to me how an AI agent is possible? That seems like an impossibility to me even after reading the above.
"I found it helpful to consider the following hypothetical: suppose (I imagine Richard saying) you tried to get GPT-∞ - which is exactly like GPT-3 in every way except infinitely good at its job - to solve AI alignment through the following clever hack. You prompted it with "This is the text of a paper which completely solved the AI alignment problem: ___ " and then saw what paper it wrote. Since it’s infinitely good at writing to a prompt, it should complete this prompt with the genuine text of such a paper. A successful pivotal action!"
It's infinitely good at writing text that seems like it would work, but there is a difference between that and actually solving the problem, right?
"Some AIs already have something like this: if you evolve a tool AI through reinforcement learning, it will probably end up with a part that looks like an agent. A chess engine will have parts that plan a few moves ahead. It will have goals and subgoals like "capture the opposing queen". It's still not an “agent”, because it doesn’t try to learn new facts about the world or anything, but it can make basic plans."
I don't see it as planning, just running calculations, like a calculator. From a programming perspective, what does it mean to say your algorithm is "planning"?
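For what it’s worth, what commenters usually mean by “planning” in a chess engine is something like minimax search: the program simulates hypothetical future moves, has the simulated opponent respond as badly for it as possible, and picks the move whose worst-case outcome is best. The calculator analogy isn’t entirely wrong, but the calculations range over possible futures rather than a single expression. Here is a minimal sketch; the toy game and all the names in it are my own illustration, not taken from any real engine:

```python
def minimax(state, depth, maximizing, get_moves, evaluate):
    """Best achievable score from `state`, looking `depth` moves ahead.

    `maximizing` alternates each ply: we pick our best move, then assume
    the opponent picks the move that is worst for us, and so on.
    """
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # leaf: just score the position
    if maximizing:
        return max(minimax(m, depth - 1, False, get_moves, evaluate) for m in moves)
    else:
        return min(minimax(m, depth - 1, True, get_moves, evaluate) for m in moves)

# Toy "game": a state is a number; a move appends the digit 1 or 2,
# the game ends at three digits, and the final number is the score.
def get_moves(n):
    return [n * 10 + 1, n * 10 + 2] if n < 100 else []

def evaluate(n):
    return n

# "Plan" two plies ahead: choose the first move whose worst-case
# (opponent-minimized) final score is highest.
best = max(get_moves(0), key=lambda m: minimax(m, 2, False, get_moves, evaluate))
print(best)  # → 2
```

Opening with 2 wins here because even when the opponent then minimizes (2 → 21 → 212), the outcome beats anything reachable after opening with 1. The “planning” is nothing mystical, just calculation over a tree of hypothetical futures, which is arguably the point the chess-engine quote above is making.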