Nick Bostrom Has a Plan for Humanity’s ‘Big Retirement’

Philosopher Nick Bostrom recently posted a paper arguing that even a small chance of AI destroying all humans may be a risk worth taking, because advanced AI could relieve humanity of "its universal death sentence." That eager gamble is a far cry from his earlier dark views on AI, which made him something of a godfather of doom. His 2014 book Superintelligence was an early investigation of AI's existential risk, with a memorable thought experiment: an AI tasked with making paperclips destroys humanity, because all those resource-hungry people keep getting in the way of paperclip production. His recent book, Deep Utopia, reflects a shift in focus. Bostrom, who led Oxford's Future of Humanity Institute, now concentrates on the "solved world" that arrives if we get AI right.

Steven Levy: Deep Utopia is more optimistic than your previous book. What changed for you?

Nick Bostrom: I call myself a worried optimist. I am very excited about AI's potential to fundamentally improve human life and open up possibilities for our civilization. That sits alongside the real possibility of things going wrong.

You wrote a paper with a striking argument: since we're all going to die anyway, the worst AI can do to us is make us die early. But if AI works, it could extend our lives, perhaps indefinitely.

That paper deliberately looks at only one aspect of this. In any academic paper, you cannot address the meaning of life, the universe, and everything. So you take one small question and try to make progress on it.

That is no small question.

I guess I've become fed up with some of the arguments made by the doomers, who say that if you build AI, you'll kill me and my kids, and how dare you. Take the recent book If Anyone Builds It, Everyone Dies. What's even more likely is that if nobody builds it, everyone dies! That has been the human experience for the past several hundred thousand years.

But in a catastrophic scenario, everyone dies and no more people are ever born. That's a big difference.

I've obviously been very concerned about that. But in this paper I am looking at a different question: what would be best for the current human population, people like you and me and our families and the people of Bangladesh? It seems that if we develop AI, our life expectancy will increase, even though the path is quite risky.

In Deep Utopia you speculate that AI could create such incredible abundance that humanity's biggest problem might become finding purpose. I live in the United States. We are a very rich country, but our government, apparently with the support of the people, has policies that deny services to the poor and funnel rewards to the rich. I suspect that even if AI can provide abundance for everyone, we still won't distribute it to everyone.

Deep Utopia takes as its starting point the assumption that things go very well. If we do a reasonably good job on governance, everyone gets a share. What a good human life would look like under those ideal circumstances is a deeply philosophical question.

The meaning of life is something you hear a lot about in Woody Allen movies and, presumably, in the philosophical community. I am more concerned about the means of supporting myself and getting a share of this abundance.

The book is not just about meaning. That is one of several values it considers. This could be a wonderful liberation from the drudgery that humans have had to endure. If you have to give up, say, half your waking hours as an adult just to get by, doing something you don't enjoy and don't believe in, that's a sad state of affairs. Society has become so accustomed to it that we have invented all kinds of rationalizations around it. It's like a partial form of slavery.


