Nick Bostrom made the world fear AI. Now he asks: What if everything were okay?

Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time thinking about how humanity might destroy itself. In photos, he often looks deadly serious, perhaps appropriately haunted by the existential dangers swirling around his brain. When we talk over Zoom, he looks relaxed and smiling.

Bostrom has made it his life’s work to think about distant technological advances and existential risks to humanity. With the publication of his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom brought to the public’s attention what was then a fringe idea – that AI could advance to the point where it might turn against humanity and wipe it out.

To many inside and outside of AI research, the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom’s writing. The book sparked a current of apocalyptic worry about AI that has simmered ever since and flared up after the introduction of ChatGPT. Concern about AI risk is now not just mainstream but an issue inside government AI policy circles.

Bostrom’s new book takes a completely different approach. Instead of replaying the doomy hits, Deep Utopia: Life and Meaning in a Solved World considers a future in which humanity has successfully developed superintelligent machines but averted catastrophe. All diseases have been eradicated, and people can live indefinitely in boundless abundance. Bostrom’s book explores what meaning life would have in a techno-utopia and asks whether it might be rather hollow. He spoke with WIRED over Zoom, and the conversation has been lightly edited for length and clarity.

Will Knight: Why move from writing about superintelligent AI threatening humanity to thinking about a future in which it is used to do good?

Nick Bostrom: Much more attention is now being paid to the various things that can go wrong when developing AI. It’s a big change over the last 10 years. All the leading AI labs now have research groups trying to develop scalable alignment methods. And in recent years, we have also seen political leaders starting to pay attention to AI.

There hasn’t been a corresponding increase in depth and sophistication when it comes to thinking about where things go if we don’t fall into one of these pits. Thinking on that topic has remained quite superficial.

When you wrote Superintelligence, few would have expected existential AI risks to become a mainstream debate so quickly. Will we need to confront the issues in your new book sooner than people might think?

I think these conversations will begin, and eventually deepen, as we see automation being adopted, assuming progress continues.

Social companion applications are becoming increasingly significant. People will have all sorts of different views on them, and it’s fertile ground for a bit of a culture war. They could be great for people who don’t find fulfillment in ordinary life, but what if a segment of the population takes pleasure in abusing them?

In the political and information sphere, we could see AI used in political campaigns, marketing, and automated propaganda systems. But with a sufficient level of wisdom, these things could genuinely strengthen our ability to be constructive democratic citizens, with personalized advice explaining what policy proposals would mean for you. A whole range of social dynamics will emerge.

Would a future in which AI has solved many problems such as climate change, disease and the need to work really be that bad?