WOLFcon 2024 - Understanding and Using AI Workflows with FOLIO

23 September 2024


p(doom) and Artificial General Intelligence (AGI)

What is the probability of AI taking over or even causing the complete destruction of civilization? This question has become a popular topic with the rapid development of generative AI in recent years[1]. Known as p(doom), or the probability of an AI apocalypse, this metric is expressed on a scale from 0 to 100, with a higher number representing a greater estimated likelihood that AI could evolve into a malignant superintelligence.

While p(doom) is a sensationalist metric, the disturbing reality is that many top AI scientists, engineers, and executives have surprisingly high[2] p(doom) estimates, with a mean estimate among AI engineers of 40%[3].

Workshop Exercise

Please go to this survey to submit your p(doom) number.
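Once the survey responses are collected, summarizing them is straightforward. The sketch below is a hypothetical example (the survey tool, response values, and function name are all assumptions, not part of the workshop materials) showing how a facilitator might compute the group's mean and median p(doom) from a list of 0-100 scores:

```python
from statistics import mean, median

def summarize_pdoom(responses):
    """Summarize p(doom) survey responses, each a 0-100 percentage."""
    return {
        "n": len(responses),       # number of participants
        "mean": mean(responses),   # average p(doom) across the group
        "median": median(responses),
    }

# Hypothetical workshop responses
sample = [5, 10, 20, 40, 75]
print(summarize_pdoom(sample))
```

The median is worth reporting alongside the mean, since a few very high (or very low) estimates can pull the mean well away from what a typical participant believes.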

Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI)[4] refers to an AI that surpasses human intelligence across a wide range of cognitive tasks. A significant concern associated with AGI is that once AI reaches this threshold, it could rapidly improve itself, becoming an intelligence far beyond anything humans can currently comprehend.

AI Alignment

Minimizing p(doom) by aligning AGI's goals with human goals and values is an active area of research in both academia and at AI companies, and should be included in any broad analysis of the benefits of AGI[6].

To illustrate the misalignment of goals between an AGI and humans, Nick Bostrom offered an example in a 2003 paper[5]: a superintelligent AI given the initial goal of maximizing paperclip manufacturing. Such an AI could ignore human values entirely and, in a worst-case scenario, convert the entire planet into paperclip-manufacturing infrastructure, to the detriment of all life on Earth.

Footnotes