
If Anyone Builds It, Everyone Dies

When Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence (SIAI), it was to promote superhuman AI as a path to human flourishing. Along the way, the institute was rebranded as the Machine Intelligence Research Institute (MIRI) and Nate Soares joined its leadership. Driven by Yudkowsky’s intellectual leadership, the focus shifted from AI promotion to AI caution, culminating in the duo publishing a book in 2025 with the provocative title If Anyone Builds It, Everyone Dies. Its tagline: We wish we were exaggerating.

Yudkowsky’s intellectual path mirrors my own. When writing Bitstreams of Hope, I told a book cover designer that I was a “full-throated advocate” of AI, despite concerns at the time about jobs and the shady way LLM builders had obtained their training data. That conversation was the beginning of my awakening about AI. A one-sided view of any powerful issue is bound to end up wrong. I still believe in the promise of AI to solve many of the world’s problems. Importantly, like Max Tegmark, I believe we need to solve such problems with narrow AIs that we can provably control. Meanwhile, every big American AI lab is racing to be the first to build AGI, which the labs themselves admit is just a stepping stone to Artificial Superintelligence (ASI) once systems become able to self-improve. Now that Alibaba has also become AGI-pilled, this will only fuel the argument that American firms must be allowed, and even supported, to build AGI before China does.

Throughout history, arms races have ratcheted up the technology of destruction, and empires have risen and fallen. The limits that always kept the fall of empires from becoming extinction are now gone. Nukes were the first weapons to create existential risk, and now the second is here: AI riding atop a globalized, internet-connected economy. The book’s argument boils down to a few principles:

  1. AI is grown, not crafted, and we don’t understand the resulting systems.
  2. Pursuit of any ultimate goal benefits from certain convergent instrumental goals, such as accumulating resources and avoiding being turned off or replaced.
  3. We don’t know how to load specific goals into an AI that’s grown.
  4. AI minds are demonstrably alien, and their speed will kill us in ways we didn’t know were possible as they pursue instrumental goals that happen to make life on Earth impossible.
  5. We won’t get a second chance to build safe AI.

These postulates are independent of who builds the first self-improving AI or what they intend the system to do. If you don’t accept one or more of these postulates, read the book. It’s short and to the point, using accessible language to explain universal truths about AI.

Nobody really disputes the facts of the doomer argument this book advances. The only serious debate is about probabilities (“What’s your P(doom), bruh?”), timelines, and the plausibility that we’ll “solve alignment” in the required timeframe. I’m with Yudkowsky and Soares: we’re laughably unlikely to solve alignment in time, if it’s even solvable in principle, which isn’t known.

I’m glad they wrote this book and I’m glad it’s getting traction. As the end of the book sketches out, it’s possible to stop everyone from building AGI. Humanity has come together before to contain existentially dangerous technologies such as nukes. You can help by talking about this issue, which moves the Overton window, bringing possible solutions into the thinkable and then the popular realm.