Book review: More Everything Forever

Adam Becker’s More Everything Forever examines a cluster of ideas that sit behind much of contemporary AI and technology discourse, particularly those associated with Silicon Valley futurism. The book focuses less on specific technologies and more on the underlying worldview that connects themes such as superintelligence, digital immortality, space colonisation, and longtermist ethics.

The organising observation is that many of these ideas share a common orientation toward the future. They prioritise hypothetical, large-scale scenarios over present-day constraints, and they often assume that technological progress will eventually resolve limits that currently appear fundamental. This orientation shows up across different domains, from arguments about AI alignment to visions of uploading human minds or expanding civilisation beyond Earth.

Becker grounds this in a series of specific examples. He describes how communities around rationality and AI safety engage with highly abstract thought experiments, including scenarios in which future AI systems influence present-day behaviour through speculative incentives. These discussions can become influential within those circles even when their internal logic is fragile or difficult to test.

He also looks at more mainstream futurist narratives, such as Ray Kurzweil’s accounts of the Singularity. These include detailed descriptions of a future in which human minds are uploaded into computational systems, physical constraints are overcome, and death is effectively eliminated. In these accounts, the environment itself becomes something engineered and controlled, with nature treated as something to be stabilised or replaced rather than something to adapt to.

Alongside this, the book examines the ethical frameworks that support these visions, particularly longtermism. Becker traces how arguments about maximising value over extremely long time horizons can shift attention away from present-day harms, because current trade-offs are evaluated primarily in terms of their potential impact on vast future populations.

Across its chapters, the book builds a picture of a set of ideas that move fluidly between science fiction, philosophy, and engineering culture. This movement makes them difficult to evaluate using any single framework, because they are not purely technical claims, nor purely ethical arguments, nor purely speculative narratives. Instead, they operate across these domains at once.

For readers interested in AI governance or human oversight, the book is useful as a map of the assumptions that shape how problems are defined before any technical system is built. It makes visible the gap between speculative future scenarios and the conditions under which real systems are designed, deployed, and used.

Why you should read this:

This book is worth reading if you want to understand where some of the more extreme strands of AI thinking come from, and how they manage to sound internally coherent even when they are built on highly speculative assumptions.

One of its strengths is that it takes ideas that are often encountered in fragments (alignment risks, digital immortality, longtermist ethics, space expansion) and shows how they fit together into a single worldview. Seeing that structure makes it easier to recognise when similar assumptions appear in more practical contexts, such as policy discussions or organisational strategy.

It also works as a corrective if current AI discourse sometimes feels abstract or detached from real-world constraints. The book does not just catalogue unusual ideas; it shows how far some of them extend once their underlying logic is followed through, which makes it easier to judge where speculation ends and decision-relevant thinking begins. Some of these positions only reveal their full implications when laid out end to end, and the book does that thoroughly enough to make them hard to dismiss.

It is particularly useful if you work in or around AI policy, governance, or organisational adoption, where these ideas tend to appear indirectly, shaping priorities, risk perceptions, and definitions of what counts as a “serious” problem. Familiarity with the full worldview makes those indirect influences easier to spot.
