Opinion | Why ‘longtermism’ isn’t ethically sound

Columnist and member of the Editorial Board
September 5, 2022 at 10:16 a.m. EDT
(Matt Chinworth for The Washington Post)

In a not-quite-throwaway line in a recent New Yorker magazine profile, Oxford philosopher and “effective altruism” figurehead William MacAskill described meeting billionaire Tesla chief executive Elon Musk in 2015: “I tried to talk to him for five minutes about global poverty and got little interest.”

Recently, though, their interests seem to have converged. In August, Musk tweeted an endorsement of MacAskill’s new book “What We Owe the Future,” remarking, “This is a close match for my philosophy.”

“What We Owe the Future” is a case for “longtermism,” which MacAskill defines as “the idea that positively influencing the future is a key moral priority of our time.” It’s compelling at first blush, but as a value system, its practical implications are worrisome.

First, some background. Since its beginnings in the late 2000s, the effective altruism movement (“EA” for short) has been obsessed with “doing good better” — using reason and evidence to optimize charitable giving to better alleviate suffering for the greatest number of people.

In the movement’s early days, that involved promoting high-impact, basic-needs interventions in global health and poverty, such as distributing mosquito netting in the developing world — a distinctive break from the regular philanthropic practices of donating to one’s alma mater or favorite museum. Today, though, those EA priorities are giving way to a new and questionable fascination.

Longtermism rests on the observation that humans emerged only recently in evolutionary terms, so, barring catastrophe, our species can be expected to persist far into the future. The world’s current population is really a blip; if all goes well, a huge number of humans will come after us. Thus, if we’re reasoning rationally and impartially (as EAs pride themselves on doing), we should tilt heavily toward the concerns of that vastly larger future population — not the concerns of people living right now.

Depending on how you crunch the numbers, making even the minutest progress on avoiding existential risk can be seen as more worthwhile than saving millions of people alive today. In the big picture, “neartermist” problems such as poverty and global health don’t affect enough people to be worth worrying about — what we should really be obsessing over is the chance of a sci-fi apocalypse.
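To see how that arithmetic can run away, consider a deliberately stylized example (the figures here are illustrative, not MacAskill’s):

\[
\underbrace{10^{-6}}_{\text{cut in extinction risk}} \times \underbrace{10^{16}}_{\text{potential future people}} = 10^{10} \ \text{expected lives} \;\gg\; \underbrace{5 \times 10^{6}}_{\text{lives saved today}}
\]

On those assumed numbers, a one-in-a-million nudge to humanity’s survival odds “outweighs” saving 5 million living people by a factor of 2,000.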

In practice, this amounts to a shift toward treating the prevention of existential threats to humanity as the most valuable philanthropic cause. The threats judged greatest to that future population are things like a rogue super-intelligent AI, a nuclear catastrophe or an unexpectedly virulent pathogen, so the emphasis falls heavily on tech-driven research and solutions.

It’s hard to argue against taking the long view. People tend to be shortsighted, and we talk constantly about leaving a better world for future generations.

But while that can make this newest obsession of effective altruists appear nearly irrefutable, abandoning what would most help people on Earth today isn’t exactly ethically sound.

As much as the effective altruist community prides itself on evidence, reason and morality, there’s more than a whiff of selective rigor here. The turn to longtermism appears to be a projection of a hubris common to those in tech and finance, based on an unwarranted confidence in its adherents’ ability to predict the future and shape it to their liking. It suggests that playing games with probability (what is the expected value calculus of taming a speculative robot overlord?) is more important than helping those in the here-and-now, and that top-down solutions trump collective systems that respond to real people’s preferences.

Conveniently, focusing on the future means that longtermists don’t have to dirty their hands by dealing with actual living humans in need, or implicate themselves by critiquing the morally questionable systems that have allowed them to thrive. A not-yet-extant population can’t complain or criticize or interfere, which makes the future a much more pleasant sandbox in which to pursue your interests — be they AI or bioengineering — than an existing community that might push back or try to steer things for itself.

To be even more cynical: Longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies while patting themselves on the back for their superior intelligence. The future becomes a clean slate onto which longtermists can project their moral certitude and pursue their techno-utopian fantasies, while flattering themselves that they are still “doing good.”

As such, it’s unsurprising that someone such as Musk — whose most memorable philanthropic moments include tweeting that he would donate $6 billion to the Nobel-winning World Food Program if it could convince him of its efficacy, then never following up when its executive director responded in detail — finds the proposition compelling.

Despite its flaws, longtermism might be the future of the effective altruist movement. The new focus is backed by funding: Open Philanthropy, a grantmaking foundation that grew out of GiveWell, has distributed more than $480 million to longtermist causes since 2015, while the FTX Future Fund, founded by cryptocurrency billionaire and effective altruist Sam Bankman-Fried, has chipped in about $132 million. Meanwhile, EA’s funding base continues to grow, and its newest reigning philosophy is set to have a major impact.

Sure, donating to AI-risk theorizing is probably still a better philanthropic cause than, say, paying to put your name on a gallery at the Met. But is it really doing the most good? I wouldn’t be so sure.

Correction

An earlier version of this column incorrectly stated that Elon Musk is the founder of Tesla. He is chief executive. This version has been corrected.