Effective altruism (EA), and its increasingly prominent cousin “longtermism,” are having a bit of a moment right now. William MacAskill, one of the EA movement’s founders and a leader of its longtermist vanguard, recently wrote a best-selling book. Elon Musk has endorsed this book, and as court documents recently revealed, MacAskill has been texting with Musk on behalf of another EA-aligned billionaire, Sam Bankman-Fried.
Effective altruism is aimed at solving society’s problems, so it’s only natural that the leaders in this movement have started to look towards influencing the American political system as a way to accomplish their goals (however self-serving they may be). Currents in the EA movement range from being useful to flat-out dangerous, and since millions of dollars have already been spent on political campaigns in the name of effective altruism, it’s about time we looked under the hood at what exactly is going on here.
What is Effective Altruism?
At its core, effective altruism is a strategy for allocating resources. It’s a rebranded version of utilitarianism1 – doing the most good for as many people as possible, given resource constraints. It began with people trying to optimize charitable giving (see GiveWell), pledging to donate 10% of their income to alleviate poverty, and taking a more rationalist approach to their lives and moral codes.
Early effective altruists began by looking to solve problems that are “important, neglected, and tractable.” In areas where charity can improve people’s lives more quickly and effectively than the government, this is a good strategy if you’ve got a few million to spend and want to make a dent. In fact, a lot of credit is due to effective altruists for championing causes like distributing bed nets to combat malaria as a low-cost way to save a lot of lives.
After a few years of trying to optimize charitable giving, some effective altruists began expanding their horizons. Because its originators were moral philosophers, what began as a strategy to guide philanthropy quickly turned into a social and philosophical movement that increasingly looked towards having an impact in the future rather than the present.
One of the earliest signs of this was the “earning-to-give” strategy, which advises young altruists to pursue a career that maximizes their earnings so that they have a larger sum to give away to charity down the road. Of course, the most lucrative careers are often those that are most destructive and exploitative: corporate law, management consulting, high finance, big tech. But the logic of “earning-to-give” was that you could have a net positive impact by providing legal counsel to Exxon while donating a share of your income to GiveWell.2
Somewhat predictably, a movement that provides a patina of moral legitimacy to the pursuit of capital accumulation (and promises to morally absolve those who have already accumulated significant capital) was bound to catch on, especially in America, and especially in Silicon Valley.
Over the past few years, effective altruism has transformed into a full-on social movement. It has grown as it has gained financial and political notoriety, with at least two prominent billionaires (Dustin Moskovitz and Sam Bankman-Fried) and millionaires like James Martin committing their fortunes to its causes. As it’s grown, it’s developed a broader mandate for itself, moving from an altruistic giving strategy to a set of moral guidelines focused on the future.
The Turn to Longtermism
As the EA movement has grown, its guiding philosophers have pivoted from optimizing giving in the short term to doing the most good over the long term. In this pivot from strategic philanthropy to moral philosophy, the question changed from “how can we do the most good in the present?” to “how can we do the most good for the future?”3
Longtermism is an extension of the EA priority of “minimizing existential risk,” which argues that we should focus on problems that have the potential to wipe out humanity like nuclear war or pandemics. Longtermism extends the horizon of this analysis past problems that could wipe out humanity today to ones that threaten a hypothetical future global or intergalactic population of humans that is orders of magnitude larger than the current population. If future lives matter just as much as present ones, longtermists argue, then we should prioritize minimizing the risk to a future society of potentially trillions of people.
This is where the movement has lost utilitarians like Peter Singer, and drawn the most criticism from philosophers and ethicists. While effective altruism focuses on present-day problems that have either been measured or are measurable, longtermist priorities require much more speculation and assumptions to justify their utility.
Longtermists usually get around this problem by expanding the magnitude of their treatment population. They’ll argue that interventions with only a millionth of a percent chance of working are still worth pursuing because they could affect billions of people in the future. But the numbers and calculations they use to estimate these probabilities are, at best, “vulgar Bayesianism” – roughly, the idea that you can attach a quantifiable probability to anything and reliably use it to make decisions. This “shut up and multiply” argument can get a little ridiculous, as the philosopher Nick Bostrom illustrates with Pascal’s Mugging.
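To see how that arithmetic plays out, here is a toy sketch of the expected-value logic. Every number in it – the probabilities, the population sizes – is invented purely for illustration; none comes from any longtermist’s actual estimate.

```python
# A toy sketch of the "shut up and multiply" expected-value logic described
# above. Every number here is invented for illustration.

def expected_lives_saved(probability_of_success: float, lives_affected: float) -> float:
    """Naive expected value: probability of success times the payoff."""
    return probability_of_success * lives_affected

# A present-day intervention: near-certain to work, but "only" helps thousands.
bed_nets = expected_lives_saved(probability_of_success=0.95, lives_affected=20_000)

# A speculative longtermist project: a one-in-a-hundred-million chance of
# safeguarding a hypothetical future population of ten quadrillion people.
speculative = expected_lives_saved(probability_of_success=1e-8, lives_affected=1e16)

print(f"Bed nets:    {bed_nets:,.0f} expected lives")
print(f"Speculative: {speculative:,.0f} expected lives")
# The speculative project "wins" by orders of magnitude, even though its
# probability is pure guesswork -- which is exactly the problem.
```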
But more concretely, the moral logic underpinning MacAskill’s argument that future lives matter just as much as present ones doesn’t really hold up. The longer the time horizon, the more an individual’s obligation to the future diffuses into a collective responsibility. But in the present, the stakes and consequences of individual acts of altruism are more real. It actually makes a difference who goes to sleep under a bed net tonight and who doesn’t. Virtually no moral philosophy works without applying a discount to future life. And failing to do so tends to shrink the very pressing problems of the present.
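To make the idea of discounting concrete, the toy calculation below assumes a simple constant annual discount rate; both the rate and the time horizons are arbitrary choices for this example, not figures any particular philosopher endorses.

```python
# A toy illustration of what "applying a discount to future life" means,
# assuming a simple constant annual discount rate. The rate and horizons
# are arbitrary choices made for this example.

def discounted_weight(years_in_future: int, annual_discount_rate: float) -> float:
    """Weight of a life N years from now relative to a life today."""
    return 1.0 / (1.0 + annual_discount_rate) ** years_in_future

for years in (0, 50, 500, 5_000):
    weight = discounted_weight(years, annual_discount_rate=0.01)
    print(f"{years:>5} years out: a life counts for {weight:.2e} of a present life")

# Even a modest 1% annual discount makes lives a few centuries away nearly
# weightless, while the zero-discount assumption treats a life in the year
# 10,000 exactly like a life under a bed net tonight.
```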

Why Politics?
One of the problems with early effective altruism - doing some meta-analyses and finding the most efficient ways to save lives - is that it gets boring after a while4. Shipping bed nets to Africa also doesn’t come with much cultural cachet in America, no matter how effective it might be.
But the turn to longtermism provides a way for wealthy EA-minded entrepreneurs to score both moral and social points. These moral points are important. A popular way to pitch your company to Silicon Valley investors is to tell them that you’re not just selling a product or a service, but performing altruism. The latest generation of ultra-wealthy tech entrepreneurs, many of whom have turned to effective altruism and longtermism, learned from an early age that applying a pro-social veneer to any venture is an excellent way to justify whatever direction that venture takes.
Longtermism is a way of applying that pro-social veneer by giving a project the appearance of altruistic philanthropy. By casting aside present problems in favor of imagined future ones that you’ve used strained utilitarian logic to amplify into existential threats, you can justify pursuing pretty much anything. It’s no surprise that this method has caught on among tech workers and entrepreneurs, who coincidentally have made researching ways to prevent dangerous uses for super-advanced artificial intelligence one of the top priorities for the movement.
Others have used longtermism as a way to get involved in American politics, with varying results. On the positive side, Dustin Moskovitz funds Employ America, a think tank that has done an impressive job persuading the Biden administration and Congressional leaders to resist austerity and pursue policies that encourage full employment, including taking measures that helped drive down gas prices this summer. Bankman-Fried’s Guarding Against Pandemics team has a laudable goal, though they’ve had middling success electorally and on the Hill.
But a larger problem still exists: for the most part, leading longtermists don’t seem particularly interested in solving current problems like poverty, climate change5, or inequality. The turn to longtermism has eroded the imperative to solve the problems of the present. And now we have a group of highly influential rich people who use the uncertainty inherent in longtermism to define their own interests and pursuits as socially beneficial, and can claim they are morally justified in spending their money this way.
The upshot is that a lot of resources are wasted on the pet projects of self-described altruistic billionaires rather than proven poverty reduction techniques or medical interventions. And not for nothing, but exorbitantly wealthy people shouldn’t have this kind of outsized influence in our politics anyway. Our government tends to respond to the priorities of rich people. A movement that shifts the priorities of altruistically inclined rich people away from present problems in favor of funding AI research does not bode well for efforts to mobilize public resources towards solving these problems.
At the core of longtermism is a false sense of certainty about the future. Theoretical problems are magnified by decades, centuries, and millennia to create future existential threats that deserve urgent attention today. But there’s little room in this framework for unknown unknowns - future problems that we can’t foresee.
Nor is there room for building social systems that could be flexible enough to solve these future problems. Instead, we’re left with the tools of the present to conceive of and then solve the problems of tomorrow. Bostrom illustrates how stilted this is by taking the opposite tack from the longtermists and projecting far back into the past. In a 2015 interview with The New Yorker, he mused:
“What I want to avoid is to think from our parochial 2015 view—from my own limited life experience, my own limited brain—and super-confidently postulate what is the best form for civilization a billion years from now…What if the great apes had asked whether they should evolve into Homo sapiens—pros and cons—and they had listed, on the pro side, ‘Oh, we could have a lot of bananas if we became human’? Well, we can have unlimited bananas now, but there is more to the human condition than that.”
The truth is that we don’t know what we owe the future; we can’t know it with any certainty. But we do know what we owe the present. People on Earth right now are suffering unnecessarily, and indications are that preventable problems like climate change and wealth inequality are making their lives worse. It’s easy to imagine future problems that are more tractable, less politically messy, and have clearer moral stakes, and then commit yourself to solving those. But the only thing we owe the future is a better present. If we don’t refocus, we might find ourselves in a burning building with little more to help us out than an unlimited supply of bananas.
1. These beliefs are subject to all of the normal criticism levied against utilitarianism – the repugnant conclusion and the problems of distributing and measuring utility.
2. If this sounds dubious, you’re not alone. EAs seem to like thought experiments, so here’s one: Imagine that a series of ten-foot-deep holes start appearing all over your town. They’re causing some real problems. People, pets, and cars keep falling in them and getting stuck. You decide to do something about it. The local hole-filling charity doesn’t have the resources to keep up with the problem, but you figure you could make a sizable impact by getting a high-paying job and donating a portion of your salary to help fill the holes. And you’re in luck: a lucrative position just opened up that would allow you to do exactly that. All you’d have to do is work for the local hole-digging company.
It’s simplistic, but it exposes the flawed approach to poverty alleviation at the heart of earning-to-give and philanthropy more broadly. Problems caused by systemic failures cannot be solved without fixing the systems that caused them in the first place. Charity can help alleviate temporary pain, but it’s no substitute for structural change. In the hole-filling example, you’d be better off trying to get your town to aggressively regulate hole digging rather than hoping that your charitable contributions will solve the problem post-hoc. This is not to say that donating to the hole-filling charity is wrong, just that it’s inadequate, and it certainly does not cancel out the harm you would cause by working at the hole-digging company.
3. To continue the hole-digging thought experiment from the previous example, imagine that a group of people in your town started to get really worried that the hole-digging and filling machines could one day become sentient and start killing people. Since that would definitely be a bigger problem than a couple stray holes scattered around town, they start a group to research ways to prevent this future scenario by safely developing digging machine sentience instead of filling the holes. That’s basically what longtermism is.
5. For example, the issue of climate change is considered important but not an actual threat to the survival of humanity (“more than 10 times less likely to cause extinction than nuclear war or pandemics”), so it is de-prioritized.