Within effective altruism’s framework, selecting one’s career is just as important as choosing where to donate. EA defines professional “fit” by whether a candidate has comparative advantages like exceptional intelligence or entrepreneurial drive. If an effective altruist qualifies for a high-paying path, the ethos encourages “earning to give”: dedicating one’s life to building wealth in order to give it away to EA causes. Bankman-Fried has said that he’s earning to give; he founded the crypto platform FTX with the express purpose of building wealth in order to redirect 99% of it. Now one of the richest crypto executives in the world, Bankman-Fried plans to give away up to $1 billion by the end of 2022.
“The allure of effective altruism has been that it’s an off-the-shelf methodology for being a highly sophisticated, impact-focused, data-driven funder,” says David Callahan, founder and editor of Inside Philanthropy and the author of a 2017 book on philanthropic trends, The Givers. Not only does EA suggest a clear and decisive framework, but the community also offers a set of resources for potential EA funders—including GiveWell, a nonprofit that uses an EA-driven evaluation rubric to recommend charitable organizations; EA Funds, which allows individuals to donate to curated pools of charities; 80,000 Hours, a career-coaching organization; and a vibrant discussion forum at Effectivealtruism.org, where leaders like MacAskill and Ord regularly chime in.
Effective altruism’s original laser focus on measurement has brought rigor to a field that has historically lacked accountability for big donors with last names like Rockefeller and Sackler. “It has been an overdue, much-needed counterweight to the typical practice of elite philanthropy, which has been very inefficient,” says Callahan.
But where exactly are effective altruists directing their earnings? Who benefits? As with all giving—in EA or otherwise—there are no set rules for what constitutes “philanthropy,” and charitable organizations benefit from a tax code that incentivizes the super-rich to establish and control their own charitable endeavors at the expense of public tax revenues, local governance, and public accountability. EA organizations can leverage the practices of traditional philanthropy while enjoying the shine of an effectively disruptive approach to giving. The movement has formalized its community’s commitment to donate with the Giving What We Can Pledge—mirroring another old-school philanthropic practice—but there are no giving requirements to be publicly listed as a pledger.

Tracking the full influence of EA’s philosophy is tricky, but 80,000 Hours has estimated that $46 billion was committed to EA causes between 2015 and 2021, with donations growing about 20% each year. GiveWell calculates that in 2021 alone, it directed over $187 million to malaria nets and medication; by the organization’s math, that’s over 36,000 lives saved.
Accountability is significantly harder with longtermist causes like biosecurity or “AI alignment”—a set of efforts aimed at ensuring that the power of AI is harnessed toward ends generally understood as “good.” Such causes, for a growing number of effective altruists, now take priority over mosquito nets and vitamin A medication. “The things that matter most are the things that have long-term impact on what the world will look like,” Bankman-Fried said in an interview earlier this year. “There are trillions of people who have not yet been born.” Bankman-Fried’s views are influenced by longtermism’s utilitarian calculations, which flatten lives into single units of value. By this math, the trillions of humans yet to be born represent a greater moral obligation than the billions alive today. Any threats that could prevent future generations from reaching their full potential—either through extinction or through technological stagnation, which MacAskill deems equally dire in his new book, What We Owe the Future—are priority number one.
In his book, MacAskill discusses his own journey from longtermism skeptic to true believer and urges others to follow the same path. The existential risks he lays out are specific: “The future could be terrible, falling to authoritarians who use surveillance and AI to lock in their ideology for all time, or even to AI systems that seek to gain power rather than promote a thriving society. Or there could be no future at all: we could kill ourselves off with biological weapons or wage an all-out nuclear war that causes civilisation to collapse and never recover.”
It was to help guard against these exact possibilities that Bankman-Fried created the FTX Future Fund this year as a project within his philanthropic foundation. Its focus areas include “space governance,” “artificial intelligence,” and “empowering exceptional people.” The fund’s website acknowledges that many of its bets “will fail.” (Its primary goal for 2022 is to test new funding models, but the fund’s site does not establish what “success” may look like.) As of June 2022, the FTX Future Fund had made 262 grants and investments, with recipients including a Brown University academic researching long-term economic growth, a Cornell University academic researching AI alignment, and an organization working on legal research around AI and biosecurity (which was born out of Harvard Law’s EA group).
Bankman-Fried is hardly the only tech billionaire pushing forward longtermist causes. Open Philanthropy, the EA charitable organization funded primarily by Moskovitz and Tuna, has directed $260 million to addressing “potential risks from advanced AI” since its founding. Together, the FTX Future Fund and Open Philanthropy supported Longview Philanthropy with more than $15 million this year before the organization announced its new Longtermism Fund. Vitalik Buterin, one of the founders of the blockchain platform Ethereum, is the second-largest recent donor to the Machine Intelligence Research Institute (MIRI), whose mission is “to ensure [that] smarter-than-human artificial intelligence has a positive impact.” MIRI’s donor list also includes the Thiel Foundation; Ben Delo, cofounder of crypto exchange BitMEX; and Jaan Tallinn, one of the founding engineers of Skype, who is also a cofounder of Cambridge’s Centre for the Study of Existential Risk (CSER). Elon Musk is yet another tech mogul dedicated to fighting longtermist existential risks; he’s even claimed that his for-profit operations—including SpaceX’s mission to Mars—are philanthropic efforts supporting humanity’s progress and survival. (MacAskill has recently expressed concern that his philosophy is getting conflated with Musk’s “worldview.” However, EA aims to reach a broader audience, and it seems unreasonable to expect rigid adherence to the exact belief system of its creators.)