Ideas
14 November 2022

Elon Musk’s useful philosopher

Silicon Valley billionaires have taken up the ideas of William MacAskill, the leading voice of longtermism. But they will not solve humanity’s problems.

By Paris Marx

When we look back at the past century, what are we to make of the people who came before us and the world they left us? We grow up knowing about wars and genocides, not to mention the challenges they created that we now face. But we also learn about the developments we should be proud of, such as the ongoing fight to expand people’s rights and freedoms. The 35-year-old Scottish philosopher William MacAskill, author of What We Owe the Future (2022), argues that the next stage of the civil rights movement is to protect the rights of “future people” – not just those in the century to come, but those who will live millions of years in the future.

MacAskill is the public face of “longtermism”, which he describes as “the idea that positively influencing the long-term future is a key moral priority of our time” and one that requires we consider that people who have yet to be born “count for no less, morally, than the present generation”. This seems to make sense. Contemporary politics, business and economics don’t consider the long-term implications of present-day actions; rather, they’re geared for several-year electoral cycles or the next quarterly earnings report. If we want to take on collective challenges, from global poverty to a warming climate, we must be able to think on much longer timescales.

Yet longtermism and long-term thinking are not the same thing. As much as MacAskill would assure us otherwise, the values he advocates could have dire consequences not just for those living today, but for those who come after too. It’s no surprise that some of the most powerful people in the world are seizing on the world-view he’s outlining – Elon Musk has promoted the book, saying it’s “a close match for my philosophy” – and investing billions of dollars to help spread its influence in the seats of corporate and political power. But for those outside that elite stratum, longtermism – and the broader effective altruist community it emerges from – has little to offer.

Since its publication in September this year, What We Owe the Future has been heaped with positive coverage in major publications such as the New York Times, Time Magazine and the Guardian in a well-coordinated (and well-funded) marketing campaign aimed not just at selling books, but at selling the idea of longtermism to the public. The book has been blurbed by well-known progressive voices such as the Dutch historian Rutger Bregman, who called MacAskill “one of the most important philosophers alive today”.

But there are serious problems with the idea that the future well-being of humanity depends on the acceptance of longtermism. Consider what it means to see the life of a person alive today and the theoretical life of someone living a million years from now as being morally equivalent. Then pair that with the assumption that trillions more people could live in the future than live today. If that’s the case, do we direct our limited resources to, for example, improving the lives of the 8 billion people alive now, or leave many of them to suffer so that we can maximise our chances of avoiding the “existential risks” that would prevent those trillions of future people coming into being? For longtermists and their billionaire benefactors, the answer is an easy one.

Longtermist philosophers like MacAskill enjoy an aloof detachment from the consequences of their high-level game of future planning, acting as real-life versions of Hari Seldon, a character in Isaac Asimov’s Foundation series who tries to guide the long-term future of humanity. MacAskill, for his part, advocates a disturbing utilitarianism that quantifies the potential “average value” of future scenarios, based on a measure of the happiness of all the future people who could potentially exist, depending on the choices that are made today. MacAskill, an academic who did his PhD at Oxford under the supervision of the economist-turned-philosopher John Broome, weaponises the hard calculus of economics against the human species as a whole.

For MacAskill, this leads to a series of troubling conclusions. Most notably, he supports the “repugnant conclusion”, which argues that we should maximise the number of future people who are born as long as they have a marginally positive well-being, rather than prioritising a smaller total population that can live much more fulfilling and vibrant lives. In claims reminiscent of Musk’s musings, he argues that people need to have many more children – Musk now has ten – and that we need to colonise the cosmos so there’s more space into which the human species can expand. The realisation of those trillions of potential future people is of the utmost importance to MacAskill, even if that means crises, suffering and mass death in the near term, so long as the long-term horizon is not foreclosed. Those future people don’t even need to be human as we currently understand ourselves.

[See also: Elon Musk’s mismanagement suggests a dark future for Twitter]

Embracing techno-determinism like his Silicon Valley boosters, MacAskill argues that “technological development is creating new threats and opportunities for humanity”. The assumption is that artificial general intelligence (AGI), which refers to computers matching (or exceeding) the intellectual capabilities of humans, is inevitable (something that’s not at all clear to critical experts in the field). As MacAskill describes it, the spread and deepening of AGI would have a whole range of consequences, but one is that, even if every flesh-and-bone human were to be eradicated, “the AI agents would continue civilisation, potentially for billions of years to come”. MacAskill only hints at this dystopian reflex in his book, but such a transhumanist vision is commonly advocated by longtermists, including Nick Bostrom, who founded Oxford University’s Future of Humanity Institute, where much of this thinking originates.

MacAskill has been asked why the theme of bringing together human and machine into “digital minds” wasn’t discussed more in the book, to which he responded, “I think it’s a really important topic, but I ended up just not having space.” As the philosopher and historian Émile P Torres has explained, this refers to an assumption by longtermists that we will “create vast computer simulations around stars in which unfathomably huge numbers of people live net-positive lives in virtual-reality environments.” MacAskill co-authored a paper in 2021 arguing there could be 10⁴⁵ of those digital people in the Milky Way galaxy alone in the future – all of whom would be morally equivalent to humans alive today. It’s no surprise that this doesn’t make it into his book, because it’s ludicrous, and it shows just how disconnected longtermism is from humanity.

Over the past decade or so, the billionaires of Silicon Valley have graced us with their visions for the far future of our species. Musk wants us to colonise Mars as the first step towards extending the “light of consciousness” beyond our planet. Jeff Bezos wants to see a trillion people in floating space colonies, lest we succumb to “stasis and rationing” here on Earth. Longtermism furnishes their egomaniacal plots with a seemingly moral (and scholarly) makeover.

At the core of MacAskill’s argument is the notion of “value lock-in”. He provides the example of major world religions, and how the values of their adherents have had a profound impact on the trajectory of human development over thousands of years. He explains that longtermists need to learn from that history so they can shape the values of humanity in the future; they need to do so quickly, MacAskill asserts, because once AGI is invented, sentient computers will internalise the values of that moment, making them much more difficult to alter afterwards. Putting aside the questionable credibility of that statement, what do those values look like?

MacAskill makes a number of comments throughout the book that illuminate his inability to consider the material challenges that people endure in their lives. In explaining the need for a “morally exploratory world”, he argues that the best way to achieve that is through the libertarian fever dream of charter cities – jurisdictions that create their own laws and governance structures beyond state authority. “For almost every social structure we can imagine, we could have a charter city based on that idea,” he writes. He also questions why countries didn’t allow citizens to buy Covid-19 vaccines “on the free market”. Beyond that, MacAskill assumes that couples are doing intricate calculations as to whether it makes sense to have children, asserting that people in rich countries are having fewer of them because “work and other commitments get in the way”, failing to consider the effects of stagnating incomes and the difficulty of affording adequate housing.

As well as echoing the disconnected assumptions of the tech elite, MacAskill offers a theory of change that is favourable to their view of the world. He rightly argues that individual consumption choices are not going to solve climate change, but then goes further, suggesting those actions may even be counterproductive. The solution is instead to be found in philanthropy. He argues, for example, that vegetarianism and reducing plastic use are misguided, and that it’s far more effective for people to donate to organisations and charities that advocate for their causes of concern. Instead of decisions around personal consumption, he writes, “in order to solve climate change, what we actually need is for companies like Shell to go out of business”. For a moment, he seems to be aligned with climate activists. Is he proposing nationalisation, or maybe some form of state action to phase out fossil fuels? Not so fast. The route to that end is instead to be found in “donating to effective nonprofits”.

This way of thinking is at the core of effective altruism, which forms the foundation of longtermism. Its ideas are sweet-sounding to someone like Bill Gates, who uses his vast wealth to influence health, education and other policies so that they correspond to his free-market view of how global problems are solved. As he said at Harvard in 2007, “If we can find approaches that meet the needs of the poor in ways that generate profits for business and votes for politicians, we will have found a sustainable way to reduce inequity in the world.” Yet his advocacy for intellectual property rights has, among other things, prevented parts of the world, particularly in Africa, from accessing Covid vaccines. Effective altruism is also beneficial to the wealthy, who argue that we shouldn’t tax their vast and ever-expanding assets, even as inequality hits record levels and growing numbers of people turn to food banks and are unable to live with dignity. The altruist crew contend that the affluent need to retain their fortunes so they can donate some of that wealth to organisations that appear to be helping make the world a better place – but without threatening the wealth or power of the super-rich.

There’s no question that we need to reconsider the social and economic structures that are perpetuating a series of intensifying social, economic and environmental crises. But the solution to those problems won’t be found in the dangerous philosophy promoted by MacAskill and his well-financed coterie of effective altruists. What We Owe the Future isn’t a guide to a better future, but a symptom of a dangerous ideology commandeering the thought-world of the West.

Longtermism is a technocratic dream that purports to give some of the wealthiest people in the world the ability to plan the far future of humanity according to their personal whims. It is hubris, treating billions of people as the pawns of god-like billionaire overseers who hoard unthinkable fortunes and constantly seek new ways to manufacture consent for their dominance. We owe it to ourselves – and the future – to stop it in its tracks.

[See also: The left’s patient gardener]
