Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change (#612)

Illustration via 99designs

“I often think to myself, ‘Could I justify my life now to my 15-year-old self?’ And if the answer is no, then I’m a bit like, ‘Oh, what are you doing? You’re not living up to what earlier Will would’ve wanted for present Will.'”

— Will MacAskill

William MacAskill (@willmacaskill) is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator-backed 80,000 Hours, which together have moved over $200 million to effective charities. You can find my 2015 conversation with Will at tim.blog/will.

His new book is What We Owe the Future. It is blurbed by several guests of the podcast, including Sam Harris, who wrote, “No living philosopher has had a greater impact upon my ethics than Will MacAskill. . . . This is an altogether thrilling and necessary book.” 

Please enjoy!

Listen to the episode on Apple Podcasts, Spotify, Overcast, Podcast Addict, Pocket Casts, Castbox, Google Podcasts, Stitcher, Amazon Music, or on your favorite podcast platform. You can watch the interview on YouTube here.

Brought to you by LinkedIn Jobs recruitment platform with 800M+ users, Vuori comfortable and durable performance apparel, and Theragun percussive muscle therapy devices. More on all three below.

The transcript of this episode can be found here. Transcripts of all episodes can be found here.


This episode is brought to you by Vuori Clothing! Vuori is a new and fresh perspective on performance apparel, perfect if you are sick and tired of traditional, old workout gear. Everything is designed for maximum comfort and versatility so that you look and feel as good in everyday life as you do working out.

Get yourself some of the most comfortable and versatile clothing on the planet at VuoriClothing.com/Tim. Not only will you receive 20% off your first purchase, but you’ll also enjoy free shipping on any US orders over $75 and free returns.


This episode is brought to you by Theragun! Theragun is my go-to solution for recovery and restoration. It’s a famous, handheld percussive therapy device that releases your deepest muscle tension. I own two Theraguns, and my girlfriend and I use them every day after workouts and before bed. The all-new Gen 4 Theragun is easy to use and has a proprietary brushless motor that’s surprisingly quiet—about as quiet as an electric toothbrush.

Go to Therabody.com/Tim right now and get your Gen 4 Theragun today, starting at only $179.


This episode is brought to you by LinkedIn Jobs. Whether you are looking to hire now for a critical role or thinking about needs that you may have in the future, LinkedIn Jobs can help. LinkedIn screens candidates for the hard and soft skills you’re looking for and puts your job in front of candidates looking for job opportunities that match what you have to offer.

Using LinkedIn’s active community of more than 800 million professionals worldwide, LinkedIn Jobs can help you find and hire the right person faster. When your business is ready to make that next hire, find the right person with LinkedIn Jobs. And now, you can post a job for free. Just visit LinkedIn.com/Tim.


Want to hear the first time Will MacAskill was on this podcast? Have a listen to my 2015 interview with Will MacAskill here, in which we discuss how to take a scientific approach to doing good, charity spending for the poorest of the poor versus investing in future generations, the perils of pursuing your passion, underrated existential threats, life decision frameworks, and much more.

#120: Will MacAskill on Effective Altruism, Y Combinator, and Artificial Intelligence

What was your favorite quote or lesson from this episode? Please let me know in the comments.

SCROLL BELOW FOR LINKS AND SHOW NOTES…

SELECTED LINKS FROM THE EPISODE

  • Connect with Will MacAskill:

Website | Twitter

SHOW NOTES

  • [07:20] Recommended reading.
  • [13:26] How Dostoevsky’s Crime and Punishment changed Will’s life.
  • [18:12] Maintaining optimism in the age of doomscrolling.
  • [23:41] What is effective altruism?
  • [26:04] Resources for maximizing the impact of your philanthropy.
  • [27:45] How adopting a check-in system has most improved Will’s life.
  • [32:32] Caffeine limits.
  • [34:08] Effective back pain relief.
  • [41:18] What is longtermism, and why did Will write What We Owe the Future?
  • [43:44] Future generations matter.
  • [46:42] Finding the line between apathy and fatalism that spurs action toward ensuring there’s a future.
  • [52:23] What Will hopes readers take away from What We Owe the Future.
  • [55:56] What is value lock-in?
  • [1:01:38] Most concerning threats projected over the next 10 years.
  • [1:09:28] Most promising developments happening now.
  • [1:13:47] How Will refocuses during periods of overwhelm.
  • [1:18:48] Perils of AI considered plausible by the people who create it.
  • [1:30:42] Longtermist-minded resources and actions we can take now.
  • [1:36:29] Parting thoughts.

MORE GUEST QUOTES FROM THE INTERVIEW

“I give away most of my income, which is a very unusual thing to do. And you might think, oh, that’s a sacrifice that’s making my life worse, but actually I find it kind of empowering because I am making an autonomous decision. I am not merely following the dictates of what social convention is telling me to do, but I’m reasoning about things from first principles and then making a decision that’s genuinely, authentically mine.”
— Will MacAskill

“In the last podcast, we talked a lot about global health and development and what’s the difference you can make? Well, if you are a middle-class member of a rich country, it’s on the order of saving dozens, hundreds, maybe even thousands of lives over the course of your life, if you put your mind to it. That’s huge. Now, we’re talking about existential risks and the long-term future of humanity. What’s the difference you can make? You can play a part in being pivotal in putting humanity onto a better trajectory for, not just centuries, but for thousands, millions, or even billions of years.”
— Will MacAskill

“The amount of good that you can do is truly enormous. You can have cosmic significance, and that’s pretty inspiring.”
— Will MacAskill

“When you think about the difference you can make rather than just focusing on the magnitude of the problems, I think there’s every reason for optimism.”
— Will MacAskill

“Even just the progress we’ve made over the last few hundred years, people today have far, far better lives. If you extrapolate that out just a few hundred years more, let alone thousands of years, then there’s at least a good chance that we could have a future where everyone lives, not just as well as the best people alive today, but maybe tens, hundreds, thousands of times better.”
— Will MacAskill

“One thing I think that a lot of people find motivating is this thought that you’re part of this grand project, much, much grander than yourself, of trying to build a good and flourishing society over the course of not just centuries, but thousands of years, and that’s one way in which our lives have meaning.”
— Will MacAskill

“We face truly enormous challenges in our life. Many of these challenges are very scary. They can be overwhelming. They can be intimidating. But I really believe that each of us individually can make an enormous difference to these problems. We really can significantly help as part of a wider community to putting humanity onto a better path. And if we do, then the future really could be long and absolutely flourishing. And your great-great-grandkids will thank you.”
— Will MacAskill


The Tim Ferriss Show is one of the most popular podcasts in the world with more than 900 million downloads. It has been selected for "Best of Apple Podcasts" three times, it is often the #1 interview podcast across all of Apple Podcasts, and it's been ranked #1 out of 400,000+ podcasts on many occasions. To listen to any of the past episodes for free, check out this page.


Comment Rules: Remember what Fonzie was like? Cool. That’s how we’re gonna be — cool. Critical is fine, but if you’re rude, we’ll delete your stuff. Please do not put your URL in the comment text and please use your PERSONAL name or initials and not your business name, as the latter comes off like spam. Have fun and thanks for adding to the conversation! (Thanks to Brian Oberkirch for the inspiration.)

6 Replies to “Will MacAskill of Effective Altruism Fame — The Value of Longtermism, Tools for Beating Stress and Overwhelm, AI Scenarios, High-Impact Books, and How to Save the World and Be an Agent of Change (#612)”

  1. Hi there,

    It is surprising that Will MacAskill does not include climate change in his list of existential threats; at the very least, it was not short-listed as the most pressing issue.

    I am writing this from a place where there hasn’t been any rain for the last month (I live in Europe) and where every day brings scorching temperatures that, a few years back, weren’t considered the norm at all.

    Two days ago, the BBC published an article, “Climate change: More studies needed on possibility of human extinction.” Scientists are urging more research into worst-case scenarios, including the possibility that “By 2070, these temperatures [an average annual temperature of 29°C] and the social and political consequences will directly affect two nuclear powers, and seven maximum containment laboratories housing the most dangerous pathogens. There is serious potential for disastrous knock-on effects,” according to Chi Xu of Nanjing University.

    I would be curious to know what justifies the omission of this topic from MacAskill’s list.

    All the best,
    Ania

  2. Hi Tim, when discussing “Everybody Worships,” you were not sure what was meant by YHWH. That is the divine name in Hebrew, represented by four consonants. It is commonly rendered Yahweh (from the Hebrew) or Jehovah (in English). It is the name of the creator in the Bible, and YHWH appears in the oldest manuscripts over 7,000 times.
    Thanks for your efforts to keep us informed and to make us think a little deeper on many topics.

    Bryan

  3. Very interesting talk, addressing a lot of my fears. One of the existential threats Will fears is future pandemics. There has been no real debate around the covid-19 pandemic — treatment, true numbers, the vaccine, “with or of” covid counts — which I find odd at best and disturbing at worst. So what have we learned from it that could be applied in the future?

    However, my main point is that Will mentioned light bulbs that could potentially kill microbes and stop future pandemics. Humans have evolved with germs and microbes, infection and illness. We have a gut full of friendly microbes, and exposure to microbes exercises our immune system and keeps it robust, as do the right diet and nutrients, sleep, exercise, clean water, etc. It is about the terrain; a healthy body has natural protection against serious illness. Hence, the average age of death “of” covid was 83.4 (above average). What I see as an existential threat is human interference and greed disturbing our balance and harmony, such as genetically modified “Frankenfood” and gene therapy. That will not end well.

  4. My experience of Effective Altruism so far has been that they do have a focus on existential risk, which I think is great. We can’t really talk about that too much.

    After discussing this across very many pages of their forum, though, I don’t think they yet really understand the nature of existential risk; they have it confused with a huge pile of details about AI and related topics.

    I’m not persuaded that discussions of AI alignment and so on are really meaningful so long as the knowledge explosion is generating ever more, ever larger powers, at an ever accelerating rate. Overcoming existential risk requires, by definition, winning every single time. This is not going to be possible so long as the knowledge explosion is generating new risks faster than we can solve the existing risks.

    My take is that the EA community is made up of very intelligent, well-intentioned people who don’t really understand existential risk. But they think they do. So there’s not much that can be done at the moment.