Week 7: What Do You Think?
“It is one of the unfortunate truisms of the human condition that there is hardly a good idea, noble impulse, or sound suggestion that can't be (and isn't eventually) adopted and bastardized by zealots… One iteration of this tendency is in the idea of ‘effective altruism.’”
This session, we’ll give you time to reflect on what you think of effective altruism, and of the specific potential priorities you’ve heard about so far.
We are dedicating a session to this because, to whatever extent we are wrong, realizing and correcting our mistakes will allow us to do more good. Honestly reckoning with strong counterarguments (from both within and outside of the EA community) can help us avoid confirmation bias and groupthink, and get us a little closer to identifying the most effective ways to do good.
Such critiques have led to important changes in what many EAs do. For example, in response to criticism that it shouldn't make moral tradeoffs on behalf of the people its recommended charities serve, GiveWell polled a sample of those recipients on how they themselves would make such tradeoffs.
A key concept for this session is the importance of forming independent impressions. In the long run, you’re likely to gain a deeper understanding of important issues if you think through the arguments for yourself. But (since you can’t reason through everything) it can still sometimes make sense to defer to others when you’re making decisions.
Required Materials
- Independent impressions (2 mins.)
- Notes on Effective Altruism (20 mins.) - a recent critique of effective altruism. Read the articles in More to Explore for others.
- While we’ve covered some of the most popular EA causes above, there are many other causes that we haven’t had space to cover. Please skim over this list of other causes to get a sense of other ideas that people in EA have discussed. (Note: you don’t need to read this whole post in detail!)
- Exercise, which includes reading and reflecting on criticisms of ideas covered in previous sessions (see below). The prompts:
  - What topics or ideas from the program do you most feel like you don’t understand?
  - What seems most confusing to you about each one?
  - Go back to that topic/idea and see if there are any further readings you can do that would help you address your uncertainties and explore any concerns. Do those readings. Consider writing notes on your confusion, stream-of-consciousness style.
  - List one idea from the program that you found surprising at first but now think more or less makes sense and is important. How could this idea be wrong? What’s the strongest case against it?
Exercise (1.5 hours)
For the exercise this session, we will take some time to reflect on the ideas we’ve engaged with over the past sessions. Our goal is to take stock and to identify our concerns and uncertainties about EA ideas.
Part 1 - What are your concerns about EA? (15 mins.)
We’ve covered a lot: the philosophical foundations of effective altruism, how to compare causes and allocate resources, and a look at some top priority causes using the EA framework.
What are your biggest questions, concerns, and criticisms based on what we’ve discussed so far? These can be about the EA framework/community, specific ideas or causes, or anything you’d like!
Please raise and discuss them at your next meeting!
Part 2 - Reflecting back (45 mins.)
You’ve covered a lot over the past sessions! We hope you found it an interesting and enjoyable experience. There are lots of major considerations to take into account when trying to do the most good you can, and lots of ideas may have been new and unfamiliar to you. This session we’d like you to reflect back on the program with a skeptical and curious mindset.
To recapitulate what we’ve covered:

The effectiveness mindset
Over the course of sessions 1 and 2, we aimed to introduce you to the core principles of effective altruism. We used global health interventions to illustrate these principles, partly because global health has been a key focus area for effective altruism and we have unusually good data for this cause area.

Differences in impact
We continued to explore the core principles of effective altruism, particularly through the lens of global health interventions, because they are especially concrete and well-studied. We focused on giving you tools to quantify and evaluate how much good an intervention can achieve, introduced expected value reasoning, and investigated differences in expected cost-effectiveness between interventions.

Radical empathy
This session focused on your own values and their practical implications. We explored who deserves our moral consideration, focusing especially on farmed animals as an important example of this question.

Our final century?
This session focused on existential risks: risks that threaten the destruction of humanity’s long-term potential. We examined why existential risks might be a moral priority, explored why they are so neglected by society, and looked into one of the major risks we might face: a human-made pandemic worse than COVID-19.

What could the future hold? And why care?
This session explored what the future might be like, and why it might matter. We covered arguments for “longtermism” - the view that improving the long-term future is a key moral priority - which can bolster the case for working on reducing some of the extinction risks covered in the previous two sessions. We also explored some views on what our future could look like, and why it might be pretty different from the present.

Risks from artificial intelligence
Transformative artificial intelligence may well be developed this century. If it is, it may begin to make many significant decisions for us, and rapidly accelerate changes like economic growth. Are we set up to deal with this new technology safely?
More to explore
- Effectiveness is a conjunction of multipliers (5 mins.) - one take on why it matters so much to think carefully and critically about which of the above perspectives is right.
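As a stylized illustration of the multipliers idea (the numbers below are invented for illustration, not taken from the post): if your impact is the product of several roughly independent choices, the factors compound multiplicatively rather than additively:

\[
\text{impact} = \underbrace{10}_{\text{choice of cause}} \times \underbrace{10}_{\text{choice of intervention}} \times \underbrace{10}_{\text{quality of execution}} = 1000 \times \text{baseline},
\]

so getting even one factor badly wrong can forfeit most of your potential impact.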
Types of criticism:
- Disagreeing about what’s effective isn’t disagreeing with effective altruism - Rob Wiblin differentiates critiques of effective altruism as a concept from critiques of the ways EAs attempt to apply this concept. (5 mins.)
Systemic change:
- Effective altruists love systemic change - Rob Wiblin argues that EA does not, in fact, neglect systemic change. (13 mins.)
- Beware Systemic Change (15 mins.) - a critique of pursuing systemic change: how hard is it to figure out which systemic changes will actually make things better? This piece is partly an expression of disagreement with others in EA who have embraced systemic change, which was itself partly a response to criticisms like those in the Boston Review.
Is effective altruism a question, an ideology, or both?
- Effective Altruism is a Question (not an ideology) (5 mins.)
- Effective Altruism is an Ideology, not (just) a Question (24 mins.)
General criticisms of effective altruism:
- Notes on Effective Altruism (20 mins.)
- The Centre for Effective Altruism’s responses to some common objections (10 mins.)
- Responses to The Logic of Effective Altruism (~20 mins., pick a few to read). Note that these critiques are from 2015. To view them, click the names under “Responses” at the bottom of the original article. Recommended excerpts:
  - Daron Acemoglu
  - Angus Deaton
  - Jennifer Rubenstein
  - Iason Gabriel
  - Peter Singer’s response
- Towards Ineffective Altruism (15 mins.)
- A critique of effective altruism (11 mins.)
- Another Critique of Effective Altruism (5 mins.)
- The motivated reasoning critique of effective altruism (34 mins.)
- Making decisions under moral uncertainty - Placing credence in multiple ethical systems leads to questions of moral uncertainty when those systems disagree. This post summarizes the problem and suggests ways to resolve such issues. (16 mins.) A minimal worked example follows this list.
- Some blindspots in rationality and effective altruism - An EA Forum post that discusses some common pitfalls for rationalists and effective altruists, as well as some meta-considerations. (12 mins.)
- Free-spending EA might be a big problem for optics and epistemics - EA Forum post on risks associated with EA spending trends (12 mins.)
- Critiques of EA that I want to read (16 mins.)
- Effective Altruism: Not Effective and Not Altruistic (27 mins.)
- Stop the Robot Apocalypse - Amia Srinivasan (15 mins.)
- EA and the current funding situation - not exactly criticism, but a discussion of some potential pitfalls of EA’s current funding situation (35 mins.)
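One standard approach covered in the moral uncertainty post above is to maximize expected choiceworthiness: weight each ethical theory’s verdict on an action by your credence in that theory, then sum. A minimal sketch with made-up numbers: suppose you have credence 0.7 in theory A, which rates an action at +10, and credence 0.3 in theory B, which rates it at -20. Then

\[
EC(\text{action}) = 0.7 \times 10 + 0.3 \times (-20) = 7 - 6 = 1,
\]

so the action comes out marginally positive in expectation, despite being strongly disfavored by a theory you take seriously. A standard worry about this approach is whether the different theories’ scales are comparable at all.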
Deference and forming inside views:
- Some thoughts on deference and inside view models (14 mins.)
- A sketch of good communication (4 mins.)
- How I formed my own views on AI safety (21 mins.)
- Deference Culture in EA (8 mins.)
- Bad Omens in Current Community Building (27 mins.)
Criticism of EA methods:
- A philosophical review of Open Philanthropy’s Cause Prioritisation Framework (42 mins.)
- Evidence, Cluelessness, and the Long Term - Hilary Greaves (30 mins.)
- Why we can’t take expected value estimates literally (even when they’re unbiased) - Holden Karnofsky explains why he takes issue with using expected value estimates of impact. (35 mins. - skimmable)
- Summary review of ITN critiques (8 mins.)
Criticism of EA principles:
- Pascal’s Mugging - a critique of the application of expected value theory: how do you deal with very-low-probability events that would be disastrous if they took place? (5 mins.) A worked example follows this list.
- Ethical Systems - Check out other ethical systems not discussed yet in the program. Which ones resonate most with you? (Varies)
- AI alignment, philosophical pluralism, and the relevance of non-Western philosophy - short talk (18 mins.)
- The Repugnant Conclusion - Total utilitarianism (maximizing overall wellbeing) implies that it’s better to have very many beings with barely positive wellbeing than a smaller number of beings who are all extremely well off. Some people find this counterintuitive, but there’s significant debate on this. (Video - 6 mins.) An arithmetic sketch follows this list.
- Utility monster - Another thought experiment suggesting that trying to maximize wellbeing may have counterintuitive implications (5 mins.)
- The bullet-swallowers - Scott Aaronson describes how some theories (like EA) force you to either swallow some tough conclusions or dodge them by contorting the theory. (2 mins.)
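For a concrete sense of why Pascal’s Mugging troubles naive expected value reasoning, here is a sketch with invented numbers: a mugger promises an astronomically large payoff at an astronomically small probability, and naive expected value maximization still says to pay up:

\[
\mathbb{E}[\text{paying}] = \underbrace{10^{-15}}_{\text{probability the mugger delivers}} \times \underbrace{10^{20}}_{\text{promised units of value}} = 10^{5} \text{ units},
\]

which can dwarf the certain cost of handing over your wallet. The open question is whether, and how, expected value theory should be modified to resist this.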
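Similarly, the Repugnant Conclusion turns on simple total-welfare arithmetic (illustrative numbers only): under total utilitarianism, a world of 10,000 people each at wellbeing 0.2 outranks a world of 10 people each at wellbeing 100, since

\[
10{,}000 \times 0.2 = 2{,}000 > 1{,}000 = 10 \times 100.
\]

Whether to accept this ranking or treat it as a reductio of total utilitarianism is exactly the debate the video describes.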