
Week 5: What Could the Future Hold? And Why Care?

"Longtermism" is the view that improving the long term future is a key moral priority of our time. This can bolster arguments for working on reducing some of the extinction risks that we covered in the last section. 

 

We’ll also explore some views on what our future could look like, and why it might be pretty different from the present. And we'll introduce forecasting: a set of methods for improving and learning from our attempts to predict the future.

 

Key concepts from this session include:

  • Impartiality: helping those who need it the most, and only discounting people according to location, time, or species if those factors are in fact morally relevant.

  • Forecasting: Predicting the future is hard, but it can be worth doing in order to make our predictions more explicit and learn from our mistakes.


You will also practice the skill of calibration, with the hope that when you say that something is 60% likely, it will happen about 60% of the time. This is important for making good judgments under uncertainty.
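
One concrete way to learn from predictive mistakes is to score forecasts once their outcomes are known. As a minimal illustration (in Python, with made-up numbers; this sketch is not part of the session materials), here is the Brier score, a standard accuracy measure for probabilistic forecasts: each forecast is scored as the squared gap between the stated probability and the actual outcome (1 if the event happened, 0 if not), so lower is better.

```python
# Hypothetical forecasts: (stated probability, actual outcome as 1/0).
forecasts = [
    (0.9, 1),  # said 90% likely; the event happened
    (0.6, 0),  # said 60% likely; it didn't
    (0.2, 0),  # said 20% likely; it didn't
    (0.7, 1),  # said 70% likely; it happened
]

# Brier score: mean squared gap between probability and outcome.
# 0.0 is perfect; always answering 50% scores 0.25.
brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Mean Brier score: {brier:.3f}")  # prints 0.125
```

Tracking a score like this over many resolved predictions is one simple way to make forecasting mistakes visible and measurable.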

Required Materials

Hinge of History:

The case for and against longtermism:

 

To what extent can we predict the future? How?

 

What might the future look like?

  • Top open Metaculus forecasts. Read the first few dozen results and reflect on what you find important and surprising. These are average predictions of how several important trends will unfold over the coming years. We're not sure how accurate they'll be, but we think they give a glimpse into the future.

  • Longtermism and animal advocacy (3 mins.)

 

Strategies for improving the long-term future (beyond reducing existential risks):

Exercise (45 mins.)

Part 1 - Helping in the present or in the future? (15 mins.)

A commonly held view within the EA community is that it's incredibly important to start by thinking about what it really means to make a difference, before thinking about specific ways of doing so. It's hard to do the most good if we haven't tried to get a clearer picture of what doing good means, and as we saw in session 3, clarifying our views here can be quite a complex task.

 

One of the core commitments of effective altruism is to the ethical ideal of impartiality. Although in everyday life we may reasonably have special obligations (e.g., to friends and family), in their altruistic efforts aspiring effective altruists strive to avoid privileging the interests of others based on arbitrary factors such as their appearance, race, gender, or nationality.

 

Longtermism posits that we should also avoid privileging the interests of individuals based on when they might live.

 

In this session's exercise, you'll reflect on some prompts to help you start working out what you think about this question: "Do the interests of people who are not alive yet matter as much as the interests of people living today?"

 

Spend a couple of minutes thinking through each prompt and note down your thoughts. Feel free to jot down uncertainties or open questions that seem relevant. We encourage you to record your thought process, but feel free to simply report your intuitions and gut feelings.

 

Of course, these thought experiments all assume an unrealistic level of certainty about your options and their outcomes. For the purpose of this exercise, however, we encourage you to accept the premise of the thought experiments rather than trying to find loopholes. The idea is to isolate one particular aspect of a situation (e.g., the timing of our impact) and try to get at our moral intuitions about just that aspect.

 

  1. Suppose that you could save 100 people today by burying toxic waste that will, in 200 years, leak out and kill thousands. Would you choose to save the 100 now and kill the thousands later? Does it make a difference whether the toxic waste leaks out 200 years from now or 2000?
     

  2. Imagine you donate enough money to the Against Malaria Foundation (AMF) to save a life. Unfortunately, there's an administrative error with the currency transfer service you used, and AMF isn't able to use your money until 5 years after you donated. Public health experts expect malaria rates to remain high over the next 5 years, so AMF expects your donation will be just as impactful in 5 years' time. Many of the lives that AMF saves are those of children under 5, so the life your money saves is that of someone who hadn't been born yet when you donated.

    If you had known this at the time, would you have been less excited about the donation?

Part 2 - When will we develop human-level AI? (30 mins.)

 

It's obviously not possible to just look this up, or to gather direct data on this question. So we need to gather what data and arguments we have, and make a judgment call. This applies to AI and other existential risks, but also to most other questions we're interested in: "How many chickens will be moved to better conditions if we pursue this advocacy campaign?", "How much do we need to spend on bednets to save a life?"

 

These judgments are really important: they could make a big difference to the impact we have.

 

Unfortunately, we don’t yet have definitive answers to these questions, but we can aim to become “well-calibrated.” This means that when you say you’re 50% confident, you’re right about 50% of the time, not more, not less; when you say you're 90% confident, you're right about 90% of the time; and so on. 
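
To make that definition concrete, here is a minimal sketch in Python (the prediction record is invented for illustration) of how you could check your own calibration: group past predictions by stated confidence and compare each group's hit rate to that confidence.

```python
from collections import defaultdict

# Hypothetical record: (stated confidence, whether you turned out to be right).
record = [
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.7, True), (0.7, True), (0.7, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]

# Group outcomes by the confidence level you stated.
buckets = defaultdict(list)
for confidence, correct in record:
    buckets[confidence].append(correct)

# Compare stated confidence to the actual hit rate in each group.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Said {confidence:.0%} confident -> right {hit_rate:.0%} "
          f"of the time ({len(outcomes)} predictions)")
```

If stated confidence and hit rate roughly match across many predictions, you're well-calibrated; hit rates consistently below your stated confidence are a sign of overconfidence.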

 

This exercise aims to help you become well-calibrated. The app you'll use contains thousands of questions - enough for many hours of calibration training - that will measure how accurate your predictions are and chart your improvement over time. Nobody is perfectly calibrated; in fact, most of us are overconfident. But various studies show that this kind of training can quickly improve the accuracy of your predictions.

 

Of course, most of the time we can’t check the answers to the questions life presents us with, and the predictions we’re trying to make in real life are aimed at complex events. The Calibrate Your Judgment tool helps you practice on simpler situations where the answer is already known, providing you with immediate feedback to help you improve.

 

Please use the Calibrate Your Judgment app for 30 minutes before this session.

More to explore

Global historical trends:

Forecasting:

The case for longtermism:

Criticism of longtermism:

Suffering risks:
