Tyler Cowen on Effective Altruism

A summary of Tyler Cowen's talk on the Effective Altruism movement.

[Image: a calculator that extends into outer space. Created by Midjourney.]

Sometimes, instead of offering yet another take, it's valuable simply to capture someone else's ideas. In that spirit, this is a summary of a recent talk Tyler Cowen gave on the Effective Altruism movement. I might do more of this in the future.

Core propositions or themes of EA:

Effective altruism has much in common with utilitarianism:

  • Philosophy should be a central, guiding principle for life.
  • Impartiality: We should care equally about every life, regardless of its proximity to our own in space or time.

Effective altruism also departs from utilitarianism in the following ways:

  • Existential risk as an emphasis: EA is invested in the idea that the world as we know it could end—so we should actively work to reduce this risk.
  • Legibility: The decision of where to make charitable contributions should be a clear and objective calculation.
  • Scalability: The impact of your contribution could be enormous (if, for example, it helps avert a global catastrophe).

Cowen likes at least two specific aspects of EA:

  1. The movement attracts a lot of talented young people. Cowen supports anything that spots talent.
  2. EA encourages us to rethink how charity should be approached. Cowen calls the traditional model of charity "poorly conceived, ill-thought-out and badly managed."

Cowen's disagreements with EA

  • We are unable to calculate utility impartially. A central idea in EA is Peter Singer's famous thought experiment, in which you can save a drowning child at the cost of muddying your coat. You obviously save the child. But Cowen doesn't think that intuition applies universally.
  • There is some "inescapable partiality" in our real-world moral calculations. Can you really quantify the difference in quality of life between two radically different versions of a human life, or between one human life and some number of cockroach lives? We don't have a "natural unit of comparison," which makes the sorts of utility calculations EA aspires to ultimately unrealistic. In economic argot: "There's something [true] there at the margin, but at the macro level, there's just not a meaningful comparison."
  • In other words, it's great to be marginally more impartial and empirical, but we can't be that way universally. Our partiality (for human life, for our own families, etc.) constrains the ideal of legible moral calculations. We prefer humans to cockroaches absolutely.
  • Known risks are a better focus than novel "existential" risks. Cowen worries more about well-known, well-studied risks like nuclear war, an area where EA has less to add than, say, foreign policy experts.
  • With respect to the risk of artificial general intelligence, or AGI (murderous robots, another salient concern in EA): since we can't wave a magic wand and halt development, we should prefer that the U.S. lead rather than a hostile foreign authoritarian regime. (There's that partiality again.)

Why it matters

  • Cowen says he is interested in these ideas because other smart, talented people are. But to offer my own view: the rise of AI in everyday life will make moral calculation one of the key debates of the next 20 years.

The full talk: