Preventing S-risks
Why is this a pressing problem?
‘Suffering risks’, or ‘s-risks’, are risks of events that could bring about suffering on an astronomical scale – for example, if technological advances eventually enable space colonisation. While s-risks are therefore speculative, there are various reasons – based on historical events and present-day developments – to believe they could come about in the future.
Firstly, new technologies – such as artificial general intelligence, or even technologies we haven’t yet anticipated – could increase the chance of unprecedented suffering. They could concentrate, and even indefinitely consolidate, power in the hands of those who develop and control them; for example, leaders high in sadism or psychopathy could become far more capable of remaining in power. AI systems could also cause vast harm in the case of an alignment ‘near miss,’ lock in values that permit large amounts of unnecessary suffering (e.g. in animals), and increase the potential downsides of large-scale conflict. Additionally, future AI systems could be sentient and might themselves suffer – a potential disaster, given that there are many reasons to expect digital minds could come to outnumber biological minds, for instance because they may be more efficient and faster to replicate.
Secondly, humanity’s history shows that we can’t assume powerful technologies will always be used well: it contains numerous examples of intentional cruelty and of indifference to the well-being of other groups and of animals, as well as examples of technological advancement increasing the scale and severity of pre-existing harms (as in the case of factory farming).
Finally, there may be far more lives in the future than have ever existed to date, particularly if humanity colonises space. This could mean that many more humans, as well as farmed and wild animals, will have lives that could go well or badly. Another possibility is that space will be colonised with artificial agents. If these artificial agents are sentient, this could be where most future happiness and suffering reside.
As research into how best to prevent s-risks is in its early stages, exploring the plausibility of suffering-focused ethical views, increasing concern for suffering to build the field, or doing preliminary research on the interventions that look most promising from the perspective of reducing s-risks all seem like useful directions to focus on. Keep reading this introduction for ideas of research questions you could pursue in these areas.
You could also do research on more specific problems that could be promising to work on if your priority is reducing s-risks. See the profiles below this introduction for a range of research directions that are relevant to s-risks, but bear in mind not all questions in these profiles will be promising from the perspective of reducing s-risks. If you want to work on reducing s-risks, we recommend applying for coaching and reaching out to the Center for Reducing Suffering or the Center on Long-Term Risk for guidance on choosing a research question.
The Suffering-focused ethics resources page is a good place to start if you want to learn more about this value system.
Research Papers & Posts:
Cause prioritization for downside-focused value systems – Lukas Gloor
Risks of Astronomical Future Suffering – Brian Tomasik
Astronomical suffering from slightly misaligned artificial intelligence – Brian Tomasik
Reducing long-term risks from malevolent actors – David Althaus and Tobias Baumann
When does technical work to reduce AGI conflict make a difference? – Jesse Clifton, Samuel Dylan Martin, Anthony DiGiovanni
On fat-tailed distributions and s-risks – Magnus Vinding
S-risk impact distribution is double-tailed – Magnus Vinding and Tobias Baumann
Books:
Avoiding the Worst: How to Prevent a Moral Catastrophe, Tobias Baumann
Or listen to the free audiobook.
Suffering Focused Ethics: Defense and Implications, Magnus Vinding
The Tango of Ethics, Jonathan Leighton
Reasoned Politics, Magnus Vinding
Organisations:
The Center for Reducing Suffering researches the ethical views that place particular weight on preventing suffering, and considers practical approaches to reducing s-risks.
The Center on Long-Term Risk focuses on reducing s-risks that could arise from the development of AI, alongside community-building and grantmaking to support work on the reduction of s-risks.
Many other organisations are working on solving problems that it might make sense to work on if you want to prioritise reducing s-risks. See the profiles we list below for further ideas.
Sign up to this newsletter for updates and opportunities from the Center for Reducing Suffering.
Contributors: This introduction was published 19/06/23. Thanks to Anthony DiGiovanni and Winston Oswald-Drummond for helpful feedback on this introduction. All errors remain our own.