New paper with an exhaustive taxonomy of societal-scale AI risks, based on accountability:
arxiv.org/pdf/2306.06924
The taxonomy covers extinction, injustice, and other widespread harms. Additional taxonomies are still needed for a more diverse and robust perspective on risk. Meanwhile, you might appreciate:
* this self-fulfilling pessimism story:
arxiv.org/pdf/2306.06924
* this figure depicting industries that could eventually get out of control in a closed loop:
arxiv.org/pdf/2306.06924
...as in this "production web" story:
arxiv.org/pdf/2306.06924
* these two "bigger than expected" AI impact stories:
arxiv.org/pdf/2306.06924
* this email helper story and corrupt mediator story, which go together:
arxiv.org/pdf/2306.06924
arxiv.org/pdf/2306.06924
* this harmful A/B testing story:
arxiv.org/pdf/2306.06924
* concerns about weaponization by criminals and states:
arxiv.org/pdf/2306.06924
The point of the paper is not just to tell stories, but to illustrate a strategy for uncovering new risks and novel ways to address them: exhaustive taxonomy. It's a powerful thinking strategy, similar to classical fault tree analysis, and essential to mitigating AI risk.
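To make the fault-tree analogy concrete: in classical fault tree analysis, a top-level failure is decomposed through AND/OR gates into basic events, and every combination of events is enumerated so nothing is missed. A minimal sketch of that exhaustive-enumeration discipline (the toy tree and events here are hypothetical illustrations, not from the paper):

```python
from itertools import product

# Toy fault tree: the top event occurs if (A AND B) OR C,
# where A, B, C are basic events. We enumerate every
# assignment exhaustively -- the same "leave nothing out"
# discipline the paper applies to accountability for AI harms.

def top_event(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c

# All 2^3 combinations of basic events, filtered to failures
failure_modes = [bits for bits in product([False, True], repeat=3)
                 if top_event(*bits)]

print(len(failure_modes))  # 5 of the 8 assignments trigger the top event
```

The payoff of exhaustiveness is exactly that the surprising combinations (here, A and B together without C) can't be overlooked.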
Conversation
Can we make it a thing that we keep arxiv links to /abs/ and not /pdf/?
I prefer links to /abs/ when referencing the entire paper. But Critch also includes links to specific pages of the PDF here. Gotta use /pdf/ for those.
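The distinction the reply draws — /abs/ for whole-paper references, /pdf/ when the link targets a specific page — could be enforced with a small helper. A sketch only: the function name and regex are mine, and I'm assuming page-specific links carry a `#page=` style fragment that only resolves against the PDF:

```python
import re

def pdf_to_abs(url: str) -> str:
    """Rewrite an arXiv /pdf/ link to its /abs/ landing page.

    Links with a fragment (e.g. #page=7) are assumed to target a
    specific page of the PDF, so those are left untouched.
    """
    if "#" in url:  # page-anchored PDF link: keep as-is
        return url
    return re.sub(r"/pdf/(\d{4}\.\d{4,5})(v\d+)?(\.pdf)?$",
                  r"/abs/\1\2", url)

print(pdf_to_abs("arxiv.org/pdf/2306.06924"))         # -> arxiv.org/abs/2306.06924
print(pdf_to_abs("arxiv.org/pdf/2306.06924#page=7"))  # unchanged
```

This mirrors the thread's convention mechanically rather than relying on each poster to remember it.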