The Alignment Library
Comprehensive Knowledge Base on AI Alignment
A structured resource covering the fundamental problems, proposed solutions, and research frontiers of artificial intelligence alignment.
Core Problems
Explore fundamental challenges: outer alignment, inner alignment, corrigibility, and more.
Solutions & Research
Current approaches: RLHF, Constitutional AI, Debate, Mechanistic Interpretability, and their limitations.
Organizations & Researchers
Key players: MIRI, Anthropic, OpenAI, and leading researchers in the field.
Learning Resources
Curated reading lists, papers, videos, and courses organized by difficulty level.
A Note on P(doom)
This library presents alignment challenges honestly. Some prominent researchers estimate very high probabilities of existential risk from advanced AI, with figures ranging from 50% to over 99%. The content reflects current technical understanding without false optimism.