
Eliezer Yudkowsky began writing about AI alignment in the early 2000s, long before it became a recognized field. He co-founded the Machine Intelligence Research Institute to work on the problem of ensuring that superintelligent AI remains beneficial. Yudkowsky developed foundational concepts in AI safety, including the notion of coherent extrapolated volition and detailed analyses of how powerful optimization processes can produce unintended outcomes. His "Sequences" on rationality trained a generation of researchers to think clearly about these problems. He has grown increasingly pessimistic about humanity's chances of surviving advanced AI, but his warnings stem from taking the problem seriously rather than from sensationalism.
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
Artificial Intelligence as a Positive and Negative Factor in Global Risk · 2008