
Ilya Sutskever was the co-founder and chief scientist who shaped OpenAI's technical direction from the company's founding through the GPT era. A doctoral student of Geoffrey Hinton, Sutskever combined deep theoretical knowledge with practical engineering to build increasingly capable AI systems. His role in OpenAI's November 2023 board crisis, initially supporting Sam Altman's removal and then reversing course, reflected genuine internal conflict over the pace and safety of AI development. In 2024, Sutskever left OpenAI to found Safe Superintelligence Inc., a company dedicated to building superintelligent AI safely. The move signaled his belief that safety and capabilities research cannot be separated: you must build the thing in order to make it safe.
“It's possible that AI will lead to the end of humanity, but it's also possible that it will lead to an unimaginably wonderful future.”
paraphrased · 2023