Just months after departing OpenAI over safety concerns, Ilya Sutskever is back in the AI world, having raised $1 billion in cash for a new venture, according to Reuters.
Aptly named Safe Superintelligence (SSI), the startup aims to develop artificial intelligence that can surpass human intellect — but in a safe, controlled manner. The startup is backed by VC heavy hitters Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel, and Reuters says the valuation was close to $5 billion.
Sutskever was at the center of last year's leadership battle at OpenAI, having served on the board that voted to oust CEO Sam Altman. When Altman returned to lead OpenAI after an employee revolt, Sutskever stepped down from the board, and eventually quit on conciliatory terms. Days after his departure, OpenAI disbanded the superalignment team that Sutskever led, a group responsible for making sure the development of the company's AI models was safe and on a trajectory that benefited humanity.
While Sutskever co-founded SSI, the startup's CEO will be Daniel Gross, who previously led AI efforts at Apple.
"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross told Reuters.
Sutskever will serve as chief scientist of SSI, while the company's third co-founder and former OpenAI researcher, Daniel Levy, will serve as principal scientist.