Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has launched a new AI company named Safe Superintelligence.
The startup aims to build AI that surpasses human intelligence while ensuring it remains safe.
Sutskever played a central role in the controversial ouster of Sam Altman, OpenAI’s chief executive, in November 2023, a decision he later said he regretted.
He left OpenAI last month, initially saying little about his plans.
Sutskever co-founded Safe Superintelligence with Daniel Gross, a former AI lead at Apple, and Daniel Levy, a former OpenAI researcher.
The company’s singular focus is on creating safe superintelligence, with no intention of releasing other products. Sutskever will assume the role of chief scientist, aspiring to achieve “revolutionary breakthroughs.”
The launch of OpenAI’s ChatGPT in November 2022 marked a milestone for generative AI, demonstrating its potential to transform everyday tasks, from managing email to powering digital assistants.
The AI industry has also faced legal challenges: The New York Times has sued OpenAI and Microsoft for copyright infringement, alleging that its articles were used without permission to train their AI systems.
In a dramatic turn of events, Sam Altman was reinstated as OpenAI’s CEO following an employee backlash and subsequent board restructuring.
Sutskever has voiced increasing concerns about the dangers of AI, emphasizing the importance of implementing safety measures. Jan Leike, who co-led OpenAI’s Superalignment team with Sutskever, also resigned and joined Anthropic, a rival AI company.
Safe Superintelligence enters the scene at a critical juncture for AI development, when the tension between rapid innovation and safety has never been sharper.