Ilya Sutskever, a prominent AI researcher and a co-founder and former chief scientist of OpenAI, has launched Safe Superintelligence Inc., a new venture dedicated to the safe development of artificial intelligence. The company, co-founded with Daniel Gross and Daniel Levy, aims to focus exclusively on creating "superintelligence" (AI systems surpassing human cognitive capabilities) while avoiding the distractions of management overhead and product cycles. Based in Palo Alto, California, and Tel Aviv, Safe Superintelligence Inc. intends to recruit top technical talent in both regions, where its founders say they have deep roots.

Sutskever's move follows his role in last year's contentious attempt to remove OpenAI CEO Sam Altman, an episode that exposed internal conflict over business priorities versus AI safety. Sutskever has since expressed regret for the turmoil it caused within OpenAI. During his tenure there, he co-led OpenAI's Superalignment team, which worked on the safe development of artificial general intelligence (AGI), AI whose capabilities extend beyond human intellect.

The establishment of Safe Superintelligence Inc. signals Sutskever's renewed commitment to pursuing AI safety independently, free from the commercial pressures that, in his view, can compromise safety priorities. His departure from OpenAI was followed by the resignation of Jan Leike, his co-leader on that team, who publicly criticized OpenAI for prioritizing product development over safety. In response, OpenAI formed a safety and security committee, though it is composed largely of company insiders.

Safe Superintelligence Inc. thus represents Sutskever's effort to ensure that superintelligent AI is developed safely, consistent with his long-standing commitment to advancing AI responsibly.