Ilya Sutskever, co-founder and former chief scientist of OpenAI, has embarked on a new project dedicated to building safe AI. On Wednesday, he announced the launch of Safe Superintelligence Inc. (SSI), a startup with a single mission: to develop a powerful AI system with safety as the top priority.
SSI’s unique approach to AI
SSI intends to pursue safety and capabilities in tandem, advancing its AI system rapidly while keeping safety the top priority. The company wants to avoid the external pressures that often weigh on AI teams at big tech firms like OpenAI, Google, and Microsoft. By maintaining a “singular focus,” SSI hopes to sidestep distractions from management overhead and product cycles.
“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.” This approach lets SSI concentrate solely on its goal without being sidetracked by typical commercial demands.
SSI was co-founded by notable figures in the AI field. Alongside Sutskever are Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former member of the technical staff at OpenAI. The trio brings a wealth of experience and expertise to the new venture.
Background and motivation
Last year, Sutskever was at the forefront of a push to remove OpenAI CEO Sam Altman. After leaving OpenAI in May, he hinted at starting a new project. His departure was followed by that of AI researcher Jan Leike, who warned that safety processes had taken a backseat to the development of flashy products. Gretchen Krueger, a policy researcher at OpenAI, also cited safety concerns when she resigned.
SSI’s mission reflects Sutskever’s ongoing commitment to addressing these safety issues in AI development. By focusing on safety from the start, SSI aims to build AI systems that are not only advanced but also secure and reliable.
SSI opens a new chapter in AI development, one defined by a singular mission and an emphasis on safety. With the combined expertise of Sutskever, Gross, and Levy, the company is well-positioned to make significant strides in artificial intelligence.