Jan Leike, a leading researcher at OpenAI, has resigned, citing a shift in the company’s focus from safety to product development. His departure follows closely behind that of co-founder Ilya Sutskever, amidst growing concerns over the prioritisation of AI safety.
"Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity."
— Jan Leike (@janleike) May 17, 2024
Leike explained his resignation in a series of posts on X (formerly Twitter), where he expressed concerns about OpenAI’s commitment to AI safety protocols. According to Leike, the organisation has increasingly favoured the development of consumer AI products like ChatGPT and DALL-E over the essential safety measures needed for advanced AI technologies. His departure follows the disbandment of the Superalignment team, a group Leike led that was dedicated to addressing long-term AI risks.
The Superalignment Team and Its Disbandment
The Superalignment team was established in July 2023 to tackle “the core technical challenges” of AI safety as OpenAI ventured into developing AI capable of human-like reasoning. However, Wired reported that the team has been disbanded, prompting further speculation about the company’s direction and safety commitments. OpenAI originally aimed to make its AI models publicly available, in keeping with the organisation’s name and mission, but those plans changed and the models became proprietary, with the company citing concerns over potential misuse.
Leadership Changes and Future Directions
Following the departures of Leike and Sutskever, John Schulman, another co-founder, is set to take over Leike’s responsibilities. The change comes during a tumultuous period for OpenAI, which last year saw the board briefly oust CEO Sam Altman before he was reinstated.
Leike’s decision to leave highlights a critical point of contention within OpenAI over its operational priorities. He emphasised the need to take the implications of artificial general intelligence (AGI) seriously so that it benefits all of humanity, a sentiment he felt was being overshadowed by the pursuit of marketable products.