
Safety concerns continue to plague OpenAI

OpenAI employees voice safety concerns over rushed launches and inadequate protocols, questioning the company's commitment to AI safety.

OpenAI, a leader in the race to develop AI as intelligent as a human, is facing ongoing safety concerns voiced by its own employees. Despite a valuation of US$80 billion, the nonprofit research lab has been under scrutiny, with employees speaking out in the press and on podcasts about safety issues. According to an anonymous source cited by The Washington Post, OpenAI rushed through safety tests and celebrated its product's launch before knowing whether it was safe.

"They planned the launch after-party before knowing if it was safe to launch," an anonymous employee told The Washington Post. "We failed at the process."

Rising safety concerns

Safety issues at OpenAI are becoming more frequent. Current and former employees have signed an open letter demanding better safety and transparency practices. This comes shortly after the safety team was dissolved following the departure of cofounder Ilya Sutskever. Additionally, Jan Leike, a key OpenAI researcher, resigned, stating that "safety culture and processes have taken a backseat to shiny products" at the company.

Safety is a core principle for OpenAI. The company's charter includes a clause that pledges to assist other organisations in advancing safety if a competitor achieves AGI (artificial general intelligence). Despite claiming dedication to solving safety problems in its complex system, OpenAI has faced criticism for prioritising other aspects over safety.

"We're proud of our track record of providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," OpenAI spokesperson Taya Christianson said in a statement to The Verge. "Rigorous debate is critical given the significance of this technology, and we will continue to engage with governments, civil society, and other communities worldwide in service of our mission."

The stakes are high

The stakes around safety in AI are immense. In March, a report commissioned by the US State Department highlighted that "current frontier AI development poses urgent and growing risks to national security." The report compared the potentially destabilising effects of advanced AI and AGI to the introduction of nuclear weapons.

The recent controversies at OpenAI followed the boardroom coup last year that briefly ousted CEO Sam Altman. The board cited a failure to be "consistently candid in his communications" as the reason for his removal, leading to an investigation that did little to reassure the staff.

OpenAI spokesperson Lindsey Held told the Post that the GPT-4o launch "didn't cut corners" on safety. However, another unnamed company representative admitted that the safety review timeline was compressed to a single week. "We are rethinking our whole way of doing it," the anonymous representative told the Post. "This was just not the best way to do it."

Attempts to address safety concerns

OpenAI has made several announcements addressing safety concerns in response to the rolling controversies. This week, the company announced a partnership with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid in scientific research. The announcement emphasised Los Alamos's strong safety record. Additionally, an anonymous spokesperson told Bloomberg that OpenAI has created an internal scale to track the progress of its large language models towards achieving AGI.

Despite these efforts, OpenAI's safety-focused announcements look like defensive measures in the face of growing criticism. It's clear that OpenAI is under pressure, but public relations efforts alone will not suffice to safeguard society. If OpenAI is indeed developing AI without strict safety protocols, as insiders claim, the potential impact on people beyond the Silicon Valley bubble is significant. The average person has no say in the development of privatised AGI, yet their protection from OpenAI's creations is at stake.

"AI tools can be revolutionary," FTC chair Lina Khan told Bloomberg in November. However, she expressed concerns that "a relatively small number of companies control the critical inputs of these tools."

If the numerous claims against OpenAI's safety protocols are accurate, it raises serious questions about its fitness to steward AGI, a role the organisation has essentially assigned itself. Allowing one group in San Francisco to control potentially society-altering technology is cause for concern, and there is an urgent demand for transparency and safety, even within OpenAI's ranks.
