OpenAI could soon ask you to complete an ID verification process to access its most advanced AI models. This comes as part of a new plan the company calls “Verified Organization,” which was quietly announced on a support page last week.
The process is designed for developers and businesses that use OpenAI’s API, giving them access to future high-level tools. But not everyone will qualify. To become verified, your organisation must submit a government-issued ID from one of the countries OpenAI currently supports. An ID can only be used to verify one organisation every 90 days.
Why OpenAI is adding ID checks
According to OpenAI, this move is all about safety and responsible AI use. While most developers follow the rules, the company says a “small minority” misuse the platform in ways that go against its usage policies. OpenAI says the new ID verification step is meant to help prevent this misuse while keeping access open to responsible developers.
“We take our responsibility seriously to ensure that AI is both broadly accessible and used safely,” reads a message on the support page. “Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies. We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”
“OpenAI released a new Verified Organization status as a new way for developers to unlock access to the most advanced models and capabilities on the platform, and to be ready for the ‘next exciting model release’. Verification takes a few minutes and requires a valid…”
— Tibor Blaho (@btibor91), April 12, 2025
The idea is to create a safer environment for people and organisations using OpenAI’s tools daily, especially as the technology becomes more powerful.
What the new process involves
According to OpenAI, verification takes just a few minutes to complete but requires a valid ID. Once approved, your organisation will be granted the new “Verified Organization” status, which could become a requirement for using future models and features.
OpenAI also hinted that this step is part of a larger push to prepare for the next big model release. If you’re a developer or a business relying on OpenAI tools, you might need to get verified soon to keep using all the platform’s new capabilities.
Tibor Blaho’s post on April 12 included a screenshot of the support page, describing the process and encouraging developers to prepare for what’s next.
A response to rising security concerns
This change is likely about more than policy enforcement. Verification could also strengthen security as OpenAI’s models grow more capable. The company has previously published reports on tracking and blocking misuse, including some linked to foreign actors.
In one example, OpenAI highlighted efforts to stop malicious use of its API by groups believed to be based in North Korea. In another case, OpenAI investigated a data breach connected to DeepSeek, an AI lab based in China. According to a Bloomberg report earlier this year, a group possibly linked to DeepSeek used OpenAI’s API to extract large amounts of information in late 2024. That information may have been used to train competing models, which goes against OpenAI’s rules.
This follows OpenAI’s decision to block access to its services in China last summer. As the platform continues to expand, protecting its data, tools, and users has become a top priority.
So, if you’re part of a business or development team using OpenAI’s tools, it might be time to get verified. While the process seems simple, it could become essential for continued access to OpenAI’s most powerful technology.