As AI-generated images increasingly flood Google’s search results, it has become harder for users like you to find genuine content. In response, Google has announced that it will begin flagging AI-generated and AI-edited images to help you distinguish between real and AI-created content. This update will roll out in the coming months, adding labels to images in Search, Google Lens, and Android’s Circle to Search feature.
A new way to identify AI images
Google’s new system will give you more transparency when browsing images online. Using the “About this image” tool, you can see whether AI has generated or altered an image. This feature will give you insights into the image’s origin, making it easier to avoid misleading content. Google will apply this change not only to search results but also to its advertising services. Furthermore, the tech giant is exploring similar measures for YouTube videos, promising more updates later in the year.
The company will rely on metadata defined by the Coalition for Content Provenance and Authenticity (C2PA), a group it joined as a steering committee member earlier this year. The C2PA metadata will record an image’s creation details, including when and where it was generated and what software or equipment was used. This technology aims to give users more control and awareness while browsing.
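Because the C2PA specification is openly documented, it’s possible to sketch roughly what consuming this metadata could look like. The Python snippet below is a minimal illustration, not Google’s actual implementation: the load_manifest helper is hypothetical (a real application would extract the manifest with a C2PA reader such as the open-source c2patool), while the c2pa.actions assertion label and the IPTC trainedAlgorithmicMedia source-type URI it checks for are taken from the published C2PA standard.

```python
import json

# Hypothetical helper: stands in for a real C2PA reader (e.g. the
# open-source c2patool), which would extract and verify the manifest
# embedded in the image. Here we just load pre-extracted JSON.
def load_manifest(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# IPTC digital source type URI that C2PA-compliant AI generators use
# to mark fully machine-generated media.
AI_GENERATED = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def describe_provenance(manifest: dict) -> str:
    """Summarize what a C2PA manifest says about an image's origin."""
    # claim_generator names the software that produced the claim,
    # e.g. camera firmware or an image-generation service.
    generator = manifest.get("claim_generator", "unknown software")

    # Walk the manifest's assertions looking for recorded actions
    # (created, edited, ...) and their declared source type.
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                if action.get("digitalSourceType") == AI_GENERATED:
                    return f"AI-generated (created with {generator})"
                if action.get("action") == "c2pa.edited":
                    return f"edited with {generator}"
    return f"no AI markers found (claimed by {generator})"

if __name__ == "__main__":
    manifest = load_manifest("manifest.json")  # pre-extracted C2PA claim
    print(describe_provenance(manifest))
```

A tool like “About this image” would layer cryptographic signature checks on top of this: the manifest is only trustworthy if its signing chain validates, which is what distinguishes C2PA provenance from ordinary, easily edited EXIF metadata.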
A growing industry effort
Google is not alone in this effort. Leading tech companies, including Amazon, Microsoft, OpenAI, and Adobe, have joined the C2PA. They aim to establish a standard that will help fight misinformation and digital manipulation. However, while many big names are on board, the standard is still in its early stages. Currently, it’s only supported by a few devices, such as some models of Sony and Leica cameras. Notably, some developers of AI-generation tools, such as Black Forest Labs, have yet to adopt the standard.
The widespread use of AI in generating images has raised concerns about authenticity, particularly in light of recent online scams. AI-generated deepfakes have become a growing problem as scammers exploit how accessible the technology has become. In February, a finance worker at a Hong Kong-based firm lost US$25 million to criminals who posed as the company’s CFO in a video conference, using AI to fake the CFO’s appearance and voice. A recent study by verification company Sumsub revealed a 245% global increase in deepfake-related scams between 2023 and 2024, with a 303% spike in the U.S. alone.
Tackling online scams
Experts warn that AI technology is lowering the barrier for cybercriminals. “The public accessibility of these services has lowered the barrier of entry for cybercriminals,” said David Fairman, APAC chief information officer and chief security officer at Netskope. “They no longer need special technological skill sets,” he told CNBC in May. With AI increasingly used in fraud, stronger identification measures are becoming crucial. Google’s new labels aim to tackle this issue head-on, giving you more security and confidence when browsing images online.
Google’s decision to label AI-generated images is an important step in the fight against misinformation and online scams. It’s also a move that can help protect you from falling victim to deepfakes while offering more clarity in your search results. As AI continues to evolve, these types of protective measures will likely become more common across the web, making the digital world a safer and more transparent place.