Monday, 13 October 2025

Google to label AI-generated images in search results

Google will label AI-generated images in Search, giving users clearer insight into an image’s origin and improving transparency in search results to tackle deepfake scams.

As AI-generated images increasingly flood Google’s search results, it has become harder for users like you to find genuine content. In response, Google has announced that it will begin flagging AI-generated and AI-edited images to help you distinguish between real and AI-created content. This update will roll out in the coming months, adding labels to images in Search, Google Lens, and Android’s Circle to Search features.

A new way to identify AI images

Google’s new system will give you more transparency when browsing images online. Using the “About this image” tool, you can see whether AI has generated or altered an image. This feature will give you insights into the image’s origin, making it easier to avoid misleading content. Google will apply this change not only to search results but also to its advertising services. Furthermore, the tech giant is exploring similar measures for YouTube videos, promising more updates later in the year.

The company will rely on metadata provided by the Coalition for Content Provenance and Authenticity (C2PA), a group it joined as a steering committee member earlier this year. The C2PA metadata will track an image’s creation details, including when and where it was generated and what software or equipment was used. This technology aims to give users more control and awareness while browsing.
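To make the idea concrete: the C2PA standard embeds its provenance manifest inside JUMBF boxes, which in a JPEG file live in APP11 marker segments. The sketch below is a simplified illustration of how a tool might detect whether a JPEG carries such embedded provenance data at all; it is not the official C2PA SDK, and real verification also requires parsing the JUMBF boxes and validating the manifest’s cryptographic signatures.

```python
def find_app11_segments(data: bytes) -> list:
    """Scan a JPEG byte stream and collect the payloads of APP11
    (0xFFEB) segments, where C2PA embeds its JUMBF manifest boxes.

    Simplified sketch: assumes a well-formed JPEG and ignores
    entropy-coded scan data after the first non-marker byte.
    """
    segments = []
    i = 2  # skip the SOI marker (0xFFD8) at the start of the file
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # reached non-marker data; stop scanning
        marker = data[i + 1]
        if marker == 0xD9:  # EOI marker: end of image
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11 segment
            segments.append(data[i + 4:i + 2 + length])
        i += 2 + length
    return segments


# Synthetic example: a minimal JPEG-like stream with one APP11 segment
payload = b"JUMBF-manifest-placeholder"
jpeg = (
    b"\xff\xd8"                                   # SOI
    + b"\xff\xeb"                                 # APP11 marker
    + (len(payload) + 2).to_bytes(2, "big")       # segment length
    + payload
    + b"\xff\xd9"                                 # EOI
)
print(find_app11_segments(jpeg))  # one payload found
```

Finding an APP11 segment only shows that provenance data is present; tools like Google’s “About this image” would then decode and verify the manifest before reporting anything to the user.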

A growing industry effort

Google is not alone in this effort. Leading tech companies, including Amazon, Microsoft, OpenAI, and Adobe, have joined the C2PA. They aim to establish a standard that will help fight misinformation and digital manipulation. However, while many big names are on board, the standard is still in its early stages. Currently, it’s only supported by a few devices, such as some models of Sony and Leica cameras. Notably, some developers of AI-generation tools, such as Black Forest Labs, have yet to adopt the standard.

The widespread use of AI in generating images has raised concerns about authenticity, particularly in light of recent online scams. AI-generated deepfakes have become a growing problem, with scammers exploiting how accessible the technology has become. In February, a Hong Kong-based financier lost US$25 million to criminals posing as the company’s CFO in a video conference, using AI to fake the CFO’s appearance and voice. A recent study by verification company Sumsub revealed a 245% global increase in deepfake-related scams between 2023 and 2024, with a 303% spike in the U.S. alone.

Tackling online scams

Experts warn that AI technology is lowering the barrier for cybercriminals. “The public accessibility of these services has lowered the barrier of entry for cybercriminals,” said David Fairman, chief information officer and chief security officer of APAC at Netskope. “They no longer need special technological skill sets,” he told CNBC in May. With the rising use of AI in fraud, stronger verification measures are becoming essential. Google’s new labels aim to tackle this issue head-on, giving you more security and confidence when browsing images online.

Google’s decision to label AI-generated images is an important step in the fight against misinformation and online scams. It’s also a move that can help protect you from falling victim to deepfakes while offering more clarity in your search results. As AI continues to evolve, these types of protective measures will likely become more common across the web, making the digital world a safer and more transparent place.
