
Google to label AI-generated images in search results

Google will label AI-generated images in Search, giving users clearer insight into an image's origin and improving transparency in search results to tackle deepfake scams.

As AI-generated images increasingly flood Google’s search results, it has become harder for users like you to find genuine content. In response, Google has announced that it will begin flagging AI-generated and AI-edited images to help you distinguish between real and AI-created content. This update will roll out in the coming months, adding labels to images in Search, Google Lens, and Android’s Circle to Search features.

A new way to identify AI images

Google’s new system will give you more transparency when browsing images online. Using the “About this image” tool, you can see whether AI has generated or altered an image. This feature will give you insights into the image’s origin, making it easier to avoid misleading content. Google will apply this change not only to search results but also to its advertising services. Furthermore, the tech giant is exploring similar measures for YouTube videos, promising more updates later in the year.

The company will rely on metadata defined by the Coalition for Content Provenance and Authenticity (C2PA), a group it joined as a steering committee member earlier this year. The C2PA metadata records an image's creation details, including when and where it was generated and what software or equipment was used. This technology aims to give users more control and awareness while browsing.
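To illustrate the idea, here is a minimal sketch of how a client might interpret C2PA-style provenance metadata to decide which label to show. The manifest structure and the `label_for` function are simplified stand-ins for illustration, not the full C2PA specification or Google's actual implementation; the IPTC digital source type URIs are the vocabulary the standard draws on.

```python
# Hypothetical sketch: deciding on an "AI-generated" label from
# simplified C2PA-style provenance metadata. The manifest layout here
# is an assumption for illustration, not the full C2PA spec.

# IPTC digital source type URIs used to mark AI involvement:
AI_SOURCE_TYPES = {
    # fully AI-generated media
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    # media composited/edited with AI
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def label_for(manifest: dict) -> str:
    """Return a UI label based on the actions recorded in a manifest."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") in AI_SOURCE_TYPES:
                # "c2pa.created" means the image originated from AI;
                # any other recorded action implies an AI edit.
                if action.get("action") == "c2pa.created":
                    return "Made with AI"
                return "Edited with AI"
    return "No AI provenance recorded"

# Example: a manifest recording an AI-generation step
manifest = {
    "claim_generator": "example-image-tool/1.0",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

print(label_for(manifest))  # → Made with AI
```

In practice a verifier would also check the manifest's cryptographic signature before trusting any of these fields; this sketch only shows the labelling decision itself.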

A growing industry effort

Google is not alone in this effort. Leading tech companies, including Amazon, Microsoft, OpenAI, and Adobe, have joined the C2PA, aiming to establish a standard that will help fight misinformation and digital manipulation. However, while many big names are on board, the standard is still in its early stages. Currently, it's only supported by a few devices, such as some models of Sony and Leica cameras. Notably, some developers of AI-generation tools, such as Black Forest Labs, have yet to adopt the standard.

The widespread use of AI in generating images has raised concerns about authenticity, particularly in light of recent online scams. AI-generated deepfakes have become a growing problem, with scammers exploiting the technology's accessibility. In February, a Hong Kong-based financier lost US$25 million to criminals posing as the company's CFO in a video conference, using AI to fake the CFO's appearance and voice. A recent study by verification company Sumsub revealed a 245% global increase in deepfake-related scams between 2023 and 2024, with a 303% spike in the U.S. alone.

Tackling online scams

Experts warn that AI technology is lowering the barrier for cybercriminals. "The public accessibility of these services has lowered the barrier of entry for cybercriminals," said David Fairman, chief information officer and chief security officer of APAC at Netskope. "They no longer need special technological skill sets," he told CNBC in May. With the rising use of AI in fraud, stronger identification measures are crucial. Google's new labels aim to tackle this issue head-on, providing you with more security and confidence when browsing images online.

Google’s decision to label AI-generated images is an important step in the fight against misinformation and online scams. It’s also a move that can help protect you from falling victim to deepfakes while offering more clarity in your search results. As AI continues to evolve, these types of protective measures will likely become more common across the web, making the digital world a safer and more transparent place.
