
Google to label AI-generated images in search results

Google will label AI-generated images in Search, giving users clearer insight into image origins and improving transparency in search results to help tackle deepfake scams.

As AI-generated images increasingly flood Google’s search results, it has become harder for users like you to find genuine content. In response, Google has announced that it will begin flagging AI-generated and AI-edited images to help you distinguish between real and AI-created content. The update will roll out in the coming months, adding labels to images in Search, Google Lens, and Android’s Circle to Search feature.

A new way to identify AI images

Google’s new system will give you more transparency when browsing images online. Using the “About this image” tool, you can see whether AI has generated or altered an image. This feature will give you insights into the image’s origin, making it easier to avoid misleading content. Google will apply this change not only to search results but also to its advertising services. Furthermore, the tech giant is exploring similar measures for YouTube videos, promising more updates later in the year.

The company will rely on metadata provided by the Coalition for Content Provenance and Authenticity (C2PA), a group it joined as a steering committee member earlier this year. The C2PA metadata will track an image’s creation details, including when and where it was generated and what software or equipment was used. This technology aims to give users more control and awareness while browsing.
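For readers curious about what this provenance data looks like at the file level, the sketch below shows one rough way to check whether a JPEG carries an embedded C2PA manifest. C2PA data in JPEGs is stored as JUMBF boxes inside APP11 marker segments, so the script simply scans for such a segment that mentions “c2pa”. It is a minimal heuristic written for illustration, not a validator: it does not parse the manifest or verify its signature, and real checks should rely on the official C2PA tooling. Google’s “About this image” feature handles all of this for you.

```python
# Rough heuristic: check whether a JPEG carries embedded C2PA provenance metadata.
# C2PA manifests in JPEGs are stored as JUMBF boxes inside APP11 (0xFFEB) marker
# segments, so this scan only looks for such a segment that mentions "c2pa".
# It does NOT parse the manifest or verify its signature; use the official
# C2PA SDK or c2patool for real validation.
import struct
import sys


def has_c2pa_segment(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):            # missing SOI marker: not a JPEG
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:                    # lost marker sync; give up
            break
        marker = data[offset + 1]
        if marker in (0xD9, 0xDA):                  # EOI or start of scan: stop scanning headers
            break
        seg_len = struct.unpack(">H", data[offset + 2 : offset + 4])[0]
        segment = data[offset + 4 : offset + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in segment:   # APP11 segment with a C2PA JUMBF box
            return True
        offset += 2 + seg_len                       # length field covers itself, not the marker

    return False


if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        found = has_c2pa_segment(image_path)
        print(f"{image_path}: {'C2PA metadata present' if found else 'no C2PA metadata found'}")
```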

A growing industry effort

Google is not alone in this effort. Leading tech companies, including Amazon, Microsoft, OpenAI, and Adobe, have also joined the C2PA, aiming to establish a standard that helps fight misinformation and digital manipulation. However, while many big names are on board, the standard is still in its early stages and is currently supported by only a handful of devices, such as certain Sony and Leica camera models. Notably, some developers of AI-generation tools, such as Black Forest Labs, have yet to adopt it.

The widespread use of AI to generate images has raised concerns about authenticity, particularly in light of recent online scams. AI-generated deepfakes have become a growing problem, with scammers exploiting how accessible the technology has become. In February, a Hong Kong-based financier lost US$25 million to criminals who posed as the company’s CFO in a video conference, using AI to fake the CFO’s appearance and voice. A recent study by verification company Sumsub revealed a 245% global increase in deepfake-related scams between 2023 and 2024, with a 303% spike in the U.S. alone.

Tackling online scams

Experts warn that AI technology is lowering the barrier for cybercriminals. “The public accessibility of these services has lowered the barrier of entry for cybercriminals,” said David Fairman, chief information officer and chief security officer for APAC at Netskope. “They no longer need special technological skill sets,” he told CNBC in May. With AI increasingly used in fraud, stronger verification measures are becoming essential. Google’s new labels aim to tackle this issue head-on, giving you more security and confidence when browsing images online.

Google’s decision to label AI-generated images is an important step in the fight against misinformation and online scams. It’s also a move that can help protect you from falling victim to deepfakes while offering more clarity in your search results. As AI continues to evolve, these types of protective measures will likely become more common across the web, making the digital world a safer and more transparent place.
