Google has revealed that it removed more than 5.1 billion ads and suspended over 39.2 million advertiser accounts in 2024, as part of its efforts to combat bad actors and protect users online. According to the company’s newly released Ads Safety Report 2024, artificial intelligence (AI) played a major role in identifying and blocking harmful content before it reached consumers.
The report shows that AI-driven tools, particularly Google’s enhanced Large Language Models (LLMs), enabled more efficient and accurate enforcement at scale. Over 50 upgrades were made to these models last year, which helped speed up investigations and detect fraud indicators such as fake payment information during account set-up.
These improvements meant that AI contributed to the detection and enforcement of 97 percent of the publisher pages where action was taken. In total, Google restricted over 9.1 billion ads, blocked or restricted advertising on 1.3 billion publisher pages, and took broader site-level action against more than 220,000 publisher websites.
Rising scam risks prompt stronger policy enforcement
In Singapore, election-themed scams and phishing campaigns have become increasingly common ahead of the 2025 General Election, prompting local agencies to warn the public to stay vigilant. In response, Google has continued to work closely with partners and deploy its most advanced tools to keep users safer online.
A key strategy has been the expansion of its Advertiser Identity Verification programme, which now covers more than 200 countries and territories. Today, over 90 percent of the ads viewed on Google are from verified advertisers.
Google has also focused on adapting its policies to deal with emerging threats. This includes a dedicated team of over 100 experts tasked with analysing scam trends and developing countermeasures. In particular, the rise of AI-generated impersonation ads featuring public figures triggered updates to Google’s Misrepresentation policy. Advertisers found promoting such scams now face permanent suspension.
In 2024 alone, more than 700,000 advertiser accounts were permanently removed under this revised policy. The company said this led to a 90 percent drop in reports linked to these types of scam ads. Overall, Google blocked or removed 415 million ads and suspended over 5 million accounts last year for violations related specifically to scam activity.
Protecting election integrity and user trust
Google has also expanded its policies around election-related ads to safeguard integrity and transparency across more markets. As part of this, the company introduced stricter identity verification rules for election advertisers in several new countries. These efforts aim to help users clearly identify election ads and understand who is funding them.
In the past year, more than 8,900 new election advertisers were verified globally. At the same time, Google removed 10.7 million election ads from unverified accounts. These moves reflect Google’s broader aim to ensure the digital ad space remains transparent and accountable, particularly in politically sensitive periods.
The Ads Safety Report underlines Google’s commitment to staying ahead of malicious actors, especially as scams grow more sophisticated with the help of generative AI. The company reiterated its focus on protecting both users and legitimate advertisers, and on removing bad actors before they can cause harm.