Cybercriminals are now using artificial intelligence to create more convincing scams with minimal effort, according to Microsoft’s latest Cyber Signals report. The company says AI has significantly lowered the skill level required to launch fraud campaigns, enabling even low-level attackers to build highly sophisticated phishing schemes, fake websites, and deepfakes in a matter of minutes.
Between April 2024 and April 2025, Microsoft thwarted US$4 billion worth of fraud attempts and blocked roughly 1.6 million bot sign-up attempts per hour. These findings point to the rising threat of AI-enhanced scams targeting online shoppers, job seekers, and individuals seeking technical support.
“Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years,” said Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft. “Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”
E-commerce, job, and tech support scams on the rise
In e-commerce, scammers are leveraging AI to set up fraudulent websites within minutes, using AI-generated product descriptions, customer reviews, and brand images to mimic genuine businesses. These sites often spread through social media ads optimised by AI algorithms. AI-powered chatbots deepen the deception, fielding complaints and stalling refund requests with plausible-sounding but scripted customer service responses.
Job scams are also growing in complexity. AI is being used to auto-generate job descriptions, clone recruiter voices, and simulate video interviews. Fraudsters may ask applicants to hand over sensitive information such as bank details or personal documents, or to make payments, under the guise of onboarding requirements. Microsoft warns that legitimate companies never request such details through informal channels or ask for payment as part of the recruitment process.
Tech support scams remain a persistent threat. Although these scams are not always AI-driven, cybercriminal groups such as Storm-1811 have abused Microsoft’s Quick Assist remote support tool by impersonating IT staff. Once access is granted, scammers steal data or install malware. In response, Microsoft has implemented additional warning prompts and security checks in Quick Assist to alert users to suspicious activity.
Tools and strategies to protect users
Microsoft has taken a multipronged approach to combating fraud across its platforms. For online shoppers, the Edge browser now includes typo and domain impersonation protection using deep learning. A machine learning-based Scareware Blocker also identifies fake pop-ups and scam alerts designed to frighten users into calling fake support numbers or downloading harmful software.
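Microsoft has not published the internals of these protections, but the core idea behind typo and impersonation detection can be illustrated with a much simpler heuristic: flagging visited domains that sit within a small edit distance of a well-known brand. The Python sketch below is illustrative only; the KNOWN_DOMAINS list and max_distance threshold are invented assumptions, and Edge’s real feature relies on trained deep learning models rather than this rule.

```python
# Illustrative lookalike-domain check -- NOT Edge's actual deep learning
# implementation. KNOWN_DOMAINS and max_distance are assumed values.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

KNOWN_DOMAINS = ["microsoft.com", "paypal.com", "amazon.com"]  # assumed list

def looks_like_impersonation(domain: str, max_distance: int = 2) -> bool:
    """Flag domains close to, but not identical to, a known brand domain."""
    return any(
        0 < edit_distance(domain.lower(), known) <= max_distance
        for known in KNOWN_DOMAINS
    )

print(looks_like_impersonation("micros0ft.com"))  # True: one substitution away
print(looks_like_impersonation("microsoft.com"))  # False: exact match is legitimate
```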
To tackle job fraud, LinkedIn now features AI-powered systems that detect fake job postings and scam recruiter accounts. Microsoft Defender SmartScreen, integrated into Windows and Edge, scans websites, files, and applications in real time to identify suspicious content.
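LinkedIn’s systems are proprietary and model-driven, so the toy scorer below only sketches the general approach: weighting a listing against phrases commonly associated with job scams and flagging anything over a threshold. Every phrase, weight, and threshold here is an invented assumption, not part of any Microsoft or LinkedIn product.

```python
# Toy red-flag scorer for job listings -- purely illustrative. The phrases,
# weights, and review threshold are invented and bear no relation to
# LinkedIn's actual machine learning-based detection.

RED_FLAGS = {
    "registration fee": 3.0,
    "training fee": 3.0,
    "pay upfront": 3.0,
    "bank details": 2.0,
    "guaranteed income": 2.0,
    "no experience necessary": 1.0,
}
REVIEW_THRESHOLD = 3.0  # assumed cut-off for human review

def risk_score(listing_text: str) -> float:
    """Sum the weights of every red-flag phrase found in the listing."""
    text = listing_text.lower()
    return sum(weight for phrase, weight in RED_FLAGS.items() if phrase in text)

listing = "No experience necessary! Guaranteed income. Pay upfront training fee."
score = risk_score(listing)
print(score, "-> flag for review" if score >= REVIEW_THRESHOLD else "-> pass")
# prints: 9.0 -> flag for review
```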
Quick Assist has been upgraded to require users to confirm understanding of the risks before sharing their screens. Microsoft now blocks over 4,400 suspicious Quick Assist sessions daily, about 5.46% of all global attempts. Digital Fingerprinting technology is used to analyse behavioural patterns and stop fraud attempts in real time.
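Microsoft has not disclosed which signals feed its Digital Fingerprinting, but one building block of real-time session screening can be sketched as a sliding-window rate check keyed on a per-origin fingerprint. The window size, threshold, and fingerprint string below are assumptions for illustration, not the product’s actual logic.

```python
# Minimal sliding-window rate check -- an assumption-laden stand-in for
# behavioural fingerprinting. Real systems combine device, network, and
# behavioural signals that are not public.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600  # assumed sliding window: one hour
MAX_ATTEMPTS = 20      # assumed per-fingerprint threshold

attempts = defaultdict(deque)  # fingerprint -> recent attempt timestamps

def should_block(fingerprint: str, now: float | None = None) -> bool:
    """Block a fingerprint whose session attempts exceed the rate threshold."""
    now = time.time() if now is None else now
    window = attempts[fingerprint]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop attempts that aged out of the window
    window.append(now)
    return len(window) > MAX_ATTEMPTS

# The 21st attempt within an hour from the same fingerprint gets blocked.
print(any(should_block("device-abc", now=float(t)) for t in range(25)))  # True
```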
For enterprise use, Microsoft recommends Remote Help, a more secure alternative to Quick Assist that restricts remote sessions to an organisation’s internal network.
Global collaboration and consumer education
Microsoft is working closely with law enforcement and industry partners to fight fraud at scale. The company’s Digital Crimes Unit (DCU) has played a role in dismantling criminal infrastructure and has helped secure hundreds of arrests globally. Through its partnership with the Global Anti-Scam Alliance (GASA), Microsoft joins forces with governments, financial authorities, consumer protection agencies, and tech companies to raise awareness and tackle scams more effectively.
Bissell, a cybersecurity veteran with experience at Accenture, Deloitte, and the US Department of Homeland Security, highlighted the need for greater collaboration across the tech sector. “If we’re not working together, we’ve all missed the bigger opportunity,” he said. “We must share cybercrime information with each other and educate the public.”
Microsoft has introduced a “Fraud-resistant by Design” policy, requiring all product teams to include fraud prevention measures during development. This includes fraud risk assessments, in-product security controls, and deeper integration of machine learning to detect and prevent suspicious behaviour.
Staying safe in the AI era
Microsoft advises consumers to remain cautious when shopping or applying for jobs online. Urgency tactics such as countdown timers and too-good-to-be-true offers should raise immediate red flags. Users should verify the legitimacy of websites and job listings through secure channels and trusted sources, never sharing personal or payment information with unknown parties.
Job seekers should look out for tell-tale signs of fraud, such as communication through personal email accounts or messaging apps, requests for money, or video interviews that look unnatural and may have been generated with deepfake technology.
Microsoft says it will continue to strengthen its fraud detection efforts as the threat landscape evolves, aiming to make its platforms safer and more secure in an increasingly AI-powered world.