Artificial Intelligence (AI) has become a transformative force in society. It powers our cars, diagnoses medical conditions, fights financial fraud, and automates the systems that make our daily lives more convenient. But while AI has unlocked opportunities for progress, it has also opened new doors for scammers. Today, AI scams represent one of the fastest-growing digital threats, targeting consumers, families, and organizations with unprecedented speed and scale.
From cloned voices begging for emergency funds to deepfake executives pressuring employees into wiring millions, fraudsters are weaponizing AI to exploit human trust. Regulators like the NYC Department of Consumer and Worker Protection are sounding the alarm, and cybersecurity leaders are calling for urgent action.
In this article, we’ll unpack:
- How AI scams work and why they’re different from traditional fraud
- Real-world examples of AI-powered scam tools in action
- Signs that can help consumers and businesses spot an AI scam
- What governments and companies are doing to fight back
- Steps you can take today to protect yourself and your organization
What Are AI Scams?
AI scams are fraudulent schemes that use artificial intelligence to deceive victims into handing over money, credentials, or sensitive information. Unlike older forms of fraud, these attacks are personalized, scalable, and highly believable.
According to the NYC Department of Consumer and Worker Protection, the two most common consumer-facing threats today are:
- Voice cloning scams – where fraudsters use AI to mimic the voice of a loved one in distress. Victims may receive an urgent call from what sounds like a panicked family member asking for immediate financial help.
- Deepfake scams – where AI-generated videos or live calls convincingly impersonate a friend, colleague, or authority figure to pressure someone into acting quickly.
Both rely on urgency, secrecy, and hard-to-trace payments like gift cards, cryptocurrency, or wire transfers.
But consumer-facing scams are just the tip of the iceberg. As Sardine highlights in their research on AI scams, entire ecosystems of fraud-as-a-service tools are emerging on the dark web – making it easy for anyone to launch sophisticated scams at scale.
The Rise of Fraud-as-a-Service
In the past, cybercrime required technical expertise. Hackers had to write their own malware, design phishing kits, and build infrastructure from scratch. Today, AI has flipped that model.
Fraud-as-a-Service platforms now sell ready-to-use AI scam kits, complete with tutorials, customer support, and subscription models. For as little as $100, anyone can purchase a toolkit that includes:
- WormGPT – a jailbroken AI model trained to write phishing emails, fake invoices, and malicious attachments.
- Deep-Live-Cam – a real-time video deepfake tool that lets fraudsters impersonate executives on live Zoom calls.
- OnlyFake – a service that generates realistic passports, IDs, and bank documents for as little as $15.
- Invoice Swappers – malware that silently alters payment instructions on genuine business invoices.
These tools are custom-built for fraud, not repurposed chatbots. They evolve constantly, adapting to bypass filters, mimic human behavior, and scale attacks faster than risk teams can respond.
The effect? Fraudsters no longer need to be hackers. They just need a credit card and a Telegram account.
Real-World Impacts: Scams That Hit Close to Home
Families and Individuals
Imagine getting a call from your daughter, her voice cracking in fear: “Mom, I’m in trouble. I need money right now – please don’t tell anyone.” The call is urgent, emotional, and convincing. But it’s not her. It’s an AI clone built from snippets of her TikTok videos.
These voice cloning scams are spreading rapidly across the U.S., often targeting seniors who may be less familiar with deepfake technology. Victims have lost thousands before realizing their loved one was safe.
Businesses and Employees
In 2024, a finance worker in Hong Kong wired $25 million after attending a Zoom meeting where multiple colleagues, including the CFO, appeared to be present. In reality, every participant was a deepfake, generated with AI tools like Deep-Live-Cam.
This case illustrates how corporate deepfake scams bypass email security entirely. They exploit trust in face-to-face interactions, putting employees in impossible situations.
Banks and Financial Institutions
Fraudulent identities generated by tools like OnlyFake are overwhelming traditional KYC systems. Fake IDs complete with holograms, metadata, and realistic barcodes are slipping past visual inspections and basic compliance checks.
As Sardine’s research shows, even fraud detection vendors are struggling to keep pace with the speed and sophistication of these tools.
How to Spot an AI Scam
While AI scams can feel undetectable, there are warning signs to watch for.
Signs of a Voice-Cloning Scam
- The call comes unexpectedly and creates immediate pressure.
- The caller asks for secrecy or insists you don’t hang up.
- Payment is requested through gift cards, payment apps, crypto, or wire transfers.
- Personal details don’t add up when you probe with deeper questions.
Signs of a Deepfake Scam
- Video has unnatural blinking, odd shadows, or jerky movements.
- Speech patterns are stilted, repetitive, or slightly off.
- The person is asking for something out of character – like a large transfer.
General Red Flags
- Requests for urgent action without time to verify.
- Untraceable payment methods.
- Strange phrasing or inconsistencies with known behavior.
The golden rule: Pause, verify, and trust your instincts.
What Governments and Regulators Are Doing
Cities and regulators are moving quickly to address AI-driven scams.
- New York City’s Action Plan for AI calls for responsible use of AI while raising awareness about fraud risks.
- The Federal Trade Commission (FTC) encourages consumers to report suspicious calls, deepfakes, or scam attempts. These reports help build cases against bad actors.
- Policymakers are considering new requirements for watermarking deepfakes, restricting fraud kits, and improving corporate security standards.
But regulation alone won’t be enough. Businesses and individuals must also take proactive steps.
What Companies Are Doing to Stay Ahead
Fraud prevention leaders are investing heavily in detection and intelligence. Sardine, for example, recommends:
- Device intelligence – spotting fraud through unusual hardware setups, emulators, or spoofed devices.
- Behavioral biometrics – detecting fraud by analyzing typing patterns, hesitations, or unusual mouse movements.
- Connection graphs – linking activity across devices, sessions, and accounts to surface suspicious patterns.
- Consortiums like Sonar – sharing fraud intelligence across banks, fintechs, and merchants to flag threats before they spread.
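To make the connection-graph idea concrete, here is a minimal sketch (not Sardine's actual implementation) of how linking accounts through shared device fingerprints can surface fraud rings. The event data and account names are invented for illustration; real systems would use far richer signals.

```python
from collections import defaultdict

def build_clusters(events):
    """Group accounts that share a device fingerprint.

    `events` is a list of (account_id, device_id) pairs, e.g. from
    login or session logs. Accounts linked, directly or transitively,
    through a common device end up in the same cluster.
    """
    # Union-find over account ids
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Link every account seen on the same device
    by_device = defaultdict(list)
    for account, device in events:
        by_device[device].append(account)
    for accounts in by_device.values():
        for other in accounts[1:]:
            union(accounts[0], other)

    # Collect multi-account clusters - one lone account is not suspicious
    clusters = defaultdict(set)
    for account, _ in events:
        clusters[find(account)].add(account)
    return [c for c in clusters.values() if len(c) > 1]

# Hypothetical session log: two mule accounts chained to a victim's
# account through shared devices, plus one unrelated user.
events = [
    ("alice", "dev-1"),
    ("mule-7", "dev-1"),   # same machine as alice's account
    ("mule-7", "dev-2"),
    ("mule-9", "dev-2"),   # linked through a second device
    ("bob", "dev-3"),      # unconnected, stays out of any cluster
]
suspicious = build_clusters(events)
print(suspicious)  # one cluster: {"alice", "mule-7", "mule-9"}
```

The transitive linking is the point: "mule-9" never touched Alice's device, but the graph still connects them through the shared second device.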
As Sardine emphasizes, stopping AI scams means detecting intent early – not reacting late.
How Consumers and Businesses Can Protect Themselves
For Individuals
- Ask personal questions only your loved one would know.
- Call back using a trusted number, not the one that contacted you.
- Limit personal info shared on social media.
- Pause before acting when urgency feels overwhelming.
- Report scams to the FTC at ReportFraud.ftc.gov.
For Businesses
- Educate employees on AI scam tactics, especially finance and HR teams.
- Use multi-factor authentication for payments and transfers.
- Verify requests via secondary channels before releasing funds.
- Deploy fraud detection tools that use device and behavior signals.
- Participate in intelligence-sharing networks to stay ahead of evolving threats.
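As a toy illustration of the "device and behavior signals" point above, the heuristic below flags keystroke timing that is suspiciously uniform, one simple behavioral-biometrics signal. The threshold and sample data are made up; production detectors combine many signals and tuned models.

```python
import statistics

def looks_scripted(key_intervals_ms, min_stddev_ms=15.0):
    """Heuristic: human typing has irregular gaps between keystrokes;
    scripted or replayed input tends to be suspiciously uniform.

    `key_intervals_ms` is the list of millisecond gaps between keys.
    The 15 ms threshold is an illustrative assumption, not a standard.
    """
    if len(key_intervals_ms) < 5:
        return False  # too little data to judge
    return statistics.stdev(key_intervals_ms) < min_stddev_ms

# Invented samples: a human typist vs. near-constant scripted input
human = [182, 95, 240, 130, 310, 88, 175]
bot = [100, 101, 99, 100, 102, 100, 101]
print(looks_scripted(human))  # False
print(looks_scripted(bot))    # True
```

A single signal like this is easy to evade; its value comes from being combined with device intelligence and graph-based checks.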
Fighting Back Against the AI Scamdemic
AI scams are not science fiction. They’re happening right now – at dinner tables, in small businesses, and inside global corporations. Fraudsters are exploiting trust at scale, using tools designed to bypass both human intuition and technical defenses.
But we are not powerless. By understanding how these scams work, watching for the warning signs, and adopting smarter detection strategies, we can protect both consumers and companies.
If you want to go deeper into the latest tactics and fraud tools, Sardine’s comprehensive guide on AI Scams is a must-read.
As we embrace the benefits of AI, we must also recognize the risks. The fight against fraud is a collective effort – and the sooner we prepare, the harder it will be for scammers to win.