
AI scams: What are AI-generated scams and how can you protect yourself?

If we really were living in a simulation, AI scams might be the first clue. Voices you trust, faces you know — all recreated by machines and used against you. It sounds surreal, but it’s already happening. From deepfakes to cloned calls, AI is turning classic scams into something much more convincing. Let’s break down how AI scams work and how you can stop them.

Aug 12, 2025

12 min read


What are AI scams?

AI scams are a type of cybercrime in which artificial intelligence is used to automate and enhance deception in order to trick people into revealing sensitive information or transferring money. An AI scam is still a scam — just one built with smart machinery.

AI-powered scams exploit our trust in familiar voices and faces in a way that’s entirely new. Emails that read like they came from your manager. Voice-cloned calls from a loved one. Deepfaked videos of someone pleading for help.

Artificial intelligence has advanced to the point where a voice clip or a few public posts are all scammers need to impersonate someone. These small pieces of information are enough material for a cybercriminal to use AI tools to write personalized emails, generate fake conversations, and mimic your speech patterns. You’ll see exactly how scammers use AI a little later — but for now, know this: AI can now do instantly, and at scale, what once took bad actors time and effort.

The FBI reports that cybercriminals use AI to create phishing messages, impersonate real people, and launch convincing scams. In 2024, internet crime losses in the US hit $16 billion — a 33% jump from the year before. AI drove much of that increase. The FTC also logged over $12.5 billion in fraud losses, with nearly $3 billion tied to imposter scams alone. 

These losses show the damage that happens when scammers put AI to work. But the technology isn’t to blame — the cybercriminals misusing it to amplify their schemes are.

Reasons these scams work so well:

  • They can sound exactly like someone you know
  • They can look real — even in video
  • They’re fast, scalable, and cheap to run
  • They target your instinct to trust, not just your data

Forget typos and bad grammar — today’s scams don’t feature those. They speak your language and sound just like someone you know and trust.

How is AI being used in scams?

AI is used in scams to automate key steps of the attack: personalizing messages, impersonating real people, and removing the telltale signs of fraud. Scammers no longer need to write phishing emails by hand, set up fake call centers, or code entire websites from scratch. AI handles it now, often faster and more convincingly. These are the most common forms AI scams take:


Voice cloning

A short audio clip, whether from a social media post, podcast, or voicemail, is all AI needs to clone a real person's voice, cadence, even emotional inflection. The end product sounds unnervingly close to real.

Take this real voice cloning scam case from Scottsdale, Arizona. In early 2023, a mother received a call from what sounded exactly like her teenage daughter: sobbing, scared, and begging for help. An unknown man then joined the call and demanded a ransom.

In the aftermath, the mother described a rush of dread upon hearing the threats. Fortunately, with people nearby to help, she confirmed within about four minutes that her daughter was safe. The call was a fake: the perpetrator had used AI to clone her daughter's voice and impersonate her.

What this story teaches us is that, sad as it sounds, we can’t really trust our ears anymore. The call didn’t come from the daughter — but it was made to sound like it did. And that’s what makes voice cloning so dangerous. The imitation isn’t “close enough.” It’s believable — unless you stop and verify.

Deepfakes

AI’s capabilities extend past voice cloning to fabricating faces, gestures, and entire videos from the ground up. Deepfakes can show anyone speaking or acting in any way — and unless you’re trained to spot the subtle clues, they can pass as truth.

In 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy circulated online, urging his troops to surrender. It didn't take much to establish that the clip was fake: the head movements were off, and the deepfake simply wasn't very convincing. Yet for a moment, it spread panic. The video even appeared on a hacked Ukrainian news site before being taken down.

Now imagine the same AI technology, or a better version of it, used to impersonate your leadership approving a wire transfer or a family member asking for help in a video call. It’s not science fiction anymore.

With voice cloning, we learned not to trust our ears. Deepfakes take it a step further: now even our eyes can be fooled. When reality itself can be faked, awareness is your best defense.

AI-powered phishing

Whether powered by AI or not, phishing works the same way. Cybercriminals use it to trick you into trusting the wrong message and giving up sensitive information. A phishing attack can land in your inbox, your texts, or your DMs, or come through a phone call, and each one has its own way of catching you off guard.

AI changes the level of effort required to execute an attack. Instead of relying on their own grammar skills, writing style, or attention to detail, scammers can now use tools that pull together messages or scripts quickly and with fewer of the giveaways we've learned to watch for.

Bad actors use AI to create messages that sound exactly like someone you know: your manager, a coworker, a representative from your bank, you name it. How? They start with public information, and it gets far more convincing if they manage to obtain leaked data or stolen credentials.

This is what makes spear phishing — a form of phishing that targets specific people or organizations — particularly effective when powered by AI. Spear phishing can be casual or formal. It can reference details that make the message personal, like your full name, job title, or recent online activity. And with just a few data points, AI can convincingly fill in the rest.

For cybercriminals, AI speeds up every step — from drafting a realistic message to sending it out to thousands of targets. What once took hours of manual work can now be done in seconds.

You won't identify fake messages just by scanning for typos or broken English anymore. Whether it's an email, text, or phone call — expect it to look polished and sound legit.

Fake websites

Creating a scam website used to take time. But with AI, scammers can generate full landing pages, product pages, support portals, and login screens in minutes. They copy logos, layouts, and even replicate the tone of voice of the companies they pretend to be.

Some of these fake websites push malware downloads. Others are designed to trick you into entering login credentials or payment info. You might land on one through a phishing link, a typo in the URL, or even a sponsored ad on a search engine or social platform. And once you're on the page, with AI in play, it's much tougher to realize you're in the wrong place.
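The "typo in the URL" trick has a name: typosquatting, where scammers register domains one keystroke away from a real one. The idea behind catching it can be sketched with a simple edit-distance check. This is a minimal illustration, not a real detector; the trusted-domain list and the distance threshold are assumptions for the example, and production tools use far richer signals.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

def looks_like_typosquat(domain: str, trusted: list[str], max_dist: int = 2) -> bool:
    """Flag a domain that is close to, but not equal to, a trusted domain."""
    return any(0 < levenshtein(domain.lower(), t.lower()) <= max_dist
               for t in trusted)
```

For example, `looks_like_typosquat("nordvpm.com", ["nordvpn.com"])` flags the one-letter swap, while the genuine domain and unrelated domains pass clean.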

That’s where smart tools can help. Take Threat Protection Pro™, for example — NordVPN’s built-in feature that checks websites in real time and blocks access to malicious ones before they load. It also provides scam and fraud alerts to keep you informed about ongoing threats that might reach you.

If you’re ever unsure about a site, you can also use NordVPN’s link checker to scan any URL before clicking.

AI chatbots

Chatbots don't scam you by accident. The ones that do are built for it.

Fake AI-powered support agents or "representatives" are designed to win your trust, then walk you straight into handing over exactly what the scammers behind them are after: login credentials, financial accounts, and other sensitive information.

Some of these malicious chatbots are built into fake websites. Others are deployed via social media DMs or messaging apps. However it starts, once you're conversing with the chatbot, it gets harder to tell you're being scammed, because it doesn't feel like talking to a bot anymore.

How to spot AI-powered scams

The whole point of AI scams is to mimic reality convincingly. They’re polished. They can be created in seconds. And they’re designed to make you trust what you hear, read, or see. But even the most convincing ones leave behind a few signs.

These are the red flags you should watch for:

Red flags
  • Unusual requests for personal details. No honest company asks for your password or banking PIN over text or email. If someone does, assume it's fake, even if the message looks professional.
  • Urgency and high-pressure language. "Act now." "Send this immediately." "Don't tell anyone." Pressuring the target is the basis of nearly every scam attempt. AI can make the message smoother, but the pressure is still there.
  • Unduly formal or robotic tone. AI-generated messages can sometimes read too clean. No slang. No contractions. It feels like someone is trying a little too hard to sound “normal.” That’s a common giveaway.
  • Requests for unusual payment methods. Gift cards. Crypto. Wire transfers to odd accounts. These aren’t just suspicious — they’re deliberate. Scam payments are hard to trace and even harder to reverse.
  • Mismatch between voice and behavior. Your family member probably wouldn't ask for a loan in a voice memo, and your company's CEO wouldn't shoot you a DM requesting a wire transfer. If the message feels out of character, trust that instinct.
  • Visual or audio glitches. Deepfake videos and voices are often close to real but rarely flawless. Check for flickering, odd eye movements, forced pauses, or distortions. Typical signs are odd background noises and unnatural facial or hand movements.
  • Outdated or misaligned details. AI can struggle with context. Not every AI model has the same data access, so it might mention events that never happened or timelines that you know don’t make sense. These mistakes are telling — but you need to be observant to identify them.
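The phrase-based red flags above (urgency, odd payment methods, credential requests) can even be sketched as a naive keyword scan. The phrase lists below are illustrative assumptions, not a real detection ruleset; genuine scam filters weigh many more signals than literal substrings.

```python
# Illustrative phrase lists only; real filters use far richer signals.
RED_FLAGS = {
    "urgency": ["act now", "immediately", "don't tell anyone"],
    "payment": ["gift card", "wire transfer", "crypto"],
    "credentials": ["password", "banking pin", "verification code"],
}

def scan_message(text: str) -> dict[str, list[str]]:
    """Return the red-flag categories a message triggers, with the phrases hit."""
    lowered = text.lower()
    hits = {}
    for category, phrases in RED_FLAGS.items():
        matched = [p for p in phrases if p in lowered]
        if matched:
            hits[category] = matched
    return hits
```

A message like "Act now. Wire transfer the fee and confirm your password." trips all three categories, while an ordinary note trips none. The point isn't the code but the habit: each red flag is a concrete, checkable signal.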

How to protect yourself from AI-generated scams

To protect yourself from AI-generated scams, you don't need advanced AI security tools or a degree in machine learning. Most of the time, it's as simple as pausing, thinking critically, and being slower to extend trust.

Take these steps to get ahead of the curve:

Steps to protect yourself from AI scams
  • Slow down. Speed is the scammer's ally. AI helps bad actors move swiftly, but you don't have to. Get into the habit of taking a moment to question what you're seeing or hearing.
  • Cross-check the source. If you get a weird request, don’t respond directly. Reach out through a separate, trusted channel — phone, email, or in person. Even if the message looks real, verify it first.
  • Scan suspicious links. Before clicking on any URL you're suspicious of, run it through NordVPN's link checker. It's a fast way to identify fake websites.
  • Use smart protection. Threat Protection Pro™ and similar solutions block access to malicious websites and stop malware. They run in the background, with no effort required on your part.
  • Stay in the loop. Scam and fraud alerts built into Threat Protection Pro™ warn you about trending cyber threats in real time. It may sound simple, but staying informed can go a long way in keeping you safer online.
  • Be selective about what you put online. Limit what strangers can see on your social media. The less they know about you, the harder it is to train AI on your voice, face, or personal habits.
  • Report suspicious activity. If you catch something unusual, report it immediately. Your report can prevent someone else from falling for the same scam.

What to do if you suspect an AI scam

Something's amiss with the message. You recognize the voice, but the phrasing feels wrong. The face in the video isn't quite the person you know. Trust that suspicion. Here's your next move:

  • Stop and pause. Don’t reply. Don’t send money. Don’t click anything. Step back and give yourself space to think clearly.
  • Verify through a different channel. Call the person directly. Visit the official website. If it’s your bank, go into the app. Don’t rely on the message itself — that’s the thing in question.
  • Look for red flags. Is the request urgent? Is the tone a little too formal or weirdly casual? Is it asking for money, personal info, or secrecy? These are clues — don’t ignore them.
  • Do not share sensitive information. Even if the person "sounds right," AI can fake emotion, urgency, even concern. But it can't answer a question only the real person would know, so ask one.
  • Report the scam. If you’re in the US, report the incident to the Federal Trade Commission (FTC) at reportfraud.ftc.gov or to the FBI’s Internet Crime Complaint Center (IC3) at ic3.gov. If you’re outside the US, contact your local cybercrime authority or law enforcement agency.

AI scams: Key takeaways

AI scams are on the rise — and they're more cunning than ever. But knowing how they work is already half the defense.

They don't come wrapped in broken grammar or obvious bait anymore. They come polished, personalized, and sometimes even wearing a familiar face. But they still rely on the same old tricks: pressure, panic, and misplaced trust.

Stay alert. Slow down. Question anything that feels even slightly off.

You don’t need to be an expert to stay safe — just informed. And now, you are.


Dominykas Krimisieras, Copywriter

Dominykas Krimisieras writes for NordVPN about the parts of online life most people ignore. In his work, he wants to make cybersecurity simple enough to understand — and practical enough to act on.