
AI Fuels New, Frighteningly Effective Scams

The technology can help criminals impersonate celebs, law enforcement — or even you

Photo illustration: Tyler Comrie (photo: Getty Images)

When a finance worker in Hong Kong was called into a live videoconference by the chief financial officer of his multinational company in February, everything seemed normal. The CFO and other executives looked and sounded as they always did, even if the reason for the meeting was unusual: He was instructed to wire $25.6 million to several bank accounts. He, of course, did as the boss asked.

Amazingly, the “CFO” image and voice were computer-generated, as were those of the other executives who appeared on the call. And the accounts belonged to scammers. The worker was the victim of a stunningly elaborate artificial intelligence scam, according to local media reports. The millions remain lost.


Welcome to the dark side of AI technology, in which the voices and faces of people you know can be impeccably faked as part of an effort to steal your money or identity. 

Scientists have been programming computers to think and predict for decades, but only in recent years has the technology gotten to the level at which a computer can effectively mimic human voices, movement and writing style and — more challenging — predict what a person might say or do next. The public release in the past two years of tools such as OpenAI’s ChatGPT and DALL-E, Google’s Gemini (formerly Bard), Microsoft’s Copilot and other readily available generative AI programs brought some of these capabilities to the masses.

AI tools can be legitimately useful for many reasons, but they also can be easily weaponized by criminals to create realistic yet bogus voices, websites, videos and other content to perpetrate fraud. Many fear the worst is yet to come. 

We’re entering an “industrial revolution for fraud criminals,” says Kathy Stokes, AARP’s director of fraud prevention programs. AI “opens endless possibilities and, unfortunately, endless victims and losses.”

Criminals are already taking advantage of some of those “endless possibilities.”

Video: How to Avoid Voice Cloning Scams

Celebrity scams. A “deepfake” (that is, a computer-generated fake version of a person) video circulated showing chef Gordon Ramsay apparently endorsing HexClad cookware. He wasn’t. Later, a similar deepfake featured Taylor Swift touting Le Creuset. The likenesses of Oprah Winfrey, Kelly Clarkson and other celebs have been replicated via AI to sell weight loss supplements.

Fake romance. A Chicago man lost almost $60,000 in a cryptocurrency investment pitched to him by a romance scammer who communicated through what authorities believe was a deepfake video.

Sextortion. The FBI warns that criminals take photos and videos from children’s and adults’ social media feeds and create explicit deepfakes with their images to extort money or sexual favors.

Eyal Benishti, CEO and founder of the cybersecurity firm Ironscales, says AI can shortcut the process of running virtually any scam. “The superpower of generative AI is that you can actually give it a goal; for example, tell it, ‘Go find me 10 different phishing email ideas on how I can lure person X.’ ”

Anyone can use this technology: “It’s just like downloading any other app,” says Alex Hamerstone, an analyst at TrustedSec, an information security consulting company​. “If you were recording this conversation, you could feed it into the software and type out whatever you want me to say, and it would play my voice saying that.”


Have you seen this scam?

  • Call the AARP Fraud Watch Network Helpline at 877-908-3360 or report it with the AARP Scam Tracking Map.  
  • Get Watchdog Alerts for tips on avoiding such scams.

If the listener asked questions, AI could generate responses in Hamerstone’s voice that would make sense in context.

“It’s unbelievable to see it,” he says. “You cannot tell. It sounds just like the person. … It’s just much more difficult to tell what's real and what’s fake.”


Fighting back — with AI

Governments are scrambling to keep up with the fast-evolving technology. The White House in late 2023 issued an executive order calling for increased federal oversight of AI systems. The technology, it noted, “holds extraordinary potential for both promise and peril.” That led to the establishment of the U.S. AI Safety Institute within the U.S. Department of Commerce to “mitigate the risks that come with the development of this generation-defining technology,” as Commerce Secretary Gina Raimondo put it.

As it turns out, AI may be our best tool for countering the malicious use of AI. 

Benishti’s company develops AI software that detects and prevents large-scale phishing attempts and ransomware attacks. AI also is a key tool for detecting suspicious transactions at your bank, for example, flagging unusual charges on your credit card and blocking scam calls and texts.

Photo illustration: Tyler Comrie (photo: Getty Images)

The problem, says Craig Costigan, CEO of Nice Actimize, a software company that develops technology to detect and prevent financial fraud, is that “most of these scams and frauds are done by folks using the exact same tools as we use — but they don’t have to abide by the rules.”

AI technology also is used to tackle robocalls, says Clayton LiaBraaten, Truecaller’s senior strategic adviser. “If we see phone numbers generating hundreds of thousands of calls in a few short minutes, our models identify these patterns as suspicious. That gives us a very early indication that a bad actor is likely behind those calls.”

Truecaller will answer and screen calls for scams and has just unveiled new tech, the AI Call Scanner, which can determine if a caller’s voice is AI-generated. It will warn users while they’re on the phone that the call is suspicious.

Banks use predictive AI as well. Costigan’s company, Nice Actimize, creates AI-based software that financial institutions use to sift through vast amounts of data to detect anomalies in individuals’ patterns, he explains. “It could be that someone is withdrawing $50,000, which is an unusual amount. It could be the location of the IP address. Why is the transaction happening in London?”

What’s possibly more alarming is voice cloning in an industry that for so long has used verbal confirmation to authorize transactions, Costigan says. Criminals “can call up and say, ‘Hi, move this money for me.’ And that voice sounds exactly like you. That’s a problem today.”

Banks are considering going beyond voice confirmation, so “you may also get a single follow-up question, like what’s your favorite color,” Costigan says. “They may now even require something additional that validates that you are you.”

Consumers have a role in protecting themselves, Benishti says, by understanding that “they cannot 100 percent trust communication, especially unsolicited.” Fraud fighters need to be ready to adjust their strategies as scammers are “very astute technologists and accomplished psychologists,” with evolving techniques, LiaBraaten says. “It’s a cat-and-mouse game,” he says. “We just have to stay ahead of them.”  

How to protect yourself as AI fuels more sophisticated scams

Don’t trust your caller ID. If you get a call from a business, hang up and find the company’s number (for a bank, it will be on your financial statement, for example), then call directly. No matter what the pitch, anyone asking you to pay with a gift card is a scammer, according to the Federal Trade Commission (FTC).

Pause before you click. Never click on a link in an email or text message without confirming that it’s from a legitimate source. Criminals can craft extremely sophisticated-looking messages, as well as fake websites that convincingly mimic real ones.

Consider choosing a safe word for your family. Share it only with family members or others in your inner circle. If someone calls claiming to be a grandchild, for example, you can ask for the safe word or words — rubber ducky, Fred Flintstone, whatever — and if the caller doesn’t know it, it’s clearly a scam.

Photo illustration: Tyler Comrie (photo: Getty Images)

Call back your “grandchild” in crisis. If you don’t have a safe word and your supposed grandchild or child calls saying there’s a medical emergency or some other crisis (sometimes callers say they’ve been kidnapped), they may add that their phone is broken so you can’t call them. Pause, take a breath (criminals try to rattle you to disrupt your rational thinking), and tell them you want to try to call them back anyway. Chances are your real grandchild will pick up, unharmed and bewildered by your concern.

Don't click on ads to download software. The FTC says that if you see an ad for software that piques your interest, rather than clicking on a link, go to the company’s website by typing in its address. If you search for it online, the agency warns, “remember that scammers also place ads on search engines. They’ll appear at the top of your search results page and might have a label that says ‘Ad’ or ‘Sponsored.’ Scroll past those to get to your search results.”

Guard your personal information. To avoid identity theft, be careful about disclosing your full name, your home address, your Social Security number, credit card and banking information, and other personal details. Definitely don’t share information with someone you only know from email or texting.

Spread the word. Educate your loved ones on the latest scams and the advice above.

Report scams. If you spot a scam or you’ve been a victim of one, report it to the police, as well as to the FTC. The more information authorities have, the better they can identify patterns, link cases and ultimately catch the criminals.

Fake Ads, Fake AI

It’s worth playing around with a chatbot to get a sense of the technology’s potential (and it’s kind of fun). But note that cybercriminals advertise AI tools on social media and search engines with links that will download malware onto your computer if you click on them, the FTC warns.

Some sites are fake, the FTC says, but “some ads actually take you to the real software and download the malware through a ‘backdoor,’ which makes it hard to know you got hacked. Then, the criminals could steal your information and sell it to other hackers on the dark web, or get access to your online accounts and scam others.”

You can also report scams to the AARP Fraud Watch Network Helpline, 877-908-3360. It’s a free resource, with trained fraud specialists who can provide support and guidance on what to do next and how to avoid scams.

