
AI Scams Are on the Rise. Detecting Them Is Next to Impossible

COVER STORY: FRAUD 2026

AARP HELPS YOU STAY SAFE


YOUR GUIDE TO SPOTTING SCAMS

Illustration of a lifeguard looking out into a red colored sea with binoculars. His chair sits on a keyboard, and he is watching a warning symbol float precariously in the water.

ARTIFICIAL ENTRAPMENT

AI scams are on the rise. And for now, detecting them can be next to impossible

Last September, Dr. David Amron watched a Facebook video of himself with growing horror. A recognized specialist in lipedema surgery, he saw and heard himself hawking a $50 “miracle” cream for this painful, incurable condition in which fat accumulates under the skin.

But he had never made the video or endorsed the product. It was a deepfake scam, cooked up by criminals using artificial intelligence (AI). And it was so convincing that some of his own patients bought the cream.

“The video was disturbingly realistic,” says Amron, director of the Roxbury Institute in Beverly Hills, California. “My reaction was disbelief, anger and genuine concern for my patients. What unsettled me most was how authentic it appeared.”

Amron wasn’t the only one deepfaked; the video also featured Oprah Winfrey, Kelly Clarkson and the institute’s research director. Amron thinks criminals digitally altered legit online videos of his work, then used a photo of the researcher to create realistic audio and video. “This level of realism is exactly why these scams are so dangerous,” he says. “They are engineered to be believable, making it easy for vulnerable patients to trust them.”

In fact, AI-enabled scams are skyrocketing, the Federal Bureau of Investigation warns. From deepfake videos on social media and cloned voices on the phone to impostor websites, phishing emails and text messages, increasingly sophisticated AI scams put older adults in the crosshairs, says a December 2025 Microsoft study of fraud data from AARP and the Better Business Bureau.

At risk: your money, your personal information and your health. “We’re getting deluged,” says Bob Sullivan, host of AARP’s The Perfect Scam podcast. “A couple of years ago, you might have encountered one or two AI-generated scams a year. Now scammer call centers are sending out tens of thousands of scam messages per minute.”

Nearly 9 in 10 older adults told a recent AARP poll they’re worried about AI-enabled scams. And just 6 percent of people 50 and older said they feel very confident they could spot AI-generated false information, according to the 2025 University of Michigan National Poll on Healthy Aging.

What should you know and do to stay safe from this growing threat? We asked cybersecurity and scam experts—and their answers will surprise you. Spoiler alert: You don’t have to outsmart high-tech AI to sidestep AI-enabled scams, every single expert told AARP. Instead, stick with proven strategies: Pause and reflect before acting. And be skeptical of an offer that sounds too good to be true. “You can’t trust anymore with your eyes and ears,” says Vijay Balasubramaniyan, CEO and cofounder of the cybersecurity company Pindrop. “You have to verify. The big thing is to slow down and evaluate the situation.”

WHY SCAMMERS LOVE AI

“You know your child’s voice, you know your child’s cry,” says Sharon Brightwell, 69, a retired music teacher from Dover, Florida. Last July, Brightwell answered a phone call from an unfamiliar phone number and heard her 43-year-old daughter sobbing. “She said, ‘Mom, I’ve been in an accident. I broke my nose. The police are here.’ ”

A man took over the call and identified himself as a police officer. He said Brightwell’s daughter had struck the car of a pregnant woman. A $15,000 bond, in cash, would release her. He directed Brightwell to withdraw cash at her bank, shove it into a cardboard box and walk out to her driveway to wait for a courier. That courier tossed the cash into the trunk of a car and drove away. “All this time, I was sick to my stomach,” she says.

Then the scammers called back. The pregnant woman’s family wanted $30,000 more because the baby had died, the caller said. Brightwell sank to her kitchen floor in tears. But her 16-year-old grandson intervened. He used a locator app on his phone to see that his mother, Brightwell’s daughter, was at work at a local hospital. He contacted one of his mother’s friends, who counseled Brightwell to stop talking to the scammers.

Brightwell believes scammers used a snippet of audio of her daughter’s voice from a Facebook post to create her AI-generated voice. “There is no way you could have told me that was not my daughter,” she says. “That was her voice.”

Scammers love AI. Half of all spam emails are now generated with AI tools, says a 2025 Columbia University study. So are a shocking 82 percent of phishing emails aimed at getting personal identifying information or direct access to your financial accounts, according to a 2025 report from the cybersecurity company KnowBe4. Every day, Americans see an average of 2.6 deepfake videos—scams featuring fake images of celebrities, politicians and other trusted public figures (like Amron), designed to get you to give money or buy something—says a 2025 report from the computer-security company McAfee.

WE COME TO YOU
This April, AARP is showing up in your community with free local scam-fighting events—including fraud-prevention workshops—to help you better protect your money and personal information. Go to aarp.org/fraudsafety to learn more and find events near you.

“AI doesn’t sleep,” says Balasubramaniyan. “It’s cheap. It works 24/7.” And it works well. Criminals are deploying free or low-cost AI tools like ChatGPT and Sora—the same ones the rest of us use for web searches and to turn photos into fun videos—as well as underworld versions with names like FraudGPT, SpamGPT and Xanthorox. “AI is accelerating how scams are created and scaled,” says Teresa Hutson, corporate vice president of Microsoft’s Trusted Technology Group.

Deepfake videos, cloned voices and chatbots that can hold realistic conversations via text, email or phone are a snap to produce. “Eight years ago, it took 20 hours of recordings to clone someone’s voice for a scam,” says Balasubramaniyan. “Now, with a photo from LinkedIn and three seconds of your voice, a scammer can create a deepfake video with audio.” In a 2025 U.K. study, participants could detect AI-generated images and voice clones only 51 percent of the time—little better than a coin toss. Older adults were even less accurate.

Scammers like it so much, they’re replacing their employees at scammer call centers with AI systems, Balasubramaniyan discovered. His team has been eavesdropping on a West African scammer call center for years. In 2024, they stopped hearing the familiar voices of 12 call-center employees. AI-generated voices had taken over.

When researchers in Microsoft’s AI for Good Lab analyzed 531,000 fraud reports from AARP’s Fraud Watch Network Helpline and the Better Business Bureau’s Scam Tracker, they found a disturbing trend: Scams that victims identified as AI-enabled—such as with realistic voices or videos—increased 20-fold from 2023 to 2025. The increase lines up with the arrival of AI, says Lisa Reppell, a report coauthor and senior program manager for information literacy at AI for Good.

Illustration of a hand peeling off a phone screen to reveal another image below it. The image on top is of a person's face with a play button on it. The image below is dark and shows a digital skeleton of the person’s face.

AI CHATBOTS TURBOCHARGE SCAMS

Impostor scams—whether they’re family impersonation scams, like the one that snagged Brightwell, or faked calls, texts and emails purporting to be from government agencies, businesses, employers with job openings and package delivery services—are on the rise in the era of AI-enabled fraud. Half of all scams reported to AARP’s Fraud Watch Network Helpline in 2025 were impostor schemes, the Microsoft report found. Most—20,400—were business and government scams, and just 700 were family impersonation scams. “Most impostors pose as businesses or officials, not people you know,” the report notes.

It’s a lucrative swindle: Impostor scams involving fake unpaid bills and undelivered packages cost consumers $785 million in 2024, according to the FBI’s Internet Crime Complaint Center. It’s getting harder to detect these fakes. “We know criminals are using AI to make emails and texts sound better,” says Amy Nofziger, senior director of fraud victim support for the AARP Fraud Watch Network. “The old giveaways have changed,” says Hutson. “Bad grammar, poor spelling or clunky websites are less likely with AI.”

AI impostor scams take things to a new level, with computer-generated chatbots that act and sound like human beings.

After a layoff in 2023, Ron O’Brien, 65, a corporate communications executive from Boston’s North Shore, went back to school to learn more about AI in the workplace. In May 2025 he got an unsolicited email about a $300,000-a-year work-at-home job with a Chicago medical device company. O’Brien applied—and entered an elaborate AI-powered scam. “The company was real,” he says. “It turns out the job was not.”

Scammers set up a text interview via WhatsApp. “I realized I was chatting with a chatbot,” O’Brien says. “It was actually processing information I was providing and commenting on it, saying things like, ‘That’s interesting, Ron, tell me more.’ ” The criminals followed up with a 10-page job description, which O’Brien thinks was also created with AI tools. Then came a 30-minute phone interview. “The voice they generated matched the ethnicity of a real company employee,” he says. “I realized it was a bot. It was incredibly responsive. I kept looking for mistakes, but there were none.” When the scammers quickly set up a video chat and offered him the job, O’Brien got suspicious. “Something was fishy,” he says. “Things like this don’t happen in three days.” During the call, he texted his wife that it looked like a scam—but mistakenly sent the message to the scammers. They hung up. O’Brien suspects he would have had sensitive banking and other financial information stolen if he had continued.

AI chatbots—the same type that pop up offering help on legit shopping and informational websites—are yet another weapon in the hands of scammers. Some are paired with cloned voices or faked images for convincing phone and video calls. “By manipulating and creating audio and visual content with unprecedented realism, [scammers] seek to deceive unsuspecting victims into divulging sensitive information or authorizing fraudulent transactions,” the FBI warned in a 2024 notice.

AI THREATS ARE EVERYWHERE

In October 2025, the Tech Transparency Project found that scam advertisers had paid Meta $49 million to run deepfake videos of politicians and government appointees—including President Donald Trump, Elon Musk, New York Rep. Alexandria Ocasio-Cortez, Massachusetts Sen. Elizabeth Warren and Vermont Sen. Bernie Sanders—hawking scams, including fake government stimulus checks and benefit cards on Facebook and Instagram.

The deepfakes also promoted questionable investments, free merchandise that signed people up for expensive hidden subscriptions, and fake Medicare payments.

Katie A. Paul, director of the Tech Transparency Project, says social media is awash in deepfake scams. And the problem persists. “While Meta rakes in advertising profits, it’s letting deepfake scams target seniors and their bank accounts,” she says.

Since the report, she adds, “not only have we seen new advertisers from the same scam networks but also new pages running identical ads. One-third of the entire planet, 3 billion people, are on Facebook. No platform has the same kind of reach.”

Paul notes that deepfakes use several tricks that pull consumers in. “If you pause too long or click on one, the platform sends you more and more,” she says. “Some tell you, with a sense of urgency, to call. They know older Americans will be wary of typing in their personal information online but may have more trust on the phone.”

In November, two U.S. senators asked the federal government to look into scams on Facebook and Instagram; Meta countered by saying it has reduced scams significantly, according to a Reuters news service story.

Amron, the lipedema surgeon, says it took months to get the deepfake video taken down despite posting warnings on social media and appearing on the Today show. “These scams don’t just take money,” he says. “They derail patients from the care they need. Approach any online medical advertisement with caution. Deepfake videos can look completely real, even to trained professionals.”

Sari Harrar is an award-winning journalist who writes frequently on fraud and health for AARP.

DO THESE 9 THINGS TO FEEL SAFER NOW

We are all under assault from scammers, and trying to stay safe can seem like a futile effort. But there are some relatively easy things anyone can do right away to improve security, according to AARP’s fraud prevention experts.

BY AMY NOFZIGER

Icon of a padlock 

Update your passwords, including for your home Wi-Fi. And don’t use your pets’ or grandkids’ names. Instead, try a passphrase you’ll remember, but substitute a symbol for a letter, like so: Il@veIceCream1960!

Icon of a gear 

Change your settings on your smartphone to send all unknown numbers to voicemail. (On iPhones, go to Settings, Apps, Phone, then turn on Silence Unknown Callers. Android phones vary, so google it.)

Icon of a shield with an eye on it 

On social media, do a privacy checkup. Under Settings, choose Privacy, and check to make sure only people you choose see your social media posts.

Icon of a credit card 

Freeze your credit. This will prevent crooks from stealing your identity and opening new credit cards and other accounts in your name. Unfreeze it when necessary to allow a credit search.

Icon of SSN card 

Take your Social Security card out of your wallet, and put it in a safe place.

Icon of a long scroll

Review your bank and other financial accounts right now for suspicious activity. Do this daily, weekly or monthly, especially for bank and credit card statements.

Icon of a trash bin 

Delete apps not in use, including those with saved passwords.

Icon of a person with an arrow pointing right 

Go into your device and log out of all your apps to avoid unauthorized access.

Icon of a call symbol 

Add the AARP Fraud Watch Network Helpline (877-908-3360) to your contacts for quick access, if needed.

Visit aarp.org/fraudwatchnetwork for more resources.

Amy Nofziger is the senior director of fraud victim support for the AARP Fraud Watch Network.

UPDATE

ALLIANCE IS SCORING WINS AGAINST CRIMINAL GANGS

A year ago in April, AARP joined Amazon, Google and Walmart to establish the National Elder Fraud Coordination Center, with the goal of bringing together resources from the private and public sectors to help law enforcement prosecute fraud rings targeting older Americans.

The goal was to use the data available through private companies to find patterns and connections in major elder fraud cases and place related, local scams under one investigation to target large-scale fraud operations.

With the NEFCC’s assistance, state and federal law enforcement agencies can more efficiently identify and dismantle destructive criminal enterprises, seize stolen assets and return them to victims.

Since its launch, the NEFCC has identified new fraud trends and patterns and sent those insights to state and federal law enforcement agencies to help them build investigations.

SOME EXAMPLES OF THE CENTER’S WORK:

▶︎ The NEFCC examined 24 companies receiving funds from tech support scam victims and passed the findings along to federal law enforcement agencies and American tech companies. This helped restart a stalled elder fraud case.

▶︎ When a romance scam victim lost about $350,000, the NEFCC tracked money transfers, cryptocurrency transactions, addresses, and business names and phone numbers related to the scam. As a result, other victims, who had lost more than $1 million combined, were identified, and law enforcement was alerted.

▶︎ By helping link fraud investigations in three states and advising law enforcement of the connections, the NEFCC enabled the issuance of criminal indictments.

The center also added new members, including Capital One, Chainalysis, Meta and Microsoft.

“Older adults often face the greatest financial losses, putting their retirement security and well-being at risk,” says Kathy Stokes, senior director of fraud prevention programs, AARP Fraud Watch Network. “The NEFCC has begun to play a critical role in addressing the fraud crisis in our country.”

Visit FightElderFraud.org to learn more about the NEFCC.
