
Deepfake Doctors: AI-Generated Medical Ad Scams

A doctor discovers scammers used AI to fake a TikTok video of him promoting vitamin supplements. He must now defend his reputation and attempt to stop potential victims from losing money to a scam.

Illustration: a doctor reaches out of a smartphone, removing a mask with one hand and holding a medicine bottle in the other, while a woman in the foreground reaches desperately toward him. (AARP)

Subscribe: Apple Podcasts | Amazon Music | Spotify | TuneIn

Dr. Maurice Sholas, a pediatric rehabilitation specialist based in New Orleans, discovers that his likeness is being used in an ad when a friend sends him a TikTok clip that looks convincingly like him. It's an AI-generated version of him selling vitamin supplements, something he has never done. Feeling violated, Sholas must now defend his reputation and attempt to protect potential victims from losing money to, and possibly being harmed by, a fraudulent product. Beyond the monetary losses, AI-generated impersonation of doctors and celebrities can undermine trust and accelerate the spread of health misinformation.

Full Transcript

(MUSIC INTRO)

[00:00:01] Bob: This week on The Perfect Scam.

[00:00:04] Dr. Maurice Sholas: A friend of mine sent me a link on TikTok, and said, "Is this you?" I clicked on the video and it was a likeness of me hawking vitamins, K2D3, which is something I've never done, and something I've never recorded. And I was just stunned to see what looked like me and words coming out of my mouth. It wasn't me.

(MUSIC SEGUE)

[00:00:30] Bob: Welcome back to The Perfect Scam. I'm your host, Bob Sullivan.

(MUSIC SEGUE)

[00:00:36] Bob: Imagine scrolling through TikTok or Instagram or Facebook and you see an ad that appears to be from a doctor you know and she's telling you that there's a new supplement that she thinks would work perfectly for you at your age or with your special skin condition or as you face that scary diagnosis. And it's all fake. This sounds a bit like science fiction, but this is happening right now. In a moment you'll hear from a doctor who had his image, his personhood really, and his reputation stolen and used to sell supplements. We've heard a lot about deep fake videos and how they might distort a politician's point of view, but we've heard much less about deep fake advertisements. In the end, that might be an even scarier problem.

[00:01:29] Bob: Okay, now I want you to meet a specialist who works with kids in the most heartbreaking situations; kids with spina bifida, or kids left paralyzed by accidents, and as he says, he tries to get them to win whatever winning means for them. But recently, criminals put him in a no-win situation.

[00:01:51] Dr. Maurice Sholas: My name is Dr. Maurice Sholas, and I'm a doctor peds rehab doctor based in New Orleans, Louisiana. That means I take care of children with acquired and congenital physical disabilities. I'm the kind of doctor that I pray nobody ever has to come and see, so if a child is born with cerebral palsy or spina bifida, those are my patients. If a child is typical and has an injury to their brain causing a traumatic brain injury, they become my patients. If they get paralyzed because of injury to their spinal cord, they become my patient. If they are missing a limb they become my patient, so I deal with anything that sort of is a disorder of how well your brain/spinal cord works, or a complex musculoskeletal joint, nerve muscley kind of thing, they become my patients. But some people I take care of, we know what to call it, and some people are just disabled from a reason we don't know what to call it, but I take care of them and love them just the same.

[00:02:43] Bob: Can I just start off by saying I, I just so admire the work that you do.

[00:02:48] Dr. Maurice Sholas: Oh, thank you so much. And I am fortunate in that I get to do things that I love and the things I love actually pay me enough to pay my bills.

[00:02:57] Bob: That's the dream for most people, isn't it?

[00:02:59] Dr. Maurice Sholas: (chuckles) Yes, it is.

[00:03:02] Bob: You take people who are told that they can't do something and you help them find a way, right?

[00:03:06] Dr. Maurice Sholas: Yes, my job is to help people win, and what that win looks like depends on the goal. Some people's win is I want to be able to use the bathroom by myself, and we can do that. Some people's goal is I want to play competitive sports. We can do that. Some people's goal is I just want to be a blessing to those that love me, and we can do that too. So a lot of my job is managing the consequences of a disability, but I really think my real job is helping people be fully integrated members of their family, their community, and their society in whatever way that looks like to them.

[00:03:40] Bob: Maurice is, unfortunately, in high demand. There aren't many with his specialty and there are far too many children who need his special kind of care, which makes what I'm about to share with you all the more infuriating because a few months ago, Maurice was stuck spending an awful lot of time and mental energy not helping kids in need, but fighting off criminals in a battle that he really can't win. It all began when...

[00:04:09] Dr. Maurice Sholas: So a friend of mine sent me a link on TikTok, and said, "Is this you?" And so I'm, I was relatively new to TikTok at the time; I'm trying to be the old guy that stays current on all the new apps (laughs) and I clicked on the video and it was a likeness of me hawking vitamins, K2D3, which is something I've never done, and something I've never recorded. And I was just stunned to see what looked like me and words coming out of my mouth. It wasn't me. Now the voice didn't sound like me, but it definitely looked like my face.

[00:04:44] Clip: I asked, do you want me to give you the safe answer or the truth? He said, the truth.

[00:04:51] Bob: That's the sound from a TikTok video of someone who looks like Maurice hawking a vitamin saying things Maurice has never said. Seeing this is shocking for Maurice. It takes a moment, but pretty quickly the doctor figures out where the germ of that video came from.

[00:05:09] Dr. Maurice Sholas: It was a background that was pulled from a story I told, one of my most popular videos where I talked about how one of my patient's brothers hit the panic button on my name badge, and the security busted into my visit thinking I was in trouble when I was really just being hugged and kissed to death by a, a 3-year-old. (laughs)

[00:05:29] Bob: The authentic video is purely charming. This fake video is alarming, and there are more like it. Soon Dr. Maurice, who calls himself Doc MoSho, finds a host of similar videos all over TikTok. Criminals have taken a small segment of one of his videos and created a host of fakes designed to sell this supplement. Why was he targeted? Maurice thinks he knows exactly why: the ads targeted Black Americans.

[00:05:59] Dr. Maurice Sholas: Yeah, we are closer to 16, 18% of the population. I live in New Orleans, which is a 60% black city, so there are pockets of Black Americans that are vital members of communities all over, and to establish trust, people know that when your doctor shares the same representation of you, it helps with that trust. So they picked me specifically to talk about that product, to talk to the population they wanted to buy supplements.

[00:06:23] Bob: And supplements that were allegedly essential because they're Black, right?

[00:06:29] Dr. Maurice Sholas: Correct, supplements that were allegedly essential because they're Black, when, you know, there's no data showing those supplements are more or less essential to Black people, there's no data showing that these supplements are coated in a way that makes melanated skin process them better than if they were given to non-melanated people. It just was a flim flam that used my reputation to sell a product.

[00:06:51] Bob: The flim flam that used his reputation to sell a product. All those years, decades, making very sick children and their families trust him, abused, violated on bogus supplement ads.

[00:07:06] Dr. Maurice Sholas: It felt like the 20 years I've spent in medical practice, all of the work I did to become a first generation doctor and get an MD PhD from Harvard University and to really break my way into a specialty that most people had never heard of before, they heard of it from me, was just taken without my permission to do something I would never do or say.

[00:07:29] Bob: Is there some kind of mistake? The voice isn't his. But when he zooms in on the video, his name is clearly on the lab coat, so he reaches out to whoever posted the video.

[00:07:43] Dr. Maurice Sholas: I tried to respond to the video, and, and I was like, hey, this isn't me.

[00:07:49] Bob: Silence. He gets no response. But he does get a reaction.

[00:07:55] Dr. Maurice Sholas: What they did was they didn't take it down, they added age lines to my mouth and added like some little sets to look like some of my teeth were missing or different, so they could say it's not you anymore, 'cause it's older, or they don't have the same, I have good teeth, I'm proud of my teeth. But they could say it wasn't you. But they didn't take it down, they just tried to alter the digital image to make it so they could pretend like it wasn't me or utilizing my likeness.

[00:08:20] Bob: So it sounds, you're saying you complained, first you complained directly to the creator who...

[00:08:26] Dr. Maurice Sholas: Yes.

[00:08:26] Bob: ...then made this crazy adjustment, right?

[00:08:29] Dr. Maurice Sholas: Yes.

[00:08:30] Bob: And then he escalates the complaint.

[00:08:34] Dr. Maurice Sholas: Uh, so I reported it to TikTok and TikTok basically didn't do anything. I guess I don't have enough followers and I'm enough of a nobody on there that they didn't respond. I don't have a PR firm. So TikTok just ignored me.

[00:08:49] Bob: Frantic, Maurice turns to journalism for help, reaching out to WWL-TV. That's where we got the audio of the fake doctor.

[00:08:58] Dr. Maurice Sholas: I actually have a good friend that's a news anchor here, I reached out to her and said have you ever heard of this, what's going on. She ended up talking to her producers. They sent a reporter out to talk to me and that reporter reached out on my behalf. As soon as the reporter reached out, hours later the account that was offensive was banned and taken down.

[00:09:18] Bob: But that is hardly the end of the story. For one thing, Maurice quickly learns there have already been real-life consequences.

[00:09:26] Dr. Maurice Sholas: When I actually, the story ran here in New Orleans on TV, my friend's husband that saw the story on TV and said, "Oh gosh, I actually bought that because I thought it was you."

[00:09:37] Bob: (chuckles)

[00:09:39] Dr. Maurice Sholas: And I was like, bud, that wasn't me! He said, "I'm sorry, I didn't know." I was like, it didn't link to me or any of my accounts. It, so why would I be saying something about a product I never use in real life, you've never heard me talk about in real life, and it didn't link to any of my real-life accounts?

[00:09:57] Bob: And at this point Maurice learns his trouble is only beginning. Those TikTok videos were removed.

[00:10:04] Dr. Maurice Sholas: By that point, the material that spread from TikTok to Instagram and also to X, when I tried to reach out to Instagram to get them to remove this TikTok generated content, again, just ignored me. I made my account verified to see if I could get more traction, and it was just literally like whack-a-mole; as soon as you get one thing, a video would pop up somewhere else.

[00:10:27] Bob: With, but this was with Instagram. Are, you actually paid for the verification process?

[00:10:31] Dr. Maurice Sholas: I did. I paid for the verification process on the advice of my attorney because they said, if you're verified it gives you more rights and privileges, so I feel like I had to pay for the privilege of being me.

[00:10:41] Bob: Oh my God. And then it didn't work anyway, right?

[00:10:44] Dr. Maurice Sholas: It didn't work anyway. They were just, sorry, bud. Kick rocks. Just wait for it to blow over.

[00:10:50] Bob: Meanwhile, the problem keeps getting worse.

[00:10:53] Dr. Maurice Sholas: There were more videos that they used my likeness or something similar to like, my likeness. Again, I'm bald and I have a gray beard. They tried to make my beard darker; they tried to take in and take out of some of my teeth. They added some frown lines to my forehead and my cheeks to make me look a little older. I am 55, but I'm proud that I've got good skin and not too many wrinkles, so they got plausible deniability. And so every time I would say that's me and I never said that, they eventually, the creator blocked me so I couldn't see it anymore or any of these things so I had to have another person say, is it down yet? (laughs)

[00:11:31] Bob: Okay...

[00:11:31] Dr. Maurice Sholas: So it's something to be blocked from yourself.

[00:11:34] Bob: (laughs) This is such like a Kafkaesque odyssey you're describing to me. That's, first of all, like we're running past, it's unnerving to have someone make a video of you that's an older you that you've got to look at. That's weird, right?

[00:11:48] Dr. Maurice Sholas: Yeah.

[00:11:49] Bob: Yeah, and that's the smallest offense that I'm hearing of all these things, but then this could still be going on and now the tools of this tech company are being used so that you can't even defend yourself, you're protected from seeing it.

[00:12:01] Dr. Maurice Sholas: Correct.

[00:12:02] Bob: That's awful.

[00:12:03] Dr. Maurice Sholas: Correct. They, so what they started doing is they started systematically blocking me on all the platforms that I was complaining on. So I couldn't see whether or not the information was down.

[00:12:13] Bob: Maurice keeps talking to his lawyer, keeps looking for some kind of solution. The answers aren't comforting.

[00:12:21] Dr. Maurice Sholas: And when I talked to the lawyers, what became clear to me is that this is a way that technology really outstrips or outpaces the rules. We had a law that actually was passed last year in Louisiana's legislative session, but it was vetoed by our governor, that would have given us some sort of redress for people misappropriating and misusing our likenesses. That's important here in Louisiana because we have a film industry that's based here, so you can imagine if you're Meryl Streep and somebody doesn't want to pay your rate, but they just use you to do what they need to do -- boom -- they can get a lot of traction. But that law was shut down and there's really nothing at the federal level that we can lean to. I was talking to my attorney and I was like, what can I do? And she said, "Send me, now that we know somebody bought it, we can actually see who that actual distributor/manufacturer is."

[00:13:10] Bob: Ah-ha.

[00:13:10] Dr. Maurice Sholas: Because the other part about this, Bob, is that even if you could figure out what rule of law applied to this, which I told you was already dicey, then you have a jurisdiction issue. Because what happens is the product manufacturer typically hires a separate company to do the marketing. So the product manufacturer can say, I didn't really misuse you, that was the marketing company. And the marketing company can say, I didn't misrepresent you, because the product company told me it was cool. So it's like fingers pointing back and forth and none of this matters if those companies are not based in the US.

[00:13:44] Bob: And boy, are you getting an education in all this very quickly.

[00:13:47] Dr. Maurice Sholas: Yes, I never thought I would want to understand the ins and outs of intellectual property rights and how they apply to an individual and their likeness. I'm not an actor or a famous person, but I've learned way more about this than I thought. So in talking with the attorney, they brought in a crisis management attorney...

[00:14:07] Bob: Oh my God.

[00:14:07] Dr. Maurice Sholas: Yeah, and then ultimately, they said the best thing you could do is hire a PR firm basically to go out there and do a sweep of the internet and push positive content to counteract whatever misinformation is there. And when I quoted that person, it was going to cost me 9 to 20 thousand dollars to actually go out and have a firm look at what's out there about me that's being misrepresentative and clean it up.

[00:14:30] Bob: That is an unbelievable punishment that you never asked for.

[00:14:33] Dr. Maurice Sholas: Yes.

[00:14:34] Bob: Wow.

[00:14:34] Dr. Maurice Sholas: I looked at it and I interviewed two or three firms, and they were lovely and one of the women were like, I'll give you a discount. I don't believe in making people do their jobs for free, but I couldn't afford it, Bob.

[00:14:45] Bob: Yeah sure.

[00:14:46] Dr. Maurice Sholas: I couldn't afford it. And so I just had to say, look, I'm going to just do my best to drown this out by making my own content. Anything I see or anything somebody else sees, I'll report it, and that's all I can do.

[00:14:58] Bob: That's all I can do. Maurice is the victim of what are sometimes called deep fake videos, and now that his reputation has been threatened, Maurice has launched a crusade because if you really think about it, deep fake ads using experts to sell cures, well that can go down a dangerous road pretty fast.

[00:15:21] Dr. Maurice Sholas: So what I'm doing, what I did now is that when I realized I couldn't afford a PR firm, I leaned into organized medicine. So the State Medical Society, I said, hey guys, this is an issue. What if somebody takes a product that they think I'm recommending and has a bad outcome? That's a cause for them to sue me, and I'd have to defend myself and my license for somebody "following my advice" that I never gave. So this is really an issue for us. The Medical Society Census had a discussion with the Attorney General here in Louisiana to put that issue on their radar. I've taken this to the national level talking at national medical association conventions and regional conventions about my experience and talking to people like you just to raise awareness, and I hope one day the laws will catch up to help people protect their own well-earned reputation and name.

[00:16:11] Bob: Oh dear God. Okay, so tell me where things stand today.

[00:16:16] Dr. Maurice Sholas: Where things stand today is that I am combating the misinformation by putting real information out there. I have Doc MoSho which is my sort of online name, protected on all platforms and even ones I don't plan to use. I'm vigilant at making sure to tell people if it's me talking, but it's not linked to any of my accounts, beware. And I'm spreading the word by doing podcasts like this that the law and the rules about how the wild, wild west of the internet should work for everybody and the little man, just as well as it works for these big corporations.

[00:16:51] Bob: And so, as you can see if you google Maurice, there are plenty of videos and posts from him that mainly drown out these scammy deep fakes for now. But they aren't gone from the internet, and he really never knows when another one might pop up on top of someone else's feed anyway.

[00:17:08] Bob: Try to explain to someone how frustrating it is to be in the weird limbo in this situation.

[00:17:16] Dr. Maurice Sholas: What's frustrating is that it costs money, time, effort, and relationships to protect something that should be intrinsically mine. In addition to practicing medicine, I am a medical legal expert, so my reputation literally is monetizable in matters of the court of law. Law firms pay me to give my opinion, my best medical opinion. So it's something that I am regularly asked about and cross-examined on. When someone borrows, to use a kind word, or steals, to use a real word, it puts me at risk, it puts my medical license at risk, and it puts my livelihood at risk. And to protect all of that, there's nothing I can do as a small guy but spend more money.

[00:18:03] Bob: The cost that I'm sitting here thinking about, which is absolutely infuriating, is there aren't many of you, I'm sure, and there are a lot of kids who need help, and instead of helping a child say with a spinal cord injury, you're dealing with this crap. That's maddening.

[00:18:20] Dr. Maurice Sholas: Yep, yep, rather than doing the thing that we started this conversation out about saying how much I love the privilege of doing my job, I don't love the privilege of defending my reputation and likeness. That is not, talking with lawyers and crisis management, PR firms is not my idea of a great day.

[00:18:38] Bob: And it's a waste of your very valuable time.

[00:18:40] Dr. Maurice Sholas: Uh-huh.

[00:18:41] Bob: I mean one thing we talk about a lot on this podcast is everything in the end is about stealing money, but it's always about something so much more than money. And in this case, again, like there are children who are getting, and even if it's not taking your time, how could you not be distracted while you're trying to perform important con--, consultations or treatment. This is weighing on your brain. That itself is a cost.

[00:19:01] Dr. Maurice Sholas: Yeah, I, I will tell you when tough things happen to good people, you have to find a way to laugh so you don't cry. I did actually call my mom and said, "Momma, I made it. I'm so important that they want to pretend to be me." (laughs)

[00:19:16] Bob: Despite all that, this experience has not made Maurice sour on technology; far from it.

[00:19:24] Dr. Maurice Sholas: As, as a matter of fact, one of the companies I work with that I founded here, called Singular New Orleans, uses AI to churn through massive amounts of data with neuroimaging and processing the 100 billion neural connections everybody has to help with medical technologies and treatments and precision medicine. So I'm not against technology at all. I'm a huge, new adopter of new technology, as a matter of fact. But I don't think that new technology, and I don't think that machine learning should come at the expense of people's reputation. I don't think people have the right to borrow expertise and borrow credibility from folks without paying them or getting their permission, and I encourage everybody hearing my voice to lean into protecting not just yourself, but people you see. Again, I just want people to walk away knowing that technology in and of itself is not evil and not bad, and I don't push against it just because it's new and I don't understand all of it, but I always want people to be careful and be aware. If something sounds too good to be true, it usually is. If a talking head is telling you something with no verification and no connection to that talking head, it's probably not true. And if you see something that doesn't feel right, even if it's not you, fight back and report it.

[00:20:43] Bob: And, of course, be very, very careful and selective with where you get your medical advice.

[00:20:51] Dr. Maurice Sholas: Talk to your doctor. Talk to your healthcare provider about medical things. In this era of fierce and rugged individualism that we are leaning into here in the United States, there's a little bit of a sort of smart guys don't know everything, and I'm, smart guys may not know everything, but there are some things that smart guys know a little bit better than the rest of us. And just like I am not an expert at fixing my car, but I am an expert in taking care of kids with disabilities and broken development; I listen to the mechanic. And by me listening to the mechanic, it doesn't mean I'm dumb or I don't have anything to offer, it just means that somebody can help me in things that they might know better than me.

[00:21:27] Bob: And an impulsive purchase that has to do with your body based on a few seconds of a TikTok video sounds like a bad idea to me.

[00:21:33] Dr. Maurice Sholas: Yeah, it's a bad idea and beware because even though you might be nice and have good intentions, there are some people out there lurking that don't, and, and they prey on the worst of us and the best of us to make you spend and part with your money.

[00:21:46] Bob: On the one hand, we live in an age where people are very skeptical of experts. I pretty much welcome that. I think everyone should ask a lot of questions of doctors, of journalists, of financial advisors. I just wish that skepticism extended to, well, everyone we encounter. Sometimes people are more likely to take advice from someone they talk to online for two minutes than someone who's studied a topic for two decades, and that, well I think we'd be better off maintaining our skepticism. Our next guest has something similar to suggest, but he'll do it much more elegantly. I want you to meet fraud expert Frank McKenna. You might have met him on The Perfect Scam before. He was a guest way back in 2023, and he's going to tell us why we are careening towards a world where everyone's default position will be to assume that everything online is fake. A deep fake. And while that sounds cynical, he thinks the sooner we get there, the better.

[00:22:45] Frank McKenna: I am the Chief Fraud Strategist of a company called Point Predictive. We're an AI company that fights fraud using artificial intelligence. I also write a blog called "Frank on Fraud." And Frank on Fraud is my kind of personal project. I've done it for about 10 years, and I hunt down scams and schemes that are happening in the world, and I write about them to educate consumers, to educate fraud fighters in the banking and finance industry where I work, and just to stay ahead of what's happening in the world of fraud. So I spend probably 8 to 10 hours every week just researching new frauds and scams, and a big part of what I've been investigating over the last, I'd say, 18 to 24 months, has been all around AI. So I, I've got a pretty good idea of all the scams and schemes that are happening out there.

[00:23:35] Bob: Hmm, and AI has consumed almost all of that work, I guess.

[00:23:37] Frank McKenna: It's been 90% of it. Yeah, I think I used to write a lot about run of the mill credit card fraud and check fraud, but over the last 18 months, I'm finding that most of the new scams and frauds involve AI. And in fact, by my estimate, 100% of the scams today involve AI, whether it's the creation of deep fakes, using ChatGPT to converse with people through text messages, creating deep fake images, creating deep fake identities, there’s AI interspersed, interwoven into almost every scam now.

[00:24:17] Bob: And what happened to Maurice, he's seen a lot of that too.

[00:24:23] Frank McKenna: I wasn't surprised because I see these all over TikTok, all over Instagram, all over Facebook. They're inundating people's news feeds; the social media platforms I don't think are doing enough to kind of control the problem.

[00:24:42] Bob: How hard is it for a criminal to create a deep fake like that?

[00:24:47] Frank McKenna: Unfortunately, it's not hard at all. Using information off of YouTube videos, Instagram videos, or Facebook videos that you post, the criminals and scammers can take that content and put those into AI generating videos, and make you say anything that they want. So just a few seconds of video can create these, they call them like AI avatars, and they can basically make you sell vitamins or make you sell crypto investments and things like that. So it's not hard at all, anybody can do it and a lot of scammers are.

[00:25:28] Bob: So it, I think this is one of those situations where when we talk about it, you kind of say, yeah, sure, that's possible, but then when you see it, if you saw me make a video of Frank saying, buy this crypto, your skin would crawl, right?

[00:25:44] Frank McKenna: Yeah, when you see yourself and you're going to be much more critical of the video too, so I think he had heard the voice and while many people said it sounds familiar, he instantly knew it wasn't his voice, so he could tell he could probably look at the way that the speech was happening with the lip movements and things like that, and he could obviously tell the fake because he knew, he knew it wasn't himself. Yeah, there's, there's a certain visceral reaction you have about seeing a deep fake of yourself that he probably, like you said, made his skin crawl.

[00:26:18] Bob: And I just think, if people haven't encountered this yet, they're going to soon, and they're going to be surprised at how easy it is. And I guess most people at this point are probably familiar with political deep fakes, right? They've seen the pictures of our president riding a horse bare-chested or something like that and well we know that didn't happen, but here it is. And those, to me, while we've talked a lot about political deep fakes, I don't, I'm not as concerned about those as I am about this kind of thing which I think will just hurt regular people. What do you think?

[00:26:49] Frank McKenna: Yeah, I do. I think people probably don't realize how many deep fakes they're seeing as they scroll through social media. From my experience, it's at least half the videos that you're seeing are some, there's some element of AI generation in those videos. And that's only going to get worse. It's probably going to be, the case will be that most of the content you're looking at online is AI-assisted in some way all the way from deep fakes to maybe edited with AI. So people are going to have to get accustomed to the fact that they're going to have to question pretty much everything. And I think to your point, yeah, the political videos are easy because if that's the kind of thing that you see a lot, but these other celebrity deep fakes, I think, are going to surprise a lot of people, because they're becoming more and more common.

[00:27:41] Bob: In fact, this happened to celebrities like Al Roker and Oprah. So Frank says we need to get used to seeing these things everywhere, but right now, many people overestimate their ability to identify deep fakes.

[00:27:55] Frank McKenna: Here's the thing about AI deep fakes is 60% of the population thinks they can spot them, but in reality, I think a study done by iProov about a year ago, that's a cybersecurity company, they found that only 0.1% of people can actually identify those deep fakes. So...

[00:28:14] Bob: Wow!

[00:28:15] Frank McKenna: ...99.9% of people are scrolling through social media not realizing what they're looking at is deep fakes. So that's why this, this is such a problem, right, because a lot of the content that you or I are looking at that we think is real, is probably not real. There's probably elements of deep fake in it.

[00:28:33] Bob: So ev--, everybody listening to this just about has seen things that they thought were real, but weren't.

[00:28:39] Frank McKenna: 100%, yeah, absolutely.

[00:28:42] Bob: Wow.

[00:28:42] Frank McKenna: If you're on social media.

[00:28:44] Bob: I keep wondering why everyone is so much better looking than I am.

[00:28:46] Frank McKenna: (laughs) Yeah, fair point.

[00:28:49] Bob: Now I feel better.

[00:28:52] Bob: And with all this fakery going on, it just makes the job for criminals that much easier.

[00:28:58] Bob: There was a time, I don't know, maybe 18 months ago, 24 months ago where there was a wave of say Instagram account hijacks where someone would hack my account and then contact all my friends and say, I just made a bunch of money in crypto, you should do it too. And it was really effective because it was a friend inviting you, not a random person, and then of course Instagram was terrible about giving the accounts back to the real account holder. But what you're describing to me allows these criminals to skip that step. I don't need to hack your Instagram account; I can just make one that looks and sounds like you.

[00:29:29] Frank McKenna: Absolutely, yeah, and that's something we're already seeing, right? Family members are scamming family members. Voice clones, right, where you call somebody. The voice clones are very easy to do, right. Those only take like 3 seconds of audio at the very minimum to create a clone of your son or your daughter or your grandson, whoever, and they can call you with it. But I think the next generation of these scams is people going on Facebook and taking imagery and voices of people from your Facebook and creating deep fakes of them, and then having those people call you potentially, or give you a message. So those will happen soon, right, given where we're headed.

[00:30:13] Bob: Health information is particularly vulnerable to these kinds of attacks.

[00:30:17] Frank McKenna: Yeah, so I think medical advice is always a target for scams, right, because whether it's weight loss or improving your health, or mental stamina, or whatever it is, it's, it's an industry that's highly ripe for scamming, and it has been for many years. So the use of AI is making those scams all the more believable because you can put a well-renowned doctor who is on social media, who's well-respected, and you can use that image to sell scammy vitamins or sell health aids or weight loss treatments. And I think that's why it's always been targeted and always will be. I think AI isn't really changing that, it's just making it so much easier for anybody to do.

[00:31:09] Bob: And there's something specific about health advice, right? A lot of people are vulnerable, looking for a solution to a really hard problem, or grasping at straws, right, and that makes them vulnerable.

[00:31:18] Frank McKenna: Yeah, and that's a key point you bring up. Anybody can be scammed at any time given the right context, and when people are afraid or people are vulnerable, or they need something in their life, they are at tremendously more risk of being a victim of a scam. People might think they can spot a scam, but if you're in a vulnerable position, your odds of getting scammed are that much higher.

[00:31:45] Bob: Are there any other categories of product that you think are particularly ripe for deep fake kind of frauds like this?

[00:31:52] Frank McKenna: I guess the only other categories that I'd say that are prone to this are financial services. So these are people that are promoting get rich type schemes, Ponzi schemes, crypto investments, all of those get rich type of things where you can work from home and make a lot of money or invest in something, passive income. Those are highly ripe for these deep fakes, particularly if you see a well-known person, a celebrity that's touting them, so that's another area.

[00:32:31] Bob: Frank recently played along with a criminal just to understand how they use AI and it all started with a text that was written to appear to be from someone he knew.

[00:32:41] Frank McKenna: So I replied, I said, um, "You know, I'm doing fine." And then they replied, they said, "I hear you're still in fraud. I never understood what that was. Tell me about it." And so here was somebody that was interacting with me; they knew my profession, they knew my name, they knew my phone number. It turns out, you know, through a conversation, I played along with it, but it was a scammer in Cambodia that was trying to get me to invest in crypto. They went through the whole process of sending me deep fake photos of themselves; it was a completely AI-generated person. They sent me deep fake voice messages. And then they did a Zoom call with me. So they wanted to get to know me better, so they did a Zoom call with me, and it was a woman, but she was using an AI software that disguised who she was and disguised her voice, and I was having a conversation with a scammer.

[00:33:42] Bob: Frank recorded the Zoom call he had with this woman; she's sitting in an office, books about golf behind her on a shelf, perfect blonde hair, blue eyes, and this beautiful woman, likely a man, talks with Frank giving real answers in real-time. Here is part of that conversation.

[00:34:00] (recording) AI Woman: ... as a CFO and founder of Wellora Health. We focus on the medical device and health terminal markets striving to push forward key trends such as digital healthcare, personalized medicine, uh, remote monitoring, artificial intelligence, and big data an-, analysis. Uh how about you?

            Frank McKenna: Big data analysis.

            AI Woman: What are your…

            Frank McKenna: Uh for me, I like to do, well I told you I do a lot of writing, so my profession is mostly writing and research in the area of scams and fraud, so we talked a little bit about that. So yeah, so, and how did you, I was just curious because you sent me the email and we had met before, but I don't recall exactly how we had met. Do you remember?

            AI Woman: I think that it, it was through LinkedIn. Um...

            Frank McKenna: Through LinkedIn, okay, oh, so we connected via LinkedIn.

            AI Woman: Probably yeah.

            Frank McKenna: Ah, okay, okay, interesting. Uh, I had no idea because you had known me and I was like, oh, interesting message. So in terms of collaboration what would that look like? Would, are you looking for, are you selling to customers or is there, what other opportunities would there be?

            AI Woman: Hmm, uh as a diversified investor, actually my investment areas include real estate, AI arbitrage, biotechnology, and healthcare...

[00:35:29] Bob: I don't know about you, but I found that really disturbing. It's so far from a single fake photo and some awkward text messages.

[00:35:39] Frank McKenna: Yeah, it looked to me and sounded to me exactly like a woman. The woman looked like a model, but what I really noticed almost off the bat was the voice. As she was talking to me through the Zoom call, the voice was delayed and lagged, and that's because the software was trying to sync up her image with the voice, but also the lip syncing was completely off. So that was a dead giveaway for me. And then as she moved her head, things started to warp a little bit in an unnatural way. So there were clues there, but if I was in the midst of a romance scam, I might not look so closely or try to spot those types of things.

[00:36:27] Bob: And also, let's be honest, right, there are some people who will just entertain the conversation with a model that they wouldn't entertain in other situations, right?

[00:36:37] Frank McKenna: That's right, yeah, because there's a hope there that I mean oh, this is a romantic situation, maybe.

[00:36:45] Bob: Frank spots or tries to spot deep fakes for a living. What are some of the things he does that might help you?

[00:36:54] Frank McKenna: I think there's a lot of ways that people can become pretty good at spotting deep fakes if they know what to look for and they know what some of the red flags are. I've watched a lot of deep fakes, and I have a pretty good idea of how to spot them. One of the things that I do is go immediately to the face. So typically when you have a deep fake, and maybe this is the case with the deep fake videos of the doctor, a lot of times AI is going to show fringing around the edges of the face. So you're going to see this strange outline around the face. The lip movement is going to appear slightly odd; that's always, I think, the dead giveaway for people, looking at that lip syncing. Typically, especially when there's an L, or an M, or a B at the start of a word, the mouth movement won't match. When you say B, everybody's mouth moves in the same way, or M, the mouth closes. You're going to see those subtle things that are not going to match up, and the lip syncing is always going to be a dead giveaway. The other thing that I like to do, every time I'm skeptical or I think something is wrong, I'll create a screenshot of the image or the video, and I'll look at the pupils, and AI cannot get the pupils right. Usually, pupils are round, but in a deep fake image or video, oftentimes the pupils will be oblong shapes or triangles, or very strange shapes. So that's always a dead giveaway of a deep fake video. The other thing is to always listen to the voice, and I'm certain this was probably the case with the doctor's videos: the voice sounds too perfect. It sounds robotic. And it doesn't have that natural cadence or tone, and the imperfections, I should say, that you're typically going to hear when you're watching a real, live video.

[00:38:56] Bob: And when you're watching a video that seems really life-like but something is just off, your natural human reaction to that can be pretty strong.

[00:39:06] Frank McKenna: It's that reaction you get when you see something that actually looks too perfect. If you recall, any time there's a tragedy, like the flooding, I think, in Texas, or, uh, there was also one in Alabama, a lot of the scammers will post pictures of, let's say, a little girl with her dog, and she's crying and she's standing next to a raging river. And this looks too set up, like you said. It looks too perfect; it looks too on point for trying to draw on your emotion. But the reaction that a lot of people will get when they see that is, for many people, that it just doesn't look right. The image looks too perfect, and you have this aversion to it; you don't know what it is and you can't put your finger on it, but something tells you that it's not right.

[00:40:04] Bob: Yeah, and it's almost a disgust, so it's a very powerful sort of feeling.

[00:40:09] Frank McKenna: It's human nature, I think is protecting us, right, because over the years, we've learned to not trust things that are, if it's too good to be true it probably is. And I think many of us have learned this is too good to be true.

[00:40:23] Bob: What is it that you recommend consumers do to protect themselves?

[00:40:27] Frank McKenna: Okay, so we talked about too good to be true. Just don't trust free giveaways or things that seem too good to be true; medicines that seem to solve all your problems. If you are suspicious, check the account, right? A lot of these TikTok accounts of, um, Dr. Maurice were questionable accounts. So you can look at the account itself and see, was it recently created? Does it look like there's other scam content on that account? If it's a doctor that's giving you advice and touting something you want to buy, search them online and go to their website. If they're really endorsing something, it's going to be on the website, or there's going to be other material out there showing it rather than just what you're seeing on TikTok. Use reverse image searching if you can. Take a screenshot of the video, and then run it through Google Search on the image. A lot of times, if you were to take an image of Dr. Maurice from those deep fake ads, probably you'd see hundreds of matching images pop up, because those were everywhere, so you can use Google to help you spot those types of things.

[00:41:40] Bob: So I've never thought about that, that that would work. If I take a still of a video, and I use something like, I don't know, is TinEye still a thing, or do we just use Google Images now? But I can reverse image search, and it'll use that frame from a video and find me other videos that thing might be in. Is that how it works?

[00:41:57] Frank McKenna: Yeah, that's right. So I was doing research on an AI dealer cloning scam, and this was a scam where scammers were creating deep fake AI sites, creating them with AI, of real car dealerships, and they were selling fake cars and having people wire money in to get these cars delivered to them.

[00:42:17] Bob: Whoa.

[00:42:17] Frank McKenna: On the website, yeah, on the website they had videos of customers that were touting how great the service was in getting their car shipped, and so I took a still image of one of the videos, and I put it into Google, and I reverse image searched, and I found that there were hundreds of other videos that were using that same person. It turns out it was an AI avatar that you could buy online and make it say whatever you wanted it to. So you can do those reverse image searches and find lots of evidence if it's a deep fake scam.

[00:42:57] Bob: That's a really great idea. Honestly, I think people love when you give them homework like that to do.

[00:43:02] Frank McKenna: It's fun to find fraud sometimes.

[00:43:04] Bob: Yeah.

[00:43:06] Bob: And report deep fakes you find. It's genuinely helpful, even if it can sometimes feel like you aren't doing any good.

[00:43:14] Frank McKenna: You know, having been a very frequent reporter, and I'd advise everybody to be the same in today's world, if I see a deep fake, I will report it. It's very easy now. You click those three buttons and you report it. Usually there's a list of options that you can choose. The more people that report these things, the more likely it is that they get taken down, so it's kind of like that old adage: if you see something, say something. We can all help doctors like this victim by reporting things when we see them, because there's strength in numbers here.

[00:43:51] Bob: Hmm, that's like a volunteer force, however, and it does strike me that the social media companies should be spending more money on this, not just asking us to help.

[00:44:00] Frank McKenna: Yeah, 'cause I'm in the fraud fighting community, 30 years in fraud fighting, and many of us are very frustrated because, as frequent reporters trying to stop the problem, we feel like we're acting as Meta's fraud force so they don't have to hire people, and we're volunteering. If we don't do it, who's going to do it?

[00:44:21] Bob: If we don't do it, who will? That might not be ideal, but it's reality. And so is this: imagine a world where most friend requests you get, most phone calls you get, most texts you get, most posts are AI-generated. Almost nothing is real. It sounds stark, but it all reminds me of the early days of email, when our inboxes were teeming with spam, so much spam that email became almost unusable.

[00:44:52] Bob: There was a time when the future of email hadn't yet been decided, when spam threatened to basically break email entirely. I guess we got a tech solution to that, but it wasn't a given, I don't think.

[00:45:02] Frank McKenna: Yeah, I think it's gotten under control because of Google and Gmail, where they can filter out most of it. You still get the odd ones here and there, but yeah, there will need to be a tech solution that can stop these deep fakes, and it's going to be difficult, because if you think about how much investment is going into generative AI, there's hundreds of billions of dollars going into Meta, and going into OpenAI, and Anthropic, and Google, to create these AI-generation machines that can create the perfect image, the perfect video. But there's very little money being spent or invested on the detection of those. Um, a stat I read a few years ago was that deep fake detectors are about 5 years behind the generators. So we're behind the ball in being able to find this stuff with a tech solution. I think some of these big AI companies will want to invest in or create their own detection systems so that they can stay ahead of the game, because we truly are behind on a tech solution to stop it.

[00:46:20] Bob: We are truly behind on a tech solution to this, a spam filter for fake images and video, if you will, and so you have to be that filter. Now we spent a lot of time in this episode talking about how you can detect fake videos online, but everyone who works in this space appreciates that these tips are only temporary. Eventually, the technology will get so good that there will be no time lag in conversation, no weird artifacts around faces, and so all this is really headed to one place, a cynical place perhaps, but I think the sooner we get there the safer we will all be.

[00:46:58] Frank McKenna: Yeah, there'll come a point, I think it happens to everybody, when you encounter it enough, rather than think everything is real online, you're going to switch to a completely different frame of mind, which is: everything is fake. Because everything you look at is going to start to look like AI. I think that's not a bad place to be, because it puts you in a much more protective stance when you pretty much don't trust anything at face value. You use critical thinking and maybe some techniques to avoid being scammed, but everybody will have that switch at some point. I've had that pretty much, which is why I believe almost nothing online anymore.

[00:47:46] Bob: I think this is a powerful insight. So the door is closed unless you open it occasionally for something, but for the most part, boom, it's all fake.

[00:47:55] Frank McKenna: Exactly. Yeah, I think we're going to get to that point. Each of us will hit that point at, at some point in the future, but it, it will happen.

[00:48:04] Bob: I feel like I want to start using the hashtag, it's all fake.

[00:48:07] Frank McKenna: (laughs) Yeah. You would be right more than you would be wrong, too.

[00:48:11] Bob: Yeah, but the mental switch you're describing, and I've never heard someone describe it quite as clearly as that before, is a really powerful concept, that the sooner we all get to that point the better, I think. That's great.

[00:48:23] Frank McKenna: Yeah, I think so. I think a lot of people in the industry, I work with fraud fighters, we are all at that point.

[00:48:28] Bob: I know people use the phrase zero trust, and that means something slightly different than that, but this kind of strikes me as what zero trust conceptually is.

[00:48:34] Frank McKenna: It's zero trust. It's not trusting anything you see, especially on social media: a video about an event, a horrific plane crash, a catastrophic event, you just double-check everything. Don't trust anything or take it at face value.

[00:48:52] Bob: Just assume everything is fake unless proven otherwise. It sounds extreme, but in a recent episode you heard the Washington Post's Michelle Singletary say something very similar. She suggested putting an index card with the word "LIE" right on your smartphone. Maybe you can just do that metaphorically anyway. But know this: deep fake scams aren't the future; they're here now, and you can't take anything at face value. For The Perfect Scam, I'm Bob Sullivan.

(MUSIC SEGUE)

[00:49:32] Bob: If you have been targeted by a scam or fraud, you're not alone. Call the AARP Fraud Watch Network Helpline at 877-908-3360. Their trained fraud specialists can provide you with free support and guidance on what to do next. Our email address at The Perfect Scam is: theperfectscampodcast@aarp.org, and we want to hear from you. If you've been the victim of a scam or you know someone who has, and you'd like us to tell their story, write to us. That address again is: theperfectscampodcast@aarp.org. Thank you to our team of scambusters; Associate Producer, Annalea Embree; Researcher, Becky Dodson; Executive Producer, Julie Getz; and our Audio Engineer and Sound Designer, Julio Gonzalez. Be sure to find us on Apple Podcasts, Spotify, or wherever you listen to podcasts. For AARP's The Perfect Scam, I'm Bob Sullivan.

(MUSIC OUTRO)

END OF TRANSCRIPT

The Perfect ScamSM is a project of the AARP Fraud Watch Network, which equips consumers like you with the knowledge to give you power over scams.
