
AI Slop Is Dangerous. Here’s How to Detect It

Viral videos generated by artificial intelligence can appear uncannily real, even as Sora is being discontinued



Key Takeaways

  • AI-generated short videos are getting slicker and harder to tell from real footage.
  • Easy-to-use tools let almost anyone create fake clips in minutes.
  • Viewers need to exercise restraint and check sources before sharing videos.

For years, people have been warned not to believe everything they read on the internet or on social media. With the emergence of AI slop (a derisive term for cheap, mass-produced AI-generated content, including video), not everything they see or hear is trustworthy either.

Indeed, these all-too-easy-to-create videos, which you can generate in minutes merely by entering text or photo prompts, are a stark contrast to the poor-quality and sometimes grotesque content that came before — a chief reason the entire category earned the unflattering “AI slop” moniker at its start.

But AI slop is generally becoming slicker and, while not perfect, a lot less sloppy, making it more difficult to detect what is real. 

Deepfakes are designed to mimic a real person or situation via voice and/or imagery. Fake photos of the late Pope Francis wearing a designer puffer coat famously went viral a couple of years ago. Videos circulating of Taylor Swift promoting free cookware were also fake. 

The latest AI slop can create what amounts to high-resolution deepfakes on steroids.

Last October, when ChatGPT developer OpenAI announced it would block “disrespectful” and vulgar user-generated AI slop clips depicting the likeness of Martin Luther King Jr. that were created in the company’s Sora 2 video-creation app, it brought renewed attention to AI slop and internet deepfakes. 

Sora had jumped to the top of the free downloaded app charts in Apple’s App Store after its release at the end of September, even though it initially required an invitation code to use. An Android version of Sora hit the top of the free Google Play Store charts when it arrived weeks after the iOS version.

By January, however, the bloom was apparently off the rose. TechCrunch reported that Sora installs plunged 45 percent month over month, citing numbers from market intelligence provider Appfigures. On March 24, seemingly out of the blue and with scant details, OpenAI announced it was saying goodbye to Sora, with more still to come on how people who created Sora videos could preserve them. No date has been given for when Sora goes away.

Such fake, TikTok-like Sora videos, which for certain paid tier users on the web version are up to 25 seconds long, could appear unnervingly polished. For a time, they flooded social media.

The Sora iOS app enabled nonpaying users to easily create AI videos of up to 15 seconds long, though the number of videos a person could produce in a single day was limited.

The web version was released to the public last year. 

When OpenAI shuttered its video generation tool, it also collapsed a $1 billion investment deal it had struck with Walt Disney to bring Mickey Mouse, Ariel, Cinderella, Luke Skywalker and more than 200 other licensed characters, costumes and props from Disney, Marvel, Pixar and Star Wars to Sora.

A Disney statement shared with media outlets indicated that the company “will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect [intellectual property] and the rights of creators.”

Even with Sora’s imminent departure, other AI tools, such as Veo 3 in Google’s Gemini, Midjourney V1, Luma's Ray3 and Vibes in Meta AI, are lowering the barrier for almost anyone to generate uncannily realistic video clips, though you will have to pay to use some of them.

Entertaining fluff

Some of the newer AI slop videos are funny, mischievous memes that are entertaining or, like much of what passes through cyberspace these days, downright silly, from a cat in a mixed martial arts ring celebrating a knockout to an Elvis Presley impersonator singing about making a peanut butter and jelly sandwich.

These are generally more slickly produced than what came before, even if the outrageous content should be a tip-off that what you’re seeing isn’t exactly realistic.

And as the ill-fated Disney and OpenAI deal suggested, the lines are blurring.

Concerns of misuse

Indeed, there are still potentially troubling consequences associated with copyright, hoaxes, disinformation and the increased difficulty in determining what is credible and what is not across AI-generated videos.

“The barriers to entry to create AI disinformation and scams are lower than they’ve ever been, and the technology is better than it has ever been,” says Alex Mahadevan, director of MediaWise at the Poynter Institute. “I think what you see in Sora is the convergence of AI slop and deepfakes.”

To be sure, deepfakes have been around for several years, but people could usually figure out what wasn’t real from revealing visual imperfections: an extra finger on someone’s hand, say, or misaligned pupils.

These days, it’s harder to tell.

A recent research study from the CareSide, a home care provider in Australia, examined how well a sample of nearly 3,200 older adults in English-speaking countries, including the U.S., could correctly distinguish between AI and human-generated content: text, photos, audio and, yes, video. Scores declined significantly with age, and the study found that “older adults can detect AI-generated content only moderately better than a coin flip.”

In October, NewsGuard, an organization that rates the trustworthiness of news and information websites, found that Sora generated convincing videos that advanced “provably false claims” 80 percent of the time when prompted to do so.

“You’re seeing the ability to generate real people, and they might be historic figures,” Mahadevan says. “In the right hands, AI is a dream. But what I’ve seen is that the technology is advancing more rapidly than I think we can help people understand the dangers of it.”

Mahadevan had actually encouraged people to use Sora, if only to understand it. Then, he says, “think about how the tech might be used by political operatives. It’s pretty freaky.”

Beyond having fun, there are certainly legitimate uses of AI video technology. You might, for example, make explainer videos to show how something is done.

Through a feature called Cameos, you could appear in your own Sora videos, or in videos made by people you had authorized to use your likeness.

As an exercise, I used my own Cameo to produce an implausible clip of myself feeding M&Ms to a bear, and another where I’m doing a funky song and dance routine not quite worthy of Fred Astaire. OpenAI’s terms of use stipulated that you cannot represent what you produce as human-generated when it is not, or upload images containing photorealistic people.

Still, the abuses that surfaced serve as a warning of what can conceivably happen with other video generation tools, and they raise continuing questions about who controls the likeness of public figures and other people.

OpenAI’s decision to halt the King videos came at the behest of the civil rights leader’s family.

Other guardrails have been a mixed bag. Sora videos carry a visible Sora watermark identifying them as such, and OpenAI says it can utilize reverse-image and audio search tools to trace the origins of those videos. OpenAI also put measures in place to ensure that your audio and image likenesses are used only with your consent, which you can revoke at any time.

But none are foolproof. Tools emerged with the specific purpose of removing Sora watermarks, and some folks may be able to crop the watermarks out of downloaded Sora videos or ones shared across social media.

Moreover, even when warning labels appear, they can be easily overlooked.

A skeptic’s checklist

While it is becoming increasingly difficult to distinguish what’s real, there are some commonsense steps you can take to help verify the authenticity of what you are watching.

Who or what is behind a video? Click on the profile of the person who shared the video to discover their political leanings or if they’re trying to sell you something. For instance, if you come across a video of a celebrity hawking a health supplement, try to determine if that person would legitimately promote the product. 

If a company is mentioned, do your due diligence by researching publicly available information and by running an old-fashioned Google search.

Examine the metadata. Granted, this may require some effort, but you may be able to get at the metadata, the underlying contextual information captured about an image or video. Metadata includes the author, creation date, file size and other details that may provide clues about the content’s origins. It isn’t visible on the surface, though, and generally requires specialized tools and some technical knowledge to access.
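For the technically inclined, part of this check can be sketched in Python using only the standard library. This is a rough, illustrative heuristic, not a verification method: it assumes that a file carrying an embedded C2PA provenance manifest usually contains the literal byte string "c2pa" somewhere in its raw data. The absence of that marker proves nothing, since metadata is easy to strip, and its presence is only a clue worth following up with a proper verification tool.

```python
import os
from datetime import datetime, timezone

def surface_metadata(path):
    """Return surface-level file metadata: size and last-modified time.
    Deeper fields (camera model, editing software, etc.) require a
    dedicated tool such as exiftool or ffprobe."""
    st = os.stat(path)
    return {
        "size_bytes": st.st_size,
        "modified": datetime.fromtimestamp(
            st.st_mtime, tz=timezone.utc
        ).isoformat(),
    }

def mentions_c2pa(path, chunk_size=1 << 20):
    """Heuristic: scan the raw bytes for a 'c2pa' marker, which commonly
    appears when a file carries an embedded C2PA provenance manifest.
    Reads in chunks, keeping a small overlap so a marker split across
    two chunks is still found."""
    marker = b"c2pa"
    prev = b""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            if marker in prev + chunk:
                return True
            prev = chunk[-(len(marker) - 1):]  # keep tail for overlap
    return False
```

A hit from `mentions_c2pa` is a reason to run the file through a dedicated checker such as the Content Authenticity Initiative's verification site; a miss tells you nothing either way.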

One thing you might try: Upload a suspect video to an online verification tool from the Content Authenticity Initiative, located at verify.contentauthenticity.org. At the site, click “Select a file from your device” to upload the video in question, or drag and drop the file into the space provided. It may then tell you the video was generated by AI. As always, be careful with any file you download from the internet.

The tool is part of a technical standard developed by the Coalition for Content Provenance and Authenticity (C2PA) and is intended to provide a digital “nutrition label.” Companies behind it include Adobe, Amazon, Google, Meta, Microsoft, OpenAI and Sony.

“We’re in the unfortunate situation where you have to do more homework,” Mahadevan says.

Resist the temptation to act on emotion and share the video. Many AI-generated videos go viral, but exercise restraint. If something makes you angry or you are asked to pull out your credit card or peer-to-peer cash app, stop. Those are likely red flags that something is amiss.

Go with your gut. Ultimately, if something seems off, it likely is. If whatever is trumpeted in a video appears to be too good to be true, assume this is indeed the case. 

This story, originally published December 16, 2025, has been updated following the news that OpenAI is shutting down Sora.

The key takeaways were created with the assistance of generative AI. An AARP editor reviewed and refined the content for accuracy and clarity.
