Why AI Slop Is Dangerous and What You Can Do to Detect It

Viral videos generated by artificial intelligence can appear uncannily real


For years, people have been warned not to believe everything they read on the internet or on social media; with the emergence of so-called deepfakes, not everything they see or hear is trustworthy, either. The burgeoning genre of all-too-easy-to-create short-form videos known as “AI slop” has elevated those concerns.

Indeed, these latest artificial intelligence-infused videos, which you can generate in minutes merely by entering text and/or photo prompts, are a stark contrast to the middling, poor-quality and sometimes grotesque content that came before. That earlier content is a chief reason the entire category earned the unflattering “AI slop” moniker in the first place.

But AI slop is generally becoming slicker and, while not perfect, a lot less sloppy, making it more difficult to detect what is real.

Last October, when ChatGPT developer OpenAI announced it would block “disrespectful” and vulgar user-generated artificial intelligence clips depicting the likeness of the Rev. Dr. Martin Luther King Jr. in the company’s Sora 2 video-creation app, it brought renewed attention to AI slop and internet deepfakes. 

Such fake, TikTok-like Sora videos, which for certain paid-tier users on the web version run up to 25 seconds long, can appear unnervingly polished, and they’ve been flooding social media.

The Sora iOS app enables nonpaying users to easily create AI videos up to 15 seconds long, though the number of videos a person can produce in a single day is limited. Sora jumped to the top of the free app download charts in Apple’s App Store after its release at the end of September, even though it initially required an invitation code to use. An Android version of Sora also hit the top of the free Google Play Store charts when it showed up weeks later.

The web version was released to the public last year.

Other AI tools, such as Veo 3 in Google’s Gemini, Midjourney V1 and Meta Vibes, are also lowering the bar for almost anyone to generate what can be uncannily realistic video clips. 

Deepfakes are designed to mimic a real person or situation via voice and/or imagery. Fake photos of the late Pope Francis wearing a designer puffer coat famously went viral a couple of years ago. Videos circulating of Taylor Swift promoting free cookware were also fake.

Sora and its ilk can create what amounts to high-resolution deepfakes on steroids.

Entertaining fluff

Some of the newer AI slop videos are funny, mischievous memes; others are entertaining or, like much of what passes through cyberspace these days, downright silly, from a cat in a mixed martial arts ring celebrating a knockout to an Elvis Presley impersonator singing about making a peanut butter and jelly sandwich. These are generally more slickly produced than what came before, even if the outrageous content should be a tip-off that what you’re seeing isn’t exactly realistic.

Mickey Mouse teams up with Sora

Lines are blurring. OpenAI just struck a deal with Walt Disney to bring Mickey Mouse, Ariel, Cinderella, Luke Skywalker and more than 200 other licensed characters, costumes and props from Disney, Marvel, Pixar and Star Wars to Sora.

Starting in early 2026, you will be able to watch curated short-form Sora videos featuring those Disney characters, including on the Disney+ streaming service, and generate your own AI videos based on your prompts. Disney specified that the agreement “does not include any talent likenesses or voices” and that Disney and OpenAI “affirmed a shared commitment to maintaining robust controls to prevent the generation of illegal or harmful content.”

Concerns about misuse

But there are still potentially troubling consequences associated with copyright, hoaxes, disinformation and the increased difficulty of determining what is credible and what is not in AI-generated videos.

“The barriers to entry to create AI disinformation and scams are lower than they’ve ever been, and the technology is better than it has ever been,” says Alex Mahadevan, director of MediaWise at the Poynter Institute. “I think what you see in Sora is the convergence of AI slop and deepfakes.”

To be sure, deepfakes have been around for several years, but people could usually figure out what wasn’t real from revealing visual imperfections: an extra finger on someone’s hand, say, or misaligned pupils.

These days, it’s harder to tell.

A recent study from The CareSide, a home care provider in Australia, examined how well a sample of nearly 3,200 older adults in English-speaking countries, including the U.S., could correctly distinguish between AI-generated and human-generated content: text, photos, audio and, yes, video. Scores declined significantly with age, and the study found that “older adults can detect AI-generated content only moderately better than a coin flip.”

Younger cohorts were also far from perfect, the study found.

In October, NewsGuard, an organization that rates the trustworthiness of news and information websites, found that Sora generated convincing videos advancing “provably false claims” 80 percent of the time when prompted to do so.

“You’re seeing the ability to generate real people, and they might be historic figures,” Mahadevan adds. “You can go on [Sora] and make the creator of the technology, Sam Altman, say [something like] ‘I’m Sam Altman, I founded ChatGPT, and here is why I want you to invest in my cryptocurrency.’ In the right hands, AI is a dream. But what I’ve seen is that the technology is advancing more rapidly than I think we can help people understand the dangers of it.”

Mahadevan actually encourages people to use Sora if only to understand it. Then, he says, “Think about how the tech might be used by political operatives. It’s pretty freaky.”

Beyond having fun, there are certainly legitimate uses of the technology. You might, for example, make explainer videos to show how something is done.

Through a feature called cameos, you can appear in your own Sora videos, or in videos from people you’ve approved to use your likeness. They in turn can give you the OK to use their cameos. You’re given the option to create your own cameo when you first sign up for the app, then decide whether to make it available to the entire Sora community or keep it to yourself. You can also set preferences for how your cameo behaves and looks to other people.

As an exercise, I used my own cameo to produce an implausible clip of myself feeding M&Ms to a bear, and another in which I’m doing a funky song and dance routine not quite worthy of Fred Astaire.

OpenAI’s terms of use stipulate that you cannot represent what you produce as human-generated when it is not. Additionally, when creating a video, Sora does not permit uploads of images containing photorealistic people.

Still, abuses can happen. 

OpenAI’s decision to halt the King videos came at the behest of the civil rights leader’s family. In a joint statement with the King estate, the company stated, “While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used. Authorized representatives or estate owners can request that their likeness not be used in Sora cameos.” 

The company has implemented other guardrails. Sora videos carry a visible Sora watermark identifying them as such, and OpenAI says it can use reverse-image and audio search tools to trace the origins of those videos. OpenAI also says it has other measures in place to ensure that your audio and image likenesses are used only with your consent, which you can revoke at any time.

But none of the guardrails are foolproof. Tools have emerged with the specific purpose of removing Sora watermarks. Folks may also be able to crop the watermarks out of downloaded Sora videos or ones shared across social media.

Moreover, even when warning labels appear, they can be easily overlooked.

A skeptic’s checklist

While it is becoming increasingly difficult to distinguish what’s real, there are some commonsense steps you can take to help verify the authenticity of what you are watching.

Who or what is behind a video? Click on the profile of the person who shared the video to discover their political leanings or whether they’re trying to sell you a bogus product. For instance, if you come across a video of a celebrity hawking a health supplement, try to determine whether that person would legitimately promote the product.

If a company is mentioned, do your due diligence by researching publicly available information and by running an old-fashioned Google search.

Examine the metadata. Granted, this may require some effort, but you may be able to get at the metadata, the underlying contextual information captured about an image or video. Metadata includes the author, creation date, file size and other details that can provide clues about a file’s origins, but it isn’t visible on the surface and generally requires specialized tools and some technical knowledge to access. One thing you might try: Upload a suspect video to the online verification tool from the Content Authenticity Initiative, located at verify.contentauthenticity.org. At the site, click Select a file from your device to choose the video in question, or drag and drop the file into the space provided. It may then tell you whether a video was generated by AI.

The tool is based on a technical standard developed by the Coalition for Content Provenance and Authenticity (C2PA) and is intended to provide a digital “nutrition label” for media. Companies behind the effort include Adobe, Amazon, Google, Meta, Microsoft, OpenAI and Sony.
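If you’re comfortable with a command line, you can also peek at a file’s metadata yourself. The short Python sketch below is one illustration of the idea, not an AARP or C2PA tool: it assumes Python 3 and the free FFmpeg package (which provides the ffprobe utility) are installed, and the file name is just a placeholder.

import json
import subprocess
import sys

def dump_metadata(path):
    # Ask ffprobe (part of FFmpeg) to report the file's container metadata as JSON.
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True)
    tags = json.loads(result.stdout).get("format", {}).get("tags", {})
    if not tags:
        print("No metadata tags found; they may have been stripped.")
    for key, value in tags.items():
        print(key + ":", value)  # e.g. creation_time, encoder, comment

if __name__ == "__main__":
    # "suspect_video.mp4" is a hypothetical example; pass your own file name.
    dump_metadata(sys.argv[1] if len(sys.argv) > 1 else "suspect_video.mp4")

Keep in mind that missing metadata proves nothing by itself: AI tools and social platforms often strip or rewrite these tags, which is exactly the gap the C2PA credentials described above are meant to close.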

“We’re in the unfortunate situation where you have to do more homework,” Mahadevan says.

Resist the temptation to act on emotion and share the video. Many Sora or other AI-generated videos go viral, but exercise restraint. If something makes you angry or you are asked to pull out your credit card or peer-to-peer cash app, stop. Those are likely red flags that something is amiss.

Go with your gut. Ultimately, if something seems off, it likely is. If whatever is trumpeted in a video appears to be too good to be true, assume this is indeed the case.
