Key Takeaways
- AI-generated short videos are getting slicker and harder to tell from real footage.
- Easy-to-use tools let almost anyone create fake clips in minutes.
- Viewers need to exercise restraint and check sources before sharing videos.
For years, people have been warned not to believe everything they read on the internet or on social media. With the rise of AI slop (a term for cheap, mass-produced AI-generated content, including short videos), not everything they see or hear is trustworthy either.
Indeed, these videos, which almost anyone can generate in minutes by entering text or photo prompts, stand in stark contrast to the poor-quality and sometimes grotesque content that came before, which is the chief reason the category earned the unflattering "AI slop" moniker in the first place.
But AI slop is generally becoming slicker and, while not perfect, a lot less sloppy, making it more difficult to detect what is real.
Deepfakes are designed to mimic a real person or situation via voice and/or imagery. Fake photos of the late Pope Francis wearing a designer puffer coat famously went viral a couple of years ago. Videos circulating of Taylor Swift promoting free cookware were also fake.
The latest AI slop can create what amounts to high-resolution deepfakes on steroids.
Last October, when ChatGPT developer OpenAI announced it would block “disrespectful” and vulgar user-generated AI slop clips depicting the likeness of Martin Luther King Jr. that were created in the company’s Sora 2 video-creation app, it brought renewed attention to AI slop and internet deepfakes.
Sora jumped to the top of the free-app charts in Apple's App Store after its release at the end of September, even though it initially required an invitation code. An Android version reached the top of the free Google Play Store charts when it arrived weeks later.
By January, however, the bloom was apparently off the rose. TechCrunch, citing figures from market intelligence provider Appfigures, reported that Sora installs had plunged 45 percent month over month. On March 24, with little warning and scant detail, OpenAI announced it was shutting Sora down, saying more information would follow on how people who created Sora videos could preserve them. The company has not given a date for when Sora goes away.
Such fake, TikTok-like Sora videos, which could run up to 25 seconds for users on certain paid tiers of the web version, often appeared unnervingly polished. For a time, they flooded social media.
The Sora iOS app let nonpaying users easily create AI videos up to 15 seconds long, though it capped the number of videos a person could produce in a single day.
The web version was released to the public last year.
The shutdown also scuttled a $1 billion investment deal OpenAI struck with Walt Disney to bring Mickey Mouse, Ariel, Cinderella, Luke Skywalker and more than 200 other licensed characters, costumes and props from Disney, Marvel, Pixar and Star Wars to Sora.
A Disney statement shared with media outlets indicated that the company “will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect [intellectual property] and the rights of creators.”
Even with Sora’s imminent departure, other AI tools, such as Veo 3 in Google’s Gemini, Midjourney V1, Luma’s Ray3 and Vibes in Meta AI, are lowering the barrier for almost anyone to generate uncannily realistic video clips, though some of them require a paid subscription.