It’s scary how realistic AI-generated videos are getting. What’s even scarier, however, is how accessible the tools to create these videos are. Using something like OpenAI’s Sora app, people can create hyper-realistic short-form videos of just about anything they want—including real people, like celebrities, friends, or even themselves.
OpenAI knows the risks involved with an app that makes generating realistic videos this easy. As such, the company places a watermark on any Sora generation you create via the app. That way, when you’re scrolling through your social media feeds, if you see the little Sora logo (a cute cloud with eyes) bouncing around, you know it’s AI-generated.
You can’t trust a Sora watermark
My immediate worry when OpenAI announced this app was that people would find a way to remove the watermark, sowing confusion across the internet. I wasn’t wrong: There are already plenty of options out there for interested parties who want to make their AI slop even more realistic. But what I didn’t expect was the opposite: people who want to add the Sora watermark to real videos, to make them look as if they were created with AI.
I was recently scrolling—or, perhaps, doomscrolling—on X when I started seeing some of these videos, like one featuring Apple executive Craig Federighi. The post says “sora is getting so good,” and includes the Sora watermark, so I assumed someone made a cameo of Federighi in the app and posted it on X. To my surprise, however, the video is simply pulled from one of Apple’s pre-recorded WWDC events—one where Federighi parkours around Apple HQ.
Later, I saw another clip with a Sora watermark. At first glance, you might be fooled into thinking it’s a genuine Sora generation. But look closer, and you can tell the clip uses real people: The shots are too perfect, without the fuzziness or glitching you tend to see from AI video generation. The clip is simply spoofing the way Sora tends to generate multi-shot clips of people talking. (Astute viewers may also notice the watermark is a little larger and more static than the real Sora watermark.)
As it turns out, the account that posted that second clip also made a tool for adding a Sora watermark to any video. They don’t explain the thinking or purpose behind the tool, but it’s definitely real. And even if this tool didn’t exist, I’m sure it wouldn’t be too hard to edit a Sora watermark into a video, especially if you weren’t concerned about replicating the movement of Sora’s official watermark.
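To illustrate how low that bar is, here’s a minimal sketch of the general idea (not that account’s actual tool) using ffmpeg’s standard overlay filter, driven from Python. The file names and the watermark image are hypothetical placeholders.

```python
import subprocess

def add_fake_watermark(video_in: str, logo: str, video_out: str) -> None:
    """Composite a logo onto a video in the bottom-left corner."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", video_in,      # the real source video
            "-i", logo,          # watermark image, e.g. a PNG with alpha
            "-filter_complex",
            # Pin the logo 40px in from the bottom-left corner. The real
            # Sora watermark bounces around the frame, so a static overlay
            # like this one is a telltale sign of a fake.
            "overlay=x=40:y=H-h-40",
            "-codec:a", "copy",  # pass the original audio through untouched
            video_out,
        ],
        check=True,
    )

# Hypothetical file names, for illustration only.
add_fake_watermark("real_video.mp4", "watermark.png", "fake_sora.mp4")
```

Note that this naive version pins the logo in one corner; making it bounce like the official watermark would take a time-based position expression in the same filter, which may be why the fakes circulating so far tend to be static and slightly off.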
To be clear, people were already posting like this before the watermark tool existed. The joke is to claim you made something with Sora, but post a popular or infamous clip instead—say, Drake’s Sprite ad from 15 years ago, Taylor Swift dancing at The Eras Tour, or an entire Sonic the Hedgehog movie. It’s a funny meme, especially when it’s obvious the video wasn’t made by Sora.
Real or not real?
But this is an important reminder to stay vigilant when scrolling through videos on your feeds. You have to be on the lookout both for clips that aren’t real and for clips that are real but are being passed off as AI-generated. There are a lot of implications here. Sure, it’s funny to slap a Sora watermark on a viral video, but what happens when someone adds the watermark to a real video of illegal activity? “Oh, that video isn’t real. Any videos you see of it without the watermark were tampered with.”
At the moment, it doesn’t seem like anyone has figured out how to perfectly replicate the Sora watermark, so there will still be telltale signs if someone tries to pass a real video off as AI-generated. But this is all a bit concerning, and I don’t know what the solution could be. Maybe we’re heading toward a future in which internet videos are simply treated as untrustworthy across the board. After all, if you can’t determine what’s real or fake, why bother trying?