In recent years, the internet has seen a dramatic shift in the nature and prevalence of fake content. What once consisted mainly of poorly edited images has now evolved into a sophisticated landscape of AI-generated videos and deepfakes. The line between reality and fiction is becoming increasingly blurred, making it harder than ever for users to discern what is real online.
At the center of this new wave is Sora, an advanced AI video tool developed by OpenAI. Sora has already garnered attention for its ability to produce highly realistic, AI-generated videos with remarkable detail, synchronized audio, and creative flair. However, the introduction of its latest feature, an invite-only, TikTok-style social platform called Sora 2, has raised alarm bells. On this platform, every video is entirely AI-generated. The "cameo" feature, which lets users insert real people's likenesses into virtually any scenario, makes the content eerily lifelike.
This technological leap has heightened concerns about the potential for misuse. Experts warn that tools like Sora lower the barrier to creating convincing deepfakes, which can be used to spread misinformation, manipulate public opinion, or tarnish reputations, especially those of celebrities and public figures. In response, organizations such as SAG-AFTRA (the Screen Actors Guild – American Federation of Television and Radio Artists) have urged OpenAI to implement stronger safeguards against misuse.
For everyday users, distinguishing between authentic and AI-generated videos is a growing challenge. However, there are practical steps you can take to help identify AI-created content and avoid being misled.
**Watermarks: The First Line of Defense**
One of the most visible indicators that a video was made with Sora is its watermark. Every video exported from the Sora iOS app features a bouncing white cloud logo, similar to the watermarks seen on TikTok videos. Watermarks serve as a clear visual cue that content was generated using AI, and other companies, like Google with its Gemini model, are adopting similar practices by watermarking their AI-generated images.
However, watermarks are not foolproof. Static watermarks can be easily cropped out, and even dynamic ones can sometimes be removed with third-party apps. So while watermarks are a helpful signal, they cannot be relied on by themselves to verify authenticity.
**Metadata: The Hidden Information**
Beyond watermarks, metadata provides another layer of verification. Metadata is embedded information that includes details about how, when, and with what tools a piece of content was created. For AI-generated videos, metadata may specifically indicate the use of tools like Sora.
OpenAI is a member of the Coalition for Content Provenance and Authenticity (C2PA), which sets standards for attaching content credentials to digital files. Sora videos include C2PA metadata, making it possible to verify their origins. You can use online tools such as the Content Authenticity Initiative's verification tool (https://verify.contentauthenticity.org/) to upload a video or image and check whether it carries AI-generation credentials.
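For the curious, here is a rough look at where such credentials physically live. In MP4-family files (ISO BMFF), embedded C2PA manifests are carried in a top-level `uuid` box, per the C2PA specification for that container. The pure-Python sketch below (the function name `list_top_level_boxes` is my own, not part of any standard tool) simply walks a file's top-level boxes, which is a first step toward spotting an embedded manifest. It is an illustration only, not a substitute for a real verifier such as the Content Authenticity Initiative's tool.

```python
import struct

def list_top_level_boxes(data: bytes):
    """Scan an ISO BMFF (MP4) byte stream and return a list of
    (box_type, offset, size) tuples for its top-level boxes."""
    boxes = []
    pos, n = 0, len(data)
    while pos + 8 <= n:
        # Each box starts with a 4-byte big-endian size and a 4-byte type.
        size, = struct.unpack(">I", data[pos:pos + 4])
        box_type = data[pos + 4:pos + 8].decode("ascii", errors="replace")
        if size == 1:  # 64-bit "largesize" follows the type field
            if pos + 16 > n:
                break
            size, = struct.unpack(">Q", data[pos + 8:pos + 16])
        elif size == 0:  # box extends to the end of the file
            size = n - pos
        if size < 8:  # malformed box; stop rather than loop forever
            break
        boxes.append((box_type, pos, size))
        pos += size
    return boxes

# Example: a Sora download could be scanned like this
# (presence of a 'uuid' box is a hint, not proof, of a C2PA manifest):
#   with open("video.mp4", "rb") as f:
#       print(list_top_level_boxes(f.read()))
```

Actually validating the manifest (checking its signature chain) requires dedicated C2PA tooling; this sketch only shows that the provenance data is an ordinary, inspectable part of the file rather than something hidden.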
