Curious if AI can truly create useful video content or if it’s just another flashy tech demo? Here’s what we found.
Sora is OpenAI’s first real attempt at text-to-video generation, promising high-quality, realistic footage from a simple text prompt. That means you type in something like “Person walking their dog on the beach, watching the waves with the sun setting, shot in cinematic style”, and Sora generates the video for you.
Sounds too good to be true, right? We thought so too.
Since we’re always looking for tools that can help with content creation, we tested Sora in a few different ways:
We started with something basic: “A person walking down a busy street with a coffee, with cars passing by them.”
The result? Surprisingly good: smooth motion, natural-looking lighting, and realistic detail. The people and vehicles moved in a way that felt convincing for the most part, and the coffee cup didn’t just morph into a weird blob (which, given AI’s track record with hands, was a real win).
That said, there were some uncanny AI quirks. The person’s face wasn’t always consistent, and in both video options Sora generated, the person holding the coffee simply stood still. But for a first test? We were impressed.
Next, we pushed the boundaries of creativity with: “A neon-lit futuristic city with flying cars.”
Sora delivered an incredible aesthetic. The cityscape was breathtaking: towering skyscrapers glowed with vibrant neon, and the skyline looked straight out of a sci-fi blockbuster.
But once things started moving, the illusion cracked. We ran into a common challenge with AI-generated video: physics doesn’t always behave as expected. Some cars floated in unnatural ways, drifting like balloons with lights shooting out in front of them rather than flying with purpose. Others seemed caught in an infinite spin, rotating sideways with no clear propulsion. Occasionally, objects merged in ways that defied logic: buildings subtly shifted form, or vehicles phased through structures.
It was visually stunning, but it highlighted a key limitation: ensuring realism in movement is still a work in progress.
This is where things got tricky. We wanted to see if Sora could generate polished, professional-looking content for ads or social media, so we tested a series of brand-focused prompts, including a map of New Zealand and company logos.
Some results were usable with a bit of tweaking (adjusting length or zooming in), particularly for creative visuals or background footage. However, when it came to specific brand elements, it fell short. The map of New Zealand and logo prompts, for example, were complete misses. Sora also lacks the ability to edit existing footage, so you can’t fine-tune details the way you would with traditional video tools.
✅ Great for concepting & inspiration – If you need quick visuals for a pitch or brainstorming session.
✅ Good for background & filler content – Abstract scenes, landscapes, and generic footage work well.
❌ Not great for highly specific brand content – You’ll still need traditional tools for logos, product videos, and anything that requires pixel-perfect accuracy.
❌ Physics & motion can be weird – AI still struggles with natural movement in complex scenes.
Is Sora the Future of Video Creation?
Not yet—but it’s getting there.
For now, Sora is more of a creative experiment than a full-fledged production tool. It’s an exciting glimpse into the future, but for serious video work, you’ll still need human editors, cameras, and proper production. That said, if OpenAI keeps improving it, we might not be too far from AI-generated videos becoming a real alternative.
So, should you try it? If you’re curious, definitely. If you need professional marketing videos tomorrow? Stick to the pros.
Let us know: what would you test Sora with?