r/aivideomaking
The Leaked Gemini Omni vs. the Soon-to-Shut-Down Sora 2
Hey everyone,
With all the hype around the leaked Gemini Omni video model, I wanted to see how it compares directly to OpenAI's Sora 2.
Just a quick heads-up on Sora 2: it is currently closed off, only available through the API, and slated to be shut down completely in the near future. I used the Bing Sora 2 video generator to make these comparison clips, and I left the AI watermark on the Sora 2 generations on purpose so you can tell the two models apart at a glance.
To make the comparison as fair as possible, I tried to keep the prompts very similar to the leaked Gemini Omni videos I found on X.
Here are the sources for the original Gemini Omni clips:
https://x.com/i/status/2053824398503678108
https://x.com/i/status/2053718756799467735
https://x.com/i/status/2053857806374064496
Here are the prompts I used, in order of appearance:
1. The Spaghetti Scene: "Create a scene with two men at a seaside table at an upscale restaurant, in outdoor deck seating. They are at a circular table with a nice white tablecloth, and all of the fancy accessories: all the spoons, forks and knives, fancy napkins, centerpiece. One man is Distinguished: A mature African-American man in his 50s with a short beard and confident posture, wearing a tailored, sophisticated suit; the other is his friend. Both are approaching the table to eat a plate of spaghetti."
2. Anime Combat: "High-energy anime combat scene in a vast meadow during sunset, featuring a black-haired boy with blue flaming markings delivering powerful punches and a kick to a stoic white-haired opponent, with dynamic blue energy effects and impact lines."
3. Chalkboard Math Proof: "A professor writes out a mathematical proof for trigonometric identities on a traditional chalkboard, explaining the step he is currently on in the equation."
Let me know which model you think handled the generations better in the comments!
Gemini Omni: New video model?
Anyone else get this pop-up when opening the app for the first time? It’s been showing up every now and then over the last few days. I don’t use video gen, but what makes this new model better? Or is it just a rebrand of Veo?
Guys, guess what?! it's not gone completely
(I don't know how to tag this.) I found that in the Bing AI image generator's video feature, if you set it to Standard, it will generate a video with Sora 2! It's not gone! We are so back!
Can I create an entire 3–4 minute video entirely using Runway today?
hey
I’ve been tasked with creating an entire video ad using AI (16:9 ratio, for website and YouTube mostly). I already have most of what I need in terms of script, ideas for the scenes etc. I intend to generate images for the scenes and create short videos that will then be stitched together.
My main concerns are:
- Can I actually do this entire process using Runway? Creating videos from images, fine tuning them, editing them together, adding music etc.
- How does the voice over/speech work? Can I accomplish that using Runway too?
Anyway, this is obviously my first time trying to create a video from scratch using AI. Just wondering if anyone’s been using it like this, and if not, what would be a good workflow to use? Maybe Gemini + Flow?
I appreciate the help. Cheers
Does anyone know what these people are using to make these types of AI vids?
Best Grok alternative for image/video generation ???
Like others, I'm fed up with the egregious moderation in Grok Imagine. What are currently the absolute best alternatives for image/video generation with Grok-level quality and more lax controls? Most of my content would be clothed and erotic/sexually suggestive, but nothing crazy. I'm willing to pay top dollar for a service that offers quality generation without the frustration, ideally a very high data limit type of subscription or service.
Cannon Studio will support Sora 2 Video, Extend, Edit, and Remix until September!
**EDIT: The Sora 2 API went down shortly after the app was shut down. I am hoping this was a mistake on their end, but it may invalidate the support described below. My sincere apologies.**
Hey everyone!
I'm Chase, the founder of Cannon Studio. Many of you may have seen my post regarding Cannon Studio's extended OpenAI Official API-based support for Sora. I worked overnight to try and recapture some of what the Sora app had to offer so that you all may continue to enjoy it until September.
Here's what I built:
- I added support for Sora-based Video Extension, Video Editing, and Video Remixing all based in the Sora 2 API provided by OpenAI officially until September.
- I extended support for High-Res 1024p Pro generations
- I added the ability to Publish your generated videos directly to Cannon Studio TV - it's no Sora but it's a place to share your work!
- GPT Image 2 Support is live!
- Lowered Prices to deliver Sora 2 at cost!
I plan to continue working to bring you direct access to the best of what Sora has to offer! If there are any features missing then please reach out to me directly and I will get back to you within the hour from 7:00 AM to Midnight CST, and I will implement your feature request within a day.
FAQ:
What else does Cannon Studio offer?
Cannon Studio is a state-of-the-art AI filmmaking and video production platform. You can build and reuse a world across multiple organized video projects, complete with Characters, Locations, Lore, and much, much more. Not only does it offer the best workflow on the market, but it also provides competitively priced access to all of the latest Image, Video, and Audio Models. Seedance 2.0, Kling 3.0, GPT Image 2, Nano Banana, Suno, ElevenLabs, you name it! Everything you need to create and more is available to you on Cannon Studio in a clean, organized way.
Is it free?
You get 100 credits free, 1 cent = 1 credit. You also get a 3 day free trial, but reach out to me and I can extend this for you!
Do we get daily free gens?
Unfortunately not, I am a solo-founder so any free generations, including the sign up credits, come out of my pocket.
How much does it cost?
| Sora 2 Standard | 720p |
|---|---|
| 4 Seconds | $0.41 |
| 8 Seconds | $0.81 |
| 12 Seconds | $1.22 |
| Sora 2 Pro | 720p | 1024p |
|---|---|---|
| 4 Seconds | $1.22 | $2.02 |
| 8 Seconds | $2.43 | $4.04 |
| 12 Seconds | $3.64 | $6.06 |
This is directly based on the Official OpenAI pricing with a small markup for Storage Costs on my end. https://developers.openai.com/api/docs/pricing?video-pricing=standard#:~:text=Price%20per%20second-,sora%2D2,-720p
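For anyone budgeting a batch of generations, the tables above divide out to a roughly flat per-second rate per tier. Here is a minimal sketch, assuming only the prices posted above (the dict and helper names are illustrative, not part of Cannon Studio's or OpenAI's API):

```python
# Cannon Studio's posted Sora 2 prices (USD), keyed by (tier, resolution, seconds).
# Transcribed from the tables above; not official OpenAI pricing.
PRICES = {
    ("standard", "720p", 4): 0.41,
    ("standard", "720p", 8): 0.81,
    ("standard", "720p", 12): 1.22,
    ("pro", "720p", 4): 1.22,
    ("pro", "720p", 8): 2.43,
    ("pro", "720p", 12): 3.64,
    ("pro", "1024p", 4): 2.02,
    ("pro", "1024p", 8): 4.04,
    ("pro", "1024p", 12): 6.06,
}

def batch_cost(clips):
    """Total cost (USD) of a list of (tier, resolution, seconds) generations."""
    return round(sum(PRICES[c] for c in clips), 2)

def per_second(tier, res, seconds):
    """Implied per-second rate for one table entry."""
    return PRICES[(tier, res, seconds)] / seconds
```

For example, one 8-second Standard clip plus one 12-second Pro 1024p clip comes to $0.81 + $6.06 = $6.87, and Pro 720p works out to about $0.30 per second, consistent with OpenAI's listed base rate plus a small markup.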
How can I trust Cannon Studio enough to make a purchase on the platform?
Our billing is powered by Stripe, we do NOT store any billing info, we have very simple and transparent pricing (1 credit = 1 cent), and we have a community full of creators actively using the application (which you can join via the site, Disc links are against the rules here :D )! Plus, you can try it for free!
Please feel free to reach out with any questions or concerns. Thank you for taking the time to read over this! I hope to see you on the platform :D
Project link: https://github.com/Anil-matcha/Open-Generative-AI
Open-Higgsfield-AI is an open source platform that lets you access and run cutting-edge AI models in one place. You can clone it, self-host it, and have full control over everything.
It’s a lot like Higgsfield, except it’s fully open, BYOK-friendly, and not locked behind subscriptions or dashboards.
Seedance 2.0 is already integrated, so you can generate and edit videos with one of the most talked-about models right now — directly from a single interface.
Instead of jumping between tools, everything happens in one chat:
generation, editing, iteration, publishing.
While commercial platforms gatekeep access, open source is moving faster — giving you early access, more flexibility, and zero lock-in.
This is what the future of creative AI tooling looks like.
A random 3 AM thought: What if Indian cinema's top actresses starred in a "Lush Life" music video?
How can I create this type of motion-transfer video with open source tools?
I found a trending Instagram channel where people are using AI to create model and dance videos, and the skin texture and movement are really good. Is there any way to make this with open source tools? I tried LTX 2.3 motion transfer but it fails; I played with so many strength settings but didn't find any good results.
If you know something about this, it would be a great help.
So is the consensus still that Seedance is better than HappyHorse, despite the leaderboards?
Please, can anyone suggest a free AI video generator?
How do you keep a character consistent across 10+ AI video clips?
Small content studio, two people, producing short-form product ads for a handful of e-commerce clients. We need the same character face and outfit to hold across a full campaign, not just one clip.
Constraints: no dedicated GPU setup, mid-tier budget, clients expect turnaround in 2-3 days per batch.
We tried Kling for the video side and Midjourney for reference frames, but the character drifts noticeably between shots even when we lock the seed and reuse the same image prompt. We also tried Phygital+ briefly, since it has Kling and Flux in one place with some consistency tooling, but we haven't gone deep enough to know if it solves the multi-clip drift problem.
What we care most about: character stability across clips, reasonable generation time, not needing a separate subscription for every model, and something a non-technical editor can actually run without me babysitting it.
For people doing recurring character-driven video ads, what's actually holding consistency for you at the clip-to-clip level, and does anything break down once you're past 10 clips in a single campaign?
Hi everyone! I’m very new to AI storytelling and filmmaking, but I have an original emotional wildlife story idea that I really want to turn into a cinematic short film. The story is about a mother deer protecting her fawn from a wolf, but the ending reveals the wolf is also trying to feed and protect her own pups. I want the audience to feel sympathy for both sides instead of seeing a simple hero-and-villain story. I’m trying to make it feel like a real emotional animated movie experience rather than just random AI-generated scenes. I especially want help understanding:
- emotional pacing
- cinematic scene composition
- background music choices
- atmosphere and lighting
- how to create stronger emotional expressions
Right now I mainly have access to Grok and Canva, so I’m trying to learn how to use them creatively for storytelling. If anyone has advice, tutorials, workflow tips, music suggestions, or beginner mistakes to avoid, I’d genuinely appreciate the help.
Openart has a monthly plan for $240 that gives you 106,000 credits. A 5-second 720p Seedance 2 video on Openart costs 400 credits, so that's 265 5-second videos at about $0.91 each. At 1080p it costs 1,000 credits for a 5-second video, so that's 106 5-second 1080p videos.
Openart
- $240 / month plan
- 265 5-second 720p videos @ about $0.91 each
- 106 5-second 1080p videos @ about $2.26 each
- $15 credit-pack top-ups give you an extra five 5-second 1080p videos or twelve 5-second 720p videos. That's $3 per 1080p video and $1.25 per 720p video
- Generation time: 2-5 minutes per video
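To make comparisons with other platforms easier, the plan math above can be checked with a few lines (all figures are from this post, not official Openart pricing; note that 106,000 / 400 divides out to 265 clips, slightly more than the rounded count quoted):

```python
# Sanity-check the Openart credit arithmetic from the post above.
# All figures are the poster's, not official pricing.
PLAN_PRICE_USD = 240.0    # monthly plan price
PLAN_CREDITS = 106_000    # credits included in the plan
COST_720P = 400           # credits per 5 s 720p Seedance 2 clip
COST_1080P = 1_000        # credits per 5 s 1080p clip

def clips_and_unit_price(credits_per_clip):
    """How many clips the plan covers, and the effective USD price per clip."""
    clips = PLAN_CREDITS // credits_per_clip
    return clips, round(PLAN_PRICE_USD / clips, 2)
```

Running `clips_and_unit_price(COST_720P)` gives 265 clips at about $0.91 each, and `clips_and_unit_price(COST_1080P)` gives 106 clips at about $2.26 each, so a head-to-head with another platform only needs its plan price and per-clip credit cost plugged in.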
If you're using Seedance 2.0 on another platform, please provide a similar analysis here so we can compare them
AI Video tools query
So I'm pretty good with SDXL and I'm really starting to get my bearings with Z-Image, so I think I'm okay with text2image. I've made LoRAs for both of those models, and not to blow my own horn, but they're pretty decent.
But video is where I fall painfully short.
I want to:
- Take a short real video clip and enhance it with AI elements (add dialogue, scenes, etc.)
- Take my SDXL and Z-Image LoRAs and use them in videos.
Ultimately, there are a lot of AI video services advertising, but I'm unsure which to go with. Local generation is not an option for me, so it would have to be a paid provider, unless someone has a fully set up, easy-to-use RunPod instance.
So....any recommendations from anyone?
I want to make an AI-generated music video with a gritty black-and-white aesthetic, visible film grain, and realistic-looking people (some inspired by famous individuals). I’m new to AI video creation and don’t really know which software or workflow would be best for this kind of project.
I’m based in the Netherlands, so the tools need to be available here. I was considering Seedance 2.0, but I’ve read that it may not be fully accessible outside China yet.
Can anyone recommend the best AI tools/software for creating cinematic, realistic music videos with this kind of style? I’d also appreciate any advice on workflows, especially for achieving a vintage 90s film look.
As the title says, I was wondering if any of you use an AI tool to automatically grab clips from Twitch and have it create a compilation of them for YouTube!
I know there’s OpusClip, but before I pay for any of these programs I wanted to ask if you use any of them and what your experiences are.