r/vfx
Biggest Mexican film budget at the time: "Venganza," 450+ VFX shots.
Supervised VFX on "Venganza," streaming on Amazon now. 450+ shots. This was the biggest movie budget in Mexico at the time, and VFX still kinda worked like it was a scrappy indie.
More than half the shots ended up being fixes for production problems that popped up.
I was on set every day for the full 8 weeks; production was so stretched that we were improvising a lot on the fly. I had worked very closely with the director before, and he has a lot of VFX experience as well, so it was great in that sense. The DP was great too.
The chase scene plates were a last-minute scramble. The road was only closed for filming for a couple of hours, so I rigged a pickup with Komodos on the back and sent it chasing the hero cars. I was supposed to have 6 cameras, but the DP repurposed 3 of them, so I ran one pass with 3 on one side, then swapped the rig to the other side for a second pass. A lot of the shots don't line up perfectly, but it was that or no plates at all.
For the market sequence, no dedicated plate pass was scheduled either. We jerry-rigged a rig off the back of the hero truck with enough cameras to cover 180 degrees on each side, so we could pull plates live while the stunt team worked.
The hotel sequence was supposed to be back projection. Days before the shoot it got changed to bluescreen, too late to re-light, so we shot bluescreen with blue light coming in from outside as moonlight. Everything in frame was blue. Every keyer's nightmare.
A night scene in the last days of the shoot couldn't be shot at night. Zero prep, a couple of hours to test a day-for-night approach, then a skeleton crew of the director, the DP, a couple of camera guys, and me went back later to grab practical light elements to comp in and regrade.
The final scene background is fully CG. We couldn't shoot plates because of the logistics of the location, so we rebuilt the environment from drone photography, flying the drones inside the building to capture it.
Many more stories like this.
First time supervising at this scale. Happy to get into any of it.
We had the stunt team that did one of the John Wick movies and the Dungeons & Dragons: Honor Among Thieves movie. From rumors I heard on set, their budget was 20% of the movie's budget, haha. Our VFX budget was very, very tight, not even a fraction of that.
Here's the link to the movie and some pics from the shoot. Reddit has taken this post down twice, I guess because of the blood pics, so I'm not posting those.
Looking for a VFX artist to convert 2D battlemaps into animated ones. More info below.
The example is one of my static 2D maps with the added effects I want. The effects I need are, imho, pretty straightforward: small lights flickering, energy movement, light flickering, water movement. I watched one of my friends do this using Wallpaper Engine and Blender. With that said, I'm not entirely sure what a professional would charge, so please drop a comment with your portfolio and I'll reach out.
Important points:
- The animations need to loop; an 8-10 second loop is more than enough. Audio and music may be added.
- Final file format should be mp4 or webm.
- I'm looking for a long, LONG term artist for this, someone who could potentially do multiple maps each month. Not looking for a one-time or one-off commission.
- If you have experience with TTRPGs, whether professionally or as a hobbyist, you get extra points.
Alternative to Marvelous Designer in Blender! Developed by an ex-Disney engineer, HiPhyEngine is an all-in-one, high-fidelity simulation engine!
Developed by an ex-Walt Disney Animation Studios engineer, HiPhyEngine aims to provide the most powerful character simulation engine for animation and VFX! HiPhyEngine simulates rigid bodies, cloth, hair, and soft bodies all in one, and guarantees intersection-free results!
Unlike other commercial software, you just pay once and keep HiPhyEngine forever! We also provide a 6-month trial period!
Check out HiPhyEngine here: https://hiphyengine.github.io/
We have just released a tutorial series on cloth tailoring and shot work for HiPhyEngine!
Follow our YouTube channel for more tutorials: https://www.youtube.com/@HiPhyEngine
We are constantly adding more tutorials and new features as well!
Looking for EU/UK/Nordics cloud GPU provider with Windows
I am trying to find a cloud GPU provider offering a Windows VM in Europe, ideally EU / UK / Nordics, for 3D work.
My primary target is:
- NVIDIA RTX 5090, 32 GB VRAM, Windows
Acceptable alternative:
- NVIDIA RTX 6000 Ada Generation, 48 GB VRAM, Windows
My hard requirement is that the GPU must run in WDDM mode, or otherwise expose DirectX / D3D11 / D3D12 properly inside the guest OS.
TCC-only data center GPU setups are not usable for my workload.
My use case includes:
- Autodesk Maya 2026
- Arnold / MtoA
- Unreal Engine 5
- other DirectX-dependent 3D tools
For the RTX 5090 this is usually less of a concern, because it is a consumer-class card and runs WDDM by default.
For RTX 6000 Ada, I need explicit confirmation that on the provider’s Windows image:
- WDDM is available
- DirectX works in-guest
- this is not blocked by vGPU licensing or compute-only policy
I am specifically looking for providers that can confirm:
- Europe data center location
- exact GPU SKU
- Windows image options
- WDDM / DirectX availability in the guest OS
- hourly pricing
- billing increment
- whether the setup is bare metal passthrough, vGPU, or RTX vWS
I already know the usual names like TensorDock, Paperspace, RunPod, Vast.ai, Lambda, CoreWeave, AWS, Azure, and GCP, so I would especially appreciate lesser-known providers.
If you have first-hand experience, it would really help if you could share:
- provider name
- region / city
- exact Windows image used
- output of nvidia-smi -q showing WDDM vs TCC (see the quick-check snippet after this list)
- whether dxdiag sees the NVIDIA GPU
- whether D3D11 / D3D12 works
- whether Maya / Arnold / UE5 actually run properly over RDP / Parsec / NICE DCV / similar
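For a quick sanity check on a trial VM, this is roughly what I'd run, a minimal sketch assuming nvidia-smi is on the PATH (the driver_model fields are only meaningful on Windows drivers):

```python
# Quick WDDM-vs-TCC check on a Windows VM.
# Assumes nvidia-smi is on PATH; driver_model.* is only meaningful on Windows.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,driver_model.current,driver_model.pending",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # e.g. "RTX 6000 Ada Generation, WDDM, WDDM"; TCC means no DirectX
```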
Even a “this provider looked promising but turned out to be compute-only” reply would be useful.
Thanks.
How would you go about this complicated planar tracking?
What would be the best way to track the surface of this can so I can add simple text to it? The main problem is that the surface is reflective, the shot wasn’t done with a low shutter speed, and the can moves quite a bit. I masked the hand as it opens the can, but the overall motion and the hand blocking part of the can make it much harder to track. I’ve already tried After Effects, Blender, DaVinci Fusion, and I’m currently working in Mocha Pro. I also tested the Find Edges effect to simplify the surface and reduce reflections, but that didn’t really help in this case. If anyone wants to take a look and help, I can share the OCF.
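For reference, the direction I've been poking at outside the dedicated trackers is feature tracking plus a per-frame homography fit, something like this minimal OpenCV sketch (the file name and mask rectangle are placeholders, and a homography only approximates the curved can surface):

```python
# Track features inside the can label and fit a per-frame homography.
# Assumes OpenCV (pip install opencv-python); "can.mp4" and the mask
# rectangle are placeholders, and a homography is only an approximation
# of the curved, reflective can surface.
import cv2
import numpy as np

cap = cv2.VideoCapture("can.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Rough mask over the can label on frame 1 (stand-in for a hand-drawn roto).
mask = np.zeros(prev_gray.shape, np.uint8)
mask[200:600, 300:500] = 255

pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                              minDistance=7, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_old = pts[status.ravel() == 1]
    good_new = nxt[status.ravel() == 1]
    # RANSAC drops points that jump onto reflections or the occluding hand.
    H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 3.0)
    # Accumulating H frame to frame places the text; expect drift over
    # long shots, so keyframe corrections would still be needed.
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
```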
Software for morphing image A into B by using pairs of corresponding points and interpolation
Hi all. I have a couple of images: one is an aerial-view projection of the other, a landscape. I'd like to generate an animation that shows the transformation of the original image into the aerial view, by pairing corresponding points between the two images.
This step must be done by hand, since some pairs are not obvious at all, and some will have to be reasonable approximations due to limitations of the projection.
Since of course it won't be realistic to pick every pixel pair, some kind of reasonable mesh interpolation is needed. It would be great if the point pairs could be kept in a list I can check, edit, delete from, and add to.
I'd expect the output to be either a video or an image sequence, with a configurable number of frames.
Do you have any software suggestion, hopefully using free/open source software?
The mesh transform in DaVinci Resolve is not a good solution; I've already tried it.
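To make the algorithm I'm describing concrete, here is a minimal sketch of a point-pair morph, assuming OpenCV, NumPy, and SciPy (the file names and point lists are placeholders, and both images are assumed to be the same resolution):

```python
# Point-pair morph: triangulate the hand-picked correspondences, warp
# both images toward interpolated points each frame, then cross-dissolve.
# Assumes OpenCV, NumPy, SciPy; file names and point lists are placeholders.
import cv2
import numpy as np
from scipy.spatial import Delaunay

img_a = cv2.imread("original.png").astype(np.float32)
img_b = cv2.imread("aerial.png").astype(np.float32)  # same resolution assumed
h, w = img_a.shape[:2]

# Hand-picked correspondences; corners are added so the mesh covers the frame.
corners = [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]
pts_a = np.float32([(120, 80), (400, 150), (260, 420)] + corners)
pts_b = np.float32([(150, 200), (380, 220), (240, 380)] + corners)

tris = Delaunay(pts_a).simplices  # triangulate once, reuse every frame

def warp(img, src, dst):
    # Piecewise-affine warp of img taking src points onto dst points.
    out = np.zeros_like(img)
    for tri in tris:
        M = cv2.getAffineTransform(src[tri], dst[tri])
        warped = cv2.warpAffine(img, M, (w, h))
        tri_mask = np.zeros((h, w), np.uint8)
        cv2.fillConvexPoly(tri_mask, np.int32(dst[tri]), 1)
        out[tri_mask == 1] = warped[tri_mask == 1]
    return out

frames = 60  # configurable frame count
for i in range(frames):
    t = i / (frames - 1)
    mid = np.float32((1 - t) * pts_a + t * pts_b)  # interpolate control points
    frame = (1 - t) * warp(img_a, pts_a, mid) + t * warp(img_b, pts_b, mid)
    cv2.imwrite(f"morph_{i:04d}.png", np.uint8(np.clip(frame, 0, 255)))
```

Any tool that keeps an editable list of point pairs and drives this kind of piecewise warp would fit; the sketch just shows the interpolation step I mean.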
Thanks!
My 5-year-old told me he could fly. Here's what it took to prove him right. 🚀
What you're seeing is a seamless transition from a real handheld shot captured on a mobile camera, into a fully AI-generated flight sequence. No studio. No budget. No crew.
18 months ago, a VFX shot like this would have required a green screen, a motion graphics team, and a post-production budget most indie filmmakers don't have. Today, an open source model running locally can close that gap in an afternoon.
I shot this on a mobile camera, composited in DaVinci Resolve, and used LTX Video 2.3 for the AI flight sequence.
What excites me isn't the wow factor of the final shot. It's what this signals for independent creators and small studios. This is what democratized filmmaking actually looks like. Not a trend. A genuine shift in who gets to tell visual stories.
I'm a fresher; I completed my VFX course at Zee Institute (India). I want to do something in rotoscoping, but I have no clue how to start. Any help will be appreciated, thanks!
Showreel: https://youtu.be/6US-VLv0OYc
I applied to a bunch of top companies and it's been a couple of days; I haven't received any response, and my emails weren't even opened. Any help or referrals would mean a lot to me, thanks in advance.
I recreated the T Rex Cutscene from Tomb Raider Anniversary in UE5
Hello all! Since the announcement of Tomb Raider: Legacy of Atlantis, I've been itching to create some Tomb Raider content. So I made the T-Rex cutscene from Tomb Raider Anniversary in Unreal Engine 5. I hope I did justice to the original.
TIL James Cameron rejected studio notes from Fox executives about making Avatar (2009) shorter, reminding them that his previous film Titanic (1997) paid for the building they were meeting in.
variety.com
How would one do this shot? Original reel by lenny_motion on Instagram
I've been trying to figure out how one would pull off this shot, but I can't really seem to get it.
Maybe through Gaussian splats or something? But they're relighting the background to a very drastic extent, and I feel like Gaussian splats can't be relit that well right now. I also noticed that the red car is 3D and comped in, and its tracking is a little off in the beginning, like it's sliding on the floor a bit, so the opening environment is definitely real. The guy's hand has also been rotoscoped when he's about to jump. What do you guys think? Was this done with gen AI?
Doubts about Comfy-style workflows
Hi! Lately I've been seeing more and more VFX artists sharing their Comfy workflows, showing how they can "render" scenes they previously built in 3D (Maya, Houdini, etc.), being amazed that the render takes 1-2 minutes, and comparing that to what a traditional render would cost.
I don't really understand the point of this comparison, since the levels achieved are vastly different. While the AI "render" took 1 minute, it also has a ton of flaws, imperfections, and hallucinations (sometimes it even changes elements of the original layout), which puts the result far from high-end VFX standards. With a traditional render you get exactly what you are looking for, at the highest level.
I understand that tools like this are useful for things like improving workflows, making time-consuming stuff easier, previs, and generating different ideas and iterations, but I'm skeptical about this kind of workflow achieving final-frame quality, at least for cinema-quality VFX.
Meanwhile, realistically, I see real-time rendering as more like the future, since there you get 3D quality and precise control at real-time speed.
I don't know why we are ignoring this tech, which is also advancing in big steps, getting closer every year to the needed render quality, but in real time.
What's the point of scratching your head with a tool like Comfy, trying to make something similar to what we can do in 3D, but worse? It doesn't bring anything new to the table; I even find it inefficient for production.
Added 6 new free marble PBR textures this week
This week I added 6 new free marble materials to my texture library.
The focus was on realistic seamless stone surfaces for archviz, games, interior visualization, and 3D rendering.
The new set includes a mix of white, black, and green marble styles, from clean soft veining to more dramatic natural patterns.
All textures are seamless and include full PBR maps:
Albedo, Normal, Roughness, Height, and AO
They’re ready for use in Blender, Unreal, Unity, Corona, V-Ray, and other 3D workflows.
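If it helps, here is roughly how the maps wire into a Principled BSDF in Blender via Python, a minimal sketch (the texture file paths are placeholders):

```python
# Minimal sketch: hook the PBR maps to a Principled BSDF in Blender.
# Run in Blender's Python console; the texture file paths are placeholders.
import bpy

mat = bpy.data.materials.new("Marble")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

def tex(path, non_color=False):
    node = nodes.new("ShaderNodeTexImage")
    node.image = bpy.data.images.load(path)
    if non_color:  # data maps must bypass color management
        node.image.colorspace_settings.name = "Non-Color"
    return node

links.new(tex("marble_albedo.png").outputs["Color"], bsdf.inputs["Base Color"])
links.new(tex("marble_roughness.png", True).outputs["Color"], bsdf.inputs["Roughness"])

normal_map = nodes.new("ShaderNodeNormalMap")
links.new(tex("marble_normal.png", True).outputs["Color"], normal_map.inputs["Color"])
links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])

# The height map can drive true displacement via the Material Output node;
# AO is typically multiplied into the albedo instead.
disp = nodes.new("ShaderNodeDisplacement")
links.new(tex("marble_height.png", True).outputs["Color"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"], nodes["Material Output"].inputs["Displacement"])
```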
Free download:
https://polyscann.com/assets
I’m also creating more seamless photogrammetry PBR textures, so if there’s a material you want next, let me know.
I’m curious too, which photogrammetry textures do you think are most in demand in the market right now?
Feedback is always welcome.
Anyone else feel burned by Foundry’s shift from perpetual to subscription?
I’m trying to get a sense of how widespread this is and whether others feel the same way.
A couple years ago, Foundry moved Nuke to a subscription model, but they told existing perpetual license holders we could continue paying for maintenance. They also encouraged people to buy additional perpetual licenses before a cutoff date to “lock them in.”
Now, not long after that, they’re ending maintenance for perpetual licenses entirely. If you want updates or new versions, you have to switch to subscription. That feels like a pretty sharp reversal from the earlier messaging.
What makes this worse is how tied these licenses are to maintenance. Moving licenses between machines has already been a pain without active maintenance, and it raises a big question: what happens long-term if your hardware dies? Are these ~$10k perpetual licenses effectively on a timer?
I’m curious:
• Did anyone else buy additional licenses based on their messaging at the time?
• How are you planning to handle this shift?
• Has anyone already run into issues moving or preserving their licenses without maintenance?
If enough people feel misled here, I’d be interested in exploring options for pushing back in a more organized way.
Would appreciate hearing other experiences—good or bad.
Came across Streamable Gaussian Splatting. You'll never guess the use case though. (Open with Caution)
How is Apple pulling off these screen replacements? (capture vs full VFX?)
Just watched Apple Education: Ready for Every Learning Opportunity and I’m genuinely curious how they’re pulling off some of the screen replacements.
A few things I’m trying to wrap my head around:
Are they just doing high-res capture of the UI (via capture card, etc.) and comping it in? Fully animating all of that feels like such a massive amount of work that still wouldn’t get near a high-res capture. Or is this just insanely thorough pre-pro? As in, pre-built UI/graphics (Keynote, motion files, etc.) that are designed specifically for the shoot and then matched in post?
In some shots, the tracking feels so insanely precise. I know they’re definitely using robotic camera arms so they can replicate moves and dial in lens data, but even then it seems like sooooooo much effort. Especially since you can clearly see real typing in some moments and real reflections. Feels risky if they ever needed to swap UI later since it’s kind of baked in.
The optical details are what really sell it for me. The chromatic aberration, subtle blur, distortion all feel super natural. The only thing that occasionally gives it away (to me at least) is a bit of that “venetian blinds” effect, but even that’s so minor.
Would love to hear how people think this was approached, or if anyone’s worked on something similar!
Motion trackers turn out flat
This is my first attempt at motion tracking. I have watched a dozen tutorials.
I want to do a classic cloth roll-out on a clip with very little movement (this is the clip: https://www.pexels.com/video/majestic-view-of-trevi-fountain-in-rome-32175566/).
I set the solver to tripod since there is no parallax, and I get a solve error of 0.25, but the trackers come out completely flat and I don't know how to proceed, since there is no depth to build my scene.
I feel stuck on something that should be easy :P