r/GaussianSplatting

Can I rent a remote or virtual high-spec Windows machine with a GPU and CPU for 3D Gaussian Splatting?

Hi Guys,

Having now reached a conclusion, I'd like to share my experience. My Mac mini, equipped with 16GB of RAM and a 512GB drive, relies on swap, and even that isn't equipped to handle projects like 3D Gaussian Splatting. Someone here suggested I try RunPod, a command-line computing service. I paid the minimum $10 fee and used it for a project. The resulting .PLY file was created with Nerfstudio, COLMAP, and Splatfacto, and the results were very bad.

I’ve actually started wondering about my specific requirement, which is scanning a human face. Is this technology even ready for that yet? And I'm curious: just like RunPod, are there any GUI-based Windows desktop rental services that provide access to a high-powered GPU with plenty of RAM? I’d now like to try some of the GUI-based apps that independent developers have listed here.

u/augustya15 — 3 hours ago

ALIGN VOLUMETRIC 360 GAUSSIAN SPLATS WITH RAW FISHEYE IMAGES USING METASHAPE

This is a video guide on how to train full volumetric scenes from the Insta360 X5 camera into a full GS scene. In this case, I am using ERP (equirectangular projection) to create 4 synthetic fisheye (FS) images.

-----------------------------------------------------
VIEW LEVEL HERE
-----------------------------------------------------
https://playcanv.as/b/e376e1ad

-----------------------------------------------------
TUTORIAL
-----------------------------------------------------

  1. Export the video out of Insta360 Studio
  2. Cut the video into frames using FFMPEG
  3. Convert the frames into fisheye images using the batch-process script here: https://drive.google.com/file/d/1mz8jvsniE85lpqzcRjm53z1SCUJDBV30/view?usp=drive_link
  4. Align in Metashape using Fisheye as the camera calibration. It is very important that you align them in that RIG fashion: front images first, then back, then left, then right
  5. Align the chunks together
  6. Merge the chunks
  7. Export cameras and DO NOT CLICK TRANSFORM TO PINHOLE. Leave that blank or it will break the export
  8. COLMAP undistort: undistort the dataset so you can train and view it in regular GS players
  9. Train in Brush
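The ERP-to-fisheye conversion in step 3 can be sketched with NumPy alone. This is a hypothetical minimal version, not the linked batch script: it assumes an equidistant fisheye model (radius proportional to angle from the optical axis) and nearest-neighbour sampling, with +z as the forward viewing direction.

```python
import numpy as np

def erp_to_fisheye(erp, out_size=512, fov_deg=180.0):
    """Resample an equirectangular (ERP) image into a synthetic fisheye view."""
    h, w = erp.shape[:2]
    c = (out_size - 1) / 2.0
    u, v = np.meshgrid(np.arange(out_size), np.arange(out_size))
    dx, dy = (u - c) / c, (v - c) / c           # normalised fisheye coords
    r = np.sqrt(dx ** 2 + dy ** 2)
    theta = r * np.radians(fov_deg) / 2.0       # equidistant: angle off axis
    phi = np.arctan2(dy, dx)
    # Ray direction for each fisheye pixel (+z forward)
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    # Ray -> ERP longitude/latitude -> source pixel
    lon = np.arctan2(x, z)                      # [-pi, pi]
    lat = np.arcsin(np.clip(y, -1.0, 1.0))      # [-pi/2, pi/2]
    src_x = ((lon / (2 * np.pi)) + 0.5) * (w - 1)
    src_y = ((lat / np.pi) + 0.5) * (h - 1)
    out = erp[np.round(src_y).astype(int) % h,
              np.round(src_x).astype(int) % w]
    out[r > 1.0] = 0                            # black outside the image circle
    return out
```

For the front/back/left/right rig views, you would rotate the ray directions (x, y, z) by the appropriate 90° yaw before the longitude/latitude lookup.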

-----------------------------------------------------
SAMPLE DATASET
-----------------------------------------------------

FISHEYE: Fisheye dataset here
-------------------------------------------------------------------------
If you want to align it using raw fisheye: Here is the dataset

Left:
https://drive.google.com/file/d/1lmxidnfD84ZoKYEcgUiRKnzfryWAZ3CF/view?usp=sharing

Right:
https://drive.google.com/file/d/1_W_lq9Cf-gQaIscusJi8n55XH-gxZPdb/view?usp=sharing

ERP: Equirectangular Dataset Here
-------------------------------------------------------------------------
This is already stitched. Ready to convert to synthetic FS for alignment https://drive.google.com/file/d/1Y_u9C3X0jXaon14rnn90IGC2hsUEQagh/view?usp=sharing

-----------------------------------------------------
IMPROVEMENTS
-----------------------------------------------------
This workflow is a game-changer, but there are still many things to improve. Please comment here with any improvements and how the workflow worked for you.

-----------------------------------------------------
VIEW TERMS AND CONDITIONS
-----------------------------------------------------

TERMS: https://drive.google.com/file/d/1tyEpK_4n53HK4VS9FlWm_2e09bmv6JWQ/view?usp=sharing

README:
https://drive.google.com/file/d/1bSbGcWX9ZswNA_tIEer96qhPcOLPtjch/view?usp=sharing

LICENSE:
https://drive.google.com/file/d/10hLGbOCWyPLoVxgcf-o_r-T21oM9jg_b/view?usp=sharing

-----------------------------------------------------
REQUIREMENTS
-----------------------------------------------------

-Strong PC to train splats
-Anaconda & Python Environment, Numpy, OpenCV
-FFMPEG
-Sharp Frames: https://github.com/cansik/sharp-frame-extractor
-MetaShape Pro
-COLMAP

u/BicycleSad5173 — 17 hours ago

HY-World-2.0 vs World Labs / Apple Sharp (Gaussian splats)

Images are shown in order: original image || HY World || Apple Sharp

Tried the new HY-World-2.0 demo on Hugging Face for 3D world reconstruction (Gaussian splats, depth, normals, camera poses).

To me it feels closer to Apple Sharp than to World Labs’ world model right now, maybe mostly limited by compute.

What do you think?

u/Moist_Tonight_3997 — 1 day ago

Gaussian Splats in Creative Website

I'm a Blender artist and animator, and I decided to do a few animated scenes using 3DGS. Just wanted to share with folks who may appreciate these splats.

u/Potential_Drop7593 — 1 day ago

Advice on iPhone capture to 3D pipeline

Hi everyone, I've been working on an iPhone capture-to-3D pipeline and have seen little success. I almost always get a .ply full of floaters and/or one that's completely fragmented.

This is what the pipeline looks like

  1. RGB + IMU collection on iPhone
  2. Frame extraction and blur filtering
  3. GTSAM SLAM pose refinement
  4. Depth Anything V3 to generate depth priors
  5. Train a 3DGS model with Splatfacto (normal)

Any help would be much appreciated!
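For the frame extraction and blur filtering step above, a common approach (not necessarily the poster's) is variance of the Laplacian: sharp frames score high, blurry ones low, and frames under a threshold get discarded. A minimal sketch with a hypothetical `keep_sharp_frames` helper:

```python
import numpy as np

def sharpness(gray):
    """Variance of the 4-neighbour Laplacian: higher = sharper."""
    g = gray.astype(float)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def keep_sharp_frames(frames, threshold=100.0):
    """Drop frames whose Laplacian variance falls below the threshold."""
    return [f for f in frames if sharpness(f) >= threshold]
```

The threshold is scene-dependent; tools like sharp-frame-extractor pick the best frame per time window instead of using a fixed cutoff, which tends to be more robust.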

u/Glum-Phase9397 — 1 day ago

We built a 3D AI filmmaking tool because prompting felt like gambling

AI filmmaking right now feels… weirdly passive.

You write a prompt, hit generate, and hope the model gives you something close. If it doesn’t, you tweak words and try again.

That’s not directing. That’s guessing.

So we hacked together something we’ve personally wanted for a long time:

Sequent 3D

  • Drop in a 2D image
  • It becomes a 3D environment
  • You place characters
  • Move the camera exactly where you want
  • Frame your shot
  • Render it into video

The idea is simple:
Instead of prompting shots, you direct them.

We’re still early, but it already feels way more like filmmaking than prompt iteration.

Would love feedback from folks here especially people experimenting with AI video.

(early access link in comments)

u/Moist_Tonight_3997 — 2 days ago

SHARP-ML for stereoscopic images?

I really like the depth results of SHARP-ML and how it cleans up the background from foreground occlusions, but on my Quest 3, viewing splats means reducing the quality quite a bit compared to the original image.

I'm wondering if there's an easy process to turn that splat back into an image for the other eye, giving a traditional 3D stereoscopic image without the 6DoF, rendered at the best possible resolution on the computer.

I've tried 3D conversion tools like Owl3D and I don't like the results so I'm hoping with SHARP I can get something better.
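One way to get that second-eye image is to render the splat twice, offsetting the camera pose along its local right axis by the interpupillary distance. A sketch, assuming a 4x4 camera-to-world matrix whose first rotation column is the camera's right vector (the `second_eye_pose` name is mine):

```python
import numpy as np

def second_eye_pose(c2w, ipd_m=0.063):
    """Offset a 4x4 camera-to-world pose along its local right axis.

    ipd_m is the interpupillary distance in metres (~63 mm average).
    """
    right = c2w[:3, 0]          # first rotation column = right vector
    eye = c2w.copy()
    eye[:3, 3] = eye[:3, 3] + right * ipd_m
    return eye
```

Rendering the original pose for the left eye and the offset pose for the right, then packing the two images side by side, yields a standard stereo pair most headset photo viewers accept.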

u/5R_real — 23 hours ago

Can we talk about Hyperscape?

Hi,

I own a Quest 3 and I recently discovered Gaussian Splatting (GS) through Hyperscape. Given that the service seems unreliable and problematic on many levels (is it shutting down? No way to download the scans?), I’ve been looking into other ways to do GS.

But honestly, isn't the technology Meta managed to implement just insane? The scanning process is incredibly simple and even fun to use, and the entire workflow is fully automated. Even without considering the headset itself, this solution alone would be an incredible service. The rendering quality is genuinely impressive.

I don't understand why they aren't looking to develop the service further. Especially since, in reality, it’s only missing one feature: the ability to download the renders. I think many of us would be willing to pay to use this service. Instead, it feels like it's being half-shut down with an uncertain future, even though it seems like the most polished and user-friendly system out there right now.

What’s your take on this?

u/Djodei — 2 days ago

Using FFMPEG, how many frames per second should I extract for a high-quality splat?

Hi Guys,

Just wanted to gather some opinions. When you’re extracting frames from, let’s say, a 4K video shot at 60 frames per second, how many frames per second do you extract: three or five? What is ideal for getting good quality? I’m talking about running programs like Nerfstudio and COLMAP in the background, with FFMPEG to extract the images.
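For reference, ffmpeg's `fps` filter does this sampling directly: `fps=3` keeps three frames per second of source time regardless of the 60 fps source rate. A small helper to build the command (the `ffmpeg_extract_cmd` name is mine; `-qscale:v 2` keeps JPEG quality high for photogrammetry):

```python
def ffmpeg_extract_cmd(video, out_pattern, fps=3):
    """Build an ffmpeg command that samples `fps` frames per second."""
    return ["ffmpeg", "-i", video,
            "-vf", f"fps={fps}",
            "-qscale:v", "2",
            out_pattern]
```

Run it with, e.g., `subprocess.run(ffmpeg_extract_cmd("walk.mp4", "frames/%05d.jpg", fps=3), check=True)`. The right rate mostly depends on camera speed: enough overlap between consecutive frames for COLMAP to match, without flooding training with near-duplicates.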

u/augustya15 — 2 days ago

Mobile Gaussian Splatting capture that’s easy + produces clean splats? Looking for recommendations

I’ve been on the hunt for a mobile 3DGS solution that’s straightforward to use on-location, without needing a powerful desktop for processing or hours of cleanup.

Most mobile tools I’ve tried either have clunky capture workflows, produce noisy splats, or require you to export to another program just to view the final model. I’m looking for something that lets you capture with your phone, process quickly, and view/export splats right from a web platform or app.

Ideally, it works well for small to medium scenes (like historic details, small buildings, or objects) and keeps the splats stable when navigating the 3D space.

Curious what this community has found — any mobile 3DGS setups that check these boxes? Bonus points if it has both a phone app and web access for processing/exporting.

u/Low_Breadfruit_7781 — 1 day ago

Are new LichtFeld Studio versions paid, or just the binary builds?

I check the main page for new versions. But when I went to download a newer version, I got redirected to the LichtFeld Studio Portal, where creating an account to download new versions requires, well, paying for an access tier.

Does that affect only the pre-built versions? Can new versions still be built from the code on GitHub?

u/kikooooo2 — 3 days ago

Dark parts transparent

I made this sunset version of the ISS, but its dark parts came out quite transparent. What can I do to fix this? This is a synthetic scene made in Blender. I used LichtFeld Studio with the IGS+ strategy, bilateral grid, and mip filter on.

u/Flame_Python — 22 hours ago

International Space Station

This synthetic scene was built in Blender. Cameras were distributed around the station to capture comprehensive coverage, and surface points were sampled directly from the station geometry to serve as a point cloud. Both were then exported into Lichtfeld Studio for reconstruction. Trained for 30,000 iterations with 10 million Gaussians using the IGS+ strategy. The background panorama was also rendered in Blender, simulating the correct orbital altitude of the space station.

Probably the biggest issue with it is that the solar panels are quite transparent.

View: https://superspl.at/scene/cc026f6a
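Sampling surface points directly from the station geometry, as described above, corresponds to area-weighted sampling of a triangle mesh. A generic NumPy sketch (not the poster's actual Blender setup): triangles are picked with probability proportional to their area, then a uniform barycentric point is drawn inside each.

```python
import numpy as np

def sample_surface(verts, faces, n, seed=0):
    """Sample n points uniformly over a triangle mesh's surface area."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    # Pick triangles with probability proportional to their area
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    tri = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    return (v0[tri] + u[:, None] * (v1[tri] - v0[tri])
                    + v[:, None] * (v2[tri] - v0[tri]))
```

The result can be exported as a .ply point cloud to seed the Gaussian initialisation, in place of a sparse SfM reconstruction.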

u/Flame_Python — 3 days ago

Gaussian splatting for interiors: where does the standard workflow start failing?

I’ve had a few good private exchanges on this, but it would be interesting to gather a wider range of experienced opinions in one discussion around scanning complex interiors.

Attached is the kind of space I mean: long, narrow, multiple rooms, changing light, lots of detail.

Curious where people think the usual workflow starts breaking down:

1. Capture

stills vs video?

360 vs DSLR / mirrorless?

iPhone apps? Tried Scaniverse > Polycam > Luma > Airvis > Teleport > Lumina

2. Alignment

Tips for correct frame extraction from video? Is RealityScan enough for this kind of interior?

3. Training

LFS or Brush? Enjoying LichtFeld Studio at the moment.

4. Publishing

Spark 2.0 or krpano?

Not really looking for theory. More interested in where people disagree, what they’ve seen fail, and what actually holds up in practice.

Especially interested in hearing from people with real hands-on experience.

u/peeeerf — 3 days ago

How can I improve my Gaussian splat? It's fine overall, but some things have been duplicated, especially the balcony and the back entrance (not the one on the balcony but the one in the backyard)

Camera used: GoPro Hero 11 Black

Software used:

Shutter Encoder
RealityScan 2.1
Brush (resolution 2048), 30K steps
Currently viewing the final splat in SuperSplat

I know some objects look muddy, but that's because I did not go around them as I should have

Computer parts:
RTX 3090
128GB RAM
AMD Ryzen 7 5800X

u/DXProductions — 4 days ago

G.Splatting — native Gaussian Splatting plugin for Final Cut Pro


Real-time Metal GPU rendering with depth sorting directly in the FCP timeline. Supports .PLY, .SPLAT and .KSPLAT.

Also built a free PLY2SPLAT converter with SH degree downsampling and splat count reduction (40/60/80/100%) — useful for optimizing dense point clouds before import into FCP.

Tested on M1 through M4.

🎬 Demo: https://www.youtube.com/watch?v=RploIBgL9F8
🔗 gphyx.com
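For context, the SH degree downsampling the converter mentions amounts to dropping the higher-order spherical-harmonic bands from each splat's `f_rest` colour coefficients. A hypothetical sketch (the `truncate_sh` name is mine), assuming the standard 3DGS PLY layout: (deg+1)^2 - 1 coefficients per colour channel, stored channel-major (all R, then G, then B):

```python
import numpy as np

def truncate_sh(f_rest, src_deg=3, dst_deg=1):
    """Keep only the SH bands up to dst_deg for each colour channel."""
    per_src = (src_deg + 1) ** 2 - 1    # e.g. 15 rest coeffs/channel at deg 3
    per_dst = (dst_deg + 1) ** 2 - 1    # e.g. 3 rest coeffs/channel at deg 1
    n = f_rest.shape[0]
    assert f_rest.shape[1] == 3 * per_src
    return f_rest.reshape(n, 3, per_src)[:, :, :per_dst].reshape(n, 3 * per_dst)
```

Degree 3 to degree 1 cuts the per-splat rest coefficients from 45 to 9 floats, which is where most of the file-size saving comes from; view-dependent highlights get flatter in exchange.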

u/Mean-Draft5365 — 3 days ago