r/AntigravityA1

This is lowkey irritating

There is a crease that I cannot get off the lens, and the cloth just pushes it around. When I wear it there's a lot of haze. It's not bad, and I noticed it disappears when I look towards the edge of the lens. I was wondering if other people are feeling the same thing.

u/Memes-makerx — 2 days ago

Antigravity Care?

I've had my trusty Mavic Air 2 since its launch, and it's been quite reliable. I'm looking at the Infinity Bundle and, lastly, at the Antigravity Care options, which run $200-$300, so I started digging into the details.

I have Best Buy Total Tech which would presumably cover the drone, but it would not cover the flyaway issues. The "discounted" fee for AG Care is $129 for accidental damage and $559 to replace the drone for flyaways.

Do most people buy Antigravity Care, go with no coverage, or choose other options?

u/awraynor — 1 day ago

My robot pal and I came up with this. I barely understand the technical details, but it does seem to work. Paste this into some app like Claude Code or ChatGPT Codex, and they'll take it from there.

The end result I'm working on is: Toss Antigravity or Insta360 native video or photo files into a folder, and end up with a splat a few hours later.

=====

If you own an Antigravity A1 360° drone and want to feed its footage into a Gaussian Splat trainer (or any other downstream tool that expects equirectangular video/images), the entire pipeline is now makesplat <folder_name> — drop your .insv and .insp files into a folder, run one command, get a .ply splat out the other end.

The hard part wasn't the splat training — that's well-trodden ground (COLMAP for SfM, Brush for the actual training; both run natively on Apple Silicon). The hard part was the first step: the public Insta360 MediaSDK refuses to stitch A1 files because the A1's lens isn't in its dispatcher. Without the SDK, no batch processing. Without batch processing, no automated pipeline. Antigravity Studio's GUI was the only path, one file at a time.

This post is a recipe for the unlock + a brief tour of the rest of the pipeline.

The full pipeline at a glance

  1. A1 .insv / .insp files
  2. → Insta360 MediaSDK in Docker (the byte patch unlocks this for A1 footage)
  3. → Equirectangular .mp4 / .jpg
  4. → ffmpeg cubemap split (6 perspective faces per frame, 90° FOV)
  5. → COLMAP automatic_reconstructor (SfM, runs CPU-only on Mac)
  6. → Brush splat training (Apple Silicon native, WGPU/Metal)
  7. → .ply Gaussian Splat

All open source. All Mac-native (Brush + COLMAP + ffmpeg) or Mac-via-Docker (MediaSDK runs as x86_64 Linux under Rosetta — no GPU needed). No CUDA, no Linux box, no cloud.

Before you start: get the Insta360 SDK

The MediaSDK isn't a free public download — you have to apply for access through Insta360's developer portal. The process is light: visit insta360.com/sdk, fill out the application form (it'll ask what platform you want, what you're building, and basic contact info — a personal/research project description is fine), and wait. Approval took me about 12 hours, but it could be longer. They email you a link to download the SDK package — the Mac/Linux flavor is the one you want for this pipeline (specifically, libMediaSDK-dev-X.Y.Z-amd64.deb for Linux x86_64, which is what runs inside our Docker container).

You don't need to mention the A1 in your application — applying as an Insta360 SDK developer is sufficient, and the byte patch in this post handles the A1-specific part.

What was blocking step 1: A1 isn't in the public Insta360 SDK

Drop an A1 .insv into the public MediaSDK (libMediaSDK-dev-3.1.1.0-amd64.deb, latest as of November 2025) and you get:

CameraName is empty. CameraLensType is Unknown.
Origin Offset: ..._10496_5248_155_...
no implemention!
ErrorCode:1; ErrorDescr: offset is not support

A1 reports lens type 155. The SDK supports lens types 41, 71, 113, 283, and so on (X3, X4, X5, and ONE-RS variants). 155 isn't in the dispatcher — Antigravity is a partner-OEM camera, and its lens profile only lives inside Antigravity-branded software (the Studio app, the Reframe Premiere plugin, the Android app — confirmed by tearing all three apart; more on that below).

The unlock: a 16-byte byte-patch per file

A1 optics are nearly identical to the X4 (8K dual-fisheye, square sensor per lens, ~2.4% sensor-size delta). If you tell the SDK "this is X4 footage", it applies X4's geometry math to A1's actual per-unit calibration values (which are stored in the same offset string and remain correct), and the result is a clean stitch with no perceptible distortion at the seam.

For .insv (video):

Find:    "_10496_5248_155_"
Replace: "_10496_5248_113_"     # X4 video lens type
Count:   4 occurrences in the file's trailer
Length:  same — preserves all MP4 box offsets

Then a 180° rotation post-process (ffmpeg -vf "vflip,hflip") to fix orientation — the A1 is drone-mounted upside-down vs the X4's handheld-upright assumption.
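
Here's that patch as a minimal Python sketch, following exactly the find/replace spec above; the occurrence-count warning and the file handling are my additions.

import sys

OLD = b"_10496_5248_155_"  # A1 video lens type inside the trailer's offset strings
NEW = b"_10496_5248_113_"  # X4 video lens type; same byte length, so MP4 box offsets survive

def patch_insv(src: str, dst: str) -> None:
    data = open(src, "rb").read()
    n = data.count(OLD)
    if n != 4:  # the spec above expects 4 occurrences in the trailer
        print(f"warning: found {n} lens-type strings, expected 4", file=sys.stderr)
    with open(dst, "wb") as f:
        f.write(data.replace(OLD, NEW))

if __name__ == "__main__":
    patch_insv(sys.argv[1], sys.argv[2])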

For .insp (photo): the SDK's image-stitcher path enforces stricter trailer integrity than the video path, and the same byte patch alone gets rejected. The trick: rename .insp → .insv so the file routes through the video stitcher. The SDK demuxes the file as a 1-frame "video", produces a 1-frame equirect MP4, and you extract the JPG with ffmpeg. Same 155→113 patch, same vflip+hflip.

That's the entire unlock. Both A1 file formats handled by one byte-replacement + one orientation correction.
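
The photo variant as a sketch, reusing the same constants as above; writing a patched copy under an .insv name (rather than renaming in place) is my choice, so the original stays untouched.

from pathlib import Path

OLD = b"_10496_5248_155_"
NEW = b"_10496_5248_113_"

def insp_to_patched_insv(src: str) -> Path:
    # Patch the lens type and save under an .insv name so the SDK's
    # extension-based dispatch routes it through the looser video stitcher.
    out = Path(src).with_suffix(".insv")
    out.write_bytes(Path(src).read_bytes().replace(OLD, NEW))
    return out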

The rest of the pipeline (briefly)

Once stitching works, the rest is conventional:

  1. Cubemap split — ffmpeg's v360=e:flat:h_fov=90:v_fov=90:yaw=N:pitch=N:w=1536:h=1536 filter splits each equirect frame into 6 perspective faces (see the sketch after this list). Cubemap faces are easier for SfM than equirect because COLMAP doesn't natively support equirectangular cameras — it wants pinhole-style perspective views.
  2. COLMAP SfM — automatic_reconstructor with SIMPLE_PINHOLE intrinsics (focal=cx=cy=768 for 1536² faces at 90° FOV). On Apple Silicon CPU, ~30 minutes for ~600 cubemap images.
  3. Brush splat training — point it at the COLMAP workspace, default 30K steps trains in ~30 minutes on Apple Silicon GPU. Outputs .ply consumable by any standard splat viewer.
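
To make step 1 concrete, here's a minimal face-splitting loop in Python. The six (yaw, pitch) pairs are the standard cube faces; the fps=2 frame sampling and the output naming are my assumptions, not part of the original recipe.

import os
import subprocess

# Six cube faces: four side views at pitch 0, plus straight up and straight down.
FACES = [(0, 0), (90, 0), (180, 0), (270, 0), (0, 90), (0, -90)]

def split_cubemap(equirect_mp4: str, out_dir: str, fps: int = 2) -> None:
    os.makedirs(out_dir, exist_ok=True)
    for i, (yaw, pitch) in enumerate(FACES):
        vf = (f"fps={fps},"
              f"v360=e:flat:h_fov=90:v_fov=90:yaw={yaw}:pitch={pitch}:w=1536:h=1536")
        subprocess.run(["ffmpeg", "-y", "-i", equirect_mp4, "-vf", vf,
                        f"{out_dir}/face{i}_%04d.jpg"], check=True)

split_cubemap("stitched.mp4", "faces")

Step 2 is then a single call along the lines of colmap automatic_reconstructor --workspace_path work --image_path faces --camera_model SIMPLE_PINHOLE (double-check flag names against your COLMAP version).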

What makes a good A1 splat

A 360° camera captures every direction in every frame, so what matters is camera-body trajectory, not orientation. For splats:

  • Drone hovering in one spot — ❌ no parallax
  • Linear flyover — ❌ each scene point seen from a narrow angle range
  • Orbit around a target — ✅ ideal — convergent multi-view
  • Multiple-altitude orbits + figure-8 — ✅ best for outdoor scenes / buildings

I learned this by training a splat from a handheld walk-through-multiple-rooms test capture. It was a needle-storm. Don't walk; orbit.

Nitty-gritty deep dive (for humans and LLMs)

File format

A1 .insv and .insp files are MP4 containers with a proprietary Insta360 trailer. The trailer ends with:

[ ... trailer body (length = N bytes) ... ]
[ <length:u32 LE> ][ 0x03 0x00 0x00 0x00 ][ "8db42d69...026bf" (32 ASCII hex chars) ]
^---- file end

The magic UUID 8db42d694ccc418790edff439fe026bf is the same across all Insta360-format files (X3, X4, X5, A1).
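
A sketch that validates this footer and slices out the trailer body, in Python. One caveat: I'm assuming the u32 length counts only the trailer body and excludes the 40-byte footer itself; if it actually includes the footer, adjust the slice.

import struct
import sys

MAGIC = b"8db42d694ccc418790edff439fe026bf"  # 32 ASCII hex chars at file end

def read_trailer_body(path: str) -> bytes:
    data = open(path, "rb").read()
    if not data.endswith(MAGIC):
        raise ValueError("no Insta360 trailer magic")
    if data[-36:-32] != b"\x03\x00\x00\x00":
        raise ValueError("unexpected bytes before magic")
    (length,) = struct.unpack("<I", data[-40:-36])  # u32 little-endian
    return data[-40 - length : -40]                 # see caveat above

print(len(read_trailer_body(sys.argv[1])), "trailer bytes")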

Inside the trailer, calibration is stored in TLV-like records. Three header types observed:

  • ba 03 ?? ?? — 13-float entry (image stitcher format)
  • b2 03 ?? ?? — 16-float entry (video stitcher format, primary)
  • c2 03 ?? ?? — 16-float entry (variant/duplicate)

Each entry's payload is an ASCII offset string of the form:

2_<float>_<float>_..._W_H_<lens_type>_<float>_..._W_H_<lens_type>_<entry_trailer>

Where 2_ indicates dual-lens calibration, the floats are per-lens calibration values (focal, principal point, rotation, optionally translation + distortion coefficients), W × H is the sensor resolution, and <lens_type> is the integer the SDK dispatches on.
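
A crude way to eyeball these values without writing a real TLV parser: regex the raw bytes for the _W_H_<lens_type>_ runs. The pattern below is my heuristic and can false-positive on other integer runs in the trailer.

import re
import sys

data = open(sys.argv[1], "rb").read()
# W and H are 4-5 digit sensor dimensions; lens_type is a small integer.
for w, h, lens in sorted(set(re.findall(rb"_(\d{4,5})_(\d{4,5})_(\d{1,3})_", data))):
    print(f"{int(w)}x{int(h)} lens_type={int(lens)}")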

Camera ↔ lens-type mapping (observed in real files)

  • Insta360 X3 — sensor 6080×3040, image lens type 41
  • Insta360 X4 — sensor 11904×5952, image lens type 71, video lens type 113
  • Insta360 X5 — sensor 11904×5952, no .insp (uses .dng + camera-stitched .jpg)
  • Antigravity A1 — sensor 10496×5248, image lens type 112, video lens type 155

X5 dispenses with the .insp format entirely — saves DNG (raw) + camera-stitched JPG instead. So the X5 photo workflow is "use the camera's JPG directly, no SDK involvement needed."
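
For scripting, the observed mapping as a Python literal (a transcription of the list above, nothing new):

# sensor (W, H) plus the lens-type integers the SDK dispatches on
LENS_TABLE = {
    "X3": {"sensor": (6080, 3040),  "image": 41},
    "X4": {"sensor": (11904, 5952), "image": 71,  "video": 113},
    "X5": {"sensor": (11904, 5952)},  # no .insp: DNG + camera-stitched JPG
    "A1": {"sensor": (10496, 5248), "image": 112, "video": 155},
}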

What was tried that didn't work (to save others time)

  • Brute-force lens-type integers 100–600 against .insp — all rejected. The image stitcher dispatcher uses a different lens-type table than the video one.
  • Patch image-format lens 112 → known supported (41 X3 image, 71 X4 image) — rejected with ErrorCode:11.
  • Corrupt the b2 03 TLV header bytes to make SDK skip the entry — rejected. The SDK validates trailer structural integrity.
  • Delete the entire 16-float TLV entries, update the top-level trailer length pointer — rejected. There are nested length/offset references in the binary metadata block at the trailer end that also need updating.

Why the .insp rename trick works

The MediaSDK CLI dispatches by file extension (visible in the public SDK's example main.cc):

if (suffix == "insp" || suffix == "jpg") {
    auto image_stitcher = std::make_shared<ImageStitcher>();
    // ... image stitcher path with strict trailer validation
} else if (suffix == "insv") {
    auto video_stitcher = std::make_shared<VideoStitcher>();
    // ... video stitcher path with looser validation
}

Renaming forces the looser-validation path. The single JPEG inside the .insp gets demuxed as a 1-frame video, stitched with X4 geometry (because we patched 155→113), and emitted as a 1-frame MP4. Extract the JPG. Done.
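
The final extract-plus-reorient step as a sketch (the -q:v 2 JPEG quality setting is my choice; vflip,hflip is the same orientation fix as the video path):

import subprocess

# Pull the single frame out of the 1-frame equirect MP4, flipping 180° on the way out.
subprocess.run(["ffmpeg", "-y", "-i", "photo_stitched.mp4", "-vf", "vflip,hflip",
                "-frames:v", "1", "-q:v", "2", "photo_equirect.jpg"], check=True)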

u/TomMooreJD — 10 days ago

Hi everyone

Installed the new update last night and noticed it mentioned Bluetooth connectivity. I've always connected all my devices (goggles, controller, drone) via WiFi, so is this just another way of transferring files?

I was kind of hoping they would implement a way to stream your goggles view to your phone, which could be done over Bluetooth, as I do struggle to see the menus in the goggles. I have been able to connect them to my phone via a capture card, but it would be easier to do it wirelessly.

Neil

u/Nij2021 — 14 days ago