r/Spectacles


I replaced my project planning process with a … rabbit

Not the usual post… This month I thought I’d try something new.

I’ve always got too many ideas and I can never pick between them, so I’ve decided I’m not going to pick anymore.

I’ll let something else handle it. More advanced than any AI system.

Deploy rabbit- Observe selection- Commit.

He already controls most of my personal life anyway, so it was only a matter of time before he took over my professional life too.

u/Far-Temporary6630 — 3 days ago

**Developer Program Applications Update**

Hey everyone — quick update: we're no longer accepting applications for the Spectacles Developer Program, so you won't find the application in Lens Studio anymore.

We'll be sharing more information on what's next later this year. Keep an eye on this subreddit for updates.

In the meantime, if you have questions, drop them below.

u/Spectacles_Team — 2 days ago

Routines - A Home OS well-being lens for Spectacles

Routines - A Home OS to help create good habits

This lens contains a journal entry, body profile, calorie counter, exercise progression system and daily calorie intake. It also has an AI assistant you can ask about anything, and it knows all about your inputs.

This is going to be a multi-month project for me as I explore utility on Spectacles. It's mostly for myself, but hopefully you guys can benefit from this experiment as well! <3

Please provide any feedback or new ideas you can 😄

It's a good use case in a way: you'd use it for a few minutes in the morning and afternoon, so battery life isn't an issue, and the value comes from accessing your offline information and stringing it together with the digital world.

Lens link: https://www.spectacles.com/lens/af09b256ae8348bb935e3927350132d4?type=SNAPCODE&metadata=01

u/Large_Possible_8209 — 8 hours ago

Snap Blog: Inside the First Spectacles Developer Bootcamp

Unfortunately I wasn't able to make it myself, but judging from the posts I've seen from people who attended, it must have been a really nice event!

newsroom.snap.com
u/siekermantechnology — 13 hours ago

Been building in Unity for a while and recently started exploring Snapchat Spectacles. The moment I opened Lens Studio I realised all my existing scenes, environments and projects were basically useless there. No way to bring any of it across.

So I spent some time building Unity2Snap. You hit one button in the Unity Editor and your entire scene gets exported and reconstructed inside Lens Studio automatically. Everything comes across including your objects, hierarchy, transforms, lights, primitives and player spawn points.

If you have been sitting on Unity projects and wanted to try Spectacles without starting from zero, give it a look.

Drop any questions below, happy to help anyone get it set up.

Repo Link : https://github.com/Pratik77221/Unity2Snap
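
For anyone curious what the Lens Studio side of an export like this can look like, here's a minimal sketch (not the actual Unity2Snap code) of rebuilding a hierarchy from an exported JSON description; the JSON shape and field names below are assumptions for illustration.

```typescript
// Minimal sketch (not the actual Unity2Snap implementation): rebuild a scene
// hierarchy in Lens Studio from an exported JSON description. The node format
// below is an assumption for illustration.
interface ExportedNode {
  name: string;
  position: [number, number, number];       // local position
  rotationEuler: [number, number, number];  // local rotation, radians
  scale: [number, number, number];
  children: ExportedNode[];
}

@component
export class SceneRebuilder extends BaseScriptComponent {
  // JSON produced by the Unity-side exporter.
  @input exportedJson: string;

  onAwake() {
    const root: ExportedNode = JSON.parse(this.exportedJson);
    this.build(root, this.getSceneObject());
  }

  private build(node: ExportedNode, parent: SceneObject) {
    const obj = global.scene.createSceneObject(node.name);
    obj.setParent(parent);

    const t = obj.getTransform();
    t.setLocalPosition(new vec3(node.position[0], node.position[1], node.position[2]));
    t.setLocalRotation(quat.fromEulerAngles(node.rotationEuler[0], node.rotationEuler[1], node.rotationEuler[2]));
    t.setLocalScale(new vec3(node.scale[0], node.scale[1], node.scale[2]));

    node.children.forEach((child) => this.build(child, obj));
  }
}
```

The real exporter also carries lights, primitives and spawn points; the idea is the same: serialise on the Unity side, then walk the tree and recreate it on the Lens Studio side.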

u/itxpratik — 7 days ago

It's now official: the first AR glasses in the world will be announced, with pre-orders, on June 16th. Get ready, Zuck is not ready for that 👀

u/Matcorp456 — 14 days ago

Need longer recordings

Hello! I'm new to the subreddit and haven't searched deeply, so apologies if this has been covered already…

I was wondering if there has been any progress on recordings longer than 30 seconds, or on the ability to livestream my view from Spectacles to another source (other than my phone)?

I'm well past the prototyping stage and pretty deep into the MVP stage for an AR-first travel experience, but I would like to be able to capture long-running videos of the app, or stream it to another source where I can save it, even if that means accepting some compression.

u/stevejabs — 1 day ago

Hello there!

I know that QR code detection will be available later on Specs, but for those who are waiting and want to try it out right now, here is a tiny example project that requires no server or remote API; all computation is done on Spectacles.

Here I do realtime detection during the one-handed crop-like gesture. When a QR code is detected, the URL is displayed and the website is opened in the webview. Keep in mind that there are constraints around the size and contrast of the QR code, so adjust the brightness of your computer screen and display the code big enough.
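
As a rough illustration of that flow (not code from the repo), here's a small gate that throttles decoding while the gesture is held and reacts only once per newly decoded URL; decodeQr and isCropGestureHeld are hypothetical stand-ins for the project's own on-device decoder and gesture check.

```typescript
// Sketch only: throttle QR decoding while a gesture is held and react once per
// new URL. `decodeQr` and `isCropGestureHeld` are hypothetical placeholders for
// the project's own on-device decoder and gesture check.
type QrDecoder = () => string | null;   // returns a URL, or null if nothing decoded
type GestureCheck = () => boolean;

export class QrGate {
  private lastUrl: string | null = null;
  private lastRunMs = 0;

  constructor(
    private decodeQr: QrDecoder,
    private isCropGestureHeld: GestureCheck,
    private onNewUrl: (url: string) => void,
    private intervalMs = 250 // don't decode every frame; it's expensive on device
  ) {}

  // Call this from an UpdateEvent; timeMs is the current time in milliseconds.
  update(timeMs: number) {
    if (!this.isCropGestureHeld()) {
      this.lastUrl = null; // releasing the gesture re-arms the gate
      return;
    }
    if (timeMs - this.lastRunMs < this.intervalMs) {
      return;
    }
    this.lastRunMs = timeMs;

    const url = this.decodeQr();
    if (url && url !== this.lastUrl) {
      this.lastUrl = url;
      this.onNewUrl(url); // e.g. show the URL and open it in the webview
    }
  }
}
```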

I will very probably add other examples to this project, like loading images or 3D models directly, or establishing a secure connection between two devices.

Let me know in the comments what you think the most interesting use case would be ;)

https://github.com/HyroVitalyProtago/CropQR

u/HyroVitalyProtago — 9 days ago

Made it to the XRCC Berlin finals!

We're working on this Spectacles Lens for showing and interacting with public transport data, and submitted it to the XRCC Berlin hackathon. Got through to the finals!! Looking forward to seeing some of you there in June!

We're working towards publishing the Lens, so hopefully you can all try it soon.

youtube.com
u/siekermantechnology — 7 days ago

After about four months since the last public drop, I shipped MiNiMIDI v3 as a proper upgrade — not just new visuals, but problems I’d been turning over in my head: how to keep a BPM slider honest when playback is DynamicAudioOutput + huge Lyria buffers, and how to make the crossfader feel smooth on Spectacles instead of fine in Editor and dead on device. Preview on PC was hiding how mean those paths are to the wearable. There’s also a spectrum ring up top — pinch around it for a theremin-style layer and extra control over the vibe while you’re mixing.

What’s new in this version

  • Audio engine (Spectacles): Per-layer level via AudioComponent.volume next to DynamicAudioOutput where possible — avoids full-buffer PCM scaling on every fader move.
  • Stability: Serialized DynamicAudioOutput updates (max one heavy pump per frame) + debounced / gated slider paths so crossfader + BPM don’t stack lethal work in one tick (see the sketch after this list).
  • Crossfader: SIK-aware (real min/max), owner-resolved layers, lifecycle-safe queues (no delayed pumps after release / replace).
  • BPM: Slider path tuned for large stems — debounced resample, commit on release so scrubbing stays usable on device.
  • UX / build: Auto-generated pad + control layout from config (far less manual Inspector wiring).
  • Expression: Spectrum ring — pinch navigation + theremin-style bed layered on the mix.
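
Here's a minimal sketch of the two patterns the Stability and BPM bullets describe, at most one heavy pump per frame plus sliders that preview cheaply while scrubbing and commit the expensive work on release (not the MiNiMIDI source itself):

```typescript
// Minimal sketch (not the MiNiMIDI source) of the two patterns described above:
// at most one heavy audio "pump" per frame, and slider changes that preview
// cheaply while scrubbing and commit on release.
export class FramePump {
  private pending: (() => void) | null = null;

  // Queue heavy work (e.g. pushing a large PCM buffer); later calls in the same
  // frame replace earlier ones instead of stacking.
  schedule(work: () => void) {
    this.pending = work;
  }

  // Call exactly once per frame from an UpdateEvent.
  runOnce() {
    if (this.pending) {
      const work = this.pending;
      this.pending = null;
      work();
    }
  }
}

export class DebouncedSlider {
  private lastValue = 0;

  constructor(
    private onPreview: (v: number) => void, // cheap feedback while scrubbing
    private onCommit: (v: number) => void   // expensive work (e.g. resample) on release
  ) {}

  onSliderMoved(value: number) {
    this.lastValue = value;
    this.onPreview(value);
  }

  onSliderReleased() {
    this.onCommit(this.lastValue);
  }
}
```

Cheap per-layer changes like AudioComponent.volume can bypass the pump entirely; anything that touches the large PCM buffers goes through it at most once per frame.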

Repo: https://github.com/urbanpeppermint/MiniMIDI-v3

Design credit:  https://www.reddit.com/user/AurixEmberwave/ fully redesigned the MIDI console layout

u/Urbanpeppermint — 13 days ago

I made a script that simplifies raycasting with custom meshes!

It works with 3D world-space lines as well as screen positions.

It's in review in the Asset Library, and you can find this example project here. Hope you find it helpful! :)

u/Max_van_Leeuwen — 10 days ago

Hi, all!

I posted my demo SpecsRider a week ago (thank you all so much for your support!), and I'm trying to make a change to the speedometer in my application. If you aren't familiar with the use case, you can find the post here. Currently, the speedometer module is a text component with 3D coordinates that is parented to the main camera. This works fine when I'm riding my scooter, but it 'jumps around' quite a bit because it's an element tracked in 3D space.

I'd like to create a 2D canvas that sits on top of the user's view, and is tightly anchored to the camera (like a HUD). However, because this element is 2D, I need to calculate how to render this 2D canvas correctly in the left eye vs. the right eye so it has depth in the headset. Is there a native Specs script to handle the stereoscopic displacement for this?
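
Not an official Specs script, but for context, one common pattern is to keep the HUD as a 3D element held a fixed distance in front of the camera and smoothed each frame, so the stereo cameras handle the per-eye rendering; the distance and smoothing values in this sketch are illustrative.

```typescript
// Sketch of a head-anchored HUD element: keep an object a fixed distance in
// front of the camera and smooth its motion so it doesn't jitter while riding.
// Distance and smoothing values are illustrative.
@component
export class HeadLockedHud extends BaseScriptComponent {
  @input cameraObject: SceneObject;  // the main (tracked) camera
  @input distance: number = 100.0;   // cm in front of the camera
  @input smoothing: number = 0.15;   // 0 = frozen, 1 = hard-locked

  onAwake() {
    this.createEvent("UpdateEvent").bind(() => this.follow());
  }

  private follow() {
    const cam = this.cameraObject.getTransform();
    // Assumes the camera looks down its local -Z axis; flip the sign if the
    // HUD ends up behind you.
    const localOffset = new vec3(0, 0, -this.distance);
    const target = cam.getWorldPosition().add(cam.getWorldRotation().multiplyVec3(localOffset));

    const t = this.getTransform();
    t.setWorldPosition(vec3.lerp(t.getWorldPosition(), target, this.smoothing));
    t.setWorldRotation(quat.slerp(t.getWorldRotation(), cam.getWorldRotation(), this.smoothing));
  }
}
```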

Any help is greatly appreciated!

u/jcbauerxr — 9 days ago

I'm planning on creating two lenses: one is a Creator lens, and the other is a Customer lens.

Q1. Can you load a Custom Location scan ID dynamically at runtime in a lens?

Is there any scripting API to load a LocationAsset from a scan ID string at runtime, without baking it into the lens project at design time in Lens Studio? My goal is for people to scan different locations with a custom location creator lens and add objects to the scanned place with an experience creator lens, while another user with the Customer lens can enter the ID (or just go to the location) and access the different locations' experiences at runtime under a single lens, instead of me manually setting up a separate lens in Lens Studio each time a creator scans a place and adds objects.

Q2. Can retrieveLocation() access a scan created by a different lens or by different Spectacles? I saw this in the docs, so I wanted to clarify.

If the Creator Lens scans a space using MappingSession, calls storeLocation(), and gets back a persistedLocationId, can the Customer Lens call retrieveLocation() with that same ID and localize against it?

Or is the persisted location private to the lens that created it, and does the device need to be the same to access it? My goal here is to create something like a custom location creator lens, add some objects there at runtime, and use it as an alternative to my Q1.

Q3. Best way to share spatial anchor positions between two different lenses?

I know Spatial Anchors are scoped per lens. My plan is to save the raw vec3 positions of anchors to Snap Cloud from the Creator Lens, then have the Customer Lens read those coordinates and reconstruct objects at those positions after localizing against the same scan. So if Q1 and Q2 above don't work, will this?
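
For Q3 specifically, here's a rough sketch of the "save raw positions, rebuild after localizing" idea; the endpoint URL, JSON shape, and Request/fetch usage are assumptions for illustration, not a documented Snap Cloud API.

```typescript
// Rough sketch of Q3: store plain anchor positions from the Creator Lens and
// rebuild them in the Customer Lens after localizing against the same scan.
// The endpoint URL and JSON shape are made up for illustration.
interface StoredAnchor {
  x: number;
  y: number;
  z: number;
}

@component
export class AnchorSync extends BaseScriptComponent {
  @input internetModule: InternetModule;   // Spectacles Internet Access / Fetch
  @input locationRoot: SceneObject;        // object localized against the shared scan
  @input prefab: ObjectPrefab;             // what to spawn at each stored position
  @input backendUrl: string;               // your own server or cloud table endpoint

  // Creator Lens: push anchor positions (relative to the localized root) upward.
  async saveAnchors(anchors: StoredAnchor[]) {
    const request = new Request(this.backendUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(anchors),
    });
    await this.internetModule.fetch(request);
  }

  // Customer Lens: read positions back and respawn content under the same root.
  async loadAnchors() {
    const response = await this.internetModule.fetch(new Request(this.backendUrl, { method: "GET" }));
    if (response.status !== 200) { return; }
    const anchors: StoredAnchor[] = await response.json();
    anchors.forEach((a) => {
      const obj = this.prefab.instantiate(this.locationRoot);
      obj.getTransform().setLocalPosition(new vec3(a.x, a.y, a.z));
    });
  }
}
```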

My ultimate goal is to create one lens that scans a location and adds objects to it, with that location then accessible from another lens. The main thing is that the Scanner lens will be used by multiple people to scan different locations and place objects there, and the Customer lens lets people experience the objects created by the Scanner users as soon as they reach the location. Manually creating a lens for each location would be a nightmare to manage; I need a dynamic way.

Hope I'm making sense with my post. Please show me a way to navigate around these blocks. Thank you.

u/rust_cohle_1 — 9 days ago

I've been researching UI development on Spectacles, and figured I'd start releasing related open-source packages gradually, since smaller pieces make for more focused discussions.

The first piece is a Text Reflow utility: https://github.com/a-sumo/spatial-flex/tree/main/packages/text-reflow

Alongside it, I started a user voice thread where we can collectively figure out the best approaches for UI implementation on Spectacles, which should help us hand the Spectacles team more focused feedback and granular feature requests: https://snap.uservoice.com/forums/967346-spectacles/suggestions/51256216-ui-development-on-spectacles-friction-points-and

u/S-Curvilinear — 12 days ago

BullsAI turns any dartboard into an AR coach and gaming platform. A phone watches the board through computer vision and detects every dart in real time. Spectacles overlay coaching prompts, target highlights, and games onto the actual physical board.

The phone is the input device. Mount it on a tripod, run the webapp, and OpenCV handles detection: HSV colour masking finds the board, ellipse fitting locates the centre, and a 4-dart calibration locks in a perfect perspective transform from any camera angle. The player places darts at Double 20, 6, 3, and 11; four known points on a circle yield a matrix that maps the camera view to ideal board coordinates. The simplest solution beat every clever auto-detection approach I tried. Detection runs every 200ms with position-based deduplication so the same dart never sends twice; only when it actually moves more than 4% of the board does it trigger a new event. Pull the dart out, throw another, and the next one fires immediately.
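
As a small illustration of that deduplication rule (not the actual BullsAI detector code), the check boils down to comparing each candidate hit against the last reported one in normalized board coordinates:

```typescript
// Sketch of the position-based deduplication described above: a detection only
// becomes a new dart event if it has moved more than a threshold fraction of
// the board since the last reported hit. Not the actual BullsAI code.
interface BoardPoint {
  x: number; // normalized board coordinates, 0..1
  y: number;
}

export class DartDeduper {
  private lastHit: BoardPoint | null = null;

  constructor(private threshold = 0.04) {} // 4% of the board

  // Returns true if this detection should be sent as a new dart event.
  isNewDart(hit: BoardPoint): boolean {
    if (this.lastHit === null) {
      this.lastHit = hit;
      return true;
    }
    const dx = hit.x - this.lastHit.x;
    const dy = hit.y - this.lastHit.y;
    const moved = Math.sqrt(dx * dx + dy * dy) > this.threshold;
    if (moved) {
      this.lastHit = hit;
    }
    return moved;
  }
}
```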

The Spectacles app is built around a single input system: it communicates with Supabase and listens for dart events. Every game is a self-contained TypeScript file that subscribes to a callback and reacts. Adding a new game is one new script plus a button in the lobby.
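
A sketch of what that single-input architecture can look like; the interfaces and names here are illustrative, not taken from the repo.

```typescript
// Illustrative sketch of the "every game is a file that subscribes to dart
// events" architecture. Names are made up, not taken from the BullsAI repo.
interface DartEvent {
  boardX: number;     // normalized board coordinates from the phone detector
  boardY: number;
  segment: number;    // 1-20, or 25 for the bull
  multiplier: number; // 1 = single, 2 = double, 3 = treble
}

interface DartGame {
  name: string;
  start(): void;                // reset state, spawn visuals
  onDart(hit: DartEvent): void; // react to every detected dart
  stop(): void;                 // clean up when the lobby switches games
}

// The lobby owns exactly one active game and forwards every event to it.
export class GameLobby {
  private active: DartGame | null = null;

  select(game: DartGame) {
    this.active?.stop();
    this.active = game;
    game.start();
  }

  handleDart(hit: DartEvent) {
    this.active?.onDart(hit);
  }
}
```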

I've made 5 game modes so far. Dart Assist is the coaching mode: structured lessons on board layout, throw technique, and 12 progressive drills with smart feedback after every miss. The feedback isn't generic; it analyses the actual drift vector between target and dart, then gives technique advice. "Pulled hard right, relax your grip and throw straighter." "Right number, pulled inside. Trust the throw, aim slightly outward." "Drifting right, tighten your wrist on release."
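
A simplified version of that drift-vector check might look like this; thresholds and wording are illustrative.

```typescript
// Simplified sketch of the drift-vector feedback: compare where the dart landed
// to where the drill asked the player to aim, then map the dominant error
// direction to a coaching line. Thresholds and wording are illustrative.
interface Point { x: number; y: number; }

export function coachingFeedback(target: Point, dart: Point): string {
  const dx = dart.x - target.x; // positive = pulled right
  const dy = dart.y - target.y; // positive = high
  const deadZone = 0.02;        // within 2% of the board counts as on target

  if (Math.abs(dx) < deadZone && Math.abs(dy) < deadZone) {
    return "On target. Same throw again.";
  }
  if (Math.abs(dx) >= Math.abs(dy)) {
    return dx > 0
      ? "Pulled right, relax your grip and throw straighter."
      : "Drifting left, square your shoulders to the board.";
  }
  return dy > 0
    ? "Sailing high, release a touch later."
    : "Dropping low, follow through toward the target.";
}
```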

Bubble Pop is Puzzle Bobble but on the dartboard. 12-20 bubbles spawn in clusters around the rim in three colours. Your dart is assigned a random colour each throw; only matching bubbles pop. Hit a cluster and a flood fill finds every connected same-colour bubble, then they all burst outward with explosive physics, gravity, and spin. Score is bubbles cleared per dart taken.
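
The connected-cluster step is a standard flood fill over the bubble graph; a minimal sketch (the Bubble shape is illustrative):

```typescript
// Minimal flood-fill sketch for the Bubble Pop clearing step: starting from the
// bubble that was hit, collect every connected bubble of the same colour.
interface Bubble {
  id: number;
  color: string;
  neighbors: Bubble[]; // adjacent bubbles around the rim
}

export function connectedSameColor(start: Bubble): Bubble[] {
  const visited = new Set<number>();
  const cluster: Bubble[] = [];
  const stack: Bubble[] = [start];

  while (stack.length > 0) {
    const bubble = stack.pop()!;
    if (visited.has(bubble.id)) { continue; }
    visited.add(bubble.id);
    if (bubble.color !== start.color) { continue; }

    cluster.push(bubble);
    bubble.neighbors.forEach((n) => stack.push(n));
  }
  return cluster;
}
```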

Apple on Head is William Tell: a Bitmoji holds an apple; hit the apple to launch it sideways with physics, hit the face and the Bitmoji shakes.

Tic Tac Toe runs on a 3x3 grid mapped to the board with cell takeover (land on opponent's X and it becomes your O), auto-resets after wins.

Free Throw is a heat map: every hit position spawns a marker so you can see your shot pattern build up.

Multiple Spectacles can join the same game code and see darts in sync. Multiplayer is "shared view" right now: both wearers see every dart and both react, but each runs its own game state. Proper turn-locking is on the roadmap.

Supabase powers the entire backend. Two tables: dart_games for sessions and dart_throws for individual hits. The phone writes detections; all Spectacles read. No custom server, no socket setup, just polling at 200ms.
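
In practice that polling loop can be as simple as a timed fetch against Supabase's auto-generated REST API; a hedged sketch (only the table names come from this post, the columns and query shape are assumptions):

```typescript
// Sketch of polling Supabase's auto-generated REST API for new dart throws.
// The column names (game_code, created_at) and query shape are assumptions for
// illustration; only the table names come from the post.
@component
export class DartPoller extends BaseScriptComponent {
  @input internetModule: InternetModule; // Spectacles Internet Access / Fetch
  @input supabaseUrl: string;            // e.g. https://<project>.supabase.co
  @input anonKey: string;

  private lastSeen = "1970-01-01T00:00:00Z";

  onAwake() {
    // A DelayedCallbackEvent re-armed every 200 ms acts as the polling timer.
    const timer = this.createEvent("DelayedCallbackEvent");
    timer.bind(async () => {
      await this.poll();
      timer.reset(0.2);
    });
    timer.reset(0.2);
  }

  private async poll() {
    const url =
      `${this.supabaseUrl}/rest/v1/dart_throws` +
      `?select=*&created_at=gt.${this.lastSeen}&order=created_at.asc`;
    const request = new Request(url, {
      method: "GET",
      headers: { apikey: this.anonKey, Authorization: `Bearer ${this.anonKey}` },
    });
    const response = await this.internetModule.fetch(request);
    if (response.status !== 200) { return; }

    const throws = await response.json();
    throws.forEach((t: any) => {
      this.lastSeen = t.created_at;
      // hand the hit to the active game here
    });
  }
}
```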

A reactive slime-face character watches every throw and reacts with surprise on bullseyes, sad on misses, and idle blinking the rest of the time.

Optimisation: detection on the phone, rendering on Spectacles. The Lens itself runs lean because the heavy CV work is offloaded entirely. Every game uses object pooling for spawned markers, the slime face is unlit shaded, and the dartboard reuses a single anchor disc that all game prefabs parent to.

I know having to set up a phone as the camera is awkward, that part isn't where this is meant to live long term. The thinking is BullsAI works best at venues. Some pubs and bars already have cameras around their dart boards, and Spectacles can potentially pair with that existing setup so the player just walks up, puts the glasses on, and it works. The phone is the dev kit. The venue install is the product.

I've actually built something in this direction before. A few years back I worked with a company in the UK called Axed making smart lanes for axe throwing. Object tracking was rough at the time so we shipped projectors and tablets instead of AR, but the principle was the same: take a traditional physical sport and layer a digital counterpart on top of it. Scoring, games, leaderboards, all driven by what's actually happening on the target. AR finally makes that experience personal instead of shared on a screen, and Spectacles are the right form factor for it.

Looking ahead, voice coaching with ElevenLabs, form analysis using the Spectacles forward camera to grade throw motion, LLM-powered personalised coaching that learns your weak zones over time, more games (Battleships, Around the World, Zombies), tournament mode with proper turn-locking, and safety warnings using depth sensing if someone walks in front of the board.

It was a tonne of fun building this and showing it off at the Spectacles bootcamp.

I've also submitted it to the XRCC hackathon - "Kill The Manual".

Repo: https://github.com/ohistudio/BullsAI

Try the phone detector live: https://ohistudio.github.io/BullsAI/web/darts.html

Lens Link: https://www.spectacles.com/lens/e2cccc74d86949fc84ea5f084188a5df?type=SNAPCODE&metadata=01

Trailer: https://www.youtube.com/watch?v=wr63v62yR7k

u/Far-Temporary6630 — 13 days ago

Developing for Spectacles within a mobile-first IDE creates unnecessary friction. Many mobile features are redundant or broken on the device, while essential Spectacles functions lack native integration.

**Key Issues**

  • Feature Redundancy: Tools like Segmentation remain visible in the UI despite being unsupported, leading to developer confusion.
  • Lack of Native Parity: "Device Tracking - Surface Mode" doesn't work natively for Spectacles; we shouldn't need a separate placement package for a core spatial feature.
  • API Fragmentation: The shared API list is cluttered with mobile-only methods, making it difficult to identify Spectacles-compatible logic.

**Proposed Solution**

I am requesting a Spectacles-specific version of Lens Studio (or a dedicated "Native Specs Mode") that includes:

  1. A Streamlined UI: Automatically hides all non-functional mobile features.
  2. Native Surface Support: Built-in surface placement that works out-of-the-box for world-locked content.
  3. Dedicated Documentation: A standalone API reference strictly for Spectacles-supported features.

At this stage, Spectacles development has evolved enough to deserve its own native environment and optimized workflow.

Uservoice : https://snap.uservoice.com/forums/954406-lens-studio-desktop/suggestions/51254347-feature-request-dedicated-lens-studio-ide-for-spe

u/KrazyCreates — 12 days ago