u/21joacole

I’ve spent 16+ years in video/production, and one thing has never changed: Lighting is still trial and error.

Move a light → test → adjust → repeat → waste hours.

So, I started building something to fix that. It’s called WATT-IF, and the idea is simple:

👉 What if you could control lighting before you ever set it up?

Right now, it’s a scrappy but working beta, and I’m building it into a full system with a few core pieces:

⚡ M/mk ("Mimic" AI Lighting Director)
Real-time voice + AI feedback that actually tells you what’s wrong with your lighting and how to fix it.

🎯 Light Coach
Breaks down your setup and gives you actionable adjustments (not theory… actual “move this here” guidance).

🔥 Light Fight
Compare setups, challenge others, and see whose lighting actually holds up.
(yes… lighting battles 😅)

🧪 LumaTik (AR Lighting Engine)
Drop lights into your real environment using AR and see how everything looks before touching gear.

The goal isn’t just a tool.

👉 It’s turning lighting into something you can control like software instead of guessing in the dark.

I’ve got early users testing it now, but I’m at the point where I need a strong tech/dev co-founder to really push this forward.

Currently:

  • Equity-based
  • Early stage
  • Real product already in motion

If you’re into AR / 3D / creative tools / AI systems… this might be worth a conversation.

Even if you’re not a dev, I’d genuinely love feedback. Tear it apart. Tell me what sucks. That’s how this gets better.

u/21joacole — 17 days ago

I’m building WATT-IF, a mobile AR platform that turns lighting from a manual, unpredictable process into something creators can place, control, and execute before they ever touch a physical light.

After 16+ years in production, one problem has stayed constant: lighting is slow, inconsistent, and heavily dependent on experience. Most creators either waste hours testing setups or rely on tutorials that don’t translate to their environment.

WATT-IF removes the need to guess lighting entirely by overlaying lighting setups directly into the real world. Users can place virtual lights in real time, preview cinematic results, and build repeatable setups instead of guessing.

For advanced users, WATT-IF also includes a dedicated 3D lighting environment where full multi-light rigs can be built from scratch, refined, and then deployed into real-world AR scenes.

The current beta includes:
• Real-time AR lighting placement with gesture controls
• Cinematic multi-light presets (not filters, full rigs)
• AI-driven lighting feedback and scoring system
• Competitive “Light Fight” mode for skill-based comparison
• Exportable lighting setups for repeatable workflows
• Entire 3D sandbox system for building storyboards/workflows
• Early bridge into controlling real-world lights via wireless integration

The core shift is this:
Lighting becomes a controllable overlay instead of a physical guessing process.

Long-term, this evolves into infrastructure for how lighting is learned, planned, and executed across photography, film, and creator workflows.

I’m currently looking to connect with investors who understand creator tools, AR/AI, spatial computing, or workflow automation, and who might want to continue building this with me.

If this space resonates, I’m happy to share the beta and what I’m building.

u/21joacole — 17 days ago

I’m currently looking to connect with dev co-founders (currently equity only) who understand creator tools, AR/AI, spatial computing, or workflow automation, and who might want to continue building this with me.

If this space resonates, I’m happy to share the beta and what I’m building.

u/21joacole — 17 days ago

I know this is a long shot, but I’m building WATT-IF, a mobile AR platform that turns lighting from a manual, unpredictable process into something creators can place, control, and execute before they ever touch a physical light. The build is roughly 80-85% complete, and there is currently a “clunky” but working beta.

I’m currently looking to connect with dev co-founders (currently equity only, with the possibility of a future salary) who understand Swift, AR/AI integration, creator tools, spatial computing, or workflow automation, and who might want to continue building this with me.

If this space resonates, I’m happy to share the beta and what I’m building.

u/21joacole — 18 days ago

I’m building ARO, a mobile AR platform that turns lighting from a manual, unpredictable process into something creators can place, control, and execute before they ever touch a physical light.

ARO removes the need to guess lighting entirely by overlaying lighting setups directly into the real world. Users can place virtual lights in real time, preview cinematic results, and build repeatable setups instead of guessing.

For advanced users, ARO also includes a dedicated 3D lighting environment where full multi-light rigs can be built from scratch, refined, and then deployed into real-world AR scenes.

I’m currently looking to connect with investors who understand creator tools, and with a dev who knows AR/AI, spatial computing, or workflow automation and might be interested in an equity-based co-founder role.

If this space resonates, I’m happy to share the beta and what I’m building.

u/21joacole — 18 days ago