u/CamThinkAI — 7 days ago

We’ve been testing low-power behavior on an STM32N6-based camera board, and I keep coming back to one number that is easy to misread: 6.1 uA in deep sleep.

That is not the NPU “running at 6.1 uA”.

In that state, the vision side is basically out of the picture. The STM32N6 is not doing inference, the camera rail is off, Wi-Fi is off, and storage is off. What stays alive is the low-power controller and the wake path.

The rough setup is (a minimal wake-loop sketch follows the list):

  • STM32N6 for image capture and on-device inference
  • STM32U0 for low-power control / wake management
  • PIR, button, external IO, or scheduled wake as possible triggers
  • camera, Wi-Fi, storage, and other rails powered only when needed
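
To make the handoff concrete, here is a minimal sketch of what the U0-side wake loop could look like. It is an illustration under assumptions, not our firmware: every helper here (enter_stop_mode, n6_power, rail_set, and the N6 handshake functions) is a hypothetical placeholder for board-specific code.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { WAKE_NONE, WAKE_PIR, WAKE_BUTTON, WAKE_SCHEDULED } wake_source_t;

static volatile wake_source_t g_wake = WAKE_NONE; /* latched by EXTI/RTC wake ISRs */

/* Hypothetical board-support hooks; the real versions are vendor-specific. */
extern void enter_stop_mode(void);                       /* deep sleep, the ~6 uA state */
extern void n6_power(bool on);                           /* gate the STM32N6 rail       */
extern void rail_set(bool cam, bool wifi, bool storage); /* load switches               */
extern bool n6_event_worth_it(uint32_t timeout_ms);      /* N6 handshake: keep going?   */
extern void n6_wait_idle(uint32_t timeout_ms);           /* N6 handshake: safe to cut   */

int main(void)
{
    for (;;) {
        enter_stop_mode();              /* everything below runs only after a wake */

        wake_source_t src = g_wake;
        g_wake = WAKE_NONE;
        if (src == WAKE_NONE)
            continue;                   /* spurious wake: straight back to sleep */

        n6_power(true);                 /* bring up the vision side */
        rail_set(true, false, false);   /* camera only; Wi-Fi/storage stay off */

        /* The N6 captures and runs inference; radio and storage rails come up
         * only if it decides the event is worth keeping. */
        if (n6_event_worth_it(5000))
            rail_set(true, true, true);

        n6_wait_idle(30000);            /* store/upload finished (or timed out) */
        rail_set(false, false, false);
        n6_power(false);                /* back to the sleep baseline */
    }
}
```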

The part I find interesting is that the sleep number only matters if the system is actually allowed to stay asleep.

For a battery-powered camera, it feels like the real problem is not just “how efficient is the NPU?”

It is more like:

sleep -> trigger -> power camera -> capture -> maybe run inference -> store/upload -> shut rails back down

If that loop runs too often for useless events, the deep sleep number stops being the thing that matters.
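
Some rough arithmetic makes that concrete. In the toy calculation below, only the 6.1 uA baseline is from our measurements; the active current, event duration, and trigger rate are made-up placeholders:

```c
/* Back-of-envelope duty-cycle math: how quickly wake events swamp the
 * sleep baseline. Event numbers are illustrative assumptions only. */
#include <stdio.h>

int main(void)
{
    const double i_sleep_uA     = 6.1;    /* deep-sleep baseline (measured)   */
    const double i_active_mA    = 150.0;  /* assumed current while awake      */
    const double t_event_s      = 2.0;    /* assumed capture+inference window */
    const double events_per_day = 10.0;   /* assumed trigger rate             */

    /* Charge per event in uA*s, averaged over a day (86400 s). */
    double q_event_uAs = i_active_mA * 1000.0 * t_event_s;
    double i_events_uA = events_per_day * q_event_uAs / 86400.0;

    printf("sleep baseline: %6.1f uA\n", i_sleep_uA);
    printf("event average:  %6.1f uA\n", i_events_uA);
    printf("total average:  %6.1f uA\n", i_sleep_uA + i_events_uA);
    return 0;
}
```

With those assumptions, ten short events a day already average out to roughly 35 uA, nearly six times the sleep floor. Every false wakeup moves that number, not the 6.1 uA headline.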

A few tradeoffs we are thinking through:

  • False wakeups can cost more than shaving a little time off inference
  • Wi-Fi / Cat-1 upload time can dominate if every event gets sent out
  • PIR-first triggering is cheap, but it can miss some visual events
  • Image-first triggering gives better control, but keeps more of the system awake (one way to layer the two approaches is sketched after this list)
  • Scheduled wake is great for things like metering or environmental monitoring, but not for fast events
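
For concreteness, the layered version could look something like this. The stage order mirrors the list above, but the thresholds and every function name are hypothetical; the point is only that each stage is cheap enough to veto the next one.

```c
/* A tiered wake gate: each stage is cheap relative to the next and can
 * veto it. All hooks and thresholds below are hypothetical placeholders. */
#include <stdbool.h>
#include <stdint.h>

extern bool     pir_fired(void);              /* latched PIR flag                */
extern uint32_t lowres_frame_diff(void);      /* cheap low-res diff vs last wake */
extern float    npu_run_detector(void);       /* full model, returns confidence  */
extern void     store_and_upload_event(void); /* powers Wi-Fi/storage rails      */

#define DIFF_THRESHOLD  1200u   /* assumed: minimum changed-pixel score */
#define CONF_THRESHOLD  0.60f   /* assumed: minimum detector confidence */

void handle_wake(void)
{
    if (!pir_fired())
        return;                               /* stage 0: no trigger at all       */

    if (lowres_frame_diff() < DIFF_THRESHOLD)
        return;                               /* stage 1: PIR fired, scene static */

    if (npu_run_detector() < CONF_THRESHOLD)
        return;                               /* stage 2: NPU sees nothing        */

    store_and_upload_event();                 /* stage 3: only now pay for radio  */
}
```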

So the question I’m trying to frame is:

How do you avoid waking the full vision pipeline unless the event is actually worth it?

For people who have built battery-powered cameras, sensors, or other embedded vision prototypes, what trigger strategy has worked best in practice?

Do you usually start with PIR/external triggers, or do you keep the image sensor active enough to do visual pre-filtering?

Context: I work with CamThink, and this comes from testing on our STM32N6 edge camera hardware. Sharing here because the power-state tradeoff seems more useful to discuss than the headline sleep-current number.

A simplified view of the power states. The 6.1 uA number only applies to the deep-sleep baseline, not capture/inference/upload.


u/CamThinkAI — 7 days ago

We’ve successfully implemented pedestrian crossing detection using our NE301 Edge AI camera combined with sensors!

With our latest open-source software platform NeoMind, we’re now able to unlock many more real-world AI applications. Pedestrian crossing detection is just our first experimental scenario.

We’ve already outlined many additional scenarios that we’re excited to explore, and we’ll be sharing more interesting use cases soon.

If you have any creative ideas or application scenarios in mind, feel free to share them in the comments — we’d love to hear them!

u/CamThinkAI — 2 months ago

We’re excited to share a recent customer project that demonstrates how an Edge AI camera can be used to automatically monitor an insect trap box’s status and send an alert to the maintenance team.

https://reddit.com/link/1quns4t/video/4mjyml2k39hg1/player

The system delivers the following capabilities:

  1. Demand-Driven Efficiency: Shifts from rigid "scheduled checks" to real-time, "need-based" cleaning, slashing unnecessary labor costs.
  2. 24/7 Continuous Compliance: Moves beyond periodic manual audits to constant, automated monitoring, ensuring QSC standards are met every second.
  3. Visual Traceability: Every alert is backed by real-time image evidence in Home Assistant, creating a transparent and indisputable digital audit trail.
  4. Precision at the Edge: Leverages STM32N6 for high-accuracy small object detection locally, ensuring data privacy and zero cloud dependency.

Project Motivation
Manual insect trap monitoring is a massive "labor leak" in the QSR industry.

Relying on staff to manually check every trap is expensive, inefficient, and creates dangerous "blind periods" between inspections.

This antiquated process leads to inconsistent compliance and audit anxiety.

Technology Stack
Edge AI Camera: CamThink NeoEyes NE301
AI Model: YOLO (deployed and executed on-device)
Model Training: CamThink AITool Stack
Automation & Visualization: Home Assistant

The complete implementation process for this project has now been published on Hackster: https://www.hackster.io/CamThink2/smart-pest-monitoring-boosting-qsc-compliance-operational-93cb11

If you’re interested, feel free to check it out — you can follow the steps to recreate the project or use it as a foundation for your own ideas and extensions!

This case highlights the flexibility of Edge AI for intelligent monitoring and automation scenarios.

We look forward to seeing how this approach can be adapted to additional use cases across different industries.

If this video inspires you or if you have any technical questions, feel free to leave a comment below — we’d love to hear from you!

u/CamThinkAI — 3 months ago

We’re excited to share a recent customer project that demonstrates how an Edge AI camera can be used to automatically monitor trash bin status for smart-city management with high accuracy.

Project Motivation
In city management, "guessing" is expensive. 💸 Sending a truck to verify a "Full" sensor that turns out to be false? That's wasted tax dollars. 

So cities need a solution with traceable, visual intelligence that goes beyond guessing.

The system delivers the following capabilities:
✅ Precision: Distinguishes actual waste from obstructions (no false positives).
✅ Traceability: Every alert can be verified visually, creating a perfect audit trail for municipal services.
✅ Integration: Feeds this precise state directly into Home Assistant for a centralized, transparent management dashboard.

Trust, but verify. That’s the new standard for demand-driven city services.

https://reddit.com/link/1qqx1ft/video/job5912tdfgg1/player

Technology Stack
Edge AI Camera: CamThink NeoEyes NE301
AI Model: YOLO (deployed and executed on-device)
Model Training: CamThink AITool Stack
Automation & Visualization: Home Assistant

The complete implementation process for this project has now been published on Hackster, including the quantized firmware of the NE301 for this project.

If you’re interested, feel free to check it out — you can follow the steps to recreate the project or use it as a foundation for your own ideas and extensions!

This case highlights the flexibility of Edge AI for intelligent smart-city scenarios.

We look forward to seeing how this approach can be adapted to additional use cases across different industries.

If this video inspires you or if you have any technical questions, feel free to leave a comment below — we’d love to hear from you!
