r/embedded

Mongoose: 3 critical security vulnerabilities discovered

Are you using Mongoose in your embedded device? If so, you might want to read:

Vulnerabilities Discovered in Mongoose

If you don't know what Mongoose is, quoting from the first paragraph of the writeup:

If you’ve never heard of it, you’ve almost certainly used a device that runs it. It’s a single-file, cross-platform embedded network library written in C by Cesanta that provides HTTP/HTTPS, WebSocket, MQTT, mDNS and more, designed specifically for embedded systems and IoT devices where something like OpenSSL would be way too heavy. Their own website claims deployment on hundreds of millions of devices by companies like Siemens, Schneider Electric, Broadcom, Bosch, Google, Samsung, Qualcomm and Caterpillar. They even claim it runs on the International Space Station. We’re talking everything from smart home gateways and IP cameras to industrial PLCs, SCADA systems and, apparently, space.

u/SecureEmbedded — 11 hours ago
High-Speed Data Transfer on ZynqMP: Moving PL Data to NVMe at ~12 Gbps

Hey everyone,

This week, I tackled a data transfer challenge on a Zynq UltraScale+ MPSoC paired with a Gen-2 NVMe SSD. The goal was to stream image data continuously from a reserved PS DDR region (populated by the PL) to persistent storage at 8+ Gbps.

After experimenting with different approaches—navigating generic-uio vs. udmabuf, O_DIRECT EFAULT headaches, and Linux CMA panics—I finally achieved nearly 12 Gbps transfer speeds in my pipeline! For context, my raw fio benchmarks showed a slightly higher maximum capability, so this real-world implementation is pushing very close to the hardware limits.

I've compiled my benchmarks, the pitfalls I encountered, and the final working architecture into a short Gist. I hope it saves some debugging time for anyone building high-throughput pipelines on embedded Linux:

https://gist.github.com/CaglayanDokme/9646e12533fe9ba84ef7f79906940956

I'd be glad to hear your feedback or learn how you folks handle similar zero-copy pipelines. Have a great weekend out there!

Special thanks to the author of the udmabuf driver, Ichiro Kawazome. Without his driver, this work would have been much more cumbersome on my side.

u/cdokme — 14 hours ago

Least grating dev environment for ESP32 devices

After many years of hating the ESP32 family (largely on principle, not for any good reasons!), I decided to make a start on getting to know the platform better. I've done a few things on it and it was pretty easy to get started.

Some time on from that starting point, I still can't work out what the best development environment is. Arduino IDE is not a serious contender, so let's exclude that. The remaining two that are fully supported are ESP-IDE and VS Code. I generally work in Linux: Ubuntu or Fedora.

Personally, I favour ESP-IDE because it's a real IDE, but it doesn't seem to work very well! Although it wasn't recent, I was using Eclipse commercially 20 years ago, so it's the path of least resistance for me. That said, VS Code is very popular these days, although I don't personally like it. It's hard to say why, but I think it comes down to disliking the "black magic" that happens behind the scenes in the plug-ins I depend on but just don't understand. The plug-in marketplace seems to be a mess of things that all do the same job and that Microsoft could have just written themselves. It's a bit like Amazon sending you the Chineseum brand of toilet paper instead of the one you actually wanted.

That said, I'm not averse to learning a new environment and may eventually understand these things or alternatively, I'll work out what's going wrong in Eclipse. I did wonder what anyone else's thoughts were or if there's something secret that I've been missing all along!

u/MerlinEmbedded — 7 hours ago
Can I use SIMCOM A7672S + ESP32 to make calls/SMS remotely via SSH?

I’m planning to buy an Edgehax SIMCOM A7672S + ESP32 board and had a random idea.

Can I hook it up to my home server, then SSH into that server from another device and use the SIM to send/receive SMS and maybe even make calls?

Rough idea: ESP32 talks to the SIM module using AT commands, exposes something over WiFi, and my server just sends commands to it. Then I control everything remotely through SSH.

SMS seems simple enough, but I’m not sure how calls would work. How do you even deal with audio in a setup like this?

Also wondering if I’m overthinking it and should just connect the SIM module directly to the server instead of going through the ESP32.

Has anyone tried something like this? Or is this a dumb approach?

u/bendo_verson — 8 hours ago
On-device speech pipeline with a C API — VAD + STT + TTS for Yocto/automotive, runs on Qualcomm SoCs

Built a speech processing pipeline that runs on embedded Linux with a minimal C API. Targeting automotive and edge devices — currently running on Qualcomm SoCs with QNN acceleration.

The C API is 6 functions:

  speech_config_t config = speech_config_default();
  config.model_dir = "/opt/models";
  config.use_qnn = true;        // Qualcomm QNN delegate
  config.use_int8 = true;       // INT8 quantized models

  speech_pipeline_t pipeline = speech_create(config, on_event, NULL);
  speech_start(pipeline);
  speech_push_audio(pipeline, samples, count);
  speech_resume_listening(pipeline);
  speech_destroy(pipeline);

Events come back through a single callback:

  void on_event(const speech_event_t* event, void* ctx) {
      switch (event->type) {
          case SPEECH_EVENT_TRANSCRIPTION:
              printf("heard: %s\n", event->text);
              break;
          case SPEECH_EVENT_RESPONSE_AUDIO:
              play(event->audio_data, event->audio_data_length);
              break;
      }
  }

Pipeline stages:

  • Silero VAD — voice activity detection, triggers STT only on speech
  • Parakeet TDT v3 — multilingual STT (114 languages, ~150ms on Snapdragon)
  • Kokoro 82M — text-to-speech synthesis
  • DeepFilterNet3 — noise cancellation (STFT/ERB processing)

All inference through ONNX Runtime. Models are INT8 quantized ONNX files (~1.2 GB total). No Python, no Java, no runtime dependencies beyond ONNX RT and libc.

Build:

  cmake -B build -DORT_DIR=../ort-linux -DUSE_QNN=ON
  cmake --build build

C++17 core, C API surface. The same C++ engine also powers the Android SDK (via JNI), so the models and inference paths are shared.

Apache 2.0 · GitHub: https://github.com/soniqo/speech-android (Linux API under linux/)

Anyone running speech processing on edge devices? Curious what hardware/RTOS combos people are using.

u/ivan_digital — 9 hours ago

STM32 low power design: what's actually draining your battery when everything looks right?

Working through a LoRaWAN sensor node design and hit the classic problem - sleep current looks perfect on paper, but real world consumption is 3-4x higher than expected.

Usual suspects I’ve been through:

• GPIO states during sleep, floating pins pulling current through internal resistors

• Peripheral clocks not fully disabled before entering Stop mode

• LSE startup time causing the MCU to stay in a higher power state longer than expected

• IWDG keeping certain regulators alive

The one that got me - SPI flash not entering deep power down before sleep. Datasheet said 1µA standby, reality was 80µA because the CS line wasn’t being driven high explicitly before the sleep sequence.

What are the non-obvious power leaks that have burned you on low power STM32 or similar designs? Particularly interested in anything related to LoRaWAN duty cycle management and sleep/wake timing.

u/Medtag212 — 20 hours ago

How to structure a simple firmware with a GUI?

This is a question that's been bothering me for quite a while. I'm not talking about complex user interfaces that warrant an RTOS and a GUI framework; I mean something simple, like a clock with a few setup screens or a configurable thermostat.

Most projects I've seen use something like a big switch-case statement in a loop. However, this approach seems to descend into spaghetti madness really quickly, especially when something needs to run with a frequency not matching the GUI loop frequency.

I've currently settled on a more event-driven approach: I have a simple timer scheduler that runs function callbacks and I have a simple button handling thing that runs a callback whenever a button is pressed. This way, changing a GUI screen means removing older callbacks and registering a few new ones, and running something in the background means just registering another function in the scheduler. This approach works better for me, but I still feel like I'm halfway to an actually decent architecture.

So here's the question: how do you structure embedded projects of this kind? Is there any publicly available code which you believe completely nailed it? Any input is welcome.

u/silicagel777 — 18 hours ago

Best detection sensor to pair with TCS3200?

I’m working on a conveyor belt project with a color sorting mechanism, and I’m trying to choose the right combination of sensors.

Right now, I’m planning to use a TCS3200 color sensor, but I’m not fully sure what the best detection sensor to pair with it would be. The idea is to detect the presence of an object and then trigger the TCS3200 to read its color accurately.

My main concern is avoiding interference with the TCS3200 (since it uses light for sensing).

u/a_HoonterMustHont — 7 hours ago

what do real wifi access points use internally?

like routers / access points from TP-Link or Ubiquiti

obviously not esp32 type stuff

so what are they actually built on? i keep seeing Qualcomm / MediaTek mentioned but no idea what exact chips or boards people use

if i wanted to build something like

ethernet in, wifi out

what would i even start with?

also how painful is the antenna/rf part in real life

is this doable or one of those “looks easy but actually very hard” things?

u/Akki-1993 — 23 hours ago

Will the HAL I2C driver work in an RTOS environment?

What I want to know is whether the HAL I2C driver code will work reliably in a multitasking RTOS environment (even after adding a mutex to avoid simultaneous port access).

Will the I2C driver handle heavy task switching while updating crucial hardware registers? Will it survive and work reliably without any issues, or do I need to make the I2C transaction atomic, to avoid a task switch happening mid-transaction (start, address, write, stop)?

The chip I'm using is from the STM32F4 series.

u/Intelligent-Error212 — 10 hours ago

How can I build a microcontroller from scratch just for educational purposes? An educational model of a microcontroller’s internal architecture on a breadboard

The purpose is simply to show the components; they don’t need to be connected or work properly—it’s just for educational purposes. Please help me correct any mistakes:

"Scale model"

CPU:

-arithmetic logic unit (ALU)

SN74LS181

-Registers

SN74HC273

-Program Counter (PC)

SN74HC273 + 74HC163 + logic gates... I’m not sure how best to represent the PC here

-Instruction Register (IR)

SN74HC273 + SN74HC574, 2 × SN74LS173A; I’m not sure how best to represent the IR here

-Control Unit

SN74HC138 + SN74HC161 + SN74HC273 + SN74HC00 / SN74HC04, AT28C64B (EEPROM)

-Instruction Decoder
SN74HC138

-Accumulator

SN74HC273

-Status Register / Flag Register

flip-flops

-Stack/Stack Pointer (SP)

...

POWER:

-7805

-electrolytic capacitor

-ceramic capacitor

Clock:

Crystal oscillator

2 small capacitors

internal feedback resistor (RF)

CI SLEEP

Reset:

1 push button

1 pull-up/pull-down resistor

1 capacitor

Program memory:

..

AT28C64B

RAM:

CY62256N or AS6C62256

Timer/Counter:

555

74HC

serial communication:

UART / SPI / I2C

ADC

...

u/Strikewr — 11 hours ago

Where does AI-generated embedded code fail?

AI-generated code is easy to spot in code review these days. The code itself is clean -- signal handling, error handling, structure all look good. But embedded domain knowledge is missing.

Recent catches from review:

  • CAN logging daemon writing directly to /var/log/ on eMMC. At 100ms message intervals. Storage dies in months
  • No volatile on ISR-shared variables. Compiler optimizes out the read, main loop never sees the flag change
  • Zero timing margin. Timeout = expected response time. Works on the bench, intermittent failures in the field

Compiles clean, runs fine. But it's a problem on real hardware.

AI tools aren't the issue. I use them too. The problem is trusting the output because it looks clean.

LLMs do well with what you explicitly tell them, but they drop implicit domain knowledge. eMMC wear, volatile semantics, IRQ context restrictions, nobody puts these in a prompt.

I ran some tests: explicit prompts ("declare a volatile int flag") vs implicit ("communicate via a flag between ISR and main loop") showed a ~35 percentage point gap. HumanEval and SWE-bench only test explicit-style prompts, so this gap doesn't show up in the numbers.

I now maintain a silent failure checklist in my project config, adding a line every time I catch one in review. Can only write down traps I already know about, but at least the same failure types don't recur.

If you've caught similar failures, I'd like to hear about them.

u/0xecro1 — 13 hours ago

Hacking the "Surveillance Wrist": Seeking open-source wearable strategies to bridge the "Somatic Gap" in University Students 🇨🇱

Hi everyone!

I’m part of Tuküyen (formerly project Sentinel), an interdisciplinary research team (Sociology, Engineering, and Psychology) at Universidad Alberto Hurtado (Chile). We are currently developing a "White Box AI" platform to foster self-regulation and resilience in university students, moving away from the extractive models of Surveillance Capitalism.

The Challenge: We want to integrate a smartwatch as a sociotechnical device to validate the physiological impact of digital overstimulation. We’ve identified a "Somatic Gap"—the disconnect between a student's digital behavior (addictive UI/UX, infinite scroll) and their body’s stress response (cortisol spikes, low HRV, sleep deprivation).

The Goal: We want to provide students with a "Kit of Resistance": a wearable that isn't spying on them for a corporation, but rather helping them reclaim their agency. We are on a research budget (~$3,500 USD for the whole project) and aim to give these watches to students as a permanent tool for autonomy.

I need your expert advice on:

  1. Hackable Hardware: Which open-source or "hacker-friendly" smartwatches would you recommend for research? We are looking at PineTime (Pine64) or Bangle.js (Espruino). We need sensors for HRV (Heart Rate Variability), EDA (Electrodermal Activity), and high-quality Sleep Tracking.
  2. Data Extraction & Logic: What is the best way to programmatically correlate phone-side telemetry (app usage, screen time) with watch-side biometrics (HRV dips) in real-time? Any specific APIs or local processing frameworks to avoid sending raw biometric data to the cloud?
  3. The "Habitus" Hack: We want to detect repetitive motor patterns (the "zombified" scroll gesture) using the watch’s accelerometer/gyroscope to trigger a haptic "nudge" (breathing exercises). Has anyone worked on gesture recognition for digital addiction?
  4. Privacy at the Edge: Since we are dealing with sensitive mental health indicators (GAD-7/PHQ-9 proxies), we want to implement Differential Privacy directly on the device. Any lightweight libraries for on-device data anonymization?
  5. Branding the "Resistance": We want to re-flash/re-brand these devices. Does anyone have experience custom-casing or deep-modding firmware for a "movement" feel rather than a "medical device" feel?

Theoretical Background: We are grounded in Shoshana Zuboff (behavioral surplus) and Jonathan Haidt (attention fragmentation and sleep deprivation harms). We believe the body is the ultimate site of resistance against the "Habitus Maquinal".

Any repos, specific sensor modules, or hardware "gotchas" would be immensely helpful. We want these devices to be a memory of the students' empowerment, not another link in the chain of heteronomy.

Thanks from Santiago, Chile! 🇨🇱

u/Spare-Customer-506 — 22 hours ago
