u/Altruistic_Tomato162

Learning programming languages is just like playing a new Zelda game and learning specific mechanics

Just started learning Rust. And I get that same feeling as when I start a Zelda game. First you learn the mechanics common to all the games (walking, sprinting, using your sword, crouching, talking to people by "aiming at them"...) (note : this applies a bit less to BotW and TotK). And then you have game-specific mechanics, like the wolf mechanics from Twilight Princess, navigating and conducting the Wind Waker in The Wind Waker, using the ocarina, talking to Fi.

And this same pattern transposes so well to code. There's a common grammar to all programming languages (you first learn hello world, types, operations, functions, lists, string handling and stuff). And THEN come the parts where you learn the language's specific mechanics (like manual memory management in C, object orientation in C++, asynchronous behavior in JavaScript, or Rust's strict compiler checks).

And then I just started associating each game with a different language, to see where each one fits best :

  • Python : BotW : you can do a lot and roam free, one of the most frictionless.
  • C : Ocarina of Time : the big classic that revolutionized the game.
  • C++ : Majora's Mask : the second iteration that goes hard, but is also much darker.
  • JavaScript : Skyward Sword : one of the most controversial (lol).

Haven't played every Zelda game, haven't learned all the languages, help me complete the list :

u/Altruistic_Tomato162 — 4 days ago

Why is firmware development a constant battle? What was the single most painful moment in the last firmware you developed?

Mine was realizing the HAL layers weren't properly hooked up by my colleague. I spent 3 whole days understanding that he had made a beamforming algorithm impossible to parameterize (because it was tailor-made for our hardware accelerator module), while my boss was asking me to make the algorithms "agile". So I went to see my colleague to ask him to add a parameterization feature to the algorithm (I could have done it myself, but I had a huge backlog at that moment). Bro said "I could, but then again the boss doesn't know what he's talking about. The algorithm is optimized to run at its finest, it's not my problem now".

The problem is that bro had no actual notion of RF whatsoever. Beamforming has to be parameterizable because when you use it, it's usually to track objects moving through space, and if you cannot adapt your algorithm to "look wide" and "focus thin" on the observation space, your device will be dogshit. My boss, on the other hand, didn't understand the time it takes to actually develop a full HAL for the board we were using : memory segfaults, compile time, testing, and debugging all that took time.
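To show what I mean by "look wide" vs "focus thin", here's a toy stdlib-only Python sketch of a uniform linear array (nothing to do with our actual accelerator code, just the textbook array factor) : the number of active elements is exactly the kind of knob the algorithm needed to expose, since it directly trades beamwidth for gain.

```python
import cmath
import math

def array_factor(theta_deg, n_elements, steer_deg=0.0, spacing=0.5):
    """Normalized power pattern of a uniform linear array
    (element spacing given in wavelengths)."""
    theta = math.radians(theta_deg)
    steer = math.radians(steer_deg)
    phase = 2 * math.pi * spacing * (math.sin(theta) - math.sin(steer))
    af = sum(cmath.exp(1j * n * phase) for n in range(n_elements))
    return abs(af) ** 2 / n_elements ** 2

def half_power_beamwidth(n_elements):
    """Scan broadside +/-90 deg in 0.1 deg steps and measure
    the -3 dB mainlobe width."""
    angles = [a / 10.0 for a in range(-900, 901)]
    above = [a for a in angles if array_factor(a, n_elements) >= 0.5]
    return max(above) - min(above)

# More active elements -> narrower beam ("focus thin"),
# fewer elements -> wider beam ("look wide").
assert half_power_beamwidth(16) < half_power_beamwidth(4)
```

Hardcode the element count (like he did, effectively) and the device can only ever do one of those two jobs.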

I was sandwiched. What would you have done ?

u/Altruistic_Tomato162 — 14 days ago

Tired of this cycle: edit > manually flash > open serial monitor > copy-paste logs > repeat.

pip install nff closes the loop. Your board becomes a target in a proper iteration cycle.

Here's what a run looks like:

you: "Make the LED blink every 200 ms and print the state to serial"

Claude: [writes sketch] > [compiles] > [uploads to ESP32] > [reads serial] > done

Two modes: flash to real hardware over USB, or headless Wokwi simulation without touching a physical board. Claude can call serial_read(), flash(), reset_device(), and iterate on its own until the output matches what you asked for.

Works today on: ESP32 (CP210x / CH340), ESP8266 (FTDI), Arduino Uno, Mega, Nano, Leonardo

Known limitations:

- No library management, built-in ESP32 APIs and core Arduino libs only

- arduino-cli only, no PlatformIO, no Zephyr, no bare-metal ARM

- Single board per config file

- Wokwi-only simulation backend

- Clone boards with wrong VIDs need a manual --board flag

Adding a board is two lines of code. PlatformIO support, multi-board configs, better serial parsing, OpenOCD integration : all genuinely open if anyone wants to dig in.
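To give an idea of what "two lines" means, here's a simplified, hypothetical sketch of a VID/PID-keyed board table, not the actual nff internals (names and structure here are made up for illustration) :

```python
# Hypothetical sketch -- NOT nff's real code, just the shape that
# "adding a board is two lines" usually takes: a USB-ID-keyed table.
BOARDS = {
    (0x10C4, 0xEA60): "ESP32 (CP210x)",
    (0x1A86, 0x7523): "ESP32 (CH340)",
    (0x2341, 0x0043): "Arduino Uno",
    # Adding a board = one more entry:
    (0x2341, 0x8036): "Arduino Leonardo",
}

def detect(vid, pid, override=None):
    """Resolve a board name from USB IDs, with a manual escape hatch
    for clone boards that report the wrong VID (hence the --board flag)."""
    if override is not None:
        return override
    try:
        return BOARDS[(vid, pid)]
    except KeyError:
        raise LookupError(
            f"unknown device {vid:04x}:{pid:04x}, pass --board manually"
        ) from None
```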

MIT licensed: https://github.com/GLechevalier/nff

Happy to answer questions about the architecture or where it breaks. This is meant to be a lightweight project, so don't expect ultra-polished production-ready code. Open to feedback and contributions.

u/Altruistic_Tomato162 — 14 days ago

https://reddit.com/link/1szctjo/video/8yjiw2e097yg1/player

Every software project has CI. Hardware projects still have someone manually hitting "Upload" in the IDE and squinting at the serial monitor.

nff fixes that. It's a Python tool + MCP server that exposes your board as callable functions:

  • nff flash ./sketch.ino — compile and upload from the CLI
  • nff monitor — scripted serial capture
  • nff init — auto-detects your board by USB vendor ID, no config needed

You can script it, pipe it, schedule it. Uno, Mega, Nano, ESP32, ESP8266 supported out of the box. Adding a board is two lines.
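For example, here's a minimal Python gate you could drop into a CI step (hypothetical glue code : it only assumes the `nff flash` and `nff monitor` commands shown above, nothing else) :

```python
import re
import subprocess

def found(serial_text, pattern):
    """True if any captured serial line matches the regex."""
    return any(re.search(pattern, line) for line in serial_text.splitlines())

def flash_and_check(sketch, pattern, capture_seconds=30):
    """CI-style gate: flash, capture serial for a bounded time, then grep
    for the expected line. `nff monitor` streams, so the timeout is what
    ends the capture here."""
    subprocess.run(["nff", "flash", sketch], check=True)
    try:
        cap = subprocess.run(["nff", "monitor"], capture_output=True,
                             text=True, timeout=capture_seconds)
        out = cap.stdout
    except subprocess.TimeoutExpired as exc:
        out = exc.stdout or ""
        if isinstance(out, bytes):  # TimeoutExpired may hand back bytes
            out = out.decode(errors="replace")
    if not found(out, pattern):
        raise SystemExit(f"expected a serial line matching /{pattern}/")

# Usage (from a cron job, a Makefile, a CI step...):
# flash_and_check("./sketch.ino", r"LED state: (ON|OFF)")
```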

pip install nff

Fully open source at https://github.com/GLechevalier/nff, full roadmap at https://nanoforgeflow.com. What would make this useful in your workflow?

u/Altruistic_Tomato162 — 15 days ago