u/Dazzling_Respond_209

I built a visual editor where you can see what each layer in a PyTorch model actually does, open to feedback from people learning ML

When I was learning ML, I had a hard time mapping the prose in papers ("a 3-layer transformer with 8 heads...") to actual PyTorch code. I ended up redrawing every architecture by hand before I trusted that I actually understood it.

So I built a tool that does that step for you. You can:

  • type a description ("a small CNN for CIFAR-10") and watch the layers appear on a canvas
  • paste an arXiv link and see the paper's architecture parsed into editable nodes
  • load a HuggingFace model (bert-base, vit, etc.) and inspect its real layer graph
  • click any layer to see the params, the output shape, and the PyTorch code that generated it
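For anyone curious what that last bullet looks like in plain PyTorch, here's roughly the manual version the tool automates: walk `named_modules()`, hook each layer, run a dummy input through, and print each layer's param count and output shape. This is just a sketch with my own hypothetical "small CNN for CIFAR-10", not the tool's actual output:

```python
import torch
import torch.nn as nn

# Hypothetical small CNN for CIFAR-10 (3x32x32 inputs, 10 classes).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),
)

shapes = {}

def make_hook(name):
    # Record one layer's output shape; return None so the hook
    # doesn't alter the forward pass.
    def hook(module, inputs, output):
        shapes[name] = tuple(output.shape)
    return hook

hooks = [m.register_forward_hook(make_hook(name))
         for name, m in model.named_modules()
         if name]  # skip the root Sequential container

with torch.no_grad():
    model(torch.zeros(1, 3, 32, 32))  # one CIFAR-10 sized dummy image

for h in hooks:
    h.remove()

for name, m in model.named_modules():
    if name:
        n_params = sum(p.numel() for p in m.parameters())
        print(f"{name}: {type(m).__name__:<9} params={n_params:<6} out={shapes[name]}")
```

Doing this by hand for every model I read about is exactly the friction the visual editor is meant to remove.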

The goal is to make the "this is what a ResNet actually is" moment faster. It's free to try, no signup needed for the visual editor (the AI assist part asks you to sign in because it costs us API tokens).

Short demo (no audio, ~3 min): https://neurarch.com/landing
Try it directly (free): https://neurarch.com

Open to any feedback — especially:

  • which architecture or paper would you most want to see decomposed this way?
  • what's confusing when you're learning a new model architecture, and could a visual layer-by-layer view help?

Not trying to sell anything in the comments, just want to know if this is actually useful for people who are still building intuition.

u/Dazzling_Respond_209 — 3 days ago