
Hi everybody,
I wanted to share a small project I’ve been working on: tiny-torch, a very minimal, work-in-progress reimplementation of some core PyTorch ideas from scratch.
The goal is not to replace PyTorch, obviously, but to better understand what’s happening under the hood: tensors, autograd, backward passes, modules, layers, and neural networks.
Right now it’s still very basic, but I’ve been using it as a learning project to explore things like:
- building a tiny Tensor object
- implementing automatic differentiation
- writing common tensor ops
- supporting linear and convolution layers
- understanding how gradients actually flow through computation graphs (a couple of rough sketches of these ideas follow right after this list)
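
To give a flavor of what I mean, here's a rough sketch of the core autograd pattern. This is a minimal illustration, not the actual tiny-torch code; the names and API are simplified assumptions:

```python
import numpy as np

class Tensor:
    """A minimal tensor that records how it was computed so we can backprop."""

    def __init__(self, data, parents=(), backward_fn=None):
        self.data = np.asarray(data, dtype=np.float64)
        self.grad = np.zeros_like(self.data)
        self._parents = parents          # tensors this one was computed from
        self._backward_fn = backward_fn  # propagates self.grad to the parents

    def __add__(self, other):
        out = Tensor(self.data + other.data, parents=(self, other))
        def _backward():
            # d(a + b)/da = 1 and d(a + b)/db = 1: pass the gradient straight through
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = _backward
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, parents=(self, other))
        def _backward():
            # product rule: scale the incoming gradient by the other operand
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = _backward
        return out

    def backward(self):
        # topologically sort the graph, then run each node's backward_fn in reverse
        topo, visited = [], set()
        def build(t):
            if id(t) not in visited:
                visited.add(id(t))
                for p in t._parents:
                    build(p)
                topo.append(t)
        build(self)
        self.grad = np.ones_like(self.data)  # seed: d(out)/d(out) = 1
        for t in reversed(topo):
            if t._backward_fn is not None:
                t._backward_fn()

# usage: z = x * y + x, so dz/dx = y + 1 and dz/dy = x
x, y = Tensor(2.0), Tensor(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)  # 4.0, 2.0
```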
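The same idea extends to modules: a layer is just parameters plus a forward rule, with a backward rule you derive by hand. Here's a standalone linear-layer sketch in plain NumPy (again illustrative; the repo's module API may differ):

```python
import numpy as np

class Linear:
    """y = x @ W + b, with hand-derived backward rules."""

    def __init__(self, in_features, out_features):
        # small random init; real libraries use more careful schemes
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x):
        self._x = x  # cache the input; backward needs it
        return x @ self.W + self.b

    def backward(self, dy):
        # dy is dL/dy with shape (batch, out_features)
        self.dW = self._x.T @ dy   # (in, batch) @ (batch, out)
        self.db = dy.sum(axis=0)   # bias gradient sums over the batch
        return dy @ self.W.T       # dL/dx, passed to the previous layer

layer = Linear(4, 3)
x = np.random.randn(8, 4)
y = layer.forward(x)
dx = layer.backward(np.ones_like(y))
print(dx.shape, layer.dW.shape, layer.db.shape)  # (8, 4) (4, 3) (3,)
```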
I’ve found that recreating even a tiny slice of PyTorch makes a lot of deep learning concepts feel much less magical. Things like broadcasting, matmul gradients, reshape/view semantics, masking, and attention internals suddenly become much more concrete when you have to implement them yourself.
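
Broadcasting is a good example. In the forward pass, a + b with mismatched shapes silently expands dimensions, but in the backward pass you have to undo that by summing gradients over the broadcast axes. A small standalone illustration (plain NumPy, not the repo's code):

```python
import numpy as np

def unbroadcast(grad, shape):
    """Sum grad down to `shape`, reversing what broadcasting expanded."""
    # sum away leading axes that broadcasting added
    while grad.ndim > len(shape):
        grad = grad.sum(axis=0)
    # sum over axes that were size 1 and got stretched
    for i, dim in enumerate(shape):
        if dim == 1:
            grad = grad.sum(axis=i, keepdims=True)
    return grad

a = np.ones((3, 4))   # matrix
b = np.ones((1, 4))   # row vector, broadcast across 3 rows
out = a + b
d_out = np.ones_like(out)
# each row of b contributed to 3 rows of out, so its gradient sums to 3s
print(unbroadcast(d_out, b.shape))  # [[3. 3. 3. 3.]]
```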
The repo is here: https://github.com/drkleena/tiny-torch
If you're trying to get a deeper grasp of machine learning, I recommend checking it out to see how things work under the hood.
Thanks!