Synrix Kernel
Synrix originally started as a memory add-on for a larger system.
I needed something that could hold a large amount of structured state, survive crashes, and stay fast under heavy access patterns, all without dragging in a database server or degrading at scale the way graph-style approaches often do.
The more I solved those constraints, the deeper in the stack the design had to go.
Eventually it stopped making sense as an add-on and became its own standalone in-process memory kernel.
Synrix stores state as fixed-size nodes in a memory-mapped lattice file with a WAL-backed write path. Instead of loading the whole dataset into RAM, the OS pages in only the working set.
Example: a 500k-node lattice where the workload only touches ~1k nodes can stay around ~1.2 MB resident instead of ~580 MB fully loaded.
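The fixed-offset idea can be sketched in a few lines: with fixed-size slots, a node's file offset is just `index * NODE_SIZE`, and a memory map lets the OS fault in only the pages actually touched. This is a minimal illustration, not Synrix's real record format; the 64-byte name field and payload split are hypothetical.

```python
import mmap
import os
import struct
import tempfile

# Node size taken from the text above; the name/payload field split is
# hypothetical, not Synrix's actual on-disk layout.
NODE_SIZE = 1216
NAME_LEN = 64
PAYLOAD_LEN = NODE_SIZE - NAME_LEN

def open_lattice(path, num_nodes):
    """Size the file for num_nodes fixed slots, then memory-map it so the
    OS pages in only the slots we actually touch."""
    with open(path, "wb") as f:
        f.truncate(num_nodes * NODE_SIZE)
    f = open(path, "r+b")
    return f, mmap.mmap(f.fileno(), num_nodes * NODE_SIZE)

def write_node(mm, index, name, payload):
    off = index * NODE_SIZE  # O(1): fixed-size slots need no offset table
    mm[off:off + NODE_SIZE] = struct.pack(
        f"<{NAME_LEN}s{PAYLOAD_LEN}s", name.encode(), payload)

def read_node(mm, index):
    off = index * NODE_SIZE
    name, payload = struct.unpack(
        f"<{NAME_LEN}s{PAYLOAD_LEN}s", mm[off:off + NODE_SIZE])
    return name.rstrip(b"\x00").decode(), payload.rstrip(b"\x00")

path = os.path.join(tempfile.mkdtemp(), "lattice.bin")
f, mm = open_lattice(path, 500)   # 500 slots; only the touched page is dirtied
write_node(mm, 42, "sensor/temp", b"21.5C")
print(read_node(mm, 42))          # -> ('sensor/temp', b'21.5C')
mm.close(); f.close()
```

Writes in the real kernel go through the WAL first; this sketch skips that and shows only the slot arithmetic.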
Core properties:
- Fixed 1,216-byte nodes (cache-line aligned layout)
- O(1) exact-name lookup via in-memory hash index
- Prefix traversal via trie
- Automatic crash recovery on next open
- Two files on disk: structural lattice + vector sidecar
- No server process
- No network dependency
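The two lookup structures above can be sketched together: a hash map gives O(1) average exact-name lookup, and a character trie supports prefix traversal. The class and method names here are illustrative, not Synrix's API, and the real in-memory index layouts will differ.

```python
class PrefixIndex:
    """Toy version of the two in-memory indexes: hash map for exact
    lookup, nested-dict trie for prefix traversal."""

    def __init__(self):
        self.by_name = {}   # exact name -> node index, O(1) average
        self.trie = {}      # nested dicts keyed by character

    def insert(self, name, node_index):
        self.by_name[name] = node_index
        node = self.trie
        for ch in name:
            node = node.setdefault(ch, {})
        node["\0"] = node_index   # sentinel marks a complete name

    def lookup(self, name):
        return self.by_name.get(name)

    def prefix(self, prefix):
        """Yield (name, node_index) for every name under the prefix."""
        node = self.trie
        for ch in prefix:
            node = node.get(ch)
            if node is None:
                return
        stack = [(prefix, node)]
        while stack:
            name, cur = stack.pop()
            for ch, child in cur.items():
                if ch == "\0":
                    yield name, child
                else:
                    stack.append((name + ch, child))

idx = PrefixIndex()
idx.insert("sensor/temp", 42)
idx.insert("sensor/humidity", 43)
idx.insert("actuator/fan", 44)
print(idx.lookup("sensor/temp"))       # -> 42
print(sorted(idx.prefix("sensor/")))   # -> [('sensor/humidity', 43), ('sensor/temp', 42)]
```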
It also ended up being a strong fit for AI agents and local autonomous systems, so I embedded vector search directly into the kernel:
- 512-dimensional float32 similarity search
- self-calibrating IVF pipeline
- ARM64 NEON SDOT fast path on supported hardware
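To make the IVF idea concrete: vectors are bucketed under their nearest centroid, and a query scans only the closest bucket(s) instead of every vector. This is a deliberately tiny sketch; the real kernel uses 512-dim float32 and self-calibrates its centroids, whereas the 2-dim vectors and hand-picked centroids here are purely illustrative.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def nearest(centroids, v):
    """Index of the centroid closest to v by squared Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda i: sum((c - x) ** 2 for c, x in zip(centroids[i], v)))

class IVFIndex:
    """Toy inverted-file index: one bucket of vectors per centroid."""

    def __init__(self, centroids):
        self.centroids = centroids
        self.buckets = [[] for _ in centroids]

    def add(self, vec_id, vec):
        self.buckets[nearest(self.centroids, vec)].append((vec_id, vec))

    def search(self, query, nprobe=1):
        """Scan only the nprobe closest buckets; return the id with the
        highest dot-product similarity seen there."""
        order = sorted(range(len(self.centroids)),
                       key=lambda i: sum((c - x) ** 2
                                         for c, x in zip(self.centroids[i], query)))
        best_id, best_sim = None, -math.inf
        for i in order[:nprobe]:
            for vec_id, vec in self.buckets[i]:
                sim = dot(query, vec)
                if sim > best_sim:
                    best_id, best_sim = vec_id, sim
        return best_id

ivf = IVFIndex(centroids=[(1.0, 0.0), (0.0, 1.0)])
ivf.add("a", (0.9, 0.1))
ivf.add("b", (0.1, 0.95))
ivf.add("c", (0.8, 0.3))
print(ivf.search((1.0, 0.0)))   # -> 'a'
```

The SDOT fast path mentioned above accelerates exactly the inner `dot` loop: NEON's SDOT instruction computes packed integer dot products in hardware, which is where the 8.2× over scalar comes from.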
Measured on Jetson Orin Nano:
- Prefix query P50/P99: 0.6 / 0.7 ms
- Vector search @ 50k vectors (99.9% recall): ~2 ms
- ARM64 SDOT path vs scalar: 8.2× faster
Current limitations worth knowing:
- Single-writer model (not concurrent multi-writer)
- Fixed node size means large variable payloads are chunked
- ARM64 gets the fastest SIMD path; x86 currently falls back to scalar in some paths
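The chunking limitation above can be sketched as follows. The per-chunk header (sequence number, total chunks, chunk length) is hypothetical; the real node layout differs, but the principle is the same: anything larger than one node's data area is split across several fixed-size nodes and stitched back together on read.

```python
import struct

NODE_SIZE = 1216
HEADER = struct.Struct("<HHI")         # hypothetical: seq, total, chunk length
CHUNK_DATA = NODE_SIZE - HEADER.size   # usable bytes per node

def chunk_payload(payload: bytes):
    """Split a variable-size payload into fixed NODE_SIZE chunks,
    each padded to exactly one node."""
    pieces = [payload[i:i + CHUNK_DATA]
              for i in range(0, len(payload), CHUNK_DATA)] or [b""]
    return [HEADER.pack(seq, len(pieces), len(p)) + p.ljust(CHUNK_DATA, b"\x00")
            for seq, p in enumerate(pieces)]

def reassemble(chunks):
    out = []
    for c in chunks:
        seq, total, length = HEADER.unpack_from(c)
        out.append(c[HEADER.size:HEADER.size + length])
    return b"".join(out)

blob = b"x" * 3000                     # larger than one node's data area
chunks = chunk_payload(blob)
print(len(chunks))                     # -> 3
print(reassemble(chunks) == blob)      # -> True
```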
Cross-platform builds are green on Linux x86_64, Linux aarch64, Windows, and macOS.