TL;DR: I’ve developed a novel dynamic tree algorithm that completely abandons pointer-chasing in favor of a flat-buffer Structure of Arrays (SoA) layout. I'm trying to map out the best real-world use cases for it before I publish the whitepaper.
The "Black Box" Specs:
I designed this for read-heavy hierarchies where cache-misses are the main bottleneck. The properties are:
• Reads: True O(1) ancestor resolution.
• Mutations: Moving a subtree is strictly O(M), where M is the subtree size. However, because the rewrite is zero-allocation and touches only contiguous memory, it brute-forces the O(M) copy at close to memory-bandwidth speed.
• Benchmarks: At n = 10,000 (small enough to stay L3-resident), this architecture beats a highly optimized Link-Cut Tree by roughly 800x on read-heavy random topologies (113 µs vs 90.1 ms).
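For context on how a flat layout can deliver O(1) ancestor resolution at all: one standard technique (my illustration, not necessarily what's inside the black box) is to store pre-order entry/exit indices in parallel arrays, so an ancestor query is two array loads and two comparisons. A minimal Python sketch of that idea:

```python
# Sketch of ONE known way to get O(1) ancestor checks from a flat SoA layout:
# pre-order entry/exit intervals. Not the author's algorithm, which is a black box.

class FlatTree:
    def __init__(self, parents):
        # parents[i] = parent index of node i, or -1 for the root
        n = len(parents)
        self.entry = [0] * n   # SoA column: pre-order entry index
        self.exit = [0] * n    # SoA column: index just past the last descendant
        children = [[] for _ in range(n)]
        root = 0
        for i, p in enumerate(parents):
            if p < 0:
                root = i
            else:
                children[p].append(i)
        # Iterative DFS assigns nested [entry, exit) intervals to each subtree.
        clock = 0
        stack = [(root, False)]
        while stack:
            node, done = stack.pop()
            if done:
                self.exit[node] = clock
            else:
                self.entry[node] = clock
                clock += 1
                stack.append((node, True))
                for c in reversed(children[node]):
                    stack.append((c, False))

    def is_ancestor(self, a, d):
        # O(1), no pointer chasing: d's interval nests inside a's interval.
        # (A node counts as an ancestor of itself under this test.)
        return self.entry[a] <= self.entry[d] and self.exit[d] <= self.exit[a]
```

In a scheme like this, a subtree also occupies one contiguous run of indices, which is what lets an O(M) move become a straight-line copy of array slices rather than a pointer walk.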
The Question:
I know the conventional wisdom says O(M) mutation is a dealbreaker, but the microbenchmarks show that mechanical sympathy (cache locality) can swamp the asymptotic advantage for these specific workloads.
Where is this trade-off an absolute no-brainer?
I'm currently looking at compiler IRs (dominator trees) and rapid state tracking, e.g. orchestrating finite state machines or command-and-control (C2) hierarchies.
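The dominator-tree case is a nice concrete fit: "does block a dominate block b" is exactly an ancestor query in the dominator tree, so with a flat entry/exit layout (again, my assumed layout, not the black box's actual internals) it collapses to an interval-containment test. A tiny hand-filled example:

```python
# Hedged illustration: dominance query as O(1) interval containment over flat arrays.
# Hand-filled values for a dominator tree where block 0 dominates {1, 2} and
# block 1 dominates {3}; a real compiler would fill these via a pre-order walk.
ENTRY = [0, 1, 3, 2]   # pre-order entry index per basic block
EXIT = [4, 3, 4, 3]    # index just past each block's last dominated block

def dominates(a: int, b: int) -> bool:
    # O(1): two loads per array, no chasing idom pointers up the tree.
    return ENTRY[a] <= ENTRY[b] and EXIT[b] <= EXIT[a]
```

That turns the inner loop of passes that hammer dominance queries (e.g. code motion) into pure sequential array traffic.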
If you had a dynamic tree that read in O(1) and mutated purely in contiguous memory, where would you drop it in?
What systems are currently bottlenecked by O(log n) pointer-chasing?