Why HST is the missing puzzle piece
To put it simply for those who find the math a bit dense:
For years, the 3D industry has had incredible magnifying glasses: algorithms like Functional Maps and ZoomOut. At full strength they can match features as fine as skin pores between two models.
The problem? Point a magnifying glass at the wrong spot and you see nothing useful. These methods were "blind" at the start: with no map of the whole object, they often got lost, flipped left for right, or got stuck in local minima.
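To make the "magnifying glass" concrete: at its core, a functional map is a small matrix C that aligns descriptor functions expressed in the spectral bases of two shapes, usually via least squares. Here is a toy NumPy sketch of just that step. Everything in it is synthetic (random coefficients standing in for Laplace-Beltrami projections of real descriptors); it is an illustration of the classic formulation, not the HST pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spectral descriptor coefficients: in a real pipeline these are
# descriptor functions projected onto each shape's first k Laplace-Beltrami
# eigenfunctions. Here: k basis functions, d descriptors.
k, d = 20, 50
a_coeffs = rng.standard_normal((k, d))      # source shape

# A made-up "ground-truth" functional map: a random orthogonal k x k matrix.
q, _ = np.linalg.qr(rng.standard_normal((k, k)))
b_coeffs = q @ a_coeffs                     # target shape

# The classic functional-map step: solve  min_C ||C A - B||_F  by least
# squares. Transposing turns it into the standard lstsq form A^T C^T = B^T.
C = np.linalg.lstsq(a_coeffs.T, b_coeffs.T, rcond=None)[0].T

# With clean synthetic data the map is recovered exactly.
assert np.allclose(C, q)
```

The fragility the post describes lives exactly here: this solve is only as good as the descriptors and the initial alignment fed into it, which is the gap a global initializer is meant to close.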
HST is the "Map and Compass":
- HST doesn't care about the tiny pores yet. It looks at the internal volume to understand the global structure.
- It creates a stable intermediate state—a perfect "bridge" between two shapes.
- It then hands this "bridge" over to the magnifying glasses (FM/ZoomOut).
The Result: Before, you had to manually guide the "glass" or pray that a random start would work. Now, HST solves the global orientation automatically, and the other methods just do the fine-tuning they are good at.
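The handoff described above can be illustrated with a toy ZoomOut-style refinement loop: start from a coarse spectral map (the role HST plays), then alternate converting it to a pointwise map and re-fitting it with one more eigenfunction. This sketch is entirely synthetic and is not the HST implementation: the "eigenbases" are random orthonormal matrices over the same point set, so the true correspondence is the identity and the coarse initializer is just an identity matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k0, k_max = 200, 4, 12

# Toy "eigenbases" of two shapes. Both shapes are the same point set here,
# so the bases coincide and the true map is the identity permutation.
phi, _ = np.linalg.qr(rng.standard_normal((n, k_max)))  # shape M
psi = phi.copy()                                        # shape N

# Coarse k0 x k0 functional map from a global initializer (HST's role in
# the post); a stand-in identity works for this toy.
C = np.eye(k0)

# ZoomOut-style upsampling: functional map -> pointwise map -> bigger
# functional map, growing the spectral resolution one step per iteration.
for k in range(k0, k_max):
    # Pointwise map via nearest neighbors in the spectral embedding:
    # row i of (psi C) should sit near row T[i] of phi.
    src = psi[:, :k] @ C
    d2 = ((src[:, None, :] - phi[None, :, :k]) ** 2).sum(-1)
    T = d2.argmin(axis=1)
    # Re-fit a (k+1) x (k+1) map from the pointwise correspondence.
    C = np.linalg.lstsq(psi[:, :k + 1], phi[T, :k + 1], rcond=None)[0]

# In this toy setup T converges to the identity permutation.
```

The point of the sketch is the division of labor: the loop only refines, so if C starts from a globally wrong orientation, nearest neighbors lock onto the wrong side and the loop polishes the mistake. A global initializer fixes the start, not the refinement.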
By putting the whole pipeline on the GPU, we turned a 2.5-hour "guessing game" into a 13-minute automated run. I’ve finally completed the puzzle by adding the stabilization layer that was missing for decades.
https://orcid.org/0009-0003-9680-3333
https://zenodo.org/records/20074328
https://github.com/sel8888/harmonic-shape-transform-2026-koncept