
https://reddit.com/link/1t5w6lk/video/kexb2fyt9mzg1/player
I’ve been experimenting with OMK’s project ontology workflow, and it’s actually more straightforward than I expected.
The basic flow is just prompting it with something like:
>
From there, OMK scans the current workspace context, picks out the important components as nodes, and maps dependencies between them as edges. So instead of hand-building a graph from scratch, you basically let it infer the project structure first.
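For anyone wondering what that inferred structure might look like: here's a minimal sketch of a nodes/edges ontology map. To be clear, the field names (`id`, `type`, `path`, `relation`) and the example components are my guesses, not OMK's actual schema.

```python
import json

# Hypothetical ontology map -- field names and contents are illustrative,
# NOT OMK's real output format.
ontology = {
    "nodes": [
        {"id": "auth_service", "type": "module", "path": "src/auth/service.py"},
        {"id": "user_model",   "type": "class",  "path": "src/models/user.py"},
        {"id": "db_client",    "type": "module", "path": "src/db/client.py"},
    ],
    "edges": [
        {"source": "auth_service", "target": "user_model", "relation": "imports"},
        {"source": "user_model",   "target": "db_client",  "relation": "depends_on"},
    ],
}

# Persist it the way OMK apparently does, as a .json ontology map.
with open("ontology.json", "w") as f:
    json.dump(ontology, f, indent=2)
```

The key property is just that every edge references node ids that actually exist, which is what makes the graph checkable later.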
The output gets saved as a .json ontology map. Raw JSON graphs are not exactly fun to inspect, so the useful part is that OMK has a built-in skill that converts that ontology JSON into an interactive .html visualization.
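I haven't seen what OMK's built-in skill actually emits, but for a rough idea of the JSON-to-HTML step, here's a stand-in that renders the same nodes/edges shape with the vis-network library off a CDN. Everything here (the schema, the helper name, the styling) is my own assumption, not OMK's implementation.

```python
import json

def ontology_to_html(ontology: dict, out_path: str = "ontology.html") -> str:
    """Render a nodes/edges ontology dict as an interactive HTML graph.

    A minimal stand-in for OMK's built-in skill, using vis-network from a
    CDN; the real skill's output almost certainly differs.
    """
    # Map the assumed ontology schema onto vis-network's node/edge format.
    nodes = [{"id": n["id"], "label": n["id"]} for n in ontology["nodes"]]
    edges = [{"from": e["source"], "to": e["target"],
              "label": e.get("relation", "")} for e in ontology["edges"]]
    html = f"""<!DOCTYPE html>
<html>
<head>
  <script src="https://unpkg.com/vis-network/standalone/umd/vis-network.min.js"></script>
</head>
<body>
  <div id="graph" style="height: 100vh;"></div>
  <script>
    const data = {{
      nodes: new vis.DataSet({json.dumps(nodes)}),
      edges: new vis.DataSet({json.dumps(edges)}),
    }};
    new vis.Network(document.getElementById("graph"), data, {{}});
  </script>
</body>
</html>"""
    with open(out_path, "w") as f:
        f.write(html)
    return out_path
```

Open the resulting file in a browser and you get a drag-and-zoom graph, which is roughly the kind of view the skill seems to produce.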
That HTML view makes it much easier to sanity-check whether the model actually understood your architecture instead of just vaguely summarizing files.
To be clear, I’m not claiming this is a perfect semantic knowledge graph implementation. Dynamic node updates, complex relationships, and schema correction still have rough edges. You’ll probably still hit cases where the graph needs manual cleanup.
But pragmatically, it does seem useful for reducing hallucinations.
Instead of letting the Kimi model rely only on a long, messy, decaying context window, the ontology JSON gives it a more stable reference point. That seems to help prevent the usual “invented variable / forgotten architecture rule / fake dependency” problem.
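One concrete way that stable reference point can pay off: you can mechanically check a model's claimed dependencies against the ontology instead of trusting the context window. This is a toy grounding check of my own, not an OMK feature, and it assumes the same hypothetical nodes/edges schema as above.

```python
def find_fake_dependencies(ontology: dict, claimed_edges: list) -> list:
    """Flag dependencies a model asserts that don't exist in the ontology map.

    Toy grounding check -- not part of OMK, just one way the ontology JSON
    could act as a stable reference against hallucinated structure.
    """
    known = {(e["source"], e["target"]) for e in ontology["edges"]}
    node_ids = {n["id"] for n in ontology["nodes"]}
    problems = []
    for src, dst in claimed_edges:
        if src not in node_ids or dst not in node_ids:
            problems.append((src, dst, "unknown node"))   # invented component
        elif (src, dst) not in known:
            problems.append((src, dst, "edge not in ontology"))  # fake dependency
    return problems
```

Anything this flags is either an invented component or a dependency the graph never recorded, which is exactly the failure mode the post is talking about.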
Has anyone else tried rendering their workspace into the HTML ontology view yet?
I’m especially curious how well the JSON-to-HTML skill holds up on larger enterprise-scale codebases.