
I built a small structural gate for LLM outputs. It does not check truth.
I am working on a small project called OMNIA.
The idea is simple:
OMNIA does not try to decide whether an answer is true.
It checks whether an output is structurally admissible.
Examples of what it flags:
- an incomplete answer
- an unevaluated expression instead of a final answer
- wrong output format
- a partial answer
- instability under small input variations
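To make the distinction concrete, here is a minimal sketch of what a structural gate could look like. This is my own illustration, not OMNIA's actual API: the function name, the rules, and the `expected_format` parameter are all assumptions.

```python
import re

def structurally_admissible(output: str, expected_format: str = "integer") -> bool:
    """Hypothetical structural gate: checks form, never truth."""
    text = output.strip()
    if not text:
        return False  # empty / incomplete answer
    if text.endswith(("...", "+", "-", "*", "/", "=")):
        return False  # trails off or stops mid-expression
    if expected_format == "integer":
        # an unevaluated expression like "2 + 3" is not a final answer
        return re.fullmatch(r"-?\d+", text) is not None
    return True

print(structurally_admissible("5"))      # True
print(structurally_admissible("2 + 3"))  # False: expression, not a final answer
print(structurally_admissible(""))       # False: incomplete
print(structurally_admissible("7"))      # True even if the right answer is 5
```

The last line is the point: a well-formed wrong answer passes, because the gate only looks at structure.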
These are not the same thing as semantic correctness.
A well-formed wrong answer can still pass OMNIA.
That is intentional.
The current boundary is:
structural validity != semantic correctness
So the intended pipeline is:
LLM -> OMNIA -> semantic evaluator -> decision
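That pipeline can be sketched as plain function composition. Everything here is a stand-in: the gate, the semantic evaluator, and the decision strings are placeholders I made up to show where the boundary sits, not OMNIA's implementation.

```python
from typing import Callable

def pipeline(llm_output: str,
             gate: Callable[[str], bool],
             semantic_eval: Callable[[str], bool]) -> str:
    """Structural gate runs first; semantics are only checked if it passes."""
    if not gate(llm_output):
        return "reject: structurally inadmissible"
    if not semantic_eval(llm_output):
        return "reject: semantically wrong"
    return "accept"

# Stand-ins for a task whose correct final answer is "5".
gate = lambda s: s.strip().isdigit()        # structural: is it a bare integer?
semantic_eval = lambda s: s.strip() == "5"  # semantic: is it the right integer?

print(pipeline("2 + 3", gate, semantic_eval))  # reject: structurally inadmissible
print(pipeline("7", gate, semantic_eval))      # reject: semantically wrong
print(pipeline("5", gate, semantic_eval))      # accept
```

The middle case is the interesting one: "7" clears the structural gate and is only caught downstream, which is exactly the division of labour the boundary implies.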
I tested this through several small validation stages. The most useful result so far: the V9 stage catches structural incompleteness and malformed outputs. The scope stays explicitly limited.
Repo:
https://github.com/Tuttotorna/OMNIA
DOI:
https://doi.org/10.5281/zenodo.19739481
This is still early work.
I am looking for criticism on the boundary, not hype.