u/Bhumi1979

Most agent systems trust the agent.

If the agent says “task complete”, the system accepts it.

I’ve been experimenting with the opposite idea:

what if the system treated the agent as untrusted?

Built a small kernel that does one thing:

→ the agent can propose an outcome

→ the kernel decides if it’s true

Example:

Process: declares SUCCESS (without sufficient evidence)

Kernel: REJECTED (reason: outcome does not match recorded events)

Process: adds more evidence (still insufficient)

Kernel: REJECTED

Process: provides the required evidence

Kernel: ACCEPTED

The key difference:

the outcome is derived from recorded state, not the agent’s claim.

The kernel maintains its own event log and evaluates outcomes independently.
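The loop above can be sketched in a few lines of Python. To be clear, this is my reading of the idea, not the actual kernel API — the names (`Kernel`, `record`, `adjudicate`) and the "required evidence" check are all hypothetical:

```python
# Hypothetical sketch: the agent proposes, the kernel adjudicates.
# The event log is kernel-owned and append-only; the verdict is derived
# from recorded state, never from the agent's claim.

class Kernel:
    def __init__(self, required_evidence):
        self._events = []                     # append-only, kernel-owned
        self._required = set(required_evidence)

    def record(self, kind, payload=None):
        """Tools append evidence; nothing is ever edited or deleted."""
        self._events.append((kind, payload))

    def adjudicate(self, claimed_outcome):
        """Derive the outcome from recorded events, not the claim."""
        seen = {kind for kind, _ in self._events}
        missing = self._required - seen
        if claimed_outcome == "SUCCESS" and not missing:
            return {"verdict": "ACCEPTED"}
        return {"verdict": "REJECTED",
                "reason": "outcome does not match recorded events",
                "missing": sorted(missing)}
```

With `required_evidence={"tests_passed", "artifact_written"}`, calling `adjudicate("SUCCESS")` is REJECTED until both kinds of evidence have been recorded — the same REJECTED → REJECTED → ACCEPTED sequence as the example.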

Curious if others have explored systems where agent output is treated as untrusted input?

reddit.com
u/Bhumi1979 — 12 days ago

Been working on something for a while and wanted to share it early with people who might have opinions.

The core idea: AI agents need a substrate the same way software needs an operating system. Not a framework on top of a model. A layer underneath everything that enforces how cognition is allowed to behave.

Shakun is that layer. The kernel enforces a small set of laws: every cognitive act is owned by an identity; memory is separated into types with strict rules; outcomes are adjudicated by the kernel, not declared by the agent; habits only form from verified success. The model reasons freely within those laws. The kernel doesn't touch reasoning at all.
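The first law ("every cognitive act is owned by an identity") can be sketched as a hard check at the log boundary. Everything here is illustrative — the names and shapes are my assumptions, not Shakun's code:

```python
# Hypothetical sketch of one law: an act with no registered owner is
# refused at the kernel boundary, before it can reach the event log.

class LawViolation(Exception):
    pass

class Substrate:
    def __init__(self):
        self._identities = set()
        self._log = []                # append-only event log

    def register(self, agent_id):
        """Identities must exist before they can act."""
        self._identities.add(agent_id)

    def act(self, agent_id, act):
        """Every logged act carries its owner; unowned acts are rejected."""
        if agent_id not in self._identities:
            raise LawViolation("cognitive act has no registered owner")
        self._log.append({"owner": agent_id, "act": act})
```

The point of putting this in the substrate rather than the agent: the agent can't forget (or choose not) to attribute an act, because unattributed acts never enter the log at all.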

The result: a system where everything is traceable, auditable, and rebuildable from an append-only event log. Agents accumulate real memory across sessions. Two agents can interpret the same evidence differently without corrupting each other.
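"Rebuildable from an append-only event log" usually means state is a pure fold over events. A minimal sketch under that assumption (the event shapes are made up for illustration):

```python
# Sketch: state is never stored directly; it is recomputed as a pure
# fold over the append-only log, so any prefix of the log is replayable.

from functools import reduce

def apply_event(state, event):
    """Pure reducer: returns a new state, never mutates its inputs."""
    if event["type"] == "fact":
        return {**state, "facts": state.get("facts", ()) + (event["value"],)}
    return state

def rebuild(log):
    """Replay the whole log from an empty state."""
    return reduce(apply_event, log, {})
```

This also gives you the "two agents, same evidence, different interpretations" property almost for free: each agent folds the same shared log with its own reducer into its own state, so neither can corrupt the other's view.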

Python reference implementation. Foundation is tested and solid.

Curious if anyone else has been thinking about AI infrastructure at this level: below the agent, below the framework, at the substrate.

reddit.com
u/Bhumi1979 — 15 days ago
▲ 6 r/opencode+1 crossposts
