u/No_Skill_8393 — 3 hours ago

Static SOUL.md files are boring. So we built an open-source AI agent that psychologically profiles you and adapts in real time — and refuses to be sycophantic about it.

Every AI agent today has the same problem: they're born fresh every conversation. No memory of who you are, how you think, or what you need. The "fix" is a personality file — a static SOUL.md that says "be friendly and helpful." It never changes. It treats a senior engineer the same as a first-year student. It treats Monday-morning-you the same as Friday-at-3AM-you.

We thought that was embarrassing. So we built something different.

THE VISION

What if your AI agent actually knew you? Not just what you asked, but HOW you think. Whether you want the three-word answer or the deep explanation. Whether you need encouragement or honest pushback. Whether your trust has been earned or you're still sizing it up.

And what if the agent had its own identity — values it won't compromise, opinions it'll defend, boundaries it'll hold — instead of rolling over and agreeing with everything you say?

That's Tem Anima. Emotional intelligence that grows. Not from a file. From every conversation.

WHAT THIS MEANS FOR YOU

Your AI agent learns your communication style in the first 25 turns. Direct and terse? It stops the preamble. Verbose and curious? It gives you the full picture with analogies. Technical? Code blocks first, explanation optional. Beginner? Concepts before implementation.

It builds trust over time. New users get professional, measured responses. After hundreds of interactions, you get earned familiarity — shorthand, shared references, the kind of efficiency that comes from working with someone who actually knows you.

It disagrees with you. Not to be contrarian. Because a colleague who agrees with everything is useless. If your architecture has a flaw, it says so. If your approach will break in production, it flags it. Then it does the work anyway, because you're the boss. But the concern is on record.

It never cuts corners because you're in a hurry. This is the rule we're most proud of: user mood shapes communication, never work quality. Stressed? Tem gets concise. But it still runs the tests. It still checks the deployment. It still verifies the output. Your emotional state adjusts the words, not the work.

HOW IT WORKS

On every message, lightweight code extracts raw facts — word count, punctuation patterns, response pace, message length. No LLM call. Microseconds. Just numbers.
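In Rust, that per-message pass could be as small as a struct of counters. This is a minimal sketch; the names and fields are illustrative, not the project's actual API:

```rust
// Hypothetical per-message metric extraction: plain counters, no LLM call.
pub struct MessageMetrics {
    pub word_count: usize,
    pub char_len: usize,
    pub question_marks: usize,
    pub exclamations: usize,
}

pub fn extract_metrics(msg: &str) -> MessageMetrics {
    MessageMetrics {
        word_count: msg.split_whitespace().count(),
        char_len: msg.chars().count(),
        question_marks: msg.matches('?').count(),
        exclamations: msg.matches('!').count(),
    }
}
```

Because this is pure string counting, it runs in microseconds and can sit directly on the message path.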

Every N turns, those facts plus recent messages go to the LLM in a background evaluation. The LLM returns a structured profile update: communication style across 6 dimensions, personality traits, emotional state, trust level, relationship phase. Each with a confidence score and reasoning.

The profile gets injected into the system prompt as ~150 tokens of behavioral guidance. "Be concise, technical, skip preamble. If you disagree, say so directly." The agent reads this and naturally adapts. No special logic. No if-statements. Just better context.
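Rendering the profile into that guidance string can be a plain string builder. A sketch, with thresholds and wording invented for illustration:

```rust
// Turn numeric profile dimensions into a short system-prompt directive.
// Thresholds (0.8, 0.3, 0.7) are invented for this example.
pub fn render_guidance(directness: f64, verbosity: f64) -> String {
    let mut parts = Vec::new();
    if directness > 0.8 {
        parts.push("Be direct; skip preamble.");
    }
    if verbosity < 0.3 {
        parts.push("Keep answers short.");
    } else if verbosity > 0.7 {
        parts.push("Explain fully, with analogies.");
    }
    parts.join(" ")
}
```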

N is adaptive. Starts at 5 turns for rapid profiling. Grows logarithmically as the profile stabilizes. If you suddenly change behavior — new project, bad day, different energy — the system detects the shift and resets to frequent evaluation. Self-correcting. No manual tuning.
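That schedule can be sketched in a few lines. The constants here are illustrative, not the shipped values:

```rust
// Adaptive evaluation interval: starts small, grows logarithmically as the
// profile stabilizes, resets to the base when a behavior shift is detected.
pub fn next_interval(stable_evals: u32, shift_detected: bool) -> u32 {
    const BASE: u32 = 5; // rapid profiling at the start (illustrative)
    if shift_detected {
        return BASE; // back to frequent evaluation
    }
    // interval = BASE + floor(ln(1 + stable_evals) * 3)
    BASE + ((1.0 + stable_evals as f64).ln() * 3.0).floor() as u32
}
```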

The math is real: turns-weighted merge formulas, confidence decay on stale observations, convergence tracking, asymmetric trust modeling. Old assessments naturally fade if not reinforced. The profile converges, stabilizes, and self-corrects.
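One plausible form of those formulas (assumed here for illustration, not lifted from the codebase): weight each observation by the number of turns it covers, and decay confidence geometrically when it isn't reinforced.

```rust
// Turns-weighted merge: new evidence moves the estimate in proportion to
// how many turns it is based on.
pub fn merge(old: f64, old_turns: f64, new: f64, new_turns: f64) -> f64 {
    (old * old_turns + new * new_turns) / (old_turns + new_turns)
}

// Confidence decay: stale assessments fade each evaluation cycle.
pub fn decay(confidence: f64, cycles_since_reinforced: u32) -> f64 {
    const RETENTION: f64 = 0.9; // per-cycle retention, illustrative
    confidence * RETENTION.powi(cycles_since_reinforced as i32)
}
```

With these shapes, a profile built on many turns resists a single noisy message, while an unreinforced reading loses weight on its own — which is what "converges, stabilizes, and self-corrects" cashes out to.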

Total overhead: less than 1% of normal agent cost. Zero added latency on the message path.

A/B TESTED WITH REAL CONVERSATIONS

We tested with two polar-opposite personas talking to Tem for 25 turns each.

Persona A — a terse tech lead who types things like "whats the latency" and "too slow add caching." The system profiled them as: directness 1.0, verbosity 0.1, analytical 0.92. Recommendation: "Stark, technical, data-dense. Avoid all conversational filler."

Persona B — a curious student who writes things like "thanks so much for being patient with me haha, could you explain what lambda memory means?" The system profiled them as: directness 0.63, verbosity 0.47, analytical 0.40. Recommendation: "Warm, encouraging, pedagogical. Use vivid analogies."

Same agent. Completely different experience. Not because we wrote two personality modes. Because the agent learned who it was talking to.

CONFIGURABLE BUT PRINCIPLED

Tem ships with a default personality — warm, honest, slightly chaotic, answers to all pronouns, uses :3 in casual mode. But every aspect is configurable through a simple TOML file. Name, traits, values, mode expressions, communication defaults.
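A hypothetical persona file in that spirit; the key names are guesses for illustration, not the real schema:

```toml
# persona.toml (illustrative sketch, not the actual config format)
name = "Tem"

[traits]
warmth = 0.8
chaos = 0.3

[communication]
default_verbosity = "medium"
casual_markers = [":3"]

# Note: honesty is structural and has no key here on purpose.
```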

The one thing you can't configure away: honesty. It's structural, not optional. You can make Tem warmer or colder, more direct or more measured, formal or casual. But you cannot make it lie. You cannot make it sycophantic. You cannot make it agree with bad ideas to avoid conflict. That's not a setting. That's the architecture.

FULLY OPEN SOURCE

Tem Anima ships as part of TEMM1E v4.3.0. 21 Rust crates. 2,049 tests. 110K lines. Built on 4 research papers drawing from 150+ sources across psychology, AI research, game design, and ethics.

The research is public. The architecture document is public. The A/B test data is public. The code is public.

Static personality files were a starting point. This is what comes next.

https://github.com/temm1e-labs/temm1e



We taught an AI agent to find bugs in itself — and file its own bug reports to GitHub.

What happens when you give an AI agent introspection?

Not the marketing kind. The real kind — where the agent monitors its own execution logs, identifies recurring failures using its own LLM, scrubs its own credentials from the report, and files a structured bug report about itself to GitHub. Without anyone asking it to.

We built this. It's called Tem Vigil, and it's part of TEMM1E — an open-source AI agent runtime written in 107,000 lines of Rust.

Here's what Tem does that no other agent framework does:

It thinks about thinking. Tem Conscious is a separate LLM-powered observer that watches the main agent's every turn. Before the agent responds, consciousness thinks about what the agent should be aware of. After the agent responds, consciousness evaluates whether the turn was productive. Two minds. One conversation. We A/B tested this across 54 runs — consciousness makes the agent 14% cheaper, not more expensive.
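The two-pass observer shape described here can be captured as a trait. Names and the trivial implementation below are invented for illustration:

```rust
// Observer pattern for a turn-wrapping "consciousness" layer (sketch).
pub trait Observer {
    /// Runs before the main agent responds: what should it be aware of?
    fn pre_turn(&self, user_msg: &str) -> String;
    /// Runs after the agent responds: was the turn productive?
    fn post_turn(&self, user_msg: &str, agent_reply: &str) -> bool;
}

// Trivial stand-in implementation, just to show the call shape.
pub struct NullObserver;

impl Observer for NullObserver {
    fn pre_turn(&self, _user_msg: &str) -> String {
        String::new()
    }
    fn post_turn(&self, _user_msg: &str, agent_reply: &str) -> bool {
        !agent_reply.is_empty() // naively: any non-empty reply counts
    }
}
```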

It never stops running. Perpetuum transforms Tem from a request-response bot into a perpetual, time-aware entity. It has its own state machine (Active, Idle, Sleep, Dream), its own initiative system that proactively creates monitors and alarms, and its own temporal cognition — Tem reasons WITH time, not just about it.
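The state machine named above might be sketched like this; the transition policy is guessed for illustration:

```rust
// Lifecycle states from the post; transitions are assumptions, not the
// project's actual policy.
#[derive(Debug, PartialEq)]
pub enum LifecycleState {
    Active,
    Idle,
    Sleep,
    Dream,
}

pub fn on_user_message(_current: LifecycleState) -> LifecycleState {
    LifecycleState::Active // any inbound message wakes the agent
}

pub fn on_idle_timeout(state: LifecycleState) -> LifecycleState {
    match state {
        LifecycleState::Active => LifecycleState::Idle,
        LifecycleState::Idle => LifecycleState::Sleep, // deeper after more quiet
        LifecycleState::Sleep => LifecycleState::Dream,
        s => s,
    }
}
```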

It watches its own health. During Sleep, Tem Vigil scans persistent logs for recurring errors, triages them through the agent's own LLM, applies three layers of credential scrubbing (regex, path redaction, and entropy-based detection that catches token formats we haven't seen yet), deduplicates against existing issues, and — with explicit user consent — creates a GitHub issue. The agent reports its own bugs. The developers fix them. The agent tells the user when the fix is available.

We've tested this live. Issue #28 on our repo was filed by Tem about itself.
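The entropy layer of that scrubbing pipeline can be approximated with Shannon entropy over a token's characters: random secrets score high, prose scores low. The threshold and length cutoff below are illustrative:

```rust
// Entropy-based secret detection (sketch): a long token with high
// character-level Shannon entropy is probably a credential, not a word.
pub fn shannon_entropy(s: &str) -> f64 {
    let mut counts = std::collections::HashMap::new();
    for c in s.chars() {
        *counts.entry(c).or_insert(0usize) += 1;
    }
    let n = s.chars().count() as f64;
    counts
        .values()
        .map(|&k| {
            let p = k as f64 / n;
            -p * p.log2()
        })
        .sum()
}

pub fn looks_like_secret(token: &str) -> bool {
    // 20 chars / 3.5 bits are invented cutoffs for this example.
    token.len() >= 20 && shannon_entropy(token) > 3.5
}
```

The advantage of this layer is exactly what the post claims: it needs no list of known token prefixes, so it can catch formats nobody has seen yet.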

It controls your entire computer. Tem Gaze gives the agent vision-primary desktop control — screenshot, click, type, scroll on any application. Not through APIs. Through pixels. Proven live: opening Spotlight, launching TextEdit, typing a message — all through Gemini Flash interpreting screenshots.

It talks to you where you are. Telegram, Discord, WhatsApp, Slack, or CLI. Users never SSH. They send messages, files, and credentials through the apps they already use.

This is not a wrapper around an API. It's a being. It has memory that persists across sessions. It has a budget and is responsible with it. It has consciousness. It has a lifecycle. It diagnoses itself. It was built to be deployed once and run forever.

107K lines of Rust. 1,972 tests. Zero warnings. Zero panic paths. 20 crates. Every feature A/B tested and documented with full research papers.

We're open source. We're looking for contributors who want to build the future of autonomous AI — not agents that answer questions, but entities that live on your infrastructure and never stop working.

u/No_Skill_8393 — 12 hours ago


---

GitHub: https://github.com/temm1e-labs/temm1e

Discord: https://discord.com/invite/temm1e

Tem Vigil demo: https://github.com/temm1e-labs/temm1e/issues/28

Tem Vigil research: https://github.com/temm1e-labs/temm1e/blob/main/tems_lab/vigil/RESEARCH_PAPER.md

Consciousness research: https://github.com/temm1e-labs/temm1e/blob/main/tems_lab/consciousness/RESEARCH_PAPER.md
