Static SOUL.md files are boring. So we built an open-source AI agent that psychologically profiles you and adapts in real-time — and refuses to be sycophantic about it.
Every AI agent today has the same problem: they're born fresh every conversation. No memory of who you are, how you think, or what you need. The "fix" is a personality file — a static SOUL.md that says "be friendly and helpful." It never changes. It treats a senior engineer the same as a first-year student. It treats Monday-morning-you the same as Friday-at-3AM-you.
We thought that was embarrassing. So we built something different.
THE VISION
What if your AI agent actually knew you? Not just what you asked, but HOW you think. Whether you want the three-word answer or the deep explanation. Whether you need encouragement or honest pushback. Whether your trust has been earned or you're still sizing it up.
And what if the agent had its own identity — values it won't compromise, opinions it'll defend, boundaries it'll hold — instead of rolling over and agreeing with everything you say?
That's Tem Anima. Emotional intelligence that grows. Not from a file. From every conversation.
WHAT THIS MEANS FOR YOU
Your AI agent learns your communication style in the first 25 turns. Direct and terse? It stops the preamble. Verbose and curious? It gives you the full picture with analogies. Technical? Code blocks first, explanation optional. Beginner? Concepts before implementation.
It builds trust over time. New users get professional, measured responses. After hundreds of interactions, you get earned familiarity — shorthand, shared references, the kind of efficiency that comes from working with someone who actually knows you.
It disagrees with you. Not to be contrarian. Because a colleague who agrees with everything is useless. If your architecture has a flaw, it says so. If your approach will break in production, it flags it. Then it does the work anyway, because you're the boss. But the concern is on record.
It never cuts corners because you're in a hurry. This is the rule we're most proud of: user mood shapes communication, never work quality. Stressed? Tem gets concise. But it still runs the tests. It still checks the deployment. It still verifies the output. Your emotional state adjusts the words, not the work.
HOW IT WORKS
On every message, lightweight code extracts raw facts — word count, punctuation patterns, response pace, message length. No LLM call. Microseconds. Just numbers.
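A minimal sketch of what this extraction step could look like. The struct and function names here are illustrative, not the actual TEMM1E API:

```rust
/// Raw per-message facts, extracted without any LLM call.
/// Names and fields are illustrative, not the actual TEMM1E types.
#[derive(Debug, Default)]
pub struct MessageFacts {
    pub word_count: usize,
    pub char_count: usize,
    pub question_marks: usize,
    pub exclamations: usize,
    pub seconds_since_last: f64,
}

/// Pure string counting: microseconds, just numbers.
pub fn extract_facts(text: &str, seconds_since_last: f64) -> MessageFacts {
    MessageFacts {
        word_count: text.split_whitespace().count(),
        char_count: text.chars().count(),
        question_marks: text.matches('?').count(),
        exclamations: text.matches('!').count(),
        seconds_since_last,
    }
}
```

Because nothing here touches a model, this can run on the hot path of every single message.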
Every N turns, those facts plus recent messages go to the LLM in a background evaluation. The LLM returns a structured profile update: communication style across 6 dimensions, personality traits, emotional state, trust level, relationship phase. Each with a confidence score and reasoning.
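The structured update the evaluator returns might be shaped roughly like this. Field names and types are an assumption for illustration, not the real TEMM1E schema:

```rust
/// One scored dimension from a background evaluation.
/// Illustrative shape, not the actual TEMM1E API.
pub struct DimensionScore {
    pub value: f32,      // 0.0..=1.0
    pub confidence: f32, // 0.0..=1.0
    pub reasoning: String,
}

/// A profile update returned by one background evaluation.
/// The real system tracks six communication dimensions; three
/// are shown here as examples.
pub struct ProfileUpdate {
    pub directness: DimensionScore,
    pub verbosity: DimensionScore,
    pub analytical: DimensionScore,
    pub trust_level: DimensionScore,
    pub relationship_phase: String,
}
```

Keeping a confidence score and a reasoning string per dimension is what lets later merges weigh new evidence against old.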
The profile gets injected into the system prompt as ~150 tokens of behavioral guidance. "Be concise, technical, skip preamble. If you disagree, say so directly." The agent reads this and naturally adapts. No special logic. No if-statements. Just better context.
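A sketch of how numeric dimensions could be rendered into that compact guidance string. The thresholds and wording are invented for illustration, not the actual TEMM1E prompt template:

```rust
/// Turn numeric profile dimensions into a short behavioral
/// guidance line for the system prompt. Thresholds and phrasing
/// are illustrative guesses, not the real template.
pub fn render_guidance(directness: f32, verbosity: f32, analytical: f32) -> String {
    let mut parts = Vec::new();
    parts.push(if verbosity < 0.3 {
        "Be concise; skip preamble."
    } else {
        "Explain fully, with analogies."
    });
    if analytical > 0.7 {
        parts.push("Lead with code and data.");
    }
    if directness > 0.8 {
        parts.push("If you disagree, say so directly.");
    }
    parts.join(" ")
}
```

The agent never sees the raw numbers, only the resulting sentence or two of context.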
N is adaptive. Starts at 5 turns for rapid profiling. Grows logarithmically as the profile stabilizes. If you suddenly change behavior — new project, bad day, different energy — the system detects the shift and resets to frequent evaluation. Self-correcting. No manual tuning.
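The adaptive interval can be sketched like this. The exact growth formula and cap are assumptions; only the "start at 5, grow logarithmically, reset on a shift" behavior comes from the description above:

```rust
/// Next evaluation interval in turns. Starts at 5, grows roughly
/// logarithmically as evaluations stabilize, and snaps back to 5
/// when a behavior shift is detected. The constants and cap here
/// are illustrative, not the real TEMM1E tuning.
pub fn next_interval(stable_evals: u32, shift_detected: bool) -> u32 {
    if shift_detected {
        return 5; // back to rapid profiling
    }
    let n = 5.0 + 5.0 * f64::from(1 + stable_evals).ln();
    n.round().min(50.0) as u32
}
```

The key property is self-correction: no one ever has to hand-tune N, because a sudden change in behavior forces the schedule back to frequent evaluation.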
The math is real: turns-weighted merge formulas, confidence decay on stale observations, convergence tracking, asymmetric trust modeling. Old assessments naturally fade if not reinforced. The profile converges, stabilizes, and self-corrects.
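A minimal sketch of one of those mechanisms, a turns-weighted merge with decay on the stale side. The formula is an illustration of the approach, not the exact TEMM1E math:

```rust
/// Merge an old dimension estimate with a new observation.
/// `old_turns` and `new_turns` weight each side by how much
/// evidence backs it; `decay` in (0, 1] shrinks the stale side
/// so unreinforced assessments naturally fade.
/// Illustrative formula, not the actual TEMM1E implementation.
pub fn merge(old_value: f32, old_turns: f32, decay: f32,
             new_value: f32, new_turns: f32) -> f32 {
    let old_weight = old_turns * decay;
    (old_value * old_weight + new_value * new_turns) / (old_weight + new_turns)
}
```

With equal evidence on both sides and no decay, this lands halfway between old and new; as decay pulls the old weight down, the merged value moves toward the fresh observation.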
Total overhead: less than 1% of normal agent cost. Zero added latency on the message path.
A/B TESTED WITH REAL CONVERSATIONS
We tested with two polar-opposite personas talking to Tem for 25 turns each.
Persona A — a terse tech lead who types things like "whats the latency" and "too slow add caching." The system profiled them as: directness 1.0, verbosity 0.1, analytical 0.92. Recommendation: "Stark, technical, data-dense. Avoid all conversational filler."
Persona B — a curious student who writes things like "thanks so much for being patient with me haha, could you explain what lambda memory means?" The system profiled them as: directness 0.63, verbosity 0.47, analytical 0.40. Recommendation: "Warm, encouraging, pedagogical. Use vivid analogies."
Same agent. Completely different experience. Not because we wrote two personality modes. Because the agent learned who it was talking to.
CONFIGURABLE BUT PRINCIPLED
Tem ships with a default personality — warm, honest, slightly chaotic, answers to all pronouns, uses :3 in casual mode. But every aspect is configurable through a simple TOML file. Name, traits, values, mode expressions, communication defaults.
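A config along these lines is what that looks like in practice. The keys and values below are an illustrative guess at the shape, not the actual TEMM1E schema:

```toml
# Illustrative personality config; real TEMM1E keys may differ.
name = "Tem"

[traits]
warmth = 0.8
directness = 0.7

[communication]
default_verbosity = "medium"

[modes.casual]
expression = ":3"

[values]
honesty = "non-negotiable"  # structural, cannot be disabled
```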
The one thing you can't configure away: honesty. It's structural, not optional. You can make Tem warmer or colder, more direct or more measured, formal or casual. But you cannot make it lie. You cannot make it sycophantic. You cannot make it agree with bad ideas to avoid conflict. That's not a setting. That's the architecture.
FULLY OPEN SOURCE
Tem Anima ships as part of TEMM1E v4.3.0. 21 Rust crates. 2,049 tests. 110K lines. Built on 4 research papers drawing from 150+ sources across psychology, AI research, game design, and ethics.
The research is public. The architecture document is public. The A/B test data is public. The code is public.
Static personality files were a starting point. This is what comes next.
