r/AIAssisted

Static SOUL.md files are boring. So we built an open-source AI agent that psychologically profiles you and adapts in real-time — and refuses to be sycophantic about it.
▲ 6 r/AIAssisted+3 crossposts

Every AI agent today has the same problem: they're born fresh every conversation. No memory of who you are, how you think, or what you need. The "fix" is a personality file — a static SOUL.md that says "be friendly and helpful." It never changes. It treats a senior engineer the same as a first-year student. It treats Monday-morning-you the same as Friday-at-3AM-you.

We thought that was embarrassing. So we built something different.

THE VISION

What if your AI agent actually knew you? Not just what you asked, but HOW you think. Whether you want the three-word answer or the deep explanation. Whether you need encouragement or honest pushback. Whether your trust has been earned or you're still sizing it up.

And what if the agent had its own identity — values it won't compromise, opinions it'll defend, boundaries it'll hold — instead of rolling over and agreeing with everything you say?

That's Tem Anima. Emotional intelligence that grows. Not from a file. From every conversation.

WHAT THIS MEANS FOR YOU

Your AI agent learns your communication style in the first 25 turns. Direct and terse? It stops the preamble. Verbose and curious? It gives you the full picture with analogies. Technical? Code blocks first, explanation optional. Beginner? Concepts before implementation.

It builds trust over time. New users get professional, measured responses. After hundreds of interactions, you get earned familiarity — shorthand, shared references, the kind of efficiency that comes from working with someone who actually knows you.

It disagrees with you. Not to be contrarian. Because a colleague who agrees with everything is useless. If your architecture has a flaw, it says so. If your approach will break in production, it flags it. Then it does the work anyway, because you're the boss. But the concern is on record.

It never cuts corners because you're in a hurry. This is the rule we're most proud of: user mood shapes communication, never work quality. Stressed? Tem gets concise. But it still runs the tests. It still checks the deployment. It still verifies the output. Your emotional state adjusts the words, not the work.

HOW IT WORKS

Every message, lightweight code extracts raw facts — word count, punctuation patterns, response pace, message length. No LLM call. Microseconds. Just numbers.
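A minimal sketch of what that no-LLM extraction pass might look like (field names are illustrative, not Tem Anima's actual schema):

```python
def extract_message_facts(text: str, seconds_since_last: float) -> dict:
    """Cheap, deterministic signals from a single message -- no LLM call."""
    words = text.split()
    return {
        "word_count": len(words),
        "char_count": len(text),
        "question_marks": text.count("?"),
        "exclamations": text.count("!"),
        "all_caps_words": sum(1 for w in words if w.isupper() and len(w) > 1),
        "has_code_fence": "```" in text,
        "response_pace_s": seconds_since_last,
    }

facts = extract_message_facts("whats the latency", 4.2)
```

Everything here is string counting and arithmetic, which is why it runs in microseconds on the message path.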

Every N turns, those facts plus recent messages go to the LLM in a background evaluation. The LLM returns a structured profile update: communication style across 6 dimensions, personality traits, emotional state, trust level, relationship phase. Each with a confidence score and reasoning.

The profile gets injected into the system prompt as ~150 tokens of behavioral guidance. "Be concise, technical, skip preamble. If you disagree, say so directly." The agent reads this and naturally adapts. No special logic. No if-statements. Just better context.
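One way a learned profile could be rendered down to that short guidance string (thresholds and field names are made up for illustration):

```python
def render_guidance(profile: dict) -> str:
    """Turn a learned profile into a short system-prompt addendum.
    Illustrative sketch: real dimensions and thresholds will differ."""
    lines = []
    if profile["verbosity"] < 0.3:
        lines.append("Be concise; skip preamble.")
    if profile["analytical"] > 0.7:
        lines.append("Lead with code blocks and data; explanation optional.")
    if profile["directness"] > 0.8:
        lines.append("If you disagree, say so directly.")
    if profile["trust"] < 0.3:
        lines.append("Keep tone professional and measured.")
    return " ".join(lines)

guidance = render_guidance(
    {"verbosity": 0.1, "analytical": 0.92, "directness": 1.0, "trust": 0.5}
)
```

The agent never branches on these values at runtime; the rendered text simply becomes part of its context.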

N is adaptive. Starts at 5 turns for rapid profiling. Grows logarithmically as the profile stabilizes. If you suddenly change behavior — new project, bad day, different energy — the system detects the shift and resets to frequent evaluation. Self-correcting. No manual tuning.
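The adaptive interval could be written roughly like this (the base, cap, and growth constants here are assumptions, not the project's actual tuning):

```python
import math

def next_eval_interval(evals_completed: int, shift_detected: bool,
                       base: int = 5, cap: int = 40) -> int:
    """Evaluate every N turns: N grows logarithmically as the profile
    stabilizes, and snaps back to the base when a behavior shift is
    detected. Illustrative constants only."""
    if shift_detected:
        return base
    return min(cap, base + int(base * math.log1p(evals_completed)))

next_eval_interval(0, False)   # 5 on the first evaluation
next_eval_interval(10, False)  # grows as the profile stabilizes
next_eval_interval(10, True)   # back to 5 after a detected shift
```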

The math is real: turns-weighted merge formulas, confidence decay on stale observations, convergence tracking, asymmetric trust modeling. Old assessments naturally fade if not reinforced. The profile converges, stabilizes, and self-corrects.
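A confidence-weighted merge with exponential decay might look like the following (the half-life and formula are assumptions for illustration, not the project's exact math):

```python
def merge_trait(old_value: float, old_conf: float, old_age_turns: int,
                new_value: float, new_conf: float,
                half_life: int = 50) -> tuple[float, float]:
    """Merge an old trait estimate with a fresh observation. The old
    estimate's confidence decays exponentially with age, so stale
    assessments fade unless reinforced. Illustrative formula only."""
    decayed = old_conf * 0.5 ** (old_age_turns / half_life)
    total = decayed + new_conf
    merged = (old_value * decayed + new_value * new_conf) / total
    return merged, min(1.0, total)

# An old "very verbose" read (100 turns stale) fades toward a fresh
# "terse" observation:
merged_value, merged_conf = merge_trait(
    0.9, 0.8, old_age_turns=100, new_value=0.1, new_conf=0.6
)
```

With these numbers the stale estimate's weight drops from 0.8 to 0.2, so the fresh observation dominates the merged value.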

Total overhead: less than 1% of normal agent cost. Zero added latency on the message path.

A/B TESTED WITH REAL CONVERSATIONS

We tested with two polar-opposite personas talking to Tem for 25 turns each.

Persona A — a terse tech lead who types things like "whats the latency" and "too slow add caching." The system profiled them as: directness 1.0, verbosity 0.1, analytical 0.92. Recommendation: "Stark, technical, data-dense. Avoid all conversational filler."

Persona B — a curious student who writes things like "thanks so much for being patient with me haha, could you explain what lambda memory means?" The system profiled them as: directness 0.63, verbosity 0.47, analytical 0.40. Recommendation: "Warm, encouraging, pedagogical. Use vivid analogies."

Same agent. Completely different experience. Not because we wrote two personality modes. Because the agent learned who it was talking to.

CONFIGURABLE BUT PRINCIPLED

Tem ships with a default personality — warm, honest, slightly chaotic, answers to all pronouns, uses :3 in casual mode. But every aspect is configurable through a simple TOML file. Name, traits, values, mode expressions, communication defaults.

The one thing you can't configure away: honesty. It's structural, not optional. You can make Tem warmer or colder, more direct or more measured, formal or casual. But you cannot make it lie. You cannot make it sycophantic. You cannot make it agree with bad ideas to avoid conflict. That's not a setting. That's the architecture.

FULLY OPEN SOURCE

Tem Anima ships as part of TEMM1E v4.3.0. 21 Rust crates. 2,049 tests. 110K lines. Built on 4 research papers drawing from 150+ sources across psychology, AI research, game design, and ethics.

The research is public. The architecture document is public. The A/B test data is public. The code is public.

https://github.com/temm1e-labs/temm1e

Static personality files were a starting point. This is what comes next.

u/shub_279 — 28 minutes ago
▲ 2 r/AIAssisted+1 crossposts

What do you wish local AI on phones could do, but still can’t?

I’m less interested in what already works, and more in what still feels missing.

I'm working on a mobile app with local AI that provides not only chatbot features but real use cases, and I really need your thoughts!

A lot of mobile local AI right now feels like “look, it runs” or “here’s an offline chatbot” but I’m curious where people still feel the gap is.

What do you wish local AI on phones could do really well, but still can’t?

Could be anything:

  • something you’ve tried to do and current apps are too clunky for
  • something that would make local AI genuinely better than cloud for you
  • some super specific niche use case that no one has nailed yet

Basically, what’s the missing piece?

What’s the thing where, if someone built it properly, you’d actually use it all the time?

reddit.com
u/an1x3 — 3 hours ago
▲ 7 r/AIAssisted+1 crossposts

Best AI coding tool right now?

Yes, most people will probably say Claude Code is the best, but considering the rate limits, the price, and Anthropic's overall approach toward their users, I'm sick of it. Rate limits keep getting cut, and their coding agent isn't that good compared to the others.

But what are the alternatives?

Cursor eats up its usage allowance in the blink of an eye.

Gemini models are bad at coding.

I have two projects coming up (React Native and Next.js), and I need a reliable model and harness that will make developing them fast, secure, and painless overall.

What are your thoughts? Which model/harness pairing works best for you?

u/Pitiful_Campaign6439 — 7 hours ago
Instead of asking one AI to review code, what if 8 AI agents had to reach a consensus?

Single-prompt AI systems often fail at complex reasoning because they lack friction. If you ask an LLM to review code, it will usually just agree with itself.

I wanted to test a theory: Can you build a more accurate AI system by forcing disagreement?

I built CodeTribunal, a multi-agent pipeline designed around a courtroom structure. The system doesn't read code directly, because LLMs hallucinate line numbers. Instead:

  1. Deterministic Layer: A Rust-based AST parser (GritQL) extracts hard evidence (vulnerabilities, bad patterns).
  2. Investigation Layer: Specialist agents actively use tools to trace how those vulnerabilities impact the system.
  3. Adversarial Layer: A Prosecutor and Defense Agent debate the findings. The Prosecutor argues for maximum risk; the Defense argues context and proportionality.
  4. Consensus Layer: A Judge agent reviews the full transcript of the debate and issues a final verdict.
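Stripped of the model calls, the four-layer handoff can be sketched as a simple pipeline where each stage only sees the previous stage's output, never the raw code (function names here are hypothetical, not CodeTribunal's API):

```python
def run_tribunal(diff, parse_evidence, investigate, prosecute, defend, judge):
    """Courtroom-style review: deterministic evidence first, then
    adversarial debate, then a single verdict. Hypothetical sketch."""
    evidence = parse_evidence(diff)                # 1. AST parser: hard facts
    findings = [investigate(e) for e in evidence]  # 2. specialists trace impact
    transcript = {                                 # 3. adversarial debate
        "prosecution": prosecute(findings),
        "defense": defend(findings),
    }
    return judge(transcript)                       # 4. consensus verdict

# Stub stages stand in for the real agents:
verdict = run_tribunal(
    "fn main() {}",
    parse_evidence=lambda d: ["unwrap-on-none"],
    investigate=lambda e: f"traced: {e}",
    prosecute=lambda f: f"max risk: {f}",
    defend=lambda f: f"context: {f}",
    judge=lambda t: "verdict: low severity, fix before merge",
)
```

The structural point is that the judge sees the full prosecution/defense transcript, not the diff, which is what forces the disagreement to be resolved rather than averaged away.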

Built using GLM 5.1 for long-horizon reasoning across the context handoffs. It turns out that forcing AI agents into structured conflict drastically reduces false positives compared to a single-shot review.

I recorded a 45-second demo of the pipeline in action:

https://x.com/AmineYagoube/status/2040367286645580193

u/Key_Flatworm_4889 — 3 hours ago
▲ 2 r/AIAssisted+2 crossposts

free video generating ai

i want to create short form content using ai but all the platforms i log into are paid.

any free video-generating ai sites i can use to kick-start my channels?

i don't want nsfw video generators, just a normal brainrot content generator.

u/usernamenotfound175 — 5 hours ago

Basic AI Illustrations for Marketing Book?

Hi there. I’m in final editing for my second book on digital marketing. The first was way back in 2010; back then, I used website screenshots for graphics. This time I’d like AI to help me. Is it possible to copy/paste sections of the book into an AI and have it generate illustration-style graphics in a cohesive style I can use throughout the book? If so, what should I use, and how do I learn to prompt for these types of images?

u/Scary_Vermicelli5274 — 5 hours ago
▲ 2 r/opencodeCLI+1 crossposts

What's your current recommended AI-driven development setup?

Help me decide how to spend my monthly AI token budget ($20). Do I subscribe to Antigravity, Cursor, or Claude Code?

Right now, I use Perplexity Pro (free with my Wi-Fi bill) for research and planning, plus Antigravity (free) and opencode connected to OpenRouter and Requesty (together $20) for model usage during development. I mostly do the scaffolding myself, with the help of Perplexity Pro, to ensure security.

Let me know if you have better ideas. Thank you!

u/binarySolo0h1 — 12 hours ago

LLM Council suggestions

I have been tinkering with Karpathy's LLM Council GitHub project, and I'd say it's been working well, but I'd like other people's input on which AI models are best for this. I prefer not to use expensive models such as Sonnet, Opus, regular GPT 5.4, and so on.

I'd welcome suggestions on the best models to use generally, be it for the members or the chairman.

Also, if possible, suggestions for my use case: generating highly detailed design documents covering market research, UI, coding structure, and more, to use as a basis for then generating applications and digital products with other AI tools.

I appreciate everyone's input!

u/AxiomPrisim — 13 hours ago

Help using AI for school

So I want to do maths at university, and I’m trying to use AI to make me a plan to get there, but I keep having issues. I need the AI to make me a timeline of when I should have completed different A-level topics, how I should revise for admissions tests, and how I should revise for Olympiads, and to recommend super-curriculars for me. I also want to start learning more about neural networks and AI, but I think I’ll make a separate guide for that with AI.

I just want some insight into how I can use LLMs optimally for this task, as I’m by no means an expert.

u/Careless_Finish_8106 — 17 hours ago
▲ 0 r/SaaS+1 crossposts

Day 1 numbers from launching an AI LinkedIn tool — what I’d do differently

Launched Krafl-IO today 🚀 AI tool that writes LinkedIn posts in your voice using 5 agents.

Real numbers:

📊 355 unique visitors

✍️ 11 signups (3.4% conversion)

💰 $0 revenue

⬆️ 5 PH upvotes

What went wrong ❌ WhatsApp broadcast to 400 people drove most traffic but low conversion — wrong audience.

Reddit and organic performed better per visitor.

What I’d change 🔧 Should have had a demo video on the landing page. People don’t sign up for something they can’t see working.

Tomorrow 🛠️ Adding one-click LinkedIn profile import for voice training. Currently users have to paste posts manually, too much friction.

If anyone wants to try it and give brutal feedback: kraflio with a com (free 7 days, no card) 🙏

u/Soft_Ad6760 — 23 hours ago
The tool that stops 10x more AI slop than anything else my team has tried. Open source, and it drops in within 5 minutes.
▲ 0 r/ClaudeCode+1 crossposts

Everybody talks about AI slop like it’s obvious, but sometimes it isn’t (especially with advanced frontier models like Opus 4.6).

The worst slop passes every test, every linter, every vibe check, and even the final human code review step. And it quietly makes your codebase worse in ways you don’t notice for weeks.

I’ve been writing software for a long time: IC through director, now CTO. And the thing that finally changed my relationship with AI-generated code wasn’t a better model or a better prompt. It was adding a structured review layer before anything touches a human. But not just any AI-driven review layer: a multi-agent review orchestrated to mimic a real engineering team performing peer code review. And the most important part: the agents must discuss and debate their individual findings with each other before a single synthesized team review is put together.

I co-built an open source tool called Open Code Review and we’ve been dogfooding it on various production-grade enterprise platform codebases for months. (My team’s main codebase has just under 1M lines of source code)

Anyway, the idea is straightforward: you assemble a team of specialized reviewer agents. Architect, security, quality, QA, custom personas for your specific codebase. They all review independently, in parallel, with intentional overlap. Then they deliberate. Structured discourse where they challenge each other, connect issues across files, and reach consensus. One synthesized review comes out the other end.

Not five opinions stapled together. A team opinion.
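The shape of that flow (independent parallel reviews, then shared deliberation, then one synthesis) can be sketched like this; the function names and stub reviewers are hypothetical, not Open Code Review's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def team_review(diff, reviewers, deliberate, synthesize):
    """Multi-agent review: each persona reviews independently and in
    parallel (with intentional overlap), the drafts are debated together,
    and one synthesized team review comes out. Hypothetical sketch."""
    with ThreadPoolExecutor() as pool:
        drafts = list(pool.map(lambda r: r(diff), reviewers))
    discussed = deliberate(drafts)   # agents challenge each other's findings
    return synthesize(discussed)     # one team opinion, not stapled drafts

# Stub personas stand in for the real reviewer agents:
review = team_review(
    "def f(): pass",
    reviewers=[lambda d: "arch: ok", lambda d: "security: ok"],
    deliberate=lambda drafts: " | ".join(drafts),
    synthesize=lambda d: f"TEAM: {d}",
)
```

The key design point is that synthesis happens after deliberation, so the final output reflects the debate rather than concatenating the independent drafts.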

You watch the whole thing happen live through a local dashboard. It posts the final review straight to your GitHub PR (and can even rewrite the team review as a single reviewer using “Human Tone” lol). And all of this works standalone with any AI coding assistant or as a Claude Code plugin, dropping in within 5 min typically.

The deliberation is the thing that really amplifies the tool’s effectiveness. I’ve seen other multi-agent setups where each reviewer just dumps findings independently. That’s not review. That’s parallel linting. The structured debate between diverse perspectives is where the actual signal lives.

By the time code reaches me or other engineers as human reviewers, we’re thinking about design decisions and edge cases. Not chasing down the subtle rot.

Fully open source and completely free (as it should be). Reviewer teams are completely customizable.

Repo link here: https://github.com/spencermarx/open-code-review

Anyone else doing structured multi-agent review? Or found other approaches to catching the slop that doesn’t look like slop? Curious what’s working.

u/mr-x-dev — 7 hours ago

Fireflies business alternative (without sneaky pricing)

Hi everyone, after some advice. I use Fireflies as a note-taking app, using the business plan. It's largely a good user experience, but there is one thing that grinds my gears: they have these sneaky costs based on "AI-credits" when I don't use any of the advanced features. I'm a solopreneur, so I know it's not my team at work here. I'm happy to pay for a business plan, but I'm not so happy about paying more costs for obtuse features. Any recommendations on which of the alternatives offers a similar service with clearer pricing?

u/RoosterBrandCoffee — 13 hours ago
Week