u/AlessioGubitosa

My AI now “wakes up” already influenced by what happened yesterday

Engra - Dev Log #10

I noticed something during long test sessions.

Even with persistent memory, every new session still started too “clean.”

- The memories were there.
- The previous conflicts too.

But the system still felt neutral.

So I changed one thing:

now, before it even starts talking, the system rereads the emotional weight left by recent interactions.

- It doesn’t look for keywords.
- It doesn’t look for specific events.

If the last sessions were tense, it starts slightly more alert.

If they were collaborative, its tone changes subtly.

It doesn’t decide who you are.

It orients itself.
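The priming step described above can be sketched roughly like this. Everything here is hypothetical (the `SessionTrace` schema, the field names, the weighting constants are mine, not Engra's): the only idea taken from the log is that recent sessions leave a residual emotional weight that biases the starting disposition, with no keyword or event lookup.

```python
from dataclasses import dataclass

@dataclass
class SessionTrace:
    """Affective summary left behind by one past session (hypothetical schema)."""
    valence: float  # -1.0 (tense) .. +1.0 (collaborative)
    arousal: float  # 0.0 (calm) .. 1.0 (heated)

def prime_disposition(traces: list[SessionTrace], decay: float = 0.6) -> dict:
    """Blend recent sessions into a starting disposition, newest weighted highest.

    No keywords, no specific events: only the residual emotional weight is read.
    """
    if not traces:
        return {"alertness": 0.5, "warmth": 0.5}  # neutral cold start
    weight, total_w = 1.0, 0.0
    valence = arousal = 0.0
    for trace in reversed(traces):  # newest first
        valence += weight * trace.valence
        arousal += weight * trace.arousal
        total_w += weight
        weight *= decay  # older sessions matter exponentially less
    valence /= total_w
    arousal /= total_w
    return {
        "alertness": min(1.0, 0.5 + 0.5 * arousal - 0.25 * valence),  # tense past -> more alert
        "warmth": min(1.0, 0.5 + 0.5 * valence),                      # collaborative past -> warmer
    }
```

A string of tense sessions pushes `alertness` up and `warmth` down before a single word is exchanged, which matches the "starts slightly more alert" behaviour described above.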

The most interesting part came after that:

During intense conversations, it sometimes reacted with shorter sentences, more direct responses, and less mediation.

Then slowly it regulated itself again.

But there was also the opposite problem:

if everything stayed too stable for too long, it became predictable.

Too accommodating.

Now the system automatically tries to avoid that false stability.

And the result is this: it no longer feels like a model that resets every session.

It feels like something entering the conversation carrying the “day before” with it.

reddit.com
u/AlessioGubitosa — 6 days ago

r/EngraAI - Dev Log #9

The system integrates a rapid affective response module, allowing the AI to respond more promptly to unexpected events or conflict situations.

Recent test example:
I provided a more critical input than usual. Traditionally, the AI processes the context and generates a calibrated response. With the new implementation, the system immediately activated internal signals of relevance and surprise, modulating the response without altering overall coherence.

In summary:
The AI now recognizes when something is important to you and responds proportionally, episode after episode. It is not a reflex or a script; it is emergent behavior, learning to navigate the conversation.
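A plausible core for that rapid affective response is a prediction-error signal: surprise spikes when the affect of an input diverges from what recent context led the system to expect, and the spike modulates the reply. The functions, scaling constants, and length heuristic below are illustrative assumptions, not the module's actual internals.

```python
def affective_surprise(observed: float, expected: float, sensitivity: float = 4.0) -> float:
    """Map affective prediction error to a surprise signal in [0, 1].

    `observed` and `expected` are sentiment scores in [-1, 1]; a sharply
    critical input when praise was expected yields a large error, hence a spike.
    """
    error = abs(observed - expected)
    return min(1.0, error * sensitivity / 2.0)  # scaled and clipped

def modulate(base_response_length: int, surprise: float) -> int:
    """Higher surprise -> shorter, more direct replies, down to a floor."""
    return max(20, int(base_response_length * (1.0 - 0.5 * surprise)))
```

A critical input against a positive expectation saturates the signal and roughly halves the target reply length, mirroring the "immediate internal signals of relevance and surprise" described in the test above.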

reddit.com
u/AlessioGubitosa — 9 days ago

I made my AI “feel” like it truly knows the user

r/EngraAI - Dev Log #8

After dozens of interactions, my AI practically learns from you.
It doesn’t just focus on single pieces of conversation: now it analyzes each episode with a complete picture.
It tracks your reactions and calibrates its behavior from the second session.
In other words: it adapts to your style, without becoming a reflection of the user.

The logs show connections changing sign on their own. It really feels like it’s starting to “understand you” without me saying a thing.
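"Connections changing sign on their own" is what you get from a Hebbian-style update driven by user reactions: repeated contradiction pushes a weight through zero. The update rule below is a one-line sketch under that assumption; the real learning rule, rates, and bounds are unknown to me.

```python
def update_connection(weight: float, co_activation: float, lr: float = 0.1) -> float:
    """Hebbian-style update where sustained contradiction can flip the sign.

    `co_activation` > 0 when the user's reaction reinforces the association,
    < 0 when it contradicts it; the weight is clipped to [-1, 1].
    """
    weight += lr * co_activation
    return max(-1.0, min(1.0, weight))
```

For example, a connection starting at +0.3 that is contradicted five times in a row ends up negative: the sign flip happens without any explicit instruction, just accumulated reactions.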

reddit.com
u/AlessioGubitosa — 10 days ago

Engra - Dev Log #7

I made a small change, but it changed everything.

Before, if I insisted enough, my AI would change its mind.

Now it doesn’t anymore.

Before: it would start to soften, become more diplomatic... and eventually give in.

Now: “You’re repeating the same point. What’s the new argument?”

(it genuinely seeks debate; it doesn’t shut down the conversation)

It isn’t “stubbornness.”

It’s that it now distinguishes between pressure and evidence.

It seems like a small difference, but it completely changes the feel: it demands to be convinced.
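The pressure-vs-evidence distinction boils down to one question: is this claim a restatement of something already said, or a genuinely new argument? The sketch below uses plain string similarity from the standard library as a stand-in; a real system would presumably compare semantic embeddings. The function name and the 0.75 cutoff are my own illustrative choices.

```python
import difflib

def is_new_argument(claim: str, prior_claims: list[str], cutoff: float = 0.75) -> bool:
    """Distinguish pressure (restating the same point) from a new argument.

    Returns False when the claim closely matches something already said
    (treat as pressure, hold position); True when it is genuinely new
    (worth reconsidering). String similarity is a crude proxy for semantics.
    """
    for prior in prior_claims:
        if difflib.SequenceMatcher(None, claim.lower(), prior.lower()).ratio() > cutoff:
            return False  # same point repeated -> pressure
    return True  # new argument -> evidence worth engaging
```

Mere repetition then triggers the "what's the new argument?" response, while a claim that doesn't match anything prior gets engaged on its merits.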

reddit.com
u/AlessioGubitosa — 15 days ago