u/Substantial-Cost-429

We built an AI that acts as a digital twin of each employee, plugged into all their tools and answering on their behalf

Something we have been thinking about a lot: the average employee burns roughly 3 hours every single day just reading and responding to messages. Most of it is stuff that a well-trained AI, with the right context, could handle just as well.

So we built Dolly (getdolly.ai).

Dolly is not a general purpose assistant. It creates a personalized AI clone of each individual employee. It connects to all their tools, learns their communication style and domain knowledge, and responds to incoming messages on their behalf, in their voice.

Think of it as giving every person on your team an AI version of themselves that never sleeps and never falls behind on their inbox.

We are opening access to the first 20 organizations. 17 spots remaining.

Curious what this community thinks about the concept. Is per-employee AI cloning the right framing for workplace AI, or is there a better mental model?

reddit.com
u/Substantial-Cost-429 — 4 days ago

I built Dolly — an AI clone of each employee that plugs into their tools and handles messages on their behalf (open to first 20 orgs)

Hey r/SideProject — sharing something I've been working on.

Research shows the average employee spends ~3 hours a day just reading and responding to messages. A huge chunk of that is repetitive, context-heavy stuff that a well-trained AI could handle.

So I built Dolly (getdolly.ai).

The idea: Dolly creates a digital clone of each person on your team. It connects to all their tools (Slack, email, docs, etc.), learns how they think and communicate, and can respond to incoming messages on their behalf — in their voice, with their context.

It's not a generic chatbot. It's trained per employee, so it actually sounds like them and knows what they know.

We're onboarding the first 20 organizations. 17 spots left.

Happy to answer any questions — still early, would love raw feedback.

u/Substantial-Cost-429 — 4 days ago
▲ 1 r/OpenAI

Been thinking through this while building a product where an AI handles internal workplace communication for each employee.

The phrase "act on your behalf" gets used a lot in the agentic AI space, but the design decisions underneath it vary enormously. A few that feel important:

Who decides what qualifies as acting on your behalf? If the AI sends a message in your name without you seeing it, that is a very different thing from drafting and letting you approve it first. Both are "acting on your behalf" but they have totally different trust profiles.

What does the recipient know? If someone receives a message and does not know an AI wrote it, they are being deceived. Even if the content is accurate, the relationship context is not. We think the recipient needs to see that the message came from someone's AI. That changes the social contract but makes it honest.

What happens when the AI is wrong? In a traditional workflow you can undo. In communication, you often cannot. A badly timed message or wrong commitment lives on. The system needs to be designed for this failure mode from the start, not bolted on later.

How does the AI know when it is at the edge of its competence? This is probably the hardest design problem. You can define categories, but the model needs to know when a message looks like one category but is actually another.

Building through these questions at getdolly.ai. Curious how others in the space are thinking about the agentic communication problem.

u/Substantial-Cost-429 — 6 days ago

Your AI assistant probably should not respond to every message. Here is how we are thinking about which ones to automate.

We are building Dolly, a personal AI that handles internal workplace communication on behalf of each employee. Not a chatbot. Not a shared team tool. Each person gets their own instance that learns how they think and communicate.

One of the questions we hear a lot: is every message really worth automating?

No. And we think that framing is actually important to get right.

Here is how we categorize the message landscape:

High automation value. Routine status updates. Requests for information that already exists in something the employee has written or documented. Meeting confirmations. Acknowledgment replies. Standard coordination within a project the employee is actively running. These can often be handled without your active involvement.

Low automation value. Anything that involves making a new commitment. Anything where context is ambiguous. Relationship-sensitive conversations where tone really matters. Feedback that someone will act on. These should stay with you.

The goal is not to replace you in communication. It is to get the low-stakes, high-volume layer off your plate so you can actually focus on the parts that matter.

We let each employee define their own categories and decide which ones Dolly can handle autonomously. Some people unlock almost nothing. Others unlock quite a bit. It is personal.
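
The per-employee unlock model described above can be pictured as a simple policy map. A minimal sketch, assuming hypothetical category names and a made-up `EmployeePolicy` class — this is an illustration of the idea, not Dolly's actual data model:

```python
# Illustrative per-employee policy: each message category maps to a
# handling mode. Categories and class names are hypothetical.
from dataclasses import dataclass, field

AUTO = "auto_send"       # Dolly replies without review
DRAFT = "draft_only"     # Dolly drafts, the employee approves
HANDS_OFF = "hands_off"  # Dolly never touches it

@dataclass
class EmployeePolicy:
    modes: dict = field(default_factory=lambda: {
        "status_update": DRAFT,
        "info_request": DRAFT,
        "meeting_confirmation": DRAFT,
        "acknowledgment": DRAFT,
        "new_commitment": HANDS_OFF,   # stays with the human
        "feedback": HANDS_OFF,
    })

    def unlock(self, category: str) -> None:
        """Promote a category from draft-only to autonomous handling.
        Hands-off categories cannot be unlocked this way."""
        if self.modes.get(category) == DRAFT:
            self.modes[category] = AUTO

    def mode_for(self, category: str) -> str:
        # Unknown categories default to hands-off: the safe failure mode.
        return self.modes.get(category, HANDS_OFF)

policy = EmployeePolicy()
policy.unlock("status_update")
print(policy.mode_for("status_update"))   # auto_send
print(policy.mode_for("new_commitment"))  # hands_off
print(policy.mode_for("mystery"))         # hands_off
```

The useful property of this shape is the default: anything the classifier has never seen falls back to hands-off, so "some people unlock almost nothing" is the starting state, not an option they have to configure.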

Building at getdolly.ai. Limited rollout to the first 20 organizations, 17 spots remaining.

u/Substantial-Cost-429 — 6 days ago

The hardest part of building an AI that responds to messages on your behalf is not the model. It is the tone.

We are building Dolly, a personal AI agent that handles internal workplace communication for each employee. One thing that has come up constantly in user feedback:

People care more about their voice than their time.

You can show someone that Dolly saves them 2 hours a day. But if the first reply it drafts sounds slightly off, they disengage immediately. The productivity argument does not matter anymore. What matters is: does this sound like me?

So a lot of our model work has gone into voice fidelity, not just response accuracy.

A few things we learned:

You need a lot of signal. A handful of emails is not enough. Dolly needs to see how someone writes across different contexts: to their manager, to a direct report, to a peer they are close to, to someone they barely know. Tone shifts substantially across these.

Punctuation and structure matter as much as word choice. Some people use short punchy sentences. Others write in paragraphs. Some never use exclamation points. Others always do. Getting these wrong breaks trust faster than getting content wrong.

Review mode is actually helpful for training, not just safety. When users see drafts and correct them, those corrections are the highest-value training signal we get. The edit tells you more than the original ever could.
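
One way to harvest that signal is to keep only the drafts a user actually corrected, since an unchanged draft says little new about voice. A sketch under assumptions — the edit-distance heuristic, the threshold, and the chosen/rejected field names are illustrative, not Dolly's actual pipeline:

```python
# Sketch: turn review-mode corrections into training pairs.
# The similarity heuristic and field names are assumptions.
import difflib

def edit_signal(draft: str, final: str) -> float:
    """How much the user changed the draft: 0.0 (untouched) to ~1.0 (rewritten)."""
    return 1.0 - difflib.SequenceMatcher(None, draft, final).ratio()

def collect_training_pairs(reviews, min_signal=0.05):
    """Keep only drafts the user actually corrected."""
    pairs = []
    for r in reviews:
        signal = edit_signal(r["draft"], r["final"])
        if signal >= min_signal:
            pairs.append({"prompt": r["incoming"],
                          "rejected": r["draft"],   # what the model wrote
                          "chosen": r["final"],     # what the human sent
                          "signal": round(signal, 3)})
    return pairs

reviews = [
    {"incoming": "Can you confirm the Friday sync?",
     "draft": "Yes, confirmed for Friday!",
     "final": "Yes, confirmed for Friday!"},  # sent as-is: low signal, dropped
    {"incoming": "Any update on the migration?",
     "draft": "It is progressing well, thank you for asking!",
     "final": "going fine, will ping you when staging is done"},  # rewritten
]
pairs = collect_training_pairs(reviews)
print(len(pairs))  # 1 — only the corrected draft becomes a training pair
```

The chosen/rejected framing is deliberate: corrected pairs are exactly the shape preference-tuning methods expect, which is one reason edits are richer signal than raw sent mail.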

Still a hard problem. Getting this right is the core technical challenge of the product.

If you are building in this space or have tackled voice modeling, would love to compare notes. Building at getdolly.ai.

u/Substantial-Cost-429 — 6 days ago
▲ 1 r/digitalnomad + 2 crossposts

Sharing a side project I have been building: Dolly, a personal AI agent for each employee that handles internal workplace communication on their behalf.

The most common pushback I get from technical people is about IT security. Specifically: how do you expect a company to let this touch their internal message systems?

Fair concern. Here is how we are thinking about it:

OAuth only. Dolly connects to tools like Slack and email via standard OAuth flows. The same mechanism your Google Calendar integration or your CRM plugin uses. No credential storage. No custom auth.

Scoped permissions. Dolly requests only the permissions it needs. Read access to relevant message threads. Send access only in channels or threads you have unlocked. It does not request admin-level access.

Audit logs. Every action Dolly takes is logged. You can see exactly what it read and what it sent. Your IT team can too, if you grant them access to the org-level dashboard.

Revocation. The company can revoke Dolly access centrally. Individual employees can revoke it themselves. No persistent access after either revocation.
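
To make the scoped-permissions point concrete: the scope names below are real Slack OAuth scopes, but the requested/forbidden split is my assumption about the kind of least-privilege manifest such an integration might declare, not Dolly's published configuration.

```python
# Illustrative least-privilege scope manifest for a Slack integration.
# Scope names are real Slack OAuth scopes; the policy split is assumed.
REQUESTED = {
    "channels:history",  # read messages in channels the user unlocked
    "im:history",        # read the user's DMs
    "chat:write",        # send as the user, only where unlocked
}

# Admin-level scopes the integration should never ask for.
FORBIDDEN = {"admin", "admin.users:write", "usergroups:write"}

def validate_scopes(requested: set) -> set:
    """Return any requested scopes that violate the least-privilege policy."""
    return requested & FORBIDDEN

print(validate_scopes(REQUESTED))              # set() — clean manifest
print(validate_scopes(REQUESTED | {"admin"}))  # {'admin'} — rejected
```

A check like this belongs in CI as much as in the app: the cheapest way to keep "does not request admin-level access" true is to make any pull request that widens the manifest fail loudly.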

We are not trying to sneak this past IT. We are trying to build something IT can actually approve.

Still in limited rollout. getdolly.ai if you want to follow the build or talk through the architecture.

u/Substantial-Cost-429 — 6 days ago
▲ 2 r/nocode

Should your employer be able to see what your AI assistant is doing on your behalf?

This question came up a lot when we started showing people what we are building with Dolly, a personal AI that handles internal workplace communication on behalf of each employee.

The assumption most people had: if the company is paying for it, the company can see it.

We pushed back on that. Here is our reasoning:

Dolly learns your communication style, your working context, your preferences, your relationships with colleagues. That is deeply personal. If your employer has full visibility into that layer, it is not really your assistant. It is a monitoring tool with a productivity wrapper.

So our model is:

The employee owns their Dolly instance. The company does not have access to individual instances or the training data behind them. When you leave, your Dolly leaves with you.

For compliance, employers can see interaction logs at the category level, not the content level. They can know that Dolly handled 12 status update requests this week. They cannot read what was said.

We recognize this is a line some companies will push against. But we think building employee trust into the product from day one is more important than making the product easier to sell to HR.

Building this now with limited org spots at getdolly.ai. Curious how others think about the employer vs employee tension in AI tools.

u/Substantial-Cost-429 — 6 days ago

We automated everything at work except the one thing that takes up the most time: actually reading and responding to messages

Deployments are automated. CI/CD pipelines run without anyone touching them. Data flows through systems with zero human involvement. But somehow, the average knowledge worker still spends close to three hours a day manually reading and responding to messages.

Not because no one thought to automate it. Because the trust bar is different.

When a deployment fails, you roll it back. When an AI responds to your colleague with something wrong or off-tone, the damage is immediate and relational. So people are right to be cautious.

We built Dolly around this specific tension. The answer we landed on:

You do not have to trust it fully upfront. You start in review mode. Dolly drafts, you decide. Over time, as you see how it handles things, you unlock specific categories for auto-send. Routine internal updates. Status pings. Standard acknowledgments. The stuff that does not need your full attention.

The heavier things stay in review. Commitments. Anything emotionally charged. Anything that needs actual judgment.

The confidence threshold is not a product feature. It is a trust calibration mechanism. And it should be in every agentic communication tool.

Building in this space at getdolly.ai. Genuinely curious how others in the automation community think about this problem.

u/Substantial-Cost-429 — 6 days ago

Disclosure: I'm a founder. This is a real project, not a concept.

**What it is:** Dolly (https://getdolly.ai) is a personal AI agent that lives inside each employee's email and Slack. It learns their communication style and context, then responds on their behalf — either auto-sending or surfacing a draft for review, depending on confidence level.

Key thing: it's not a shared team bot. Each employee gets their own individual clone.

**The problem it solves:** ~3 hours/day of async messaging that mostly follows predictable patterns. We think this is automatable in a way that's transparent and actually good for the person, not just the company.

**Tech:** Fine-tuning per user on communication history + RAG over their personal knowledge base + confidence scoring to decide auto-send vs. draft. LangChain for orchestration.
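
A rough sketch of that pipeline shape (retrieve → generate → score → route). Every function body here is a stand-in: keyword overlap instead of embeddings, a template instead of the per-user fine-tuned model, a retrieval-quality heuristic instead of a trained confidence model.

```python
# Stand-in pipeline: RAG + generation + confidence gating.
def retrieve_context(message, knowledge_base, k=3):
    """RAG stand-in: rank docs by word overlap with the message."""
    words = set(message.lower().split())
    scored = [(len(words & set(d.lower().split())), d) for d in knowledge_base]
    scored.sort(reverse=True)
    return [(score, doc) for score, doc in scored[:k] if score > 0]

def generate_reply(message, context):
    """Stand-in for the per-user fine-tuned model."""
    return f"[draft reply to {message!r}, grounded in {len(context)} docs]"

def confidence(context):
    """Stand-in score: strength of the best retrieval hit, capped at 1.0."""
    best = context[0][0] if context else 0
    return min(1.0, best / 4)

def handle(message, knowledge_base, threshold=0.8):
    """Auto-send above the threshold, surface a draft for review below it."""
    context = retrieve_context(message, knowledge_base)
    draft = generate_reply(message, context)
    action = "auto_send" if confidence(context) >= threshold else "needs_review"
    return action, draft

kb = ["the migration finishes friday", "standup moved to 10am"]
print(handle("when is standup?", kb))  # low overlap -> routed to needs_review
```

The structural point survives the toy implementations: the threshold sits between generation and sending, so swapping in a better scorer never changes who gets the final say.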

**What I'm honestly unsure about:**

  1. We've validated that people spend the time. We haven't fully validated that they *want* to automate it. Some people feel weirdly possessive of their inbox even when they hate it. Have you felt that?

  2. Who's the buyer here? The employee who wants 3 hours back, or the employer who wants faster response times? We're conflicted.

  3. "Digital clone" sounds cool in a tech context. Does it sound creepy in an HR context?

We're opening to 17 more orgs right now (first 20 total). But I care more about honest takes on whether this is a real thing or not than I do about signups.

What would you change, kill, or focus on if this were yours?

u/Substantial-Cost-429 — 6 days ago
▲ 2 r/nocode

Full disclosure: I'm a founder of Dolly (getdolly.ai). Sharing for feedback and discussion, not just promo.

We built an AI agent that acts as a personal digital clone for each employee. Not a team bot. Not a shared assistant. One agent per person, trained on their individual communication style, plugged into their email and Slack.

The question I keep getting from people when I explain this: "That's either amazing or terrifying. Which is it?"

I genuinely don't know how the market will land on this. So I want to hear from people outside our bubble.

**If your employer handed you this tool tomorrow:**

- Would you use it?

- Would you trust it to send messages in your name?

- Would it creep you out that it knows your communication patterns?

- Would you worry your employer is using it to monitor how you write?

**And from the employer/ops side:**

- Does this solve a real productivity problem or does it create new liability?

- Is the "one agent per employee" model the right unit, or should this be team-level?

We're in early access — 20 org cap, 17 remaining: https://getdolly.ai

But honestly I care more about your reaction to the concept than getting signups right now. Tell me what's wrong with this.

u/Substantial-Cost-429 — 6 days ago

Hey — looking for early users who are willing to break things and give us honest, unfiltered feedback. Not a polished sales pitch.

**What is Dolly?**

Dolly is an AI agent that acts as your personal digital clone for workplace messaging. It plugs into your email and Slack, learns how you specifically communicate, and can respond on your behalf based on your tone, style, and knowledge base.

This is not a shared team bot. Every individual employee gets their own Dolly.

**Who it's for:**

Knowledge workers who spend a significant chunk of their day on async messages that, honestly, they've answered 50 times before. Managers, leads, cross-functional connectors, anyone whose inbox is a bottleneck.

**What we need from you:**

- Does the core concept make sense to you, or does something feel fundamentally off?

- Would you actually trust an AI to send messages in your name? If not, what would need to be true for you to trust it?

- What's the first thing you'd try to break?

- Is there an obvious use case or obvious problem we're ignoring?

Site: https://getdolly.ai

We're doing a limited early rollout — first 20 orgs, 17 spots left. Happy to give access to anyone here who wants to actually try it and give feedback.

u/Substantial-Cost-429 — 6 days ago

I want to share where we are and genuinely hear what we're getting wrong. Not a pitch — a real check-in.

**What we built:** Dolly is an AI that acts as an individual clone for each employee. Every person gets their own agent, trained on their communication history, that can respond to emails and Slack messages on their behalf. Not a shared team bot. One per person.

**Why:** Employees spend ~3 hours/day on async messages. Most of those replies follow patterns they've already established hundreds of times. We believe that load can be automated without anyone noticing a difference.

**What's working:**

- Fine-tuning on individual communication history produces much higher voice fidelity than prompt engineering alone

- Users in our pilot orgs are getting genuinely comfortable delegating routine replies after ~2-3 weeks

- Our confidence scoring model (decides auto-send vs. surface for review) is getting better with each iteration

**What I'm genuinely worried about:**

- The "employer buys, employee uses" dynamic is weird. Who is this for? Does the employee want this or does the employer?

- Are we solving a real bottleneck or just a pet peeve? 3 hours sounds big but is the problem really message volume, or is it context-switching?

- Can we keep the privacy guarantees strong enough for enterprise IT to stop flinching?

We're at 3 pilot orgs, about to open to 17 more (capped at 20 total for this round, site is getdolly.ai).

I'm more interested in hearing where we're wrong than where we're right. What are we missing?

u/Substantial-Cost-429 — 6 days ago

We've been heads-down building and I think we've lost perspective. Genuinely asking for the harshest feedback you can give.

What we built: Dolly (https://getdolly.ai) — an AI that acts as a digital clone for each employee. It plugs into email and Slack, learns how each person communicates, and can respond on their behalf.

The pitch: employees spend ~3 hours/day on messages that follow patterns they've already established. Dolly automates that load so they can focus on actual work.

Here's what I think our biggest weaknesses are — tell me if I'm wrong, or if there's something worse I'm missing:

  1. **The trust problem**: Will people ever actually let AI send emails in their name? We're seeing 2-3 weeks before users get comfortable, but maybe they never fully get there?

  2. **The "good enough" problem**: If Dolly drafts well but the person reviews everything anyway, what have we actually saved?

  3. **Enterprise security objections**: Plugging into email/Slack at the individual level raises eyebrows with IT. We handle this with on-prem options but still.

  4. **The value prop for employers vs. employees**: Employees might love it. Will employers pay for it?

What else am I missing? What's the thing we're most wrong about?

(17 spots left in our early access for the first 20 orgs if anyone actually wants to try it rather than just roast it)

u/Substantial-Cost-429 — 6 days ago

The future of work question I keep thinking about: not "will AI replace jobs" but "what happens when AI handles the coordination overhead of jobs?"

Because right now, a huge chunk of knowledge work isn't the work itself — it's the communication around it. Status updates, async Q&A, inbox triage, follow-ups. Patterns people have answered 50 times before.

We built Dolly to tackle exactly this. It's an AI agent that acts as a digital clone for each employee individually — not a shared team bot, but one per person. Dolly plugs into their email and Slack, learns their communication style and context, and can respond on their behalf.

What we've observed in early pilots:

- The employees who benefit most aren't the ones getting the most messages — they're the ones whose messages require the most context to answer (managers, leads, cross-functional connectors).

- The cultural shift is real. There's psychological friction in letting an AI respond for you, even when the draft is perfect. That trust takes time to build.

- It changes meeting culture. When async messages actually get answered quickly and correctly, the "let's just jump on a call" fallback disappears.

We're doing a limited rollout to 20 organizations. 17 spots left.

https://getdolly.ai

Curious what others here think about where this fits in the broader future-of-work picture.

u/Substantial-Cost-429 — 7 days ago

Wanted to share a project and some of the interesting architecture decisions we had to make — curious what this community thinks.

**The problem:** employees spend ~3 hours/day on async messages. The vast majority are patterned responses that don't need the person's full attention. We wanted to automate those.

**What we built:** Dolly — a per-employee AI agent. Not a shared org bot. One agent per person, each with:

- Fine-tuning on that employee's communication history (tone, style, recurring answers)

- RAG layer over their personal knowledge base (docs, past replies, internal wikis)

- LangChain orchestration for tool routing across email and Slack APIs

- A confidence scoring system that determines whether to auto-respond or surface a draft

**Some decisions worth discussing:**

  1. **Fine-tune vs. prompt-engineer the persona**: We initially tried heavy system prompting for persona. It worked okay but degraded on edge cases. Per-user fine-tuning produced much more consistent voice fidelity, at the cost of more infra complexity.

  2. **Confidence gating**: We use a combination of semantic similarity to past responses + LLM self-assessment to determine confidence. Still not perfect — curious if anyone has better approaches.

  3. **RAG scope per employee**: How much context is too much? We found that scoping RAG to the last 90 days of their communications + their active docs gave the best precision/recall tradeoff.
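
Points 2 and 3 above can be sketched together. The blend weight, the nearest-neighbor cosine stand-in, and the hard 90-day cutoff are all assumptions — the post doesn't say how the two confidence signals are actually combined:

```python
# Sketch: blended confidence (similarity + LLM self-assessment) and a
# 90-day retrieval window. Weights and cutoff are illustrative.
import math
from datetime import datetime, timedelta

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def blended_confidence(msg_vec, past_vecs, self_score, w_sim=0.6):
    """Combine nearest-neighbor similarity to past replies with the
    model's own 0-1 self-assessment."""
    sim = max((cosine(msg_vec, v) for v in past_vecs), default=0.0)
    return w_sim * sim + (1 - w_sim) * self_score

def recent_context(docs, now, days=90):
    """Scope RAG to the trailing window, per the precision/recall note."""
    cutoff = now - timedelta(days=days)
    return [d for d in docs if d["updated"] >= cutoff]

now = datetime(2025, 1, 1)
docs = [{"id": "q3-plan", "updated": datetime(2024, 12, 20)},
        {"id": "old-wiki", "updated": datetime(2024, 6, 1)}]
print([d["id"] for d in recent_context(docs, now)])  # ['q3-plan']
print(round(blended_confidence([1, 0], [[1, 0], [0, 1]], 0.5), 2))  # 0.8
```

One known weakness of this blend is worth flagging: LLM self-assessment is poorly calibrated on out-of-distribution messages, which is exactly the "looks like one category but is actually another" failure mode — so the similarity term is doing the real safety work here.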

We're in early rollout — 20 orgs, 17 spots left.

https://getdolly.ai

Happy to go deep on any part of the stack.

u/Substantial-Cost-429 — 7 days ago

**The idea:** Dolly is an AI that acts as each employee's digital clone — plugged into their tools, trained on how they communicate, and able to respond to messages on their behalf. One AI per person, not one bot for the whole team.

The problem we kept seeing: the average employee spends ~3 hours a day just on messages. Not strategy, not deep work — message triage. Repetitive, patterned responses that a well-trained system should be able to handle.

**What we shipped:**

- Per-employee model fine-tuning on communication history

- Tool integrations (email + Slack first)

- Confidence-gating: Dolly auto-responds above threshold, drafts for review below

- Admin dashboard per org

**What broke in pilot:**

- Voice fidelity was harder than we expected. Getting it to sound like *that person specifically* — not just "a professional" — required way more signal than we initially scoped.

- The confidence threshold UX was confusing. People didn't know how to calibrate it. We had to redesign it twice.

- Onboarding time was too long. First version took ~2 hours per employee. We're down to under 20 minutes now.

**What surprised us:**

- People LOVE reviewing Dolly's drafts almost as much as having it auto-respond. The draft review mode became a feature, not a fallback.

- Trust builds fast once they see their first 10 accurate responses. After that, adoption jumps.

**Where we are:** 3 orgs in pilot, opening to 20 total. 17 spots left.

Site: https://getdolly.ai

Happy to dig into any of this — the architecture decisions, GTM learnings, or the weird edge cases we've hit.

u/Substantial-Cost-429 — 7 days ago

Sharing this here because I think founders will resonate with the problem more than anyone.

The insight that started Dolly: high-performing people at fast-growing companies are essentially running two jobs. One is the work they were hired to do. The other is the endless cycle of messages — ~3 hours a day on average, according to research. Slack, email, async questions, status updates, internal requests.

Most of those messages have known answers. They follow patterns. They could be handled by someone who deeply understands how that person thinks and communicates.

But that person doesn't exist. Until now.

Dolly is an AI that models each individual employee — their communication style, their knowledge, their tools. It becomes their digital clone. When a message comes in that it can handle confidently, it does. When it can't, it surfaces a draft for review. Every employee gets their own Dolly — not a shared team bot.

What we learned building it:

  1. Trust threshold is everything. People will adopt it only if they can tune exactly when Dolly speaks for them vs. when it drafts.

  2. "Sound like me" is harder than "answer correctly." The hardest part is voice fidelity, not factual accuracy.

  3. Per-employee vs. per-org framing changes everything about how buyers think about it. It's a seat-based product, not an org-wide license.

We're opening to 20 organizations in our first cohort. 17 spots left.

getdolly.ai

Happy to answer anything about what we built, what broke, or how we're thinking about GTM.

u/Substantial-Cost-429 — 7 days ago

Sharing something we've been building that I think is directly in the wheelhouse of this community.

The core problem: employees spend ~3 hours a day on async messages. The vast majority of that is patterned — the same kind of questions, in the same domain, answered in the same voice, over and over. It's a task that has all the properties of something an agent should be able to handle: defined context, learnable tone, bounded knowledge domain, predictable inputs.

So we built Dolly.

Dolly is a per-employee AI agent. Not a shared org assistant — each employee gets their own agent instance, fine-tuned on their communication patterns and knowledge. It integrates with their existing tools (email, Slack, internal platforms) and handles response drafting or full auto-response depending on a configurable confidence threshold.

What makes it different from a generic AI assistant:

- It's trained on *that person's* history, not a general corpus

- Responses sound like them, because the model is initialized from their writing

- It knows what they know — their docs, their past answers, their domain

- It knows when to hold back: the confidence-gating system prevents hallucinated or out-of-character responses

Think of it less as "AI answering your emails" and more as "an agent that has modeled you, and is acting in your voice with your judgment."

Early pilot results: avg ~2.5 hours/day returned per employee.

Launching to 20 orgs max. 17 spots left. Link in comments.

u/Substantial-Cost-429 — 7 days ago

Wanted to share what we built and get some technical feedback from people who actually think about AI architecture.

The problem: the average employee spends ~3 hours a day reading and responding to messages. Most of that is patterned communication — questions they've answered dozens of times, in a voice that's distinctly theirs, using knowledge that's already in their head.

Our hypothesis: you can model that well enough to automate it.

So we built Dolly.

Architecture overview:

- Per-employee fine-tuned model layer on top of a base LLM

- Tool integrations (email, Slack, etc.) via standardized APIs

- Context retrieval from each employee's communication history and knowledge base

- A confidence threshold system — Dolly only auto-responds when it's above a defined certainty level; otherwise it drafts for review

Every employee gets their own Dolly instance. The model learns their tone, their typical answers, their domain knowledge. It's not a shared org-level bot — it's literally one AI per seat.

Early results from pilot orgs: ~2.5 hrs/day returned per employee on average.

Now doing a limited early rollout — 20 orgs max, 17 spots left.

getdolly.ai

Happy to go deep on the architecture, training approach, or the confidence-threshold problem (which is genuinely hard to get right).

u/Substantial-Cost-429 — 7 days ago

People automate workflows, deployments, data pipelines, and a hundred other things — but somehow the average employee still manually reads and responds to messages for ~3 hours every single day.

Think about that. 3 hours. Of typing replies that, most of the time, follow patterns you've already established.

We got obsessed with this problem and built Dolly.

The concept is simple: Dolly is an AI that models how you specifically communicate and work. It plugs into your tools — email, Slack, whatever you use — and can respond on your behalf based on your knowledge, your tone, and your context. It's not a shared team bot. It's your individual digital clone.

Every employee gets their own Dolly. Their own clone that handles the repetitive, predictable message load so they can focus on the work that actually requires them.

We're doing a limited rollout to the first 20 organizations. 17 spots remaining.

https://getdolly.ai if you're curious. Happy to talk architecture, use cases, or what we got wrong in v1.

u/Substantial-Cost-429 — 7 days ago