I spent months inside a law firm before writing a single line of code. Here's what I learned about how lawyers actually work (and why most legal tech misses the point).
I'm not a lawyer. I'm a builder. And when I decided to tackle the legal industry with AI, the most common advice I got was: "just learn the tech stack and ship something."
I ignored that advice. Instead, I spent weeks just watching. Sitting in on matter reviews. Reading through how attorneys document their time. Understanding what a "conflict check" actually means operationally — not theoretically.
What I found was that most legal professionals are drowning, not in cases, but in administrative overhead that has nothing to do with the law itself.
Time Tracking: Attorneys reconstruct their day from memory at 6pm. Billable hours get lost, not because lawyers are lazy, but because logging mid-task destroys focus.

Contract Review: A mid-size firm might review hundreds of contracts a month. The risk flags are often the same: unlimited liability, short notice windows, ambiguous IP clauses. Repetitive and high-stakes.

Conflict Checks: Every new client intake requires screening against every existing matter. Done manually, it's a legal liability waiting to happen, and it almost always is manual.

Deadline Chaos: Court deadlines, filing windows, contract renewals, all living in different places. Missing one isn't a bad day. It's a malpractice claim.
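To make the conflict-check pain point concrete, here's a minimal sketch of what automated screening could look like. Everything in it (the field names, matching on exact lowercased names) is an illustrative assumption, not how any real firm's system works; a production screen would need fuzzy matching, corporate-family lookups, and attorney review of every hit.

```python
# Illustrative conflict screen: flag existing matters where the new
# client appears as an adverse party, or a new adverse party is
# already a client. All names and fields are hypothetical.
def conflict_check(new_client, new_adverse_parties, matters):
    new_client = new_client.lower()
    new_adverse = {p.lower() for p in new_adverse_parties}
    hits = []
    for m in matters:
        adverse = {p.lower() for p in m["adverse_parties"]}
        if new_client in adverse or m["client"].lower() in new_adverse:
            hits.append(m["id"])
    return hits

matters = [
    {"id": "M-001", "client": "Acme Corp", "adverse_parties": ["Beta LLC"]},
    {"id": "M-002", "client": "Gamma Inc", "adverse_parties": ["Delta Co"]},
]

# Taking on Beta LLC as a client conflicts with M-001 (the firm is
# adverse to them there); naming Gamma Inc as an adverse party
# conflicts with M-002 (they are an existing client).
print(conflict_check("Beta LLC", ["Gamma Inc"], matters))  # ['M-001', 'M-002']
```

Even a toy version like this shows why the manual process is fragile: the check is mechanical set intersection, exactly the kind of thing humans do badly at intake time.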
What I learned about building for this space:
① Legal language is load-bearing. Words mean specific things. "Matter" is not "case" is not "file." If your system uses the wrong term, attorneys won't trust it, full stop. Get the vocabulary right before you get the UI right.
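One way to enforce that vocabulary in code (a sketch under my own assumptions, not anyone's actual schema): give each term its own type, so "matter," "case," and "file" can never be used interchangeably in the data model or the UI.

```python
# Hypothetical domain model. A Matter is the client engagement;
# a Case is a court proceeding under a matter; a File is a document
# stored under a matter. Distinct types keep the terms distinct.
from dataclasses import dataclass

@dataclass(frozen=True)
class Matter:
    matter_id: str
    client: str

@dataclass(frozen=True)
class Case:
    case_id: str
    matter_id: str   # a matter may have zero or many cases
    court: str

@dataclass(frozen=True)
class File:
    file_id: str
    matter_id: str   # files hang off the matter, not the case
    filename: str

m = Matter("M-001", "Acme Corp")
c = Case("C-17", m.matter_id, "S.D.N.Y.")
f = File("F-204", m.matter_id, "engagement_letter.pdf")
```

The payoff is that a function asking for a `Matter` can't silently receive a `Case`, which is exactly the category error the attorneys will catch you on.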
② The human-in-the-loop isn't optional; it's the product. Every AI output in a legal context needs a clear approval layer. Not because the AI is wrong, but because an attorney's signature is on the outcome. Design for accountability, not just accuracy.
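Here's a minimal sketch of what "design for accountability" can mean in practice: no AI output is final until an attorney signs off, and the sign-off itself is recorded. The field names, statuses, and the attorney name are my assumptions for illustration, not a real product's API.

```python
# Sketch of an approval layer: the AI's draft and the attorney's
# decision live on the same record, so accountability is auditable.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIOutput:
    content: str
    status: str = "pending"            # pending -> approved / rejected
    decided_by: Optional[str] = None
    decided_at: Optional[datetime] = None

    def approve(self, attorney: str) -> None:
        self.status = "approved"
        self.decided_by = attorney     # who signed off, and when
        self.decided_at = datetime.now(timezone.utc)

    def reject(self, attorney: str) -> None:
        self.status = "rejected"
        self.decided_by = attorney
        self.decided_at = datetime.now(timezone.utc)

draft = AIOutput("Suggested clause revision for section 4.2")
draft.approve("J. Rivera")  # hypothetical attorney name
```

The design choice worth noting: approval isn't a boolean flag bolted on later; it's part of the output's lifecycle from the moment the model produces it.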
③ Risk language matters more than feature counts. Lawyers think in risk. When I reframed my conflict check feature not as "a search tool" but as "liability prevention," interest in those early conversations jumped immediately. Same feature, different frame.
④ AI confidence scores are a UX problem, not a model problem. When the model flags a clause as risky, attorneys want to know why, not just that it is. Building explainability into every AI output was the hardest and most important thing I did.
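Concretely, that can mean the flag object itself carries its evidence, so the UI never shows a bare score. The fields below are my illustrative assumptions about what an explainable flag needs, not the author's actual implementation.

```python
# Sketch: ship the "why" with every flag, not just the score.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskFlag:
    clause_text: str
    risk: str          # e.g. "unlimited liability"
    confidence: float  # model score in [0, 1]
    rationale: str     # plain-language explanation for the attorney

def render(flag: RiskFlag) -> str:
    """Format a flag the way an attorney would read it."""
    return (f"[{flag.risk}] confidence {flag.confidence:.0%}\n"
            f"Clause: {flag.clause_text}\n"
            f"Why: {flag.rationale}")

flag = RiskFlag(
    clause_text="Licensee shall indemnify Licensor for all claims.",
    risk="unlimited liability",
    confidence=0.87,
    rationale="Indemnity obligation has no cap and no carve-outs.",
)
print(render(flag))
```

Making `rationale` a required field is the point: if the pipeline can't produce a reason, it can't produce a flag.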
⑤ Don't automate judgment; automate everything around judgment. AI shouldn't decide legal strategy. It should free up the hours that get wasted so the attorney can focus on the decisions only they can make.
I'm still deep in this build — and honestly the deeper I go, the more respect I have for the complexity of the problem. Legal operations is one of those spaces where the surface looks simple and the depth is enormous.
Happy to share more about the discovery process, the AI design decisions, or what's been the hardest part technically. Would love to hear from anyone who's worked with — or inside — law firms. What am I missing?
#legaltech #AIAgents #LLM #Chatbot