r/legaltech

Using Claude for drafting transactional documents

I’ve been using Claude pretty heavily inside Word / coworking tools over the past few weeks, and honestly it’s been a bit of a game changer for me as a junior lawyer.

For the “dirty work” of drafting, it’s insanely good:

- fixing defined terms

- cleaning leftovers from precedents

- checking cross-references

- generating a solid first draft based on prior docs (after giving it enough context)

This alone probably saves me hours every week.

Where I still feel a gap is in structuring — it helps a lot to organize logic and sanity-check if things make sense, but it still lacks a bit of “deal instinct” / creativity that you build with experience.

That said, the productivity boost is real. Feels like going from manual to semi-automated drafting overnight.

Curious how others here are using it:

- anyone in transactional work pushing it further?

- any litigators using it differently?

Do you think people are underestimating how powerful this already is?

Also very curious to see how this evolves — feels like we’re still early.

reddit.com
u/Plus-Problem-8575 — 5 hours ago

Creating a small sub for practical legal AI tool discussion

I recently transitioned from biglaw to an in-house position at a small public company, and I've been trying to build useful legal AI tools for the last few months - probably similar to the tools loads of other lawyers have been building at their own companies recently. I’ve enjoyed following this sub, but I’m also looking for something a bit narrower for people in my situation: a small, private, practitioner-focused space for lawyers actively using AI in day-to-day work and trying to build repeatable workflows, so we can compare notes and learn.

Topics might include:

* contract review systems

* prompt/workflow design

* Claude / ChatGPT / Harvey in real use

* internal adoption challenges

* confidentiality guardrails

Public forums are great for broad discussion, but I think people (certainly me) are uncomfortable posting specific tools they've built or custom instructions they've been working on in a public space. There are also a lot of vendors in any public sub, who definitely have their value but also dilute the specific discussions I'm looking for.

If that sounds interesting, comment or PM me. Even better, if something like this already exists, I’d love to hear about it!

reddit.com
u/dormidary — 14 hours ago

Claude Use Cases for Estate Planning / Probate Firm?

I work for a small local firm (less than 5 attorneys).

Our focus is Estate Planning / Probate / Trust Admin / Litigation.

Currently, we utilize Smokeball (think Clio) as our CRM and automation tool. It has various use cases including template automation for federal and state forms that they pre-built for us, which helps streamline our process a lot.

I have been seeing a lot of videos online about Claude Cowork, and my original thought was to use it for the template automation we do with Smokeball, but there is no real reason for us to switch over for that as costs are similar.

Does anyone have any other use cases I'm missing? I am looking for ideas geared more towards workflow optimization in our field of practice rather than marketing outreach, though I am all ears for both.

Thank you in advance

reddit.com
u/Heirachyofneeds — 8 hours ago

Best RAG setup for legal docs?

Building an internal contract review tool. Indexed ~8k docs (MSAs, NDAs, vendor agreements) into Pinecone with OpenAI embeddings, hybrid search on top.

Retrieval is weak: queries like "find the indemnification cap in vendor contracts under $100k" return the right doc but wrong section half the time.

What's actually working for legal RAG in 2026?

Different embeddings, different search stack, custom everything?
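
For concreteness, here's the direction I'm leaning: chunk at the clause/section level at index time and push the structured constraints (doc type, contract value) into a metadata filter instead of hoping the embedding carries them. This is just a sketch; the index name, metadata fields, and models are placeholders, not my actual config.

    from openai import OpenAI
    from pinecone import Pinecone

    oai = OpenAI()
    pc = Pinecone(api_key="PINECONE_API_KEY")
    index = pc.Index("contracts")  # placeholder index name

    query = "indemnification cap"
    vec = oai.embeddings.create(model="text-embedding-3-large", input=query).data[0].embedding

    # Clause-level chunks tagged at index time with section_type, doc_type, and contract_value,
    # so the "vendor contracts under $100k" part of the question becomes a hard filter.
    hits = index.query(
        vector=vec,
        top_k=10,
        filter={
            "doc_type": {"$eq": "vendor_agreement"},
            "contract_value": {"$lt": 100_000},
            "section_type": {"$eq": "indemnification"},
        },
        include_metadata=True,
    )
    for match in hits.matches:
        print(match.metadata["doc_title"], match.metadata["section_heading"], match.score)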

reddit.com
u/I_AM_HYLIAN — 12 hours ago

Every law firm I talk to has the same problem and none of them have solved it

Since posting about the AI research system I built for a German law firm I've been having conversations with lawyers in different countries. The pattern is identical everywhere.

The problem: firms accumulate years of internal knowledge in documents. Court decisions, case files, internal memos, regulatory guidance, client correspondence. This knowledge is incredibly valuable. But nobody can efficiently search it.

When a new question comes in, junior associates dig through folder structures trying to find relevant precedents. They search by filename. They ask senior colleagues "didn't we handle something like this before?" They spend 30-60 minutes finding what they need when the answer exists somewhere in documents the firm already has.

The irony is these firms sell their expertise by the hour but waste enormous amounts of billable time on internal knowledge retrieval.

What's interesting is why nobody has solved this for most firms:

  • Big legal tech companies (Westlaw, LexisNexis) focus on external legal databases, not internal firm knowledge.
  • Generic AI tools don't understand legal authority hierarchy. A ChatGPT wrapper treats a blog post and a Supreme Court ruling with equal weight. Lawyers can't trust that.
  • Most firms don't have internal tech teams. They rely on IT support for email and printers. Nobody is building custom AI tools.
  • The firms that do have tech teams are building for client-facing products, not internal knowledge management.

This creates a massive gap. The firms need custom AI systems built for their specific documents with their specific domain requirements. But there's almost nobody offering that service because developers don't think to target law firms and law firms don't know what to ask for.

The thing I didn't expect is how much of the architecture carries over. The authority hierarchy logic, the citation enforcement, the jurisdictional tagging, the annotation layer. Most of it isn't specific to one firm. A different compliance team or law firm would have the same structural needs just with their own documents and maybe slightly different authority tiers.
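
To make "the same structural needs" concrete, most of it boils down to a metadata layer attached to every document before retrieval ever happens. Here's a rough sketch of the shape, with illustrative field names and tiers rather than the exact ones from the project:

    from dataclasses import dataclass, field
    from enum import IntEnum

    class AuthorityTier(IntEnum):
        # Higher = more binding; the exact ladder varies by jurisdiction and firm.
        SUPREME_COURT = 5
        APPELLATE = 4
        TRIAL_COURT = 3
        REGULATORY_GUIDANCE = 2
        COMMENTARY = 1  # memos, articles, blog posts

    @dataclass
    class FirmDocument:
        doc_id: str
        title: str
        authority: AuthorityTier
        jurisdiction: str                  # e.g. "Bayern" or "federal"
        citation: str | None = None        # canonical citation, enforced for court decisions
        annotations: list[str] = field(default_factory=list)  # lawyer notes layered on top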

I spent most of the project solving problems I thought were unique to this client but turned out to be universal to how legal professionals work with documents. That realization changed how I think about what I actually built.

reddit.com
u/Fabulous-Pea-5366 — 19 hours ago

What pisses you off about Clio

Any feature that should already be there but isn’t, any integration with other apps, or anything else you think would enhance your experience.

reddit.com
u/Large-Rabbit-4491 — 19 hours ago

62 legal AI assessments in, here's what surprised me about small firm buyers

Spent time on the legal AI sales side before building an independent assessment tool a few months back. It walks firms through use case, size, budget, and integration needs and recommends which of the 6 major legal AI tools fits best (Harvey, Spellbook, Lexis+ AI, CoCounsel, Luminance, Kira).

Pulled the aggregated data this weekend. A few patterns surprised me.

37% of legal AI buyers running the assessment are solo or small firms (1-5 attorneys). That's a larger share than I expected, given how much vendor marketing leans big law first.

39% selected "budget is secondary, fit is what matters" as their budget preference. Given how concentrated the small firm segment is, I assumed this would skew more price-sensitive. Didn't hold up.

The vendor distribution also didn't match share-of-voice in press coverage. One of the loudest names in the category is mid-pack by recommendation count, while a quieter one is winning a plurality.

The sample is 62 legal AI assessments, anonymized at collection (no PII). The methodology is transparent, and I'm happy to answer questions about it here.

Not selling anything, the assessment is free and there are no ads. Just thought the data was interesting.

Anyone else seeing the SMB skew with your firm or clients?

reddit.com
u/PopularWeather6463 — 1 day ago

Am I overthinking it, or do others actually keep closed 811 tickets for years?

We’re in the middle of cleaning up project files from 2022–2024, and I’ve hit a pile that honestly just made me stop. A box full of printed 811 tickets from the field. Some are neatly stapled, others are crumpled, and a few are barely readable: coffee stains, dirt, half-faded print. Now I’m stuck on the question no one really wants to answer: how long are we actually supposed to keep these after the job is closed? I’ve heard different answers, from 3 years to 5 years, but nothing consistent. And I don’t want to be the person who shreds something today and then gets asked for it after a utility incident down the line. Right now it feels like we’re either hoarding paper just in case or risking not having it when it matters.

reddit.com
u/Cluten-morgan — 21 hours ago

Client, not lawyer — I've ended up keeping my own case chronology. Tools problem or lawyer problem?

I'm not a lawyer. I'm a client on an ongoing matter, and I've ended up keeping a detailed chronology of my own case in a Google Doc — because my attorney keeps forgetting small but important details, and I use the doc to nudge her back on track during calls.

This feels backwards. Shouldn't my lawyer be the one with the chronology, and I should be *asking* for it?

I started looking into what tools exist for this and hit CaseFleet and TimelinePad. Both seem real and capable, but from the outside they feel like traditional timeline/spreadsheet tools that bolted AI on recently rather than being designed AI-native from the start. Neither seems built for the scenario where the client also needs visibility.

Genuine questions for the crowd:

  1. Am I an outlier, or is "client ends up maintaining the chronology" more common than lawyers want to admit?

  2. For the lawyers here — do you actually use a chronology tool, or is it still Excel + folders + memory for most of the bar?

  3. Is there any tool you've seen genuinely adopted in practice (not just bought and abandoned)?

  4. Would an AI-native chronology tool — structured so the facts + linked evidence can be fed into any LLM, not locked to one vendor — actually change anything, or is that a solution looking for a problem?

Full disclosure: I'm thinking about building something simple here. But I'm also just a client who's paying for legal services and ended up doing part of the case-tracking work myself — genuinely unsure whether this is a real gap in the market, or whether I just need a different lawyer.

(This post is optimized with claude, so it might sound a bit AI)

reddit.com
u/HatAncient1742 — 1 day ago

Are any lawyers accessing LLMs through the API and a third-party UI?

Throughout last year, I started using ChatGPT Plus more for work--so much that I upgraded to Pro a few months ago and loved it. It works great for anything in the public record or where data security isn't an issue. But I also have some work where data/cyber security is an issue, and I've just become more wary of where data is going, where it's being stored, who can access it, etc. I'm primarily a solo; I consult with law firms and work with corporate clients, but I've read that I'd have a hard time implementing any type of ZDR protocol or BAA with the major LLM providers, as that's usually reserved for their large enterprise users.

I don't have a programming background, but I went down a deep rabbit hole a couple of weeks ago researching alternatives that give me the functionality of something like ChatGPT Pro with more "enterprise"-grade security protocols that small businesses would have trouble implementing directly with the LLM providers (ZDR, HIPAA compliance, encryption, etc.).

That research led me to UIs like LibreChat, TypingMind, and a few others, along with other online tools that I had no idea even existed (like I said, I'm a lawyer, not a programmer), like OpenRouter, AWS Bedrock, Cloudflare/S3-compatible sync/backup, RAG, and a laundry list of plugins, extensions, and thingamabobs that I'm still navigating through.

My ultimate goal was to have a setup where data (chats, prompts, inputs, documents, etc.) are stored only with a secure provider with encryption, that is HIPAA-compliant, that doesn't involve third-party access, and that could sync across devices--MacBook Pro, iPhone, and iPad. I finally have something that's been working pretty well, although I'm still learning more and more every week about all of the things I can potentially build out.
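
For anyone wondering what the "API plus a third-party UI" route actually looks like under the hood, it's essentially the snippet below; tools like LibreChat and TypingMind wrap calls like this in a chat interface and handle storage/sync for you. The model slug is just an example, not a recommendation.

    from openai import OpenAI

    # OpenRouter exposes an OpenAI-compatible endpoint, so the standard client works
    # once you point it at their base URL.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="OPENROUTER_API_KEY",
    )

    resp = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",  # example slug; check OpenRouter's model list
        messages=[
            {"role": "system", "content": "You are a careful legal drafting assistant."},
            {"role": "user", "content": "Flag anything unusual in this indemnification clause: ..."},
        ],
    )
    print(resp.choices[0].message.content)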

Are any other lawyers experimenting with "custom" setups like this? If so, what are you implementing, and how are you using it?

reddit.com
u/sps133 — 3 days ago

Open standard for collaboration-platform eDiscovery collection fidelity - help me break it

Hi r/legaltech. Long-time lurker, first-time poster.

Over the last year I've been working on an open, vendor-neutral standard called Reconstruction-Grade eDiscovery (RGR). It tries to define what "preserved the right evidence" actually means when the evidence lives in Teams, SharePoint, OneDrive, and Slack - platforms that broke most of the assumptions traditional eDiscovery collection was built on.

I'd rather get critique than an audience, so here's the thesis in one paragraph - tell me where it breaks.

Traditional eDiscovery assumes messages carry fixed attachments, threads live in single containers, and a file collected today is the file the custodian saw. Collaboration platforms invalidated all three: messages reference live documents that change after sending, threads fragment across compliance records, versions diverge between the communication and the collection, and modern attachments orphan when links break.

"Reasonable steps" under FRCP 37(e) increasingly means something different for this evidence class than for email - but the industry hasn't had a shared vocabulary for what a capable collection methodology actually preserves.

RGR tries to be that vocabulary. It defines four conformance tiers (RG-Aware → RG-Core → RG-Plus → RG-Max) for collection fidelity, and requires exception reporting so defensibility is auditable rather than assumed.
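
To make "exception reporting" concrete, here's a rough illustration of the kind of record a conforming collection tool might emit for each gap. The field names are mine for illustration, not what the RGR spec actually prescribes:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CollectionException:
        """One auditable gap in a collection run (illustrative fields, not the RGR schema)."""
        item_id: str          # message or file identifier in the source platform
        custodian: str
        platform: str         # e.g. "Teams", "Slack", "SharePoint"
        expected: str         # fidelity the target tier required, e.g. "linked doc version at send time"
        observed: str         # what was actually preserved, e.g. "current version only"
        reason: str           # permissions, retention policy, broken link, etc.
        detected_at: datetime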

For anyone who wants the shortest path in, I wrote a four-week, six-post narrative arc that walks through the problem, the case law, and the framework: https://rgrstandard.org/blog/four-weeks-six-blog-posts/

u/Constant-Ninja-3933 — 1 day ago

Not a lawyer myself, I'm a software developer doing some research and genuinely curious about how solo attorneys and small firms handle client inquiries.

A few questions if you don't mind:

  1. Do you have your own website, or do you mostly rely on directories like Avvo / Martindale / word of mouth?

  2. If you do have a website, what happens when someone fills out your contact form at 10pm? Do you reply the next morning, or do you have someone handling it?

  3. Has anyone experimented with any kind of chat widget or AI assistant on their site that answers basic FAQs (practice areas, fees, availability) and maybe books a consultation call automatically? Curious if that's something that would even be useful or if it would feel too impersonal for the legal context.

I've seen these tools work well in other service businesses, but I genuinely don't know if law is different; it seems like clients might want to talk to a real person from the start.

Appreciate any honest takes, even if it's "that would never work for us."

reddit.com
u/AccordingLeague9797 — 2 days ago

The gap between legal AI marketing and what actually works in production is wild

I'm a developer who recently built an AI research system for a compliance firm in Europe. Not a SaaS product, just a custom internal tool for one firm. Wanted to share some observations from the experience because the disconnect between how legal AI gets marketed and what actually matters in practice was eye-opening.

The biggest thing I underestimated was citation accuracy. Every legal AI demo I've seen shows a chatbot returning nice-looking answers. Nobody talks about the fact that the AI will confidently attribute a regional court's position to the Supreme Court if you don't specifically engineer against it. I caught this during testing and it took weeks of prompt engineering to get source attribution reliable. Stuff like the model writing "according to professional literature" instead of citing the specific document, or flattening two conflicting court positions into one answer as if there's consensus when there isn't.

The authority hierarchy problem is something I've never seen addressed in any legal AI product marketing. In practice, a high court ruling carries fundamentally different weight than a lower court opinion or a guideline or a law review article. Standard AI retrieval treats them all equally because it just ranks by text similarity. A well-written blog post can outrank an actual binding court decision because the blog uses more natural language. That's dangerous in a way that's hard to detect without domain expertise.
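
One way to picture the fix: re-rank retrieval hits with an explicit authority term instead of trusting text similarity alone. This is a simplified illustration with made-up weights, not the actual logic from the system described above:

    def rerank_by_authority(hits, authority_weight=0.35):
        """Blend vector similarity with a normalized authority tier.

        Each hit is a dict like {"score": 0.82, "authority": 4, "text": ...},
        where tiers run 1 (commentary) to 5 (highest court).
        """
        def blended(hit):
            authority_norm = (hit["authority"] - 1) / 4      # map 1..5 onto 0..1
            return (1 - authority_weight) * hit["score"] + authority_weight * authority_norm
        return sorted(hits, key=blended, reverse=True)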

The other thing that surprised me was how much the lawyers cared about regional jurisdiction handling versus how little most AI tools account for it. In Germany you have 16 federal states with variations in how regulations get applied. Documents need to be tagged by jurisdiction and the system needs to flag when something is state-specific vs nationally applicable. None of the generic tools I evaluated before building custom handled this at all.

On the positive side, once the system actually worked properly with accurate citations and authority awareness, adoption within the firm was faster than I expected. The associates who were skeptical became the heaviest users because it genuinely cut their research time from 30-45 minutes per question down to a few minutes.

Curious if others here have had similar experiences with legal AI tools, either good or bad. The space seems to be moving fast but the quality gap between what's marketed and what's actually production-ready feels massive.

reddit.com
u/Fabulous-Pea-5366 — 4 days ago

Thoughts on the Heppner decision? Does it directly affect legal tech?

In United States v. Heppner (2026), a federal court ruled that conversations and documents created with public AI tools (like Claude) are not protected by attorney-client privilege or work product doctrine.

So, this means any LLM out there, for now, presents a huge liability risk?

reddit.com
u/Special_Collection_6 — 5 days ago

Some suggestions for those looking to add AI to their law firm.

I've been looking into the internal operations of a few law firms recently as part of my research, and I see the exact same reflex every time a partner decides they need to "figure out AI."

They are completely lost on how to actually use it, so they assume they need to buy or build some massive, perfect agentic system on day one.

You don't.

If you want to actually incorporate AI into your practice, here is how I'd recommend getting started:

Start by using the native "Interview" tool. The best is Claude's AskUserInterview tool, Gemini's is okay, and I would avoid using ChatGPT's for this critical first step. You can use a skill like the one below to help by typing "Use /interview to interview me for ways to implement AI at my law firm."

# /interview                                                                                                                                                                              
              
  Turn a vague idea into an implementable spec by asking the questions the user hasn't thought to answer yet.
                                                                                                                                                                                              
  ## Input: $ARGUMENTS
                                                                                                                                                                                              
  ## Phase 0: Build an Internal Question Map                                                                                                                                                
                                            
  Before asking anything, write every question you might want to ask to `/tmp/interview-questions.md`. Organize by category: technical, UX, data, edge cases, security, operations. Aim for
  30+ questions across 6+ areas.                                                                                                                                                              
                                
  This map is internal — never show it to the user. Use it to ensure you don't skip categories. Mark questions resolved as answers come in. When an answer reveals new complexity, add        
  follow-up questions.                                                                                                                                                                        
                      
  ## Phase 1: Understand the Input                                                                                                                                                            
                                                                                                                                                                                            
  - File path: read it, summarize your understanding, identify gaps
  - Description: acknowledge what you know, note what's missing                                                                                                                               
  - Empty: ask what to interview about                         
                                                                                                                                                                                              
  ## Phase 2: Conduct the Interview                                                                                                                                                           
                                   
  Batch up to 4 questions per round. Cover at minimum:                                                                                                                                        
                                                                                                                                                                                            
  - **Core:** What user pain does this solve? Who uses it first vs. most? What does success look like?
  - **Technical:** What existing code does this touch? Simplest version? External dependencies?
  - **Data:** Where does data live? What happens offline? Conflict handling?
  - **UX:** Entry point? Happy path? Frustrated path? Existing patterns to follow?
  - **Tradeoffs:** What are we explicitly NOT building? What could break?
  - **Operations:** How is this monitored? Debugging? Who owns it long-term?

  When presenting options, **recommend one and say why** — don't make the user evaluate from scratch.

  Keep going until the question map is exhausted. Judge completeness yourself.

  ## Phase 3: Confirm

  Summarize your understanding. Flag remaining assumptions. Ask user to correct anything before writing.

  ## Phase 4: Write the Spec

  Ask where to save it, then write:

  - **Feature** → user stories + acceptance criteria
  - **Initiative** → PRD (problem, solution, scope, success metrics)
  - **Technical** → architecture, implementation steps, considerations
  - **Bug/enhancement** → problem, proposed fix, testing approach

The goal is to let the AI build context on you. You want it to understand how your firm operates, how you deal with clients, your daily bottlenecks, and the challenges you've had with AI in a legal setting in the past. If you think it didn't cover something, be sure to ask it about it.

Once it understands your actual baseline, have it generate a prioritized list of small, low-risk use cases.

Work through that list slowly over time.

Your goal is just to put down a solid foundation. Yes, people will brag online about their fully automated, zero-touch AI firm setups. They don't actually have those setups anywhere except in their dreams.

What matters is that you try it, find one or two things that actually work, and build from there.

If you run into roadblocks, bring your questions back here, or just ask the AI system directly to explain why it failed.

Happy to answer any questions below.

reddit.com
u/Mammoth_Doctor_7688 — 5 days ago

Mod Announcement: I work for a vendor now.

I've taken a role at SimpleDocs (Law Insider / oneNDA) as Chief Growth Officer.

I continue to talk to other vendors, and I’m planning a new AMA format which I hope y’all will enjoy. AMA eligibility rules will still be based on the rules I built into the rlegaltech500 index, e.g. company ARR, age, or valuation metrics.

Also, (u/Gee10) has been moderating this sub for 15 years. He has full authority to override me on anything where there's a conflict.

Thank you to the many of you who have claimed or created vendor pages on the rlegaltech.com wiki. SimpleDocs have kindly given me permission to carve out time each month to continue maintaining the site.

I will not maintain the SimpleDocs page on the wiki; I will leave that to Electra Japonas or Preston Clark. As far as reddit is concerned, they will receive the same advice I give to all other vendors who reach out to me. Be honest. Be helpful. Follow the rules. Etc.

The reason I feel I was (and still am) well placed as a mod for this sub is that I know the tricks that a small minority of vendors use to shill and astroturf here, and I don’t want folks to disengage from the sub because of a small minority. 

I’ll also be reaching out to mods of the other legal subs to share lists of 10s of sock puppet accounts I’ve systematically dug out.

I know this community is small by reddit’s standards, but I’m enjoying moderating it (and taking that responsibility seriously), if you’ll still have me. Either way, thanks so far!

My flair now shows my SimpleDocs affiliation (as per my own rules).

P.S. This is not the sort of news I expect to get upvoted here, but I do want to keep being upfront with you all.

u/alexdenne — 5 days ago

How to break into Legal-tech as a student?

hi!
I’m currently a second-year law student in a 5-year program in India (moving into my third year soon), and I’ll also be starting a 3-year computer science degree this year (online, from a well-known institute), so I’ll be graduating with both degrees around the same time.

I genuinely enjoy law and tech, and I’m especially interested in the legal tech space. I like the idea of building things, solving problems, and utilizing technology to increase efficiency.

That said, I feel a bit lost when it comes to how legal tech actually works in practice, especially within law firms. I don’t have much exposure or guidance at the moment, and I’m trying to figure out how to move in the right direction.

I’ve come across a few structured programs, like the Clifford Chance IGNITE Training Contract, Simmons & Simmons Wavelength (tech-focused team), and the Macfarlanes Lawtech Scheme.

But I haven’t done deep research yet, and I’m not sure how to realistically work towards something like this — especially coming from India, where the legal tech market is still developing compared to the UK/US.

I’d really appreciate any advice on:

  • How to break into legal tech as a student
  • Skills I should focus on (both legal and technical)
  • Resources/courses that actually helped you
  • How law firms use legal tech in practice
  • Any opportunities or pathways that are open to international students

I’m actively working on building my skills, but I think I need a clearer direction and understanding of the field.

Would love to hear your experiences or suggestions — thank you so much!

PS - Used AI to refine the writing.

reddit.com
u/Pale_Librarian_7433 — 3 days ago

Harvey's World Model Claims (or Anyone Else's) Make No Sense

(Disclosure: I work at a legal AI competitor, Irys, but this post isn't selling anything; it's simply an assessment of the costs of world models.)

I saw a few people hyping up Harvey's plan for legal world models. Which makes sense, because world models are THE hyped thing in AI right now. But their plans are all hype and no substance for a simple reason: world models are too expensive to build (especially in a domain like legal). As of now, any attempt to build them would be more narrative than results.

If you want to know the true cost of a world model, look at Meta’s Code World Models (CWM). Code is the friendliest possible sandbox for this architecture. It has deterministic state, explicit transitions, and perfect, cheap verification (you can see if it runs or not). However, even here, Meta had to sink a ton of capital into:

- 35,000 isolated Docker environments,

- 120 million execution traces, and

- 3 million fully verified agent trajectories.

To generate a single verified data point, the system had to fail tests, apply a patch, rerun tests, and prove the fix. That's hundreds of millions of Docker-minutes, which is an eight-figure infrastructure bill just to establish ground truth before training even begins.
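
Back-of-envelope, with an assumed per-minute rate (the rate is a guess, not Meta's actual pricing), just to show why "hundreds of millions of Docker-minutes" lands in eight figures:

    docker_minutes = 300_000_000   # "hundreds of millions" of Docker-minutes
    usd_per_minute = 0.04          # assumed blended compute cost per container-minute
    print(f"${docker_minutes * usd_per_minute:,.0f}")   # $12,000,000 -> eight figures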

The cost of training and inference adds orders of magnitude to this (and that's for one setup; don't forget to account for retraining, etc.).

And what did that buy them? It does well in the training setup, but change the harness or the edit format and performance collapses by double digits. In other words, it didn't learn general reasoning so much as it overfit to its environment.

Doing this for law is exponentially harder. There is a lot of ambiguous language, competing interpretations, and jurisdictional chaos. There is no compiler for an M&A contract. You can't write unit tests for a litigation strategy.

If you want to build a true “world model” for legal—one that explicitly simulates state transitions—how do you verify the trajectories? You need expert legal judgment to confirm that a specific redline achieves the desired state without breaking anything else. Generating millions of verified legal trajectories with human-in-the-loop oversight wouldn't cost eight figures; it would cost billions.

If Meta couldn't build a robust, generalizable world model in a perfectly deterministic sandbox with free verification, then no player in legal is going to crack world models anytime soon. There is a lot of groundwork to be done here.

That's all. I have my thesis on what will work in Legal and what won't but this isn't a vendor pitch so I won't rattle on here. Just wanted to flag that law isn't ready for world models, and a lot of this conversation around them in law is about catching buzz words


reddit.com
u/ISeeThings404 — 6 days ago

[Mod Approved] Webinar Recording — "Ask a Human"

I wasn't proactive enough to get this post out before the most recent webinar, but if this post is valuable enough for you all, I'll give you a heads-up for the next one.

Alex

Note: I'm one of the speakers for the session, but I'm a volunteer.

---

"Ask A Human" is a Q&A webinar session from the Law Practice Management Department of the State Bar of Texas. The series is designed to give legal professionals the clarity they need to navigate emerging practice trends, manage risks, and discover practical ways AI can support your practice today and in the future.

Webinar 1

Ask a Human — Full Webinar Recording (11/19/2025) | Law Practice Management

Questions

  • How can I use AI without getting in trouble?
  • Can a lawyer continue to practice without AI?
  • Do I really need a law-specific AI or is ChatGPT ok to start?
  • Do I need a paid or free AI provider?
  • What privacy considerations do we need to have beyond ensuring that the models cannot train on the data we feed it?
  • With the requirement to disclose that you're using AI, would it be good enough to put in your letter of retainer or must it be separately disclosed each time you use it? And what about when used to assist in drafting or researching for a pleading?
  • Question from a viewer: Is there an AI platform that would allow me to mine my previous work in future transactions?
  • How can AI notetakers be used safely?
  • Live Demo: How to Use ChatGPT

Webinar 2

Ask A Human — A Q&A on Artificial Intelligence [April 2026 Webinar Recording]

Questions Answered in this Webinar:

  • Can you name three commercial AI platforms that meet all of the ethical and fiduciary requirements for use in a law practice environment, regardless of firm size, that do not cost an arm or a leg to access on a monthly basis?
  • How can I verify citations or content that AI produced?
  • How can I tell if a pleading or other document is AI-generated?
  • Where is the line drawn between using AI as a drafting tool and delegating legal judgement to it?
  • What is my liability if I continue without AI?
  • Do we have to disclose AI use? When do we need to disclose?
  • Why are attorneys at larger law firms more likely to use AI than solo firms? Where is that gap coming from?
  • Where do you see AI heading in the future?
  • Live Demo: How to Use MCP Servers

u/shalalalaw — 4 days ago