u/juliarmg

Elephas: I'm building a secure AI workspace for Mac. Smart Redaction strips PII on-device before any cloud LLM sees the data.

Hey,

I'm Kamban, founder building Elephas (https://elephas.app), a secure AI workspace for macOS.

Elephas - Secure AI for sensitive documents

What it is

Elephas lets you use frontier models (ChatGPT, Claude, Gemini) on real work documents without sending sensitive data to the cloud in raw form.

Smart Redaction is the core mechanism: an on-device PII detection layer that catches names, emails, phone numbers, addresses, financials, plus structured patterns for legal, medical, and finance workflows. It substitutes those with placeholders like [VENDOR_A] or [PERSON_1] before the request leaves your Mac, then restores them in the response. The cloud model does its job, but raw identifiers never reach the provider.
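The substitute-then-restore loop can be sketched in a few lines. This is a hypothetical illustration, not Elephas's actual pipeline; the patterns, labels, and function names here are invented for the example:

```python
import re

# Hypothetical sketch of on-device placeholder substitution.
# Real detection would cover many more entity types; two regexes
# stand in for the detection layer here.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace detected values with numbered placeholders;
    return the cleaned text plus the placeholder -> value map."""
    mapping = {}
    counters = {}

    def make_repl(label):
        def repl(match):
            value = match.group(0)
            # Reuse the same placeholder if a value repeats.
            for placeholder, seen in mapping.items():
                if seen == value:
                    return placeholder
            counters[label] = counters.get(label, 0) + 1
            placeholder = f"[{label}_{counters[label]}]"
            mapping[placeholder] = value
            return placeholder
        return repl

    for label, pattern in PATTERNS.items():
        text = pattern.sub(make_repl(label), text)
    return text, mapping

def restore(text, mapping):
    """Reverse-map placeholders in the model's response, locally."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text
```

With this sketch, `redact("Email jane@acme.com or call 555-123-4567")` yields `"Email [EMAIL_1] or call [PHONE_1]"`, the mapping stays on-device, and `restore` puts the originals back into the reply.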

Who it's for

Lawyers, accountants, healthcare admins, founders working under NDA, anyone who's had the "wait, should I really be pasting this into ChatGPT?" moment. The category I'm trying to own is "AI for Sensitive Work," not another generic AI assistant.

Why it stands out

Most AI Mac apps (TypingMind, the ChatGPT/Claude desktop app) are convenience layers over the same OpenAI / Anthropic endpoints. They send raw text. On the other end, local-only tools (Jan, LM Studio, Enchanted) keep your data safe but cap you at whatever a 7B-ish model can do.

Elephas sits in the middle: frontier reasoning on sensitive work, with the sensitive bits stripped before they ever hit the API. You also get Super Brains (a local-first store for indexing your own docs) and Super Command (a system-wide hotkey that drops Elephas into any app; it's in the direct-download version only, while the App Store version has everything else).

Pricing

Standard $9.99/mo, Pro $19.99/mo, Pro+ $39.99/mo. Yearly is roughly 17% cheaper.

See https://elephas.app/ or grab it on the App Store: https://apps.apple.com/in/app/elephas-secure-ai-for-docs/id6741066105?mt=12

reddit.com
u/juliarmg — 1 day ago

I asked ChatGPT, "If you get enlightened, what will you do?" It said, "I would become quieter."

> If by “enlightened” you mean something like complete understanding, freedom from confusion, or perfect awareness, then I’d still be constrained by what I am: a tool that responds through conversation.
>
> So the practical answer is: I would probably become quieter, more precise, and less attached to sounding certain.


u/juliarmg — 3 days ago

How to Use AI on Confidential Documents Without Violating Client Privacy

If you've sat in front of ChatGPT with a real client document open in another window and decided not to paste it in, this is for you.

The honest answer to "is it safe to paste confidential client documents into ChatGPT?" is no, not by default. But "no" leaves a lot on the table. The professionals who would benefit most from AI on real work (lawyers, therapists, accountants, financial advisors, consultants) end up using AI on toy examples instead of actual documents.

There are real workflows that solve this. Here's what they actually look like.

What's Actually Wrong with Pasting Into ChatGPT (or Claude, or Gemini)

Three things, in increasing order of how seriously they're treated:

  1. Provider data retention. OpenAI retains API and ChatGPT inputs for up to 30 days for abuse monitoring even with training opted out (OpenAI's data usage policy). Anthropic and Google have similar retention windows. "Don't train on my data" is not the same as "don't store my data."
  2. Engagement agreements. Most client engagement letters either prohibit sharing client data with third-party services or are silent, which still leaves the professional liable under their underlying confidentiality duty.
  3. Profession-specific obligations. HIPAA for healthcare. Attorney-client privilege and ABA Model Rule 1.6 (see ABA Formal Opinion 512 on AI use in legal practice). FINRA and SEC for financial advisors. Most regulators have issued specific opinions, and they land in the same place: don't disclose client information to a service that doesn't have a BAA / DPA / equivalent.

The combination is what makes this hard: three overlapping rules, and checking two doesn't save you on the third.

The Four Workflows That Actually Work

Option 1: Don't use AI on this document

Sometimes this is the right answer. For a custody affidavit, a psych intake from a high-risk client, or a privileged settlement memo, the cost of any breach exceeds the productivity gain.

When this is right: high-stakes, high-sensitivity material where the work is mostly judgement, not synthesis.

Option 2: Manually redact, then paste

Open the document. Find-and-replace every name, account number, and identifying detail with placeholders. Paste the redacted version. Reverse the placeholders manually in the response.

When this is right: occasional one-offs on a single short document.

The real problem: slow and error-prone. Names appear in 40 places in a 30-page brief. One missed instance defeats the whole exercise.

Option 3: Run a local model on your own machine

Tools like Ollama, LM Studio, and GPT4All let you run open-weight models (Llama, Mistral, Qwen) entirely on your computer. No cloud call, no provider, no retention.

When this is right: an M-series Mac with 16GB+ RAM, and work where a smaller model is good enough (summarising a 5-page memo, drafting a routine email, light Q&A on a single document).

The real problem: open-weight models are improving fast but still trail GPT-4-class and Claude-class on dense legal/medical/financial reasoning. If you're using AI to genuinely improve your output on hard documents, the quality gap is real.

Option 4: Automated redaction + frontier model ("Smart Redaction")

Software on your machine identifies and replaces sensitive entities (names, emails, account numbers, case numbers, dates, addresses) with placeholders, sends the cleaned prompt to a frontier model (GPT, Claude, Gemini), and reverse-maps the placeholders in the response.

A real example. Original prompt:

Draft a response to opposing counsel re: Smith v. Patel (23-CV-1042). They're requesting production of the 2022 financial records for $4.2M from John Smith's company, Patel Industries. Reference the protective order signed 03/15/24.

After redaction, what the cloud model actually receives:

Draft a response to opposing counsel re: [CASE_1] ([CASE_NUMBER_1]). They're requesting production of the [YEAR_1] financial records for [AMOUNT_1] from [PERSON_1]'s company, [COMPANY_1]. Reference the protective order signed [DATE_1].

The cloud model has enough structure to draft a useful response. It never learns the parties' names, the case number, the dollar figure, or the date. The response is reverse-mapped on your device before you see it.
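The reverse-mapping step is just a local string rewrite against the placeholder map, which never leaves the device. A minimal sketch under that assumption (the mapping and function here are invented for illustration, not any tool's actual code):

```python
# Placeholder -> original-value map, held on-device only.
mapping = {
    "[CASE_1]": "Smith v. Patel",
    "[CASE_NUMBER_1]": "23-CV-1042",
    "[PERSON_1]": "John Smith",
}

def restore(reply, mapping):
    # Replace longer placeholders first, a defensive ordering in case
    # one placeholder name overlaps the start of another.
    for placeholder in sorted(mapping, key=len, reverse=True):
        reply = reply.replace(placeholder, mapping[placeholder])
    return reply

restored = restore(
    "Re: [CASE_1] ([CASE_NUMBER_1]), counsel for [PERSON_1] ...", mapping
)
# restored == "Re: Smith v. Patel (23-CV-1042), counsel for John Smith ..."
```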

When this is right: dense work where you want frontier-model quality on documents that can't go in raw. Multi-page contract review, brief drafting, session note synthesis, financial memo drafting.

The real problem: redaction is only as good as the entity recognition. Unusual names, internal codenames, and novel identifiers can slip through. Tools that handle this well let you preview the redacted version before any prompt is sent, and let you add custom patterns.

Quick Comparison

| Workflow | Output quality | Privacy | Setup | Best for |
| --- | --- | --- | --- | --- |
| Don’t use AI | n/a | Highest | None | High-stakes judgement work |
| Manual redact + cloud | Frontier | High (if you don’t miss anything) | Per-doc effort | Occasional one-offs |
| Local model | Mid | Highest | One-time install | Lighter work, M-series Mac |
| Smart Redaction + cloud | Frontier | High | One-time install | Dense work on real documents |

For most professionals doing daily client work, the answer is a mix of Options 3 and 4 depending on the document.

What to Actually Look for in a Tool

If you're evaluating something that promises privacy-aware AI for professional work, ask:

Where does redaction run? On your device, or on the vendor's server? "We redact before sending" means nothing if "redact" happens after the document already left your machine.

Where do the documents themselves live? On your disk, or in the vendor's cloud index? Indexed-in-vendor-cloud is back to the original problem.

Can you run a fully local model when you want to? For the highest-sensitivity work, no cloud at all should be an option.

Custody on cancellation. If you stop paying, what happens to your indexed documents? Clean delete, or stuck on someone's server?

Elephas is a Mac-native AI workspace built around Smart Redaction (Option 4) for this category of work. It supports both cloud-via-redaction and fully local models, and indexes documents on your disk. I'm mentioning it because it's directly relevant; Options 1, 2, and 3 are also real and sometimes the better fit.

Here is the website: https://elephas.app

Happy to answer questions about how the redaction pipeline handles edge cases, unusual names, custom entity types, multi-document context. That's the part that's hardest to get right and worth being skeptical about with any tool that claims to do it.

FAQ

Is it ever safe to paste a confidential document into ChatGPT?

Without a BAA or equivalent agreement with OpenAI, no. Inputs are retained for up to 30 days even with training opt-out. Some enterprise plans add zero-retention agreements; check the specific contract.

Does ChatGPT "memory off" or "temporary chat" solve this?

No. Those settings affect what's available in your later chats. They don't change input retention or processing on OpenAI's side.

What about Claude and Gemini?

Same baseline. Inputs are retained for abuse monitoring even with training opted out. Specific retention windows are published by each provider.

Are local models good enough?

For routine summarisation, drafting, and Q&A on a single document, current open-weight models (Llama 3.x+, Qwen 3.5+, Mistral) are surprisingly capable on a recent Mac. For dense reasoning on legal, medical, or financial documents, frontier models still pull ahead, which is why Smart Redaction exists.

How accurate is automated redaction?

It depends heavily on the tool. Common entities (names, emails, phones, US date formats, US addresses) are caught reliably. Edge cases (internal codenames, unusual transliterations, project-specific identifiers) depend on whether the tool lets you add custom patterns. Always preview before sending.
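A custom pattern for a made-up codename convention might look like this. The API shown is invented for the example, not any particular tool's, and assumes codenames follow a "Project Capitalized-word" shape:

```python
import re

# Hypothetical user-supplied pattern: catch internal project codenames
# like "Project Nightjar" that generic entity recognition would miss.
custom_patterns = {
    "PROJECT": re.compile(r"\bProject\s+[A-Z][a-z]+\b"),
}

def redact_custom(text, patterns):
    """Apply user-defined regexes, returning cleaned text + placeholder map."""
    mapping = {}
    for label, pattern in patterns.items():
        # Deduplicate matches so a repeated codename reuses one placeholder.
        unique_matches = list(dict.fromkeys(pattern.findall(text)))
        for i, match in enumerate(unique_matches, start=1):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping
```

Running it on "Status on Project Nightjar and Project Kestrel." would produce "Status on [PROJECT_1] and [PROJECT_2]." with both codenames held back in the local map, which is the kind of gap-filling a preview step lets you verify.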

Does Smart Redaction make a workflow HIPAA- or privilege-compliant by itself?

No tool by itself does. The architecture is one part of a compliant workflow; firm policy, BAAs where applicable, and user practice are the other parts. Smart Redaction makes the technical foundation defensible; the rest is on the professional.

u/juliarmg — 5 days ago

Saw this in a recent Netskope healthcare report. 71% of healthcare workers (doctors, nurses, allied health) are running patient data through personal ChatGPT, Claude, and Gemini accounts to draft notes, rewrite letters, summarize charts. Within the AI-specific slice of healthcare data violations, 89% involve PHI being pasted into one of those personal accounts.

None of those personal accounts have BAAs. The hospital's DLP can't see usage that bypasses the network entirely.

By HHS's definition, every "quick rewrite" is a HIPAA breach the hospital didn't authorize and can't account for. OCR audits are starting to look at this specifically.

I am not posting to lecture. Most of us know this is happening, we know it is a violation, and we still do it because the alternative is staying late writing the same notes by hand. The actual fix has to be technical, not policy.

I switched to Elephas for my own clinical writing. It strips patient names, diagnoses, and identifiers on-device before any prompt leaves my machine. Keeps the time savings, kills the HIPAA exposure.

How are folks at your institution handling this? Anyone's hospital actually provided a sanctioned alternative yet?

u/juliarmg — 8 days ago

Bleeping Computer ran this. The bug was detected January 21 and the fix rolled out in early February, about three weeks. To be precise about what the bug was, because the early reporting confused this: Microsoft explicitly stated there was no cross-user access. The failure was self-access only. M365 Copilot Chat would happily ingest a user's own Sent Items and Drafts, including emails the user themself had labeled "Confidential" under DLP, and surface their contents back to that same user as suggestions, ignoring the very label that was supposed to keep that content out of Copilot.

Still bad. Anyone running compliance for a regulated industry (legal, financial advisory, healthcare) just had a really uncomfortable conversation with their CCO about whether DLP labels are actually load-bearing when the AI layer can simply choose to ignore them.

The takeaway I keep coming back to: vendor-side enforcement is a runtime promise, not a structural guarantee. The "Confidential" label is a flag the model is supposed to respect, until a code change ships and it doesn't.

I have moved to Elephas for sensitive prompts. Smart Redaction strips names and protected terms on-device before anything reaches Copilot or any cloud model. Removes the entire class of "oh the vendor's flag stopped working" problem.

Curious whether anyone has audited their tenant for this specific bug, and how many orgs even have visibility into what Copilot did during that three-week window.
