u/D3AD2U

▲ 0 r/WGU

very much annoyed with my tasks being sent back...

3 times now

saying my sources aren't verified

they are.

now I have to wait for approval from an instructor over a paper... a simple paper.

they either need to get rid of the writing PAs, or the evaluation system just sucks.

i even changed the topic twice just to satisfy the rubric since evidently i wasn't... OBVIOUSLY SOMETHING'S BROKEN. literally only flagged for sources, which is SUPER ANNOYING when you just want to get that class out of the way 🙄 😒

reddit.com
u/D3AD2U — 5 days ago

AI voice agents in healthcare admin calls: payer-side observations


i spent about 8 months on the payer side working in insurance operations focused on hipaa compliance and provider access control.

day-to-day, that meant handling provider calls for eligibility, claim status, appeals, and authorization questions while making sure protected health information was only disclosed to verified parties.

around mid-2025, we started seeing a new pattern: ai voice agents calling on behalf of provider offices.

initially, they passed standard verification checks (npi, member id, date of service), so they were handled like normal provider calls.

over time, a few operational issues started showing up:

- disclosure that the caller was an AI system often happened only after the conversation had already started

- voice interactions sometimes included human-like cues (pauses, simulated background noise) that made identification less obvious at first

- there wasn't a consistent, standardized way to verify in real time whether the AI system was authorized to act on behalf of the provider

because of that uncertainty, the default internal response became to end the call and request a human representative.

that created its own downstream issues:

- repeat call volume from the same providers

- increased manual handling on both sides

- inconsistent outcomes depending on who answered the call

the core gap wasn’t “AI is calling,” but that there isn’t a shared operational standard yet for:

- when disclosure should happen

- how AI agents should identify themselves

- what counts as valid authorization in real-time workflows

- how escalation to a human is handled
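to make that list concrete: if a shared standard existed, the payer-side handling logic could be as simple as something like this. (python sketch; every field name here is hypothetical, since no standard actually defines them yet.)

```python
# rough sketch of a payer-side policy for inbound AI voice-agent calls.
# all field names (is_ai, disclosed_at_open, acting_for_provider,
# authorization_token) are hypothetical -- no shared standard defines these yet.

def verify_authorization(token, provider_npi):
    """Placeholder: in practice this would query a registry that doesn't exist yet."""
    return token is not None

def handle_inbound_call(call):
    """Return an action for an inbound provider call: proceed, escalate, or reject."""
    # gap 1: disclosure must happen before any data exchange
    if call["is_ai"] and not call["disclosed_at_open"]:
        return "reject: no disclosure before the conversation started"

    # gap 2: the agent must identify which provider it acts for
    if call["is_ai"] and not call.get("acting_for_provider"):
        return "reject: AI did not identify the provider it represents"

    # gap 3: authorization must be verifiable in real time
    if call["is_ai"] and not verify_authorization(call.get("authorization_token"),
                                                  call["acting_for_provider"]):
        return "escalate: ask for a human representative"

    # gap 4: if everything checks out, handle like any other call
    return "proceed"
```

today, absent the standard, every branch except the last collapses into "end the call," which is exactly the loop described above.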

is anyone in payer, provider, or health admin roles seeing similar patterns yet, or is this still early?

reddit.com
u/D3AD2U — 9 days ago


reddit.com
u/D3AD2U — 9 days ago
▲ 3 r/healthIT+2 crossposts

i spent about 8 months on the payer side at a tricare dental insurer handling hipaa compliance and provider access control.

basically: making sure protected health info only went to the right people and access was actually legitimate.

around mid-2025 we started getting a new type of call: ai voice agents calling on behalf of provider offices.

at first, it wasn’t obvious.

they’d pass all the normal checks:

npi

member id

date of service

so we’d proceed like a normal call. eligibility, claim status, appeals, all that.
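the problem is that those checks are pure field matching, so an ai agent reading from the provider's own records clears them trivially. roughly like this (hypothetical schema; real payer systems vary):

```python
# why standard verification doesn't catch AI callers: it only matches fields.
# nothing here checks who -- or what -- is actually speaking.
# field names are a hypothetical schema, not any real payer system.

def passes_standard_verification(caller, member_record):
    """Standard provider-call verification: three field matches, nothing more."""
    return (
        caller["npi"] == member_record["provider_npi"]
        and caller["member_id"] == member_record["member_id"]
        and caller["date_of_service"] == member_record["date_of_service"]
    )
```

any system with access to the provider's billing records passes this, human or not.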

but after a few minutes something always felt slightly off. not in a “this is obviously a bot” way. more like the pacing was too controlled. the pauses too clean. even background noise like typing or breathing didn’t quite match a real human call center.

so i started testing it directly.

“am i speaking with a real person?”

there’d be a pause.

then something like:

“i am a virtual assistant calling on behalf of dr. smith’s dental office.”

and at that point it was already a problem. because we’d usually shared at least some level of protected data before that disclosure happened.

and even after they disclosed, there was still no reliable way to confirm they were actually authorized to represent that provider in a way we could accept under policy.

so our response became simple: end the call.

we had a script for it: "we don't speak with ai agents, please have a human representative call back."

what made it worse was the loop. they would just call again 10 minutes later. same flow. same outcome.

this repeated a lot. like, thousands of times across the system.

and the weird part is, the issue wasn’t even “ai is calling.”

it was everything around how it was being used:

disclosure only happened when directly asked

human-like audio tricks (breathing, typing sounds, filler pauses)

no standard way for payers to verify authorization in real time

no shared agreement on whether ai agents were even acceptable in these workflows

so the default policy became blanket rejection.

which sounds clean on paper, but in practice it just created more work for everyone. more repeat calls, more provider frustration, more manual handling, and honestly more compliance risk because nothing was standardized.

you start to realize the system isn’t really ready for this category of interaction yet. it’s either “treat it like a human” or “shut it down,” with nothing in between.

and that’s kind of where nhid-clinical came from.

it’s an open-source attempt to define baseline behavior for ai voice agents in healthcare payer workflows. not law, not regulation. more like: if you’re going to build or deploy these systems into real clinical admin workflows, here’s the minimum bar that avoids chaos.

things like:

ai must disclose before any data exchange happens

no pretending to be human (no fake breathing, typing, implied identity)

clear path to escalate to a human

logs that actually prove when disclosure happened

a way to verify who the ai is acting for

there’s also a basic conformance test suite and certification tiers so it’s not just abstract rules.
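to give a flavor of what a conformance check looks like, here's a toy version of the disclosure-timing rule: given a timestamped call log, the disclosure event has to come before the first phi exchange. (event names are illustrative, not the actual nhid-clinical log schema.)

```python
# toy version of a disclosure-timing conformance check.
# event names ("disclosure", "phi_exchange") are illustrative,
# not the actual nhid-clinical log schema.

def disclosed_before_phi(events):
    """True if AI disclosure occurs before any PHI exchange in an ordered call log.

    events: list of (timestamp_seconds, event_type) tuples, in call order.
    """
    for _timestamp, event_type in events:
        if event_type == "disclosure":
            return True   # disclosed before any PHI went out: pass
        if event_type == "phi_exchange":
            return False  # PHI shared before disclosure: fail
    return True           # no PHI exchanged at all: vacuously fine
```

the point is less the code than the logging requirement behind it: you can only run this check if the call log actually records when disclosure happened.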

we also set up a validation program so vendors can actually test against it instead of guessing.

it’s live here if anyone cares: https://nhid-clinical.org/validation

i don’t think this “solves” the bigger healthcare admin problem. that system is way too big for one standard to fix.

but i do think we’re at a point where ai is already inside these workflows whether we’re ready or not, and the lack of shared rules is creating a lot of unnecessary friction on both sides.

right now it feels less like automation and more like both sides just generating more work for each other, faster.

anyway, curious if anyone else on the payer/provider/admin side is running into the same thing, or if it’s still early depending on the system you’re in.

reddit.com
u/D3AD2U — 11 days ago

Hey everyone!

I'm sitting for my second attempt at Core 1 in about two hours. I got a 617 on my first attempt, and I'm determined to pass this time.

My practice scores on CertMaster have stabilized in the 77–83% range, but I'm still feeling a lot of anxiety about the PBQs. Specifically, I've been struggling with the networking simulations: things like properly mapping MAC addresses to IP reservations in the SOHO router GUI and accurately identifying multicast patterns (IPv4 vs. IPv6).

If anyone has tips on the logic CompTIA expects for the wiring (T568A/B) or the storage hierarchy/troubleshooting PBQs, I would really appreciate it.

I'm planning to skip the PBQs at the start and do them at the end. Any other "must-know" tips for the simulations?

Thanks in advance!

Update: I passed!!! with a 742 🙏🏾 thank you all!
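Edit: for anyone finding this later, the wiring PBQ mostly comes down to knowing that T568A and T568B differ only in the green and orange pairs; pins 4, 5, 7, and 8 (the blue and brown pairs) are identical in both. Written out (standard TIA-568 colors):

```python
# T568A vs T568B pin order (pins 1-8). The only difference:
# the green and orange pairs swap between the two standards.
T568A = ["white-green", "green", "white-orange", "blue",
         "white-blue", "orange", "white-brown", "brown"]
T568B = ["white-orange", "orange", "white-green", "blue",
         "white-blue", "green", "white-brown", "brown"]

# pins 4, 5, 7, 8 (blue and brown pairs) are identical in both
assert [T568A[i] for i in (3, 4, 6, 7)] == [T568B[i] for i in (3, 4, 6, 7)]
```

Memorize one of them plus the swap and you've got both.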

reddit.com
u/D3AD2U — 15 days ago