u/Simple_Response8041

I work in audio DSP and tested OTC hearing aids in noisy restaurants — here's what I learned about "AI noise reduction" claims

Background: I'm a signal processing engineer who's worked on speech enhancement pipelines for telecom. I was diagnosed with mild-to-moderate bilateral sensorineural hearing loss about two years ago. I wear prescription aids for work but I've been genuinely curious about how OTC devices handle noise reduction — specifically the ones marketing "AI-powered" speech enhancement. So I spent the last few months buying and testing several OTC hearing aids in what I consider the hardest real-world scenario: noisy restaurants during dinner rush.

This isn't a product review. I want to talk about what's actually happening under the hood when OTC manufacturers say "AI noise reduction," because I think there's a meaningful gap between the marketing language and what the silicon can realistically do, and I'm hoping some audiologists here can weigh in from the clinical side.

What "AI noise reduction" probably means in OTC devices

In the telecom and hearables world, modern noise suppression generally falls into a few buckets:

  1. Classical approaches — Wiener filtering, spectral subtraction, fixed beamforming. These are computationally cheap and well-understood; most basic hearing aids have used some version of them for years (there's a minimal spectral-subtraction sketch just below this list).
  2. Statistical/adaptive — Adaptive beamforming, minimum variance distortionless response (MVDR), postfilters. Better, but still limited in non-stationary noise (like restaurant babble where the "noise" is also speech).
  3. DNN-based — This is what "AI" usually means. A trained neural network (often a recurrent architecture like LSTM or GRU, or increasingly some form of convolutional network) that's learned to separate speech from noise on large datasets. These can be remarkably good at handling non-stationary noise because they've learned spectral patterns of human speech vs. everything else.
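
For a concrete sense of what category 1 looks like, here's a toy spectral subtraction sketch in Python (numpy/scipy only). The noise-only lead-in segment and the fixed spectral floor are simplifying assumptions on my part; real devices track the noise estimate adaptively, but the basic shape of the processing is the same:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_secs=0.5, floor=0.05):
    """Toy magnitude spectral subtraction (category 1 above).

    Assumes the first `noise_secs` of the recording are noise-only,
    which is a toy assumption; real devices estimate noise adaptively.
    """
    nperseg = 256                               # 16 ms frames at 16 kHz
    hop = nperseg // 2                          # default stft hop
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)

    # Average magnitude over the leading, assumed noise-only frames
    n_noise_frames = max(1, int(noise_secs * fs / hop))
    noise_mag = mag[:, :n_noise_frames].mean(axis=1, keepdims=True)

    # Subtract the noise estimate; keep a small spectral floor to limit
    # the "musical noise" artifacts this method is notorious for
    clean_mag = np.maximum(mag - noise_mag, floor * mag)

    _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return y
```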

The catch with category 3 on OTC devices is compute. Running even a small neural network in real time on a hearing aid DSP chip is non-trivial. You need to keep total processing latency under roughly 10 ms, otherwise the user perceives a disconnect between the processed sound and the direct sound (leakage through the vent, plus their own voice arriving by bone conduction). That's a brutal constraint. Most OTC devices are running on chips with nowhere near the processing headroom of, say, a smartphone neural engine. So the question becomes: is a DNN actually running on-device in real time, or is it a hybrid approach where a larger model was used offline to tune or distill a simpler, more compact filter that actually runs on the device?
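
To put rough numbers on why that's brutal, here's a back-of-the-envelope sketch. The window size, network size, and frequency-bin count are illustrative assumptions on my part, not measurements of any device I tested:

```python
# Back-of-the-envelope latency and compute for an on-device DNN enhancer.
# Every number here is an illustrative assumption, not a spec of any device.

FS = 16_000                 # sample rate (Hz)
WIN = 128                   # analysis window (samples) -> 8 ms
HOP = 64                    # hop size (samples) -> 4 ms frame rate

# Algorithmic delay of a frame-based (overlap-add) pipeline is roughly one
# analysis window plus any lookahead, before the chip even starts computing.
win_ms = 1000 * WIN / FS
lookahead_ms = 0.0          # low-latency enhancers typically use no lookahead
print(f"Algorithmic delay ~{win_ms + lookahead_ms:.0f} ms of a <10 ms budget, "
      f"leaving only a couple of ms for actual processing")

# Compute: a GRU layer with input size i and hidden size h costs roughly
# 3 * h * (i + h) multiply-accumulates (MACs) per frame.
layers = [(65, 256), (256, 256)]            # hypothetical 2-layer enhancer
macs_per_frame = sum(3 * h * (i + h) for i, h in layers) + 256 * 65  # + output
frames_per_sec = FS / HOP
print(f"~{macs_per_frame * frames_per_sec / 1e6:.0f} M MAC/s for the network "
      f"alone, on top of compression, feedback cancellation, and beamforming")
```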

My restaurant testing experience

I tested four OTC devices over several weeks at the same two restaurants (one moderate noise, one loud). I won't rank them because that's not the point, but I want to talk about one specific observation.

One of the devices I tried was the ELEHEAR Beyond Pro, which markets something called VOCCLEAR as its AI speech enhancement system. In the loud restaurant scenario (I'd estimate 75-80 dBA ambient), it did something I found technically interesting: it seemed to separate the voice I was facing from the diffuse background noise more aggressively than I'd expect from simple directional microphones alone. My dining companion's voice had noticeably more presence relative to the background babble than with the two other OTC devices I had with me. It wasn't perfect — when someone at the next table spoke loudly from my side, it would occasionally "grab" that voice too, which is consistent with a beamforming + learned speech model approach rather than a pure speaker-separation system.

But here's what I can't determine from subjective listening: how much of this is genuinely DNN-based enhancement versus well-tuned adaptive beamforming with aggressive noise gating? Without access to the actual algorithm architecture, or independent measurements of speech intelligibility improvement (speech-in-noise scores like QuickSIN, measured in controlled conditions), I'm essentially guessing based on the perceptual artifacts I notice.
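
For context on what the non-DNN explanation actually involves: the core of an adaptive (MVDR) beamformer is only a few lines once you have a per-frequency-bin noise covariance estimate and a steering vector toward the talker. Those two inputs are the genuinely hard part, and this sketch just assumes them:

```python
import numpy as np

def mvdr_weights(R_noise, d):
    """MVDR weights for one frequency bin.

    R_noise : (M, M) noise covariance estimate across M mics (assumed given)
    d       : (M,)   steering vector toward the talker (assumed given)

    w = R^-1 d / (d^H R^-1 d) passes the look direction through undistorted
    while minimizing output noise power. No neural network involved.
    """
    Rinv_d = np.linalg.solve(R_noise, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Per frame and bin, the output is w^H x, where x is the multi-mic STFT
# snapshot. Paired with aggressive post-filtering, this alone can be hard
# to distinguish from a learned enhancer by ear.
```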

The honest limitations

Even the best-performing OTC device I tested was noticeably worse than my prescription aids (Phonak Lumity 90) in the loud restaurant. That's expected — prescription devices have more sophisticated multi-mic arrays, more processing headroom, and critically, they've been programmed by my audiologist with real-ear measurements tuned to my specific loss. OTC devices are working from a self-fit audiogram with generic fitting logic. For mild-to-moderate loss in moderate noise, several of the OTC devices were surprisingly competent. In severe noise, the gap widened considerably.

My question for the audiologists and researchers here

From a clinical perspective: when you see OTC devices claiming AI-based noise reduction, how much weight do you give those claims? Are any of you aware of independent third-party studies (not manufacturer-funded) that have measured speech intelligibility improvements from specific OTC AI noise reduction systems in standardized conditions? I'd love to see actual QuickSIN or HINT data if it exists.

And for anyone else with a technical background who's been evaluating these — have you found ways to objectively measure what these devices are doing to the signal beyond just subjective listening?
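
One direction I've been meaning to try (not claiming it's rigorous): play a known speech-plus-babble scene over loudspeakers, record the device output at the ear with a coupler or in-ear mic, time-align it to the clean speech reference, and compute an objective intelligibility metric. A minimal version using the pystoi package, assuming resampling and alignment are already handled (that's the fiddly part), would look something like this:

```python
import soundfile as sf
from pystoi import stoi  # pip install pystoi

# Hypothetical file names: clean reference speech and the device output
# recorded at the ear, already resampled to the same rate and time-aligned
# (e.g. by cross-correlation).
clean, fs = sf.read("clean_reference.wav")
processed, fs2 = sf.read("device_output_aligned.wav")
assert fs == fs2

n = min(len(clean), len(processed))
clean, processed = clean[:n], processed[:n]

# Short-Time Objective Intelligibility, roughly 0..1, higher is better.
# It correlates with intelligibility but is not a clinical measure like
# QuickSIN; it just gives a repeatable number for comparing devices.
score = stoi(clean, processed, fs, extended=False)
print(f"STOI: {score:.3f}")
```

It obviously wouldn't replace QuickSIN or HINT with real listeners, but it would at least give a repeatable number for comparing devices on the identical scene.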

To be clear, I think OTC hearing aids are a net positive for accessibility, especially for people with mild-to-moderate loss who might not otherwise get any amplification. But I also think the "AI" marketing has gotten ahead of what we can actually verify these devices are doing. Would love to hear other perspectives.
