u/TwinHeadedGiraffe69

Hi u/oratory1990,

I am initiating a project to calibrate my Beyerdynamic Aventho 300 for Atmos/Spatial audio using a highly specific REM (Real Ear Measurement) protocol. I would appreciate your technical perspective on the averaging and integration process.

  1. Measurement Protocol (360° Spherical Averaging):

I will be performing measurements in an acoustically treated room with a flat-response speaker at a 1.5m distance.

The Process:

Instead of a single fixed point, I plan to capture measurements at ear level, above head level, and from the vertex, covering a 360-degree rotation. I will then calculate the SPL average of these points to derive my personal HRTF.

Goal:

To obtain a more robust, "spatially averaged" HRTF baseline rather than a single-angle snapshot.
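Here is a minimal sketch of the averaging step I have in mind (the function name and toy values are mine, purely illustrative). Averaging is done in the power domain rather than on raw dB values, which is the usual convention for SPL averaging:

```python
import numpy as np

def spl_average(responses_db):
    """Power-average several SPL responses (in dB) per frequency bin.

    responses_db: shape (n_angles, n_freqs).  Converting dB -> linear
    power before averaging keeps one deep cancellation node at a single
    angle from dragging the whole averaged curve down.
    """
    responses_db = np.asarray(responses_db, dtype=float)
    power = 10.0 ** (responses_db / 10.0)       # dB -> linear power
    return 10.0 * np.log10(power.mean(axis=0))  # mean power -> dB

# Toy example: three angles, two frequency bins.  The second bin has a
# +/-6 dB spread across angles.
avg = spl_average([[60.0, 60.0],
                   [60.0, 54.0],
                   [60.0, 66.0]])
```

A plain arithmetic mean of the dB values would give 60 dB for both bins; the power average gives about 62.4 dB for the second one, so the choice of averaging domain visibly changes the final target.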

  2. Bass Shelf Integration:

Since the raw HRTF from a 1.5 m speaker measurement will naturally show a "down-tilt" but lacks the typical Harman-style bass shelf (rising below roughly 100 Hz), I need to define the final target.

Question:

What is the most precise way to overlay a target bass curve onto this 360° averaged HRTF? Should I integrate the shelf directly into the compensation curve, and do you foresee any phase-coherence issues when combining spatial average data with a fixed low-shelf filter?
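One way to make the overlay concrete: evaluate the shelf as a minimum-phase RBJ-cookbook low-shelf biquad on the same frequency grid as the averaged HRTF, then add the two curves in dB. The corner frequency and gain below are illustrative placeholders, not an official Harman spec:

```python
import numpy as np
from scipy.signal import freqz

def low_shelf_db(freqs, fs=48000, f0=105.0, gain_db=6.0, S=1.0):
    """Magnitude (dB) of an RBJ-cookbook low-shelf biquad.

    f0 and gain_db are illustrative values, not an official spec.
    """
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2.0 * np.sqrt((A + 1.0 / A) * (1.0 / S - 1.0) + 2.0)
    cosw, rootA = np.cos(w0), np.sqrt(A)
    b = A * np.array([(A + 1) - (A - 1) * cosw + 2 * rootA * alpha,
                      2 * ((A - 1) - (A + 1) * cosw),
                      (A + 1) - (A - 1) * cosw - 2 * rootA * alpha])
    a = np.array([(A + 1) + (A - 1) * cosw + 2 * rootA * alpha,
                  -2 * ((A - 1) + (A + 1) * cosw),
                  (A + 1) + (A - 1) * cosw - 2 * rootA * alpha])
    _, h = freqz(b, a, worN=freqs, fs=fs)
    return 20.0 * np.log10(np.abs(h))

freqs = np.array([20.0, 105.0, 1000.0, 10000.0])
shelf = low_shelf_db(freqs)  # ~full gain at 20 Hz, ~half gain at f0, ~0 above
# final_target_db = averaged_hrtf_db + shelf   # simple dB-domain overlay
```

Because the biquad is minimum-phase, realizing the shelf this way shouldn't create phase-coherence problems on its own; the spatial average is a magnitude-only curve anyway, so the shelf is just a dB offset on top of it.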

  3. DSP and Feedback Loop Conflict:

The Aventho 300 utilizes active ANC and internal feedback microphones.

Concern:

Given that I’ll be applying a manual EQ based on this high-precision REM data, how will the internal DSP/feedback loop react? Is there a high probability of the headphone’s internal "self-correction" attempting to counteract these manual adjustments, particularly in the lower frequencies where the feedback loop is most sensitive?

  4. Handling Spatial Nodes and Dips:

Through 360° averaging, many narrow cancellation nodes might be smoothed out, but some structural dips (e.g., in the 3-5kHz range) will remain.

Question:

Is it technically sound to apply high-Q corrections to "fill" these averaged dips, or does the spatial nature of the measurement suggest that these should be left as-is to preserve natural head-shadowing cues?
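To see how surgical a high-Q "fill" really is, here is a sketch that evaluates an RBJ-cookbook peaking filter and measures the width of the region it boosts by at least half its gain (all parameters are illustrative):

```python
import numpy as np
from scipy.signal import freqz

def peaking_db(freqs, fs=48000, f0=4000.0, gain_db=6.0, Q=8.0):
    """Magnitude (dB) of an RBJ-cookbook peaking EQ on a frequency grid."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * Q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    _, h = freqz(b, a, worN=freqs, fs=fs)
    return 20.0 * np.log10(np.abs(h))

freqs = np.linspace(2000.0, 8000.0, 2401)   # 2.5 Hz grid around a 4 kHz dip
mag = peaking_db(freqs)
boosted = freqs[mag > 3.0]                  # region boosted by >= gain/2
bandwidth = boosted[-1] - boosted[0]        # roughly f0 / Q, i.e. ~500 Hz
```

A Q of 8 at 4 kHz only touches a band roughly 500 Hz wide, so if an averaged dip is really a head-shadowing cue rather than a fixed resonance, a boost that narrow risks sounding wrong while barely changing the spatial average.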

I am curious to hear your thoughts on using spherical SPL averaging for personal headphone EQ targets.

Best regards,

Yekta

u/TwinHeadedGiraffe69 — 13 days ago

Hi 👋 my name is Yekta. In the past I used the Momentum 4, HD 58X, Accentum Wireless, a Samson semi-open edition, the XS, the Sundara, and the HE400se. I got my Aventho 300 five days ago, so this is my review of its sound quality and my general opinions about the Aventho 300.

When I put it on my head for the first time, my first impression was: "why does this sound so good in tonality (tonality: the average dB balance of bass, mids, and treble) but so bad in resolution (the peaks, cuts, and dips in the frequency response)?" And then I fell in love with the Dolby Atmos sound. I listened to Dolby Atmos-mastered albums on Apple Music and the experience was great 😃. My next move was fixing the resolution errors. I used a tone generator and hearing tests to find them; it was not easy, but after one whole frustrating day I got some promising results. The errors were at 120 Hz, 2 kHz, 4 kHz, and 6 kHz. But remember that every ear and head is unique, so results may vary :)

u/TwinHeadedGiraffe69 — 13 days ago

Hi everyone,

My name is Yekta, and I’ve spent the last few weeks diving deep into the Spatial Audio rabbit hole. I recently followed the setup popularized by Sharur, which uses macOS, BlackHole (64-channel), and the Dolby Atmos Renderer to listen to Apple Music tracks through Samsung AKG EO-IA500 wired IEMs. However, after rigorous testing, I’ve discovered an Android-based setup that objectively and subjectively outperforms it. I conducted an A/B test with six participants, and all six preferred the Android/Samsung setup over the macOS Dolby Atmos rendering chain. Here is the breakdown of the setup, the science, and the tutorial.

The Hardware & Technical Foundation

The core of this setup is a Samsung Galaxy S24 (or any other Samsung phone with Dolby Atmos support) paired with the Galaxy Buds 2 Pro, Buds 3 Pro, or Buds 4 Pro. While many dismiss "consumer" wireless buds, the technical specs in Samsung’s official documentation tell a different story:

Dolby Digital Plus Joint Object Coding (JOC): Unlike many systems that downmix to virtual binaural, the Buds 2/3/4 Pro series feature a dedicated chip capable of decoding native Dolby Atmos JOC streams.

Samsung Seamless Codec (SSC): This allows for 24-bit high-resolution transmission, ensuring the Atmos metadata remains intact and high-fidelity.

8-Channel Rendering: Samsung’s hardware supports native 8-channel rendering, providing a much wider soundstage than standard virtualizers. From https://www.samsung.com/uk/support/mobile-devices/what-is-the-galaxy-buds2-pro-360-audio-feature/ : "The 360 audio in Galaxy Buds2 Pro supports Direct Multi-Channel, 5.1ch / 7.1ch / Dolby Atmos delivering incredibly immersive sound. The sound you hear is more multi-dimensional than previous Buds Pro and makes you feel as if you are in the centre of the cinema. Enjoy the 360 audio on Netflix, Disney+, HBO Max contents on a whole new level."

The Optimization Process (The Tutorial)

To achieve "Realism," you cannot rely on stock tuning. Here is how I optimized the chain:

  1. Personalized Target (peqdB): I used the peqdB personalized EQ tool. By adjusting its sliders while using the Buds 2 Pro’s 360 Audio, I found the simulation EQ target that maximized the 3D HRTF (Head-Related Transfer Function) effect for my ears.

  2. Physical Isolation and Hardware Tuning Mod: My testing showed a 20 dB drop around 20–25 Hz for most Galaxy Buds 2 Pro users, caused by a poor seal. I replaced the stock tips with Sony XM3 foam tips. This significantly improved the seal, brought the sub-bass back to a linear response, and made the stubborn dips and peaks easier to correct with EQ.

  3. Frequency Flattening:

Using Squig.link’s Tone Generator, I manually scanned the frequency range from 20Hz to 20kHz. I identified and corrected all "peaks and dips" to ensure a flat, reference-grade response.

  4. DSP Integration (Wavelet): I exported the final EQ curve into Wavelet. Crucially, I adjusted the Input Gain (pre-amp) in Wavelet to match the negative gain required by the EQ, to prevent clipping and distortion.

  5. The Final Chain:

Enabled 360 Audio with Head Tracking on the Samsung Wearable settings and set Apple Music to "Dolby Atmos: Always On."
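The pre-amp matching in step 4 can be reduced to a rule of thumb: offset the largest boost in the EQ, plus a little headroom. The band gains below are hypothetical, just to show the arithmetic:

```python
def required_preamp_db(gains_db, headroom_db=1.0):
    """Negative pre-amp (dB) that offsets the largest EQ boost, plus a
    little headroom, so boosted peaks can't clip digital full scale."""
    max_boost = max(max(gains_db), 0.0)   # pure cuts need no compensation
    return -(max_boost + headroom_db)

# Hypothetical band gains (dB) from a manually built EQ curve
preamp = required_preamp_db([4.5, -3.0, 2.0, 6.5, -1.5])  # -> -7.5 dB
```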

The Results:

The A/B Test: We compared this against the macOS setup (BlackHole 64ch + Dolby Atmos Renderer + EQ’d Samsung IA500).

The Verdict: 6 out of 6 listeners chose the Samsung Android setup. Why? The native hardware decoding of multichannel Dolby Atmos, combined with head tracking, felt more "physical" and realistic. Tracks from The Weeknd, Daft Punk, Lady Gaga, and Billie Eilish (noted for their high-dynamic-range Atmos masters) felt like a 7.1.4 room rather than a processed headphone mix.

Conclusion

The macOS configuration with the Dolby Atmos Renderer is a great tool for enthusiasts, but the combination of Samsung’s DD+ JOC hardware decoding, multichannel Dolby Atmos rendering, and personalized DSP creates a more "correct" and immersive soundstage. If you have the hardware, stop using the default settings and start optimizing your target.

TL;DR: Don't sleep on Samsung's native Atmos decoding. With a foam tip mod, Wavelet, and peqdB EQ personalization, it beats the industry-standard macOS-based software Dolby rendering in blind tests.

u/TwinHeadedGiraffe69 — 1 month ago