Client handed you a noisy recording to fix? Here's the technical hierarchy for rescuing bad audio
When a client hands you footage with bad audio, the order you apply fixes matters more than the tools you use. Getting this wrong either costs you time or makes the problem worse.
Here's the hierarchy that's worked for me:
Step 1: Cut obvious problems before processing
Remove the worst sections - the chair scrape, the phone notification, the cough. No noise tool handles impulse noise well. Cut first.
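A minimal sketch of how you might flag those impulse sections for review instead of scrubbing the whole timeline. The function name and thresholds are my own assumptions, not any particular tool's API; it just compares short-window peaks against the clip's overall RMS:

```python
import math

def flag_impulses(samples, sr, window_ms=20, ratio_db=18.0):
    """Return (start_s, end_s) windows whose peak exceeds the clip's RMS
    by more than ratio_db - candidates for cutting, not automatic removal."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return []
    win = max(1, int(sr * window_ms / 1000))
    limit = rms * 10 ** (ratio_db / 20)  # convert dB ratio to linear
    flagged = []
    for start in range(0, len(samples), win):
        peak = max(abs(s) for s in samples[start:start + win])
        if peak > limit:
            flagged.append((start / sr, min(start + win, len(samples)) / sr))
    return flagged
```

Anything this flags still gets a human decision - a loud laugh and a chair scrape look identical to a peak detector.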
Step 2: Deal with the noise floor
Broadband noise (room tone, HVAC, street) responds well to modern deep-learning suppression. Tools trained on speech priors (DeepFilterNet, RNNoise) handle it better than FFT-based spectral subtraction because they model what a voice is supposed to sound like. Key: don't over-suppress. Set attenuation limits. That metallic artifact is almost always suppression applied too aggressively.
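The attenuation limit is conceptually just a floor on the per-bin suppression gain. A toy sketch of the idea (hypothetical function, not how DeepFilterNet works internally):

```python
def limit_suppression(gains, max_atten_db=12.0):
    """Clamp per-bin suppression gains (0..1) so no bin is reduced by
    more than max_atten_db. Hard-gated bins driven to near-zero gain
    are what produce the metallic, underwater artifact."""
    floor = 10 ** (-max_atten_db / 20)  # e.g. -12 dB -> ~0.25 linear
    return [max(g, floor) for g in gains]
```

A 10-12 dB cap leaves some audible room tone behind, and that's the point: listeners forgive a quiet noise floor far more readily than a warbling voice.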
Step 3: Filler words and silences, if the client wants them
This is where most editors give up because it's mechanical and slow. Word-level timestamps from Whisper-based transcription let you review and cut in bulk rather than scrubbing. It still needs a human review pass, but it's dramatically faster than listening through in real time.
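Once you have word-level timestamps (Whisper exposes these via its `word_timestamps=True` option), building the bulk cut list is straightforward. A sketch, assuming words arrive as (text, start, end) tuples - the filler set and padding are illustrative choices:

```python
FILLERS = {"um", "uh"}  # adjust per client and speaker

def filler_cuts(words, pad_s=0.05):
    """words: list of (text, start_s, end_s) from word-level transcription.
    Returns merged cut regions, padded slightly so syllables aren't clipped.
    These are candidates for a human review pass, not automatic deletions."""
    cuts = []
    for text, start, end in words:
        if text.strip().lower().strip(".,") in FILLERS:
            s, e = max(0.0, start - pad_s), end + pad_s
            if cuts and s <= cuts[-1][1]:        # merge overlapping regions
                cuts[-1] = (cuts[-1][0], max(cuts[-1][1], e))
            else:
                cuts.append((s, e))
    return cuts
```

Merging adjacent regions matters: back-to-back "um, uh" should become one cut, not two micro-edits a frame apart.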
Step 4: Loudness normalization last
Target -16 LUFS for online delivery. Do this last, after all cuts, so your loudness measurement reflects the final edit.
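Because LUFS is a logarithmic scale, hitting the target is a single gain offset once you have an integrated loudness measurement from a real BS.1770 meter (pyloudnorm, or ffmpeg's loudnorm filter). A sketch of just the final step - the measurement itself is assumed:

```python
def normalize_gain_db(measured_lufs, target_lufs=-16.0):
    """Gain in dB that brings a measured integrated loudness to target."""
    return target_lufs - measured_lufs

def apply_gain(samples, gain_db):
    """Scale samples by a dB gain. Check for clipping afterward; a true-peak
    limiter may be needed if the gain pushes peaks past full scale."""
    scale = 10 ** (gain_db / 20)
    return [s * scale for s in samples]
```

So a mix measuring -23 LUFS needs +7 dB of gain. Doing this before your cuts would bake in a measurement of audio that no longer exists in the edit.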
What's the messiest client audio rescue you've had to do? And what's your go-to tool for step 2 right now?