More than six out of every ten commercial masters in my personal high-definition music collection are clipping before the streaming service even touches them.
I didn't expect that number to be that high.
I'd built a personal collection of hundreds of HD-quality tracks across every genre I could think of — pop, metal, country, hip-hop, jazz, classical, EDM, gospel, indie, folk, reggae, bedroom pop, R&B, the works. Originally just for my ears. I'm a producer, I like a deep library, and 24-bit/96kHz sources are the only way to hear what a master is actually doing.
Then I started building LuvLang Studio, an AI mastering platform that runs a 24-stage chain. The collection turned into something else: a research dataset. I ran every track through the same BS.1770-4 loudness analyzer the streaming services use, plus a 4× polyphase sinc upsampler for true-peak detection. Same math Spotify and Apple Music use to set your playback gain. Same math your distributor's encoder uses to decide whether you're clipping.
The five findings below are what fell out. Some of them I'd have bet against if you'd asked me cold last year.
1. More than 6 in 10 commercial masters are already clipping before they hit streaming
The single most surprising number in the data: more than 60% of the tracks I analyzed sit above 0 dBTP — meaning they're already producing inter-sample peaks that go over digital full-scale. Closer to 70% slide over -1 dBTP, the headroom threshold above which Apple's AAC encoder and Spotify's Ogg Vorbis encoder can introduce audible clipping artifacts.
Here's why this matters and why most engineers miss it: your DAW's sample peak meter only measures the amplitude of the digital samples on disk. It doesn't predict what happens between samples when those samples get reconstructed into an analog signal — which is what every D-to-A converter, every car stereo, and every set of AirPods does on playback, and what every streaming-service transcoder models during encoding.
Inter-sample peaks are the ghost notes between the dots on your meter. To see them, you need a meter that does 4× polyphase sinc upsampling — the same algorithm a quality streaming codec uses internally. Most DAW stock peak meters don't. Most plugin limiters that do offer "true peak" mode have it disabled by default.
So you finish a master, your DAW says peak is -0.3 dBFS, you ship it. Spotify's encoder receives it, reconstructs the analog waveform internally, sees +0.4 dBTP of inter-sample peak, and either clips it or applies negative makeup gain to fit the headroom. Either way, your "loud master" is now subtly distorted or quieter than the song that played before it.
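A minimal sketch of the gap, using scipy's polyphase resampler as a stand-in for a BS.1770-4-style true-peak meter. The example signal and numbers are my own: a sine whose continuous waveform reaches 0.99 while every sample lands off-peak. (`resample_poly` uses a Kaiser-windowed sinc, so readings are close to, but not exactly, a compliant meter's.)

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
n = np.arange(fs)  # one second of samples

# A sine at fs/4 with a 45-degree phase offset: the continuous waveform
# peaks at 0.99, but every sample lands off-peak at about 0.70.
x = 0.99 * np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)

sample_peak_db = 20 * np.log10(np.max(np.abs(x)))

# 4x oversample with a polyphase sinc filter, then take the peak of the
# reconstructed waveform. This is what a true-peak meter does internally.
true_peak_db = 20 * np.log10(np.max(np.abs(resample_poly(x, 4, 1))))

print(f"sample peak: {sample_peak_db:+.2f} dBFS")  # about -3.1 dBFS
print(f"true peak:   {true_peak_db:+.2f} dBTP")    # about -0.1 dBTP
```

The sample-peak meter reads roughly 3 dB of headroom that doesn't actually exist; a real master's inter-sample overshoot is smaller, but the mechanism is identical.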
The right ceiling for a master that's going through any lossy codec is -1.0 dBTP — that's the recommendation from Apple's Mastered for iTunes guidelines, from the EBU's R128 spec, and from every mastering-house style guide I've read. Yet less than a third of my collection meets it.
This is a 2026 problem, not a vinyl-era problem. Streaming encoders are doing math that didn't exist when most engineers learned mastering. The wisdom that "anything under 0 dBFS is fine" no longer applies.
2. Metal is the quietest genre I measured. Bedroom pop is the loudest.
Ask a hundred people who's most guilty of the loudness war and they'll say metal. The compressed snare, the wall of guitar, the brick-walled chorus.
The data says otherwise.
Metal had the lowest median integrated LUFS of any genre in my collection — around -14 LUFS, at or below Spotify's playback target. Modern metal masters have noticeably more dynamic range than modern indie pop, modern hip-hop, or — and this was the surprise — modern bedroom pop.
Bedroom pop topped the chart at roughly -8.3 LUFS median — almost 6 dB louder than metal, roughly a doubling of signal amplitude.
Why?
My read: it's not a genre thing, it's an access thing. Metal albums in 2026 are largely mixed by engineers who came up through the loudness wars and survived them — they know the dynamic-range argument cold. They have access to real mastering services, real reference monitors, and real budgets. Their listeners are notoriously gear-aware. So metal got pulled back toward dynamic.
Bedroom pop, by contrast, is often mastered by the artist themselves, in headphones, on a laptop, using a stock limiter set to "make it loud." There's no engineer in the loop saying "actually, your master's going to get turned down on Spotify anyway." The result: a genre that prides itself on intimacy ends up the most crushed sub-genre I measured.
Tells you something about who has access to what.
3. Less than 1 in 13 commercial masters hits Apple Music's loudness target
Apple Music normalizes everything to -16 LUFS. That's their published Sound Check target, and unlike Spotify's -14 LUFS (which can be partially overridden if your master is louder), Apple's normalization is harder to escape.
Yet only about 7% of the tracks I measured fall at or below -16 LUFS. The other 93% are getting actively turned down on Apple playback.
If you're mastering for Apple Music and you're not hitting -16, the platform is literally adjusting your gain in software on every listener's device. Your master sounds quieter than it should next to a track that did hit the target — and in many genres (jazz, classical, acoustic, folk), the dynamic range you sacrificed to get loud buys you nothing once the playback-time gain reduction kicks in.
The fix isn't "make it quieter for Apple" — it's master to spec and let the platforms each do their own normalization. A track at -14 LUFS with -1.0 dBTP plays cleanly on both Spotify and Apple, because both platforms can normalize down accurately. A track at -7 LUFS with +2 dBTP gets clipped on encode by both.
Most engineers I talk to either don't know Apple's spec exists or assume it's the same as Spotify's. It isn't, and the data says almost no one is mastering for it.
4. Genre is more predictable from dynamic range than from loudness
If I gave you a random commercial master and told you its integrated LUFS, you could guess the genre with maybe 40% accuracy.
If I told you its loudness range — the dynamic spread between the loudest and quietest sections — you'd guess it correctly closer to 70% of the time.
The highest LRA in my collection sits in classical music, where the median is around 14 LU of dynamic range. The lowest sits in reggae, at about 3.8 LU. That spread — more than 10 LU between the most and least dynamic genres I measured — is wider than the loudness spread between the same two genres.
What this means in practice: LRA is the spec that's harder to fake, and easier to use as a sanity check on your master. If your indie folk track has an LRA of 4, something is genuinely wrong — you've crushed it like a club banger. If your EDM track has an LRA of 14, it's going to feel anemic on a club system.
Loudness is what your meter shows. Range is what your audience feels.
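You can run that sanity check yourself. Here is a simplified LRA computation in the spirit of EBU Tech 3342: 3-second sliding windows, an absolute gate at -70 LUFS, a relative gate 20 LU below the gated mean, and LRA as the spread between the 10th and 95th percentiles. This sketch skips the K-weighting filter and uses a 1-second hop of my own choosing, so absolute numbers will differ slightly from a compliant meter; the range behaves similarly.

```python
import numpy as np

def short_term_loudness(x, fs, win_s=3.0, hop_s=1.0):
    """Loudness of sliding 3 s windows (un-K-weighted), in LUFS."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    out = []
    for start in range(0, len(x) - win + 1, hop):
        ms = np.mean(x[start:start + win] ** 2)
        out.append(-0.691 + 10 * np.log10(ms + 1e-12))
    return np.array(out)

def loudness_range(st):
    """EBU Tech 3342-style LRA from short-term loudness values."""
    st = st[st > -70.0]                                   # absolute gate
    rel = 10 * np.log10(np.mean(10 ** (st / 10))) - 20.0  # relative gate
    st = st[st > rel]
    return np.percentile(st, 95) - np.percentile(st, 10)

fs = 8000
t = np.arange(10 * fs) / fs
loud = 0.5 * np.sin(2 * np.pi * 440 * t)    # 10 s loud passage
quiet = 0.05 * np.sin(2 * np.pi * 440 * t)  # 10 s quiet passage
dynamic = np.concatenate([loud, quiet])

print(loudness_range(short_term_loudness(dynamic, fs)))  # ~20 LU
print(loudness_range(short_term_loudness(loud, fs)))     # ~0 LU
```

A crushed master looks like the second case: every window sits at nearly the same loudness, so the percentile spread collapses toward zero.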
5. The loudness war is still alive and well in 2026
Streaming normalization was supposed to end this.
Around 2014–2017, the conventional wisdom hardened: "Streaming services normalize everything to a target, so loud masters get turned down. Make a quieter, more dynamic master, and you'll sound louder by comparison because the playback gain stays high." Spotify, Apple Music, Tidal, YouTube — all of them implemented loudness normalization to varying degrees. The economic incentive to master loud was supposed to disappear.
It didn't.
Just over four in ten of the masters I measured sit above -10 LUFS — squarely in loudness-war territory. That's a market where streaming normalization has been operational for nearly a decade. You'd expect that share to be small and shrinking. It's neither.
The honest read: many engineers are still mastering loud because their clients want a master that sounds loud off the platform — when an A&R rep plays it back in their car, when a producer drops it into a playlist next to a 2017 reference, when an artist plays it for friends through a Bluetooth speaker. The platform-normalized playback is one context. The other contexts haven't gone away.
I don't have a clean prescription here. But the next time someone tells you the loudness war is over, you can show them this data and disagree.
What this means if you're an indie artist
If you've read this far, you're probably either an engineer or an artist mastering your own work. The takeaways:
- Check your true peak with a real 4× upsampling meter. Not your DAW's sample-peak meter. Real true-peak. If you don't have a plugin that does this, run the free LUFS check — it uses the same BS.1770-4 + 4× sinc algorithm the streaming services use.
- Aim for -1.0 dBTP, not -0.3 dBFS. Your master will still sound loud after the codec touches it. That's the whole point.
- Pick your platform target and master to it, not above it. Spotify -14, Apple Music -16, vinyl -18, podcasts -16, club -8 to -10. Mismatched targets are why your master sounds different on different platforms.
- Watch LRA more than LUFS. If your loudness range doesn't match what the genre expects, listeners will feel it before they can articulate it.
- Stop mastering for "loud off the platform." Master for the platform that's going to play it. The platform will normalize. Your peers will A/B-compare on the same platform. The "loud in the car" reference is from a different era of music distribution.
The numbers above came from running every track through the same loudness chain — BS.1770-4 K-weighted integrated LUFS, 4× polyphase sinc upsampling for true peak, and EBU Tech 3342 loudness range computed from short-term loudness. No genre-specific calibration, no manual fudging. Same math the streaming services use to set your playback gain.
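For the curious, the integrated-loudness stage of that kind of chain can be sketched directly from BS.1770-4 for a mono 48 kHz signal: two K-weighting biquads (the coefficients below are the standard's published 48 kHz values), 400 ms blocks with 75% overlap, an absolute gate at -70 LUFS, and a relative gate 10 LU below the gated mean. This is a single-channel sketch, not a full multichannel implementation:

```python
import numpy as np
from scipy.signal import lfilter

def integrated_lufs(x, fs=48000):
    """BS.1770-4 integrated loudness of a mono signal sampled at 48 kHz."""
    # Stage 1: high-shelf pre-filter (published 48 kHz coefficients).
    x = lfilter([1.53512485958697, -2.69169618940638, 1.19839281085285],
                [1.0, -1.69065929318241, 0.73248077421585], x)
    # Stage 2: RLB high-pass filter.
    x = lfilter([1.0, -2.0, 1.0],
                [1.0, -1.99004745483398, 0.99007225036621], x)

    # Mean square of 400 ms blocks with 75% overlap (100 ms hop).
    win, hop = int(0.4 * fs), int(0.1 * fs)
    z = np.array([np.mean(x[i:i + win] ** 2)
                  for i in range(0, len(x) - win + 1, hop)])
    l = -0.691 + 10 * np.log10(z + 1e-12)  # per-block loudness

    z = z[l > -70.0]                                     # absolute gate
    rel = -0.691 + 10 * np.log10(np.mean(z)) - 10.0      # relative gate
    z = z[(-0.691 + 10 * np.log10(z)) > rel]
    return -0.691 + 10 * np.log10(np.mean(z))

# The spec's own calibration check: a full-scale 997 Hz sine reads -3.01 LUFS.
fs = 48000
tone = np.sin(2 * np.pi * 997 * np.arange(2 * fs) / fs)
print(round(integrated_lufs(tone, fs), 2))  # approx -3.01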
Run your own track through the same chain.
30% off your first single-track master — first 200 readers, 90 days. Master code at checkout: