I don’t have a new roundup for you this week, but I do want to tell you about My Feelings On Music Streaming Services.
See, I just switched to Tidal from Spotify, and I have thoughts. Mainly…
It sounds different.
It really does. As to why, the most well-informed opinion [1] I’ve read so far on Spotify’s audio post-processing comes from Andrew Sarlo (producer/mixer of some records you may have heard in recent years), who said in an Instagram post that (I’m paraphrasing) Spotify basically attaches a multiband compressor to everything. That rings true with my experience comparing Spotify to the original mp3s/wavs, and to Tidal (which, as far as my ears can tell, imparts no processing). My guess is Spotify’s goal is to reduce friction in the listening experience as much as possible by a) normalizing everything, i.e. making everything more or less the same volume, and b) reducing harshness. The second one is the part I’m interested in, as whatever Spotify is doing to achieve it really does seem to impart a sound that I have come to think of (without even knowing it) as the Spotify sound.
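The normalization half of that guess is easy to sketch, at least. To be clear, this is not Spotify’s actual pipeline (which is unpublished); it’s a minimal illustration of gain-based loudness normalization, using RMS as a crude stand-in for perceived loudness and a made-up −14 dBFS target (roughly the ballpark streaming services are said to normalize to):

```python
import math

TARGET_DB = -14.0  # hypothetical loudness target; the real target is not published

def rms_db(samples):
    """RMS level of a block of float samples (-1.0..1.0), in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9))

def normalization_gain(samples, target_db=TARGET_DB):
    """Linear gain that would bring this track's RMS level to the target."""
    return 10 ** ((target_db - rms_db(samples)) / 20)

# One second of a quiet sine and a loud sine at 44.1 kHz:
quiet = [0.05 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
loud = [0.8 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]

# A quiet track gets boosted, a loud one gets turned down:
print(normalization_gain(quiet))  # > 1.0
print(normalization_gain(loud))   # < 1.0
```

Pure gain changes like this are transparent; it’s the harshness-reduction side (whatever dynamic EQ or multiband compression is happening) that would actually change the character of a record.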

It’s a bit like the Jack Antonoff sound, which to my ears is a production style built for the streaming era: rolled-off treble on everything, a healthy amount of sub frequencies, and an extremely polite amount of presence frequencies (1k~2k). [2] It’s precision-engineered music for AirPods, where the gentler the midrange, the more you can turn up the volume without the feeling of ice picks in your ears. Put it on in the car (especially on a modern stereo with a V-shaped EQ curve) and all that rolled-off high end will sound pretty good, and again will let you turn it up as much as you’d like without painful treble getting in the way.
The weird thing is that on Spotify, all records sound a little bit like this: the top end is tucked in just enough that nothing you listen to will ever hurt you. It sounds like the low mids are getting a boost in some way (maybe just the relative effect of a dynamic band holding that treble in a little, I don’t know); the result is that everything sounds darker and gentler. I will admit that on some records this is OK with me; if I were mastering a record (or turning the EQ knobs on the car stereo) I might make similar adjustments in some cases, because I do like to hear “into” a mix; I like it in general when sounds are darker and gentler.
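For the curious, the kind of treble tuck I’m describing is roughly what a gentle high-shelf cut does. The numbers below (a −3 dB shelf at 8 kHz) are my own illustration, not anything Spotify has published; the coefficient math is the standard high-shelf recipe from Robert Bristow-Johnson’s Audio EQ Cookbook:

```python
import math

def high_shelf(fs, f0, gain_db, s=1.0):
    """High-shelf biquad coefficients (RBJ Audio EQ Cookbook).

    Returns (b, a) normalized so a[0] == 1. A negative gain_db cuts
    everything above roughly f0 -- a gentle treble roll-off.
    """
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cos_w0, sin_w0 = math.cos(w0), math.sin(w0)
    alpha = sin_w0 / 2 * math.sqrt((A + 1 / A) * (1 / s - 1) + 2)
    sqA = math.sqrt(A)

    b0 = A * ((A + 1) + (A - 1) * cos_w0 + 2 * sqA * alpha)
    b1 = -2 * A * ((A - 1) + (A + 1) * cos_w0)
    b2 = A * ((A + 1) + (A - 1) * cos_w0 - 2 * sqA * alpha)
    a0 = (A + 1) - (A - 1) * cos_w0 + 2 * sqA * alpha
    a1 = 2 * ((A - 1) - (A + 1) * cos_w0)
    a2 = (A + 1) - (A - 1) * cos_w0 - 2 * sqA * alpha
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad(x, b, a):
    """Direct-form I filter: run samples x through one biquad."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = xn, x1, yn, y1
        out.append(yn)
    return out

# -3 dB shelf at 8 kHz: unity gain at DC, about 0.71x up at Nyquist.
b, a = high_shelf(44100, 8000, -3.0)
h_dc = sum(b) / sum(a)                               # gain at 0 Hz
h_ny = (b[0] - b[1] + b[2]) / (a[0] - a[1] + a[2])   # gain at Nyquist
```

The low end passes through untouched while everything above the shelf comes down a few dB, which is exactly the “nothing will ever hurt you” effect.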

At the same time, though, when there’s essentially an extra layer of mastering on top of all recorded music, regardless of the final decisions of the artists/engineers/labels, it creates a strange situation where artists are all shooting at an invisible target. You can’t really know what your song will sound like on the most popular streaming service until you put it on there, publicly. I think it’s led people to mix in a way that already sounds like Spotify: dialing in a slick, boxy, rolled-off sound, then releasing that on Spotify. Suddenly there’s a vicious cycle. All the adjustments in your mix to get things sounding dark enough are now essentially doubled in severity by Spotify itself, and now you’re contributing to the problem, because your song sounds even more rolled-off than everyone else’s. You can start to see why records nowadays have gotten so dark and low end-heavy.
Anyway, this is not wholly bad or wrong; in some ways I’m all for the war on treble [3], though I think it’s a bit strange from an artistic standpoint to add what is essentially a third mastering stage performed completely blind. I expect things will start to go the other way pretty soon: if you make a record that’s really trebly right now, I bet it’ll stand out from the pack (if you can get over the fear that it will make your work sound amateur, or dated). It’s just interesting.
Personally, music feels just a little more 3D to me without the processing. I can hear decisions about dynamics and intensity more clearly when the music isn’t being normalized/EQ’d by something outside of the artists’ control. It also, just slightly, feels more like I’m hearing something that lives in the foreground rather than the background. Spotify has made it pretty clear that it wants you to treat music as a mood-enhancer, and little more, and I am a little surprised to tell you that the more I think about it, their extra layer of processing might be their secret sauce, to that end at least.
—
OK, getting to the end here…
I also want to say that Tidal has a much better interface. I can sort my albums by artist now: an earth-shattering feature that no one has ever thought of, except iTunes, which has let you do that since… 2001? Forever?
Anyway, I haven’t bothered talking about the ethics of how much the streaming companies pay artists. If you ask me, we need massive social reform in all areas of society, and the music business is just one of them, and probably not the most important right now. At the end of the day I’m basically just an asshole complaining about some changing aesthetics that no one cares about. On the other hand, my Twitter timeline/the Ursula K. Le Guin robot tells me that “resistance and change often begin in art.” Sounds nice. But wouldn’t it be better with a little EQ to take the edge off?
1. It seems like Spotify is pretty secretive about this, so everything I’ve read is pure speculation. Fun to speculate, though!
2. 4k is the new 2k. Or maybe it’s 8k. Anyway, things have been steadily clearing out of the midrange for years, into the highs or the low mids.
3. The sequel to the loudness war, if you ask me.