I listened to this one:
And it's just too distracting; it doesn't feel like being in the room with the music at all. It just feels like the sound is going from the left ear to the right ear. It just sounds really weird.
It doesn't evoke the feeling of being at a concert at all, as the sound is constantly "moving" in space. Maybe it would be like being at a concert where I'm in some sort of car circling the stage, with the source of the music constantly changing.
I listened to "Bohemian Rhapsody" and "Nothing Else Matters", and compared them to the originals. The originals were much better.
Maybe if you're listening on rubbish earbuds anyway, the effect sounds cool. But on my reasonably expensive headphones it just kills it.
But then I don't get ASMR videos either (I find them rather unpleasant and a bit creepy usually) so I suppose I'm a bit of a snob.
That said, I could see a use for this in gaming with a VR headset (are they still a thing?) if that was your bag.
Makes sense that this would be both useful and easier to do well since there actually is positional data for where sounds ought to be coming from. Ideally you'd have the audio equivalent of the tools used to simulate generation, occlusion, reflection, and refraction of light on the graphical side.
But adding it to stereo tracks via post-processing is a lot harder to do well. It reminds me of "simulated surround" effects on my old home theater boxes but with more panning around instead of just trying to route certain frequencies to the proper speakers.
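For what it's worth, the core of this kind of "simulated surround"/"8D" post-processing is little more than an automated pan sweep. Here's a minimal sketch in Python (NumPy only; the function name, the orbit rate, and the equal-power pan law are my own toy choices, not anyone's actual implementation):

```python
import numpy as np

def eight_d_pan(mono, sr, orbit_hz=0.2):
    """Crude '8D'-style effect: sweep a mono signal between the
    left and right channels with an equal-power pan law."""
    t = np.arange(len(mono)) / sr
    # Pan angle sweeps between 0 (hard left) and pi/2 (hard right),
    # completing a full back-and-forth orbit_hz times per second.
    theta = (np.pi / 4) * (1 + np.sin(2 * np.pi * orbit_hz * t))
    left = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)

sr = 44100
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr * 5) / sr)
stereo = eight_d_pan(tone, sr)  # 5 seconds of a 440 Hz tone circling around
```

Because cos²+sin² = 1, the total power per sample is preserved while the sound slides from ear to ear, which is exactly the "car circling the stage" sensation people are complaining about.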
What I want is a soundstage where I can separate the voices of the backing singers, and where I can mentally position where the instruments are (in an acoustic performance).
Instead, this 8D thing I just listened to ("Bohemian Rhapsody") sounded horrible. The quality of the recording is bad, muffled and then amplified, and the constantly and unnaturally shifting balance of sound just felt annoying.
I am wondering whether the spread of low-quality, heavily processed audio chips/earbuds and recordings across YouTube and mobile devices is one of the causes of the new 'modify-the-shit-out-of-the-original' trend.
I tried Bob Marley's "Is This Love" and a bunch of others, and they are nausea-inducing experiences. The constant back-and-forth stereo panning is just disturbing.
It probably makes a big pile of cash for the uploaders, though, since it's easy to do and sounds peculiar.
This idea would be so much better if the instruments were split out into individual tracks and each track were panned to a certain point in space and kept there.
> It just feels like it's going from the left ear to the right ear. ... the sound is constantly "moving" in space.
I don't think it's meant to sound like the former. For better or worse, it's meant to sound like the latter.
That's what the few examples listed in the article sounded like to me; in fact I found the panning of the instruments and vocals a bit frustrating to listen to. Perhaps this works better on some people than others, e.g. non-musicians?
I played around with those a while ago, since they make a pretty large difference in the realism of positional audio (both for surround video and for games), for me especially in differentiating between sound coming from in front of me and from behind me. But I never really got it to work globally.  has a list of 50 different HRTFs; only ~5 of them manage to make the sound come from in front of me correctly.
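For anyone wondering what "trying an HRTF" means mechanically: you convolve the mono source with a left/right head-related impulse response pair measured for the desired direction. Here's a toy sketch with fabricated HRIRs that model only the interaural time and level differences (real HRTF sets, e.g. from SOFA files, encode far more detail, which is why they're so person-specific):

```python
import numpy as np

def apply_hrir(mono, hrir_left, hrir_right):
    """Render a mono source at a fixed position by convolving it
    with the head-related impulse responses for that direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Fabricated HRIRs for a source to the listener's left: the right
# ear hears it slightly later (ITD) and quieter (ILD).
sr = 44100
itd_samples = int(0.0006 * sr)   # ~0.6 ms interaural time difference
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[itd_samples] = 0.5

noise = np.random.default_rng(0).standard_normal(sr)  # 1 s of test noise
out = apply_hrir(noise, hrir_l, hrir_r)
```

With only delay and attenuation you get decent left/right localisation; front/back and elevation cues live in the fine spectral shape of measured HRIRs, which is exactly the part that varies from person to person.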
Anecdote time: friends of mine own(ed) a niche A/V store. Their grandparents opened the store in the early '60s, and their primary focus was to carry, sell, service, and most importantly demo everything from a $50 loudspeaker to the $10,000+ amp. Back then, every product was completely different from the next: a $50 amp and a $500 amp were like the difference between a CRT TV and a 4K OLED TV, and the same went for speakers.

About 25 years ago, when their parents took over, it was already becoming evident (mainly with everything going digital) that the difference between a $50 amp and a $500 amp wasn't nearly as pronounced as it was 50 years ago. 5 years ago the business pivoted away from A/V, because nobody is going to spend $500 on an amp or receiver when they can pick up a $30 soundbar at Walmart and get 95% of the quality/experience.

I've seen it myself; the family is still a bunch of die-hard audiophiles for whom money is absolutely not an object when it comes to audio, to the point that they built a home theater and spent $30,000 on a custom-built 11.2-channel tube-based Atmos receiver, >$2,500 per speaker (that's right, nearly $30,000 in speakers), and around $10,000 for each subwoofer. While most of our friend group will tell you it's unquestionably the greatest-sounding thing on the planet, there are still a few who'll put in their dollar-store earbuds, load up a 360p song on YouTube, and tell you completely straight-faced that it sounds no different.
People spend $200 or $300 on Beats headphones, and are happy to overlook the inferior quality to get a hot name brand. I have two wishes:
(1) that companies with very strong brands actually cared and invested in quality and R&D (ahem, beats);
(2) that companies that actually do have quality and attention to detail in their products would find a way to use modern media effectively: not just driving potential customers to their particular products, but convincing folks with a passion for pure, beautiful sound that their aim is worthy.
I get that it's a novel feeling to hear music coming from a "soundstage location" that isn't traditionally used in mixing, but the novelty only lasts so long, and constantly moving it around just draws your attention to the movement rather than to the song itself.
Early virtual surround sound examples always seemed impressive, if gimmicky: a plane would fly over your head and you'd hear it move from back to front.
But the examples in the article sounded like someone had discovered the stereo pan knob on a mixing desk and was slowly going from left to right.
This could surely be done (and probably has been done) using a VR headset, or even just IR lights and cameras à la TrackIR. If latency were low enough, you could process the sound to model the changes caused by head position/orientation.
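As a toy illustration of the head-tracking idea (azimuth-only, equal-power panning, no ITD or HRTF modelling; all names here are made up for the sketch): the renderer just pans the source opposite to the tracked head yaw, so the sound stays put in the room while your head turns.

```python
import numpy as np

def yaw_compensated_pan(mono, yaw_rad):
    """Keep a virtual source fixed in the room by panning it
    opposite to the listener's head yaw (toy model: azimuth only,
    equal-power pan, source straight ahead in room coordinates)."""
    # After the head turns by yaw, the source sits at -yaw
    # relative to the head.
    azimuth = -yaw_rad
    theta = np.pi / 4 + azimuth / 2  # map [-pi/2, pi/2] -> [0, pi/2]
    theta = np.clip(theta, 0.0, np.pi / 2)
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)

centered = yaw_compensated_pan(np.ones(4), 0.0)           # facing forward
turned = yaw_compensated_pan(np.ones(4), np.deg2rad(30))  # head turned right
```

In a real system you'd re-evaluate this per audio buffer from the tracker's latest pose, which is where the latency requirement comes from: lag between head motion and the pan update is what breaks the illusion.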
"these go to eleven"
By far the best use of binaural audio I've heard is the simulated voice-hearing of psychosis that Ninja Theory used in Hellblade: Senua's Sacrifice.
[trigger warning re: psychosis, other mental health issues for all of Hellblade]
The very impressive - and disturbing - opening of the game (headphones required): https://www.youtube.com/watch?v=G-SRoil79g0
Dev diary about designing and implementing the voices: https://www.youtube.com/watch?v=LQQ2Jm2dgXk&t=0s&list=PLbpkF...
The way the fake front/back/above/below positioning is done is by simulating the frequency alteration that depends on the sound's origin (for example, your ear geometry filters higher frequencies one way when they come from the front, and another way when they come from the back).
The problem is that not only is your whole body sensitive to sound waves, but every person's body is also different. If you want a good effect, you need to calibrate the frequency alteration to your body, using your sound equipment.
Edit: From the comments I'm starting to wonder if certain people perceive this effect differently. I'm very much on team "so you discovered the pan knob?", but the way people are talking, I have to assume that there are two groups of people having two wildly different experiences.
Yes, I think it's dumb when people listen to music on their phone speakers. No, this doesn't mean they're any different from the rest of us. This is just "pssh, kids these days!" spread out over a few sentences.
Put more simply, 'high fidelity' has become a boutique item. And as expectations fall, it's going out of fashion.
A lot of these vids are a bit gimmicky and overdoing it, but this is so cool.
I've just spent 30 minutes listening to as many as I can, some are terrible and some are great, maybe try more tracks.
I can totally empathize with people saying that this is annoying/distracting, and I can't see myself doing it for a long period of time, but as a tech demo/cool thing, I find this awesome.
I think binaural is a great opportunity for more access to emotions when used well.
We made a tool for this kind of effect: https://www.auburnsounds.com/products/Panagement.html
I think one aspect that adds to the annoying experience is that it's as if the music is lurking around in your personal space right behind you, which is really irritating when a person does it.
If you do have 5.1 and an Apple TV hooked up, search for ‘surround speaker check’ in the App Store and play the 5.1 test. You’ll be amazed how nice 5.1 music sounds.
Now that there's this Queen demo, I recognise the kind of rendering effect this is achieving; schemes for making the sound more realistically spatial on headphones have existed for years (decades), such as Dolby Headphone, or binaural audio either recorded as such or produced via post-processing (e.g. Bauer stereophonic-to-binaural, a.k.a. BS2B).
The Queen example is lousy at best: instead of having a single audio source wander in space, the technique could have been used with a true mix to actually position (and move, properly) each voice/instrument in space.
But of course, you'll need the original tapes to be able to do that. :p
These effects generally require at least having the original separate tracks, if not specialized recording techniques. However, I still have hope that advanced signal processing can separate the parts well enough after the fact to get a satisfactory result. I'm no ML fanboi by any means, but I'm pretty sure those techniques could be productively brought to bear on the problem as well. I'll bet there's a grad student or two working on it right now, and I wish them great success.
...such that the results could be flanged and recombined separately. But as you can hear in their demos, any recombined result would be a Picasso painting compared to a photograph; maybe interesting, though.
I quite liked this demo, which shows a bit more potential for placement:
‘Cos I can’t hear anything else going on in there