Pianos have the same effect for the lowest notes: they cannot reproduce the bass fundamentals directly because the soundboard simply isn't large enough to accommodate the waveform. But the harmonics do fit, and again your brain interprets the harmonic pattern in such a way that it concludes the low tone is actually present.
This is called the 'missing fundamental' and is one of the more interesting psychoacoustic phenomena.
It works like this: instead of playing A0 directly (27.5 Hz), you'd play 110, 137.5, 165, 192.5, 220 Hz and so on, all the way up to, say, 2 kHz, with diminishing amplitude as you go higher. For a piano you'd have to keep track of the odd and even harmonics and ensure they are in the right relation to each other to get the right timbre. The brain is apparently capable of determining the spacing between those harmonics and makes you believe you are hearing A0, even though that frequency is not present in the output at all.
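As a minimal sketch of the idea, here's how you might synthesize such a tone in Python with NumPy. The specific parameters (starting at the 4th harmonic, a simple 1/n amplitude roll-off, a ~2 kHz cutoff) are illustrative assumptions, not a real piano's spectrum:

```python
import numpy as np

F0 = 27.5    # A0: the fundamental we deliberately leave out
SR = 44100   # sample rate in Hz
DUR = 1.0    # duration in seconds

t = np.arange(int(SR * DUR)) / SR

def missing_fundamental(f0, first=4, last=72, rolloff=1.0):
    """Sum harmonics first..last of f0 with amplitude ~ 1/n**rolloff.

    With first=4 the lowest partial is 4 * 27.5 = 110 Hz, and with
    last=72 the highest is 72 * 27.5 = 1980 Hz (just under 2 kHz).
    The fundamental itself (n=1) is never generated.
    """
    signal = np.zeros_like(t)
    for n in range(first, last + 1):
        signal += (1.0 / n**rolloff) * np.sin(2 * np.pi * n * f0 * t)
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

tone = missing_fundamental(F0)

# The spectrum has essentially no energy at 27.5 Hz, yet adjacent
# partials are spaced 27.5 Hz apart -- the cue the brain latches onto.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / SR)
```

Playing `tone` through speakers (or writing it to a WAV file) should give the impression of a very low A0, even though a spectrum analyzer shows nothing at 27.5 Hz.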
This is true of pretty much any AI research. Look at Puffer, which was on HN just a couple of days ago. They're running a free streaming service purely to collect enough data to train their algorithms, and they even mention in their FAQ that they would love to use commercial data if they could get it.
Unfortunately, academic and commercial incentives don't really align here. Most commercial entities don't want to share their data because it's valuable to them, and if they let researchers in, they want the output of the research to remain proprietary to their commercial enterprise.
I wonder if there isn't some sort of governance solution to this, like giving companies big tax breaks for sharing their data with researchers. Essentially, subsidize academia indirectly.
Therefore, by Occam's razor, we don't need another "mirror" universe to balance out this one.