A speaker is very different to a microphone.
Is it? I've definitely had fun little experiments where I turn a microphone into a speaker or use a speaker as a microphone. It's just a matter of whether I'm powering the membrane or the membrane is powering me, i.e. which side of the mechanism I sit on. It's a conversion between electricity and mechanical vibration that can be driven from either side.
I'm sure there's a lot of engineering trade-offs that change if you're looking to optimize for recording fidelity vs output fidelity, so the details of their implementation are probably quite different, but at their core the two don't seem all that different.
Maybe they're totally different in the MEMS world?
Sure, at the very core of it, it's just trading electricity for mechanical vibration, and this functions in both directions. But the engineering that has gone into specializing both speakers and microphones is staggering.
The photocopy machine is complex, but that's the photocopy machine manufacturer's problem, not the problem of someone reproducing a document.
These devices are many times more complex to manufacture than the large-scale dynamic drivers you might find in a pair of desktop speakers, and are definitely not possible to wind by hand. The armature is wound with incredibly fine gauge wire around a form with precise (non-circular) geometry, and needs to be coated with epoxy to stay together. The diaphragm, reed and driver body are also precise parts with their own manufacturing difficulties.
In short, I absolutely agree with the article's assertion that replacing these manufacturing methods with lithography is indeed a simplification. Just because the concept of a wound electromagnet has been around for longer does not mean that it is simpler in practice.
That's fairly cheap, and whether a robot can do it for less is questionable, because robots need a lot of maintenance and cost a lot.
For reference, that's about half the discharge rate of the Rio Grande.
You can make the same machine with many parallel winding heads and cutters, and wind as many coils as you like in parallel.
The coil must be wound in-situ and is usually epoxy potted due to fragility.
If the final product is better geared towards automated manufacturing, that is in fact a form of simplification. I'm quite curious how these sound in practice compared to regular devices of similar weight/size.
Headphones and cellphones would be an obvious first application.
It looks like the ones from the article are meant for earbuds, but if these MEMS speakers someday become cheap enough to create a large phased array of them, it might make for a neat hi-fi system.
You could also skip all of these issues if you really want hi-fi and just use sealed IEMs.
LED strips over a sheet with a diffuser produce a more useful light. But as light is in the THz range, doing any meaningful beamforming is exceedingly difficult.
Because audio frequencies are well within the range of controllability, it's perfectly possible to alter the phase to steer a speaker array's output: shifting each driver's phase moves the peaks and troughs of the waves, which "moves" the sound to where it's wanted.
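The delay math behind that steering is simple to sketch. Here's a minimal example (my own illustrative numbers, not from the article) computing the per-element delays for a uniform linear array: each driver is delayed by n·d·sin(θ)/c so the wavefronts line up along the steering direction.

```python
import math

def steering_delays(n_elements, spacing_m, angle_deg, c=343.0):
    """Per-element time delays (seconds) to steer a uniform linear
    array's main lobe toward angle_deg (0 = broadside, straight ahead).

    Element n is delayed by n * d * sin(theta) / c so that all
    wavefronts arrive in phase along the steering direction.
    """
    theta = math.radians(angle_deg)
    raw = [n * spacing_m * math.sin(theta) / c for n in range(n_elements)]
    base = min(raw)  # shift so every delay is non-negative (causal)
    return [t - base for t in raw]

# Example: 8 drivers spaced 2 cm apart, steered 30 degrees off-axis.
delays = steering_delays(8, 0.02, 30.0)
print([f"{t * 1e6:.1f} us" for t in delays])
```

The delays here are tiny (tens of microseconds between adjacent elements), which is exactly why dense MEMS arrays are interesting: small spacing keeps grating lobes out of the audible band.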
I haven't actually confirmed this with testing yet, but won't those resonances still increase decay time at those frequencies after equalization? I'd like to see post-EQ waterfall plots.
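The idealized case is easy to test in a toy simulation. For a minimum-phase resonance, the exact inverse EQ cancels the ringing tail as well as the magnitude peak; the sketch below (my own construction, not a measurement) builds a two-pole digital resonator and applies its exact inverse FIR, then compares late-time energy. Real rooms deviate from this ideal (non-minimum-phase behavior, EQ mismatch, spatial variation), which is why post-EQ waterfall plots are still worth measuring.

```python
import numpy as np

fs = 48000.0
f0, r = 1000.0, 0.999                    # resonance frequency, pole radius
w0 = 2.0 * np.pi * f0 / fs
a = [1.0, -2.0 * r * np.cos(w0), r * r]  # resonator denominator A(z)

# Impulse response of the resonator 1/A(z): a long ringing tail.
n = 4096
y = np.zeros(n)
x = np.zeros(n)
x[0] = 1.0
for i in range(n):
    y[i] = x[i]
    if i >= 1:
        y[i] -= a[1] * y[i - 1]
    if i >= 2:
        y[i] -= a[2] * y[i - 2]

# The exact inverse "EQ" is the FIR with A(z)'s coefficients as taps.
# Cascading it removes both the magnitude peak AND the decay.
eq = np.convolve(y, a)[:n]

print("resonator energy after 20 ms:", np.sum(y[960:] ** 2))
print("equalized energy after 20 ms:", np.sum(eq[960:] ** 2))
```

In this ideal case the equalized response collapses to an impulse, so the late-time energy drops to numerical noise; any residual ringing you see in a real post-EQ waterfall is the mismatch between the EQ and the actual resonance.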
On paper at least, it should be much more reliable than a coil speaker. And this is my second MacBook Pro with a speaker-crackling problem (although, judging from a Google search, that is more likely an Apple fault).
For applications below 200Hz, there really isn't a much better choice than a coil-driven speaker. Since you are limited to very low frequencies, the relatively coarse mechanical nature of coil speakers is more or less hidden by the fact that any higher-frequency harmonics/distortion are designed out (crossovers, cabinet design, etc).
The best speaker is going to be a combination of various technologies. The current dream speaker for me would be:
- large array of plasma or MEMS-based tweeters (2kHz+)
- large array of 2~3" drivers (120Hz-2kHz)
- a handful of 18" drivers (20-120Hz)
- a rotary subwoofer in an infinite baffle (DC-20Hz)
- DSP/equalizer/time-delay hardware+software for tying the whole thing together
Also note that the amount of power required to produce the same amount of perceived acoustic output as you go down in frequency goes up very quickly. 50 watts into a 18" subwoofer is going to sound fairly meek, but pipe that exact same electrical signal into a titanium dome tweeter and you will develop an instantaneous case of tinnitus.
I remember reading about a LCR speaker array that was built in the 1970s in someone's house, with horns that spanned the entire height of the room made out of cinder blocks. Don't really know how I would dig that article back up, but it was a pretty interesting system.
If you have a few weekends, basic electronic & woodworking skills, and a $2000 budget to burn through, you can easily put together a 2.1 channel audio reproduction solution far more impressive than you could ever hope to buy off the shelf at any cost. I built a 800 liter subwoofer that plays flat to 13Hz back in 2008 for ~$600. Still works flawlessly to this day. The key is to be inventive with materials and design, and to always be aware of the space in which you will use the equipment. The room is always the most important part of the audio reproduction equation.
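The math behind a sealed-box build like that is compact enough to sketch. Here's a minimal calculator for the standard closed-box alignment from Thiele-Small parameters; the driver numbers (fs, Qts, Vas) are made-up illustrative values for a big 18" woofer, not the actual build's specs.

```python
import math

def sealed_box(fs_hz, qts, vas_l, vb_l):
    """Standard closed-box small-signal model: mounting a driver in a
    sealed box of volume Vb raises its resonance and Q by the same
    factor sqrt(1 + Vas/Vb). Returns (system resonance fc, system Qtc)."""
    k = math.sqrt(1.0 + vas_l / vb_l)
    return fs_hz * k, qts * k

# Hypothetical 18" driver (fs = 18 Hz, Qts = 0.38, Vas = 320 L)
# in an 800-liter sealed cabinet like the one described above.
fc, qtc = sealed_box(18.0, 0.38, 320.0, 800.0)
print(f"fc = {fc:.1f} Hz, Qtc = {qtc:.2f}")
```

A very large box barely raises the driver's free-air resonance and keeps Qtc low, giving a gentle rolloff that room gain can extend well below fc, which is how deep in-room extension like that is achievable on a hobbyist budget.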
Or is your goal to minimize the complexity of the crossover network that would be needed to cleanly separate the bands with minimal distortion and the "fun" of correctly lining up the physical phases of each of the different cones?
But, I would also argue that the IEM experience pales in comparison to the experience of having thousands of watts of electricity converted into low pressure acoustic waves for all in the vicinity to enjoy.
Watch the bridge shootout scene from MI3 with some IEMs on your smartphone vs at your local cinema with its commercial-grade sound system. There is a certain level of physical immersion that is simply not possible with just an IEM/headphone setup. Sure, you could strap haptic feedback vests to yourself and put linear actuators in all your chairs, but for me that is getting to be too finicky to tolerate. I'd much rather tell my Spotify app to cast to my HTPC and instantly be greeted with a wall of powerful full-range sound.
- unusual amplifier requirement
- poor performance under 200Hz (mid-bass to sub-bass)
- good volume for a headphone driver, terrible volume for an open air speaker driver
It might go into your next AirPod, it won't go into your next MacBook.
Sometimes there aren't trade-offs. The thing about new technology is that it pushes out the "Pareto frontier", that configuration surface along which trade-offs happen. A classic example is the "fast, good, cheap" trilemma. You can imagine this trilemma as a geometric object (say, a hypersphere) and the particular "fast, good, cheap" trade-off you make as a point on the surface of that object. New technology makes the whole object bigger.
Does the v0 technology have important disadvantages? Sure. LEDs started out being red and dim. Now LEDs are the "too good to be true" option, beating most other lighting sources most of the time in most domains.
But the response curve of the prev-gen looks just fine to me; I wonder if the next-gen Etymotic Research IEMs will use a device like this.