Reverse-engineering the Yamaha DX7 synthesizer's sound chip from die photos (righto.com)
418 points by picture 66 days ago | 131 comments



Reading this really drove home to me just how amusing some of the twists and turns of music tech development are. In contemporary modular synth formats, you now have modules that use similar FM tone chips, but are controlled by potentiometers and analog control voltage inputs (for instance, https://busycircuits.com/alm011/). I could see how someone could conclude that this destroys the reproducibility advantages of digital FM synthesis, while preserving the cheesiness of FM sounds. From another perspective, you can freely explore the enormous landscape of sound produced by FM synthesis, without diving into the deep end of DX7 programming. Instead of generating sysex dumps on the fly to dynamically change the sounds produced by the synth, you just feed whatever control voltage you can come up with into the inputs of a module and bask in the insanity that comes out (for instance—https://www.youtube.com/watch?v=2-YEbg040ww definitely not everyone’s cup of tea).

Edit: but if you want to explore the landscape of FM tones generated by these chips without spending all of your money on a modular setup, generating sysex messages will of course take you where you want to go: https://fo.am/activities/midimutant/


"Basking in the insanity" just about sums it up. It's remarkably difficult to craft a good sound with FM synthesis (or phase modulation, for the pedants).

My experience with FM sound design tells me that 2 operators is quite primitive, 3 operators opens up a larger set of non-insane possibilities, and 6 operators (like the DX7) is, surprisingly, the minimum I would personally want to use for FM synthesis!

So if you are going to go 100% FM, and you're going the eurorack route, I'd personally want nothing short of two ALM011s. You obviously don't need two if you're mixing it with other modules, it's just that if you want to do FM synthesis, four operators is surprisingly limiting.


Oh for sure. An additional advantage of the modular approach is: if you don’t like the raw FM tone, you can try running it through an MS-20 filter, or a Moog filter, or a wavefolder, or a ring modulator, or…

One of my favorite custom SuperCollider synths that I wrote was a simple 2-op phase modulation generator run through a wavefolder and filter. I probably could have arrived at a similar sound with more operators, but it wouldn’t have felt as intuitive to write!

Edit: these darn mobile keyboards


When Garageband got some FM synths, I was pleasantly surprised by them ... the first FM that didn't immediately need an EQ. (Talented staff then.)

I started reading the article without noting the author's name. When I got through 2/3 of it, the thought was, who the hell is this? After seeing the name, it was of course: who else?!


What is your thought process when doing sound design with FM? I've been doing this for years and while I can easily wrap my head around subtractive synths and go from sound-in-head to sound-on-tape pretty well, FM is just impenetrable to me.

Are there any good tutorials on how to approach this?


“The Complete DX7” by Howard Massey was considered the gold standard back in the day. It’s on archive.org[0]. I recommend using it with Dexed, the open source DX7 emulator[1] (unless you have a DX7 handy). There’s a lot of detail about pressing buttons on the unit, but if you can get past that, it does a pretty good job of showing how to make the types of sounds you want.

[0] https://archive.org/details/thecompletedx7 [1] https://asb2m10.github.io/dexed/


I have never seen a tutorial. I've scoured a lot of forums, and read about how the team at Yamaha made the original presets. Most people never bother to design sounds. It's more tedious and frustrating than subtractive synth sounds.

First step is finding a capable FM synth you like with a decent user interface. Dexed, NI FM8, Arturia DX7 V, or maybe even Reaktor. (Edit: It’s obvious to me that you shouldn’t buy a Yamaha DX-something to do sound design, but just to make it clear, don’t do that. The user interface is abominable. Just get Dexed if you don’t know what you want.)

Build your sound up by adding "partials" or "tones", or whatever you want to call them. Start with two operators, with one chained to the next. Explore the space of ratios (start with 1:1) and modulation levels. Once you get a handle on some part of the sonic space, consider adding a third operator at the back, i.e. if your operators are A -> B -> C, and C is the output, then A is the one you're adding. Experiment with ratios and modulation levels, and consider this a way to expand the space you've already explored.

You'll find that most of the modulation values are either too subtle to notice or noise that you just can't stand. If you're at a point where the middle ground between subtlety and noise doesn't have the sound you want, that's when you might add the third operator.
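The two-then-three operator workflow described above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not DX7-accurate: the base frequency, ratios, and modulation indices are arbitrary starting points to explore.

```python
import numpy as np

SR = 44100                  # sample rate
t = np.arange(SR) / SR      # one second of time
base = 220.0                # fundamental, pick anything

def op(freq, mod=0.0):
    """One 'operator': a sine oscillator whose phase is offset by a modulator."""
    return np.sin(2 * np.pi * freq * t + mod)

# Two operators chained B -> C (C is the output).
# Explore the ratio (start at 1:1) and the modulation index.
ratio, index = 1.0, 2.0
b = op(base * ratio)
two_op = op(base, mod=index * b)

# Add a third operator "at the back": A -> B -> C, where A is the new one.
index_a = 1.5               # how hard A drives B
a = op(base * ratio)
b_mod = op(base * ratio, mod=index_a * a)
three_op = op(base, mod=index * b_mod)
```

Feeding either `two_op` or `three_op` to a sound device and sweeping the indices gives a feel for how quickly the "too subtle" to "unbearable noise" transition happens.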

With these partials, you can build up the sustain and attack of your sound separately. Experiment with using different timing for all of the envelopes.

You can tell that a lot of sounds are made from a smaller set of partials that have been recombined. For example, the classic DX7 Rhodes sound is made up of an attack partial and a sustain partial, and these partials have been reused and recombined with tons of other partials to make sound variations.

This is why I'm not fond of the DX-11, Ad Lib, or other 4-operator systems. They have the "FM sound" but you're really quite limited building your sounds. If you have a partial made with three operators, and you want to add something to it, you have to share operators or use a pure sine wave.


While 6-op definitely gives you more flexibility than 4-op, I find 4-op with non-sine operators more than usable. Examples include the YMF262 (OPL3), the TX81Z, and of course the Digitone.


I find the limitations too frustrating, and I do not like the sound palette.


Korg Opsix has a pretty nice UI for FM: https://www.musicradar.com/reviews/korg-opsix

I've seen a fair number of people say that by using it they were finally getting the hang of FM.


I love those 4-op FM synths. The TX81Z is awesome. The newer Digitone from the Swedish company Elektron is also fabulous.


The TX81Z was super fun and versatile. But the sound was quite tinny.


It depends, but I get what you mean. I love all the basses and plucky sounds I can get with that synth. It also doesn't need a lot of post-processing to cut through the mix.


This guy explains the process of creating a basic bass sound quite well:

https://www.youtube.com/watch?v=1XbrTC0NndM


Here's an article on how to program the DX7 that might help: https://homepages.abdn.ac.uk/d.j.benson/pages/dx7/manuals/pr...


I've just started getting interested in FM and DX7. Power DX7 has some good insight. Here he is talking about algorithm choice and how it can affect sound. https://youtu.be/s4oMnCCUb8w


FM was always possible with analog, but it sounded like a mess because you need digital stability to make consistent sounds. Tiny frequency variations that add drift to analog sounds and make them interesting will completely destroy an analog FM patch and turn into unplayable noise.

If you really want to explore FM you can write your own code and create algorithms with as many carriers/oscillators as you like. This turns out to be less interesting than it might be, although if you use a lot of carrier/modulator pairs you get a kind of spicy variation on additive.

It gets much more interesting when you add formants and other variations like the later FS1R synth did.

If you want to keep the modular approach the various digital modulars - Reaktor, VCV Rack, Voltage Modular, etc - give you clean FM with patch cords.


A second-hand FS1R is outrageously expensive now :(.


> From another perspective, you can freely explore the enormous landscape of sound produced by FM synthesis, without diving into the deep end of DX7 programming.

You can get some of this with Dexed as well: https://asb2m10.github.io/dexed/.

It's basically an emulation of a DX7 that allows you to create patches without the horrendous menu-diving interface of a real DX7, and you can also use it to export and import patches to and from the real hardware over MIDI.

There is something quite fun about using a modern computer to hook up to a 1983 synth and having it "just work", despite the flaws in the DX7's MIDI implementation. I've had a harder time getting printers set up in the past.

Dexed's a useful tool for experimenting with and learning about FM synthesis (and I'm still very much at the "everything I do sounds terrible" phase, unless I go the route of modifying an existing patch), but I'm not sure you can automate it, which would give you the evolving sounds of something like MidiMutant.

I also tend to be of the opinion that a good analogue filter (or an emulated analogue filter - I'm not that much of a purist), can really open up the usefulness of an FM synth.


Can't you explore the depths of FM synthesis just as easily if not more with a good VST plugin?


Yes. Get a free version of Ableton and go play with the Operator synth. There's also a huge number of tutorials for that particular “soft synth”.


Or FLStudio and spend some time with Sytrus. Or pick up a copy of FM8


Pro tip:

You can continually sign up for trial versions of the full suite for 90 days. At the end of the 90 days you sign up with another email and a new free trial. I’ve heard of a few producers that have done this for many many years.


Ableton Live is an incredible piece of software, the people who make it deserve to be paid.


Honestly it’s an average piece of software that has bugs and crashes hourly. Live 11 has been a massive regression in terms of speed and stability.

Its original mission statement of being a great hybrid live performance and studio production software has kind of failed. The instability makes it scary to use live, and the performance issues make it frustrating in the studio.

(I guess each producer has a different use case. If you’re DJing with Ableton using audio tracks in session view, its stability is okay! But with large live sets with hundreds of tracks, I start getting nervous.)

That’s not even talking about the lacklustre update that was Live 11, especially for the price.

It’s obviously been on a downward trend for years, given that Bitwig came out from some of the original founders.

I’m not condoning stealing. Actually the opposite. I’m encouraging people to try the free trial instead of pirating. Pirated software can be dangerous and have added malware. Ableton gives out a free trial; use it and decide if you think it’s worth the money.


What do you mean? That's awesome!


Oh I definitely think it is too! I could see other people dismissing the modular stuff as a waste of money, or the midimutant thing as a- or un-musical. Maybe I could have reframed this as a meditation on how quirky technology development can be when it is subservient to human creative urges. (Edit: spell check)


This is maybe a little outside the scope of your article, but I've read about the DX7's log2-based sine wavetables in the past, and it makes sense from a math standpoint (log2(x) + log2(y) = log2(x*y)). However, I find it confusing when I go to try to replicate it. My understanding is that these are NOT floating point numbers. In which case the output would be in the 8-bit range of 0-255, or -128 to +127 if we're using signed values.

So assuming I want to represent a signal value of 127 (i.e. sin(t) = 127) using base-2 logs, I punch this into my calculator and I get log2(127) = 6.9886. But that's not what's going to be stored in the signal LUT, since it's a floating point number.

I think I'm probably missing something obvious, but any clarification would be amazing.


The numbers are bigger than 8 bits. I'm still figuring out the exact sizes, but it looks like the phase values are 23 bits and the table lookups are 12 bits. There are also numbers that are sort of floating point, with the value shifted by a few bits depending on how large it is. (This is one reason that DX7 emulators don't exactly match the DX7, because the number of bits used isn't standard.)


They’re all fixed-point. Some are logarithmic, some are linear.

> In which case the output would be in the 8-bit range of 0-255, or -128 to +127 if we're using signed values.

The output is 12-bit. 8-bit is simply not good enough. Different internal signals have different widths. From what I understand:

- Phase accumulator is 22 bits (linear)

- Frequency is 14 bits

- Amplitude is 12 bits (logarithmic)

> So assuming I want to represent a signal value of 127 (i.e. sin(t) = 127) using base-2 logs, I punch this into my calculator and I get log2(127) = 6.9886. But that's not what's going to be stored in the signal LUT since it's a floating point number.

sin(t) never equals 127. The highest value it will ever reach is 1.0. Instead of punching in log2(127), punch in log2(1.0). From what I understand, the values have 8 fractional bits. So if code value N represents a signal level of 1.00, then code value N-1 represents a signal level of 0.997.

The synthesizer has two main lookup tables: one maps phase to log-sin, the other maps log to linear. Even with an incomplete understanding of how the chips work, it is believed that these widths are correct or very close.
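The two-table scheme described above can be sketched in Python. The bit widths here follow this thread's guesses (a log domain with 8 fractional bits, a 4096-entry quarter-sine table) and are assumptions, not verified chip values:

```python
import math

FRAC_BITS = 8            # assumed fractional precision in the log domain
ONE = 1 << FRAC_BITS     # fixed-point code for 1.0 (code value 256)

def log_sin(i, size=4096):
    """Phase-to-log-sin LUT entry: -log2(sin(x)) as a fixed-point attenuation.
    Full scale (sin = 1.0) is stored as 0; quieter values are larger."""
    x = math.sin(math.pi / 2 * (i + 0.5) / size)   # first quadrant only
    return round(-math.log2(x) * ONE)

def exp2_lut(v):
    """Log-to-linear LUT: maps attenuation v back to a 0..256 linear level."""
    return round(2.0 ** (-v / ONE) * ONE)

# Applying an envelope in the log domain is just an addition:
# a gain of 0.5 is an attenuation of log2(2) = 1.0, i.e. add the code 256.
full = exp2_lut(log_sin(4095))        # top of the sine at full gain: 256
half = exp2_lut(log_sin(4095) + ONE)  # the same sample at half gain: 128
```

This is why nothing like 6.9886 ever needs to be stored as a float: both tables hold small fixed-point integers, and the multiply becomes an integer add between the two lookups.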


6.9886 is not a floating point number - it’s just a number. Floating point is but one representation for approximating real numbers; so-called fixed point is another. Having said that, I believe the DX7 does use a floating point representation internally, since floating point addition can be done with adds and bit shifts - both easy things to do in hardware.


What danachow said makes sense to me. Under the "Logarithms and exponential" heading, it says the operator chip uses an exponential look-up table to convert this value back to a linear value. "This value" here refers to the output of log2(x*y). So I think it's implied that yes, the log2 calculation will yield a fractional number, but that number will be converted back to a linear value via the exponential LUT. This is my shallow understanding of things.


Maybe it's stored as a fixed point? 4 bits for the signed whole number portion and 4 for the fractional part?


I don't think that's enough precision? I.e. if you only have a -7 to +7 range for the whole number and 4 bits (16 steps) for the fractional portion, how do you represent 6.9886? Steps of 1/16 mean increments of 0.0625, so the closest numbers we can represent are either 7.0000 or 6.9375, but 6.9375 overlaps with log2(126) = 6.9773 and 7.0000 overlaps with log2(128).
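The precision concern above checks out numerically: with 4 fractional bits, neighbouring 7-bit sample values collapse to the same log-domain code, while the 8 fractional bits other replies suggest keeps them distinct. A quick check:

```python
import math

def log_code(v, frac_bits):
    # Quantize log2(v) to a fixed-point code with the given fractional bits.
    return round(math.log2(v) * (1 << frac_bits))

# With 4 fractional bits, 126, 127, and 128 all land on the same code...
codes4 = {log_code(v, 4) for v in (126, 127, 128)}
# ...while 8 fractional bits keeps all three distinct.
codes8 = {log_code(v, 8) for v in (126, 127, 128)}
```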


It’s not 8 bits. Most of the Yamaha chips seem to use 16 bits internally in 8.8 format (based on Raph Levien’s work). Different models of Yamaha chips had different DAC setups, but on the DX7 the representation was 12.3 floating point, which I think was converted from the internal log-scale fixed point to linear scale via a lookup table. So while the DX7 chips technically use both floating and fixed point, they probably do all arithmetic in fixed point.


Additional reading and discussion on the DX7 from 3 weeks ago: https://news.ycombinator.com/item?id=28940860


Not a coincidence :-) I'm working with the author and the submitter of that article.


There's a lot of good stuff to be had from disassembling the ROM as well, don't ask me how I know.



One big selling point of the DX7 was it did a decent piano sound, which was never really possible on analog.

Once ROM got a bit cheaper though, you could get a much better piano sound from new instruments like the Korg M1 which used internal samples of real pianos.

As the new sample-based keyboards hijacked the natural sound market, a resurgence of analog synth sounds in Acid House and other EDM took over the synthetic sound space.

Pretty soon nobody wanted FM synths any more.


Funny how, with the synth revival driven by Eurorack, FM synthesis is now often the most interesting.


Analog stuff (TB-303, SH-101, etc.) was used in early acid house mostly because it was cheap at the time. The M1 came later than early acid house or techno, after all.

For the same reason, FM was actually used a lot in electronic music as well. You can hear the DX series a lot in Detroit techno: Derrick May, Jeff Mills, Robert Hood, etc.


I've never heard a synthesized trumpet sound like the real thing. I have no idea why.


Author here if anyone has questions...


Since you offered: what is that trace called that is presumably electrically connected to the ceramic cap solder seal? I've wondered about its function, but have never been able to distill the prior sentence to the necessary power words.

Also, here is an extra ham-fisted attempt to get you to cover an off topic interest of mine: did you know that the Soundblaster DSP was an 8051? Cool, huh? pleasedoan8051deepdiveimbeggingyouplease. I'm aware of only one gate level analysis, with evidence of its existence on archive.org... tragically the spider never bothered with the actual zip file and the author is long gone.

Thanks for all the great reads, btw.


The gold trace connects the metal lid to pin 1 (ground). This provides some electrical shielding.

As for the 8051, I might look at it at some point, but I've got lots of other projects...


ha, and here I'd assumed there was some convoluted vacuum getter / ion pump at the root of it... nope, just didn't wanna screw about with the bond wires. Thanks.


You might be interested in this 100% compatible SB 1.0 reimplementation (including 8051 of course) : https://github.com/schlae/snark-barker


Nice work. :D

-Austin, author of Gateboy (https://github.com/aappleby/MetroBoy)


Amazing blog, glad I found it and the other articles look great too! Very interesting history of computing.

Can you use an arbitrary waveform generator to reproduce these sounds accurately, or are the differences nonexistent?

Do you think that good audio can be “solved”? I’ve heard of Harman tuning and people’s preference for it, and how good speakers are all tuned the same but high-end headphones aren’t. I remember this person who knew electronic design destroying the credibility of head-fi: https://nwavguy.blogspot.com/ What do you think of forums like Audio Science and his findings?


Well, you could feed the right digital values into the arbitrary waveform generator, but you have the problem of how to generate the values. As for your other questions, I don't really know anything about audio.


I like your blog a lot, thanks for existing.


awesome work and right up my alley. look forward to digesting


Electronic musical instruments, especially from the mid-80s on, make heavy use of customized and of course undocumented ICs, so revealing their inner workings is of immense value for emulation purposes when the original instruments are either unobtainium or too costly for normal people to purchase. Also, should technology one day allow small-run production of chips at home, sort of like 3D printing silicon, it might be worth having a library of interesting chips to replicate for fun and/or to repair old instruments.


Behringer's redesign of the 3340 VCO has been huge for bringing cheap analog equipment to the masses. I own their Neutron, which is a semi-modular kinda like the Mother-32 but with a few bonuses and more patch points. It's seriously impressive for being a home-grown instrument at a sub-300-dollar price point, and I'd probably choose it over most other instruments in its price class, like the Monologue or UNO.


Behringer's goal has always been to make stuff cheap enough that people can afford. Which is noble, except they do this entirely by ripping off other people's R&D and undercutting lots of people including smaller players.

Also Uli Behringer himself is completely mad and appears to be pretty chill with antisemitism. https://www.gearnews.com/uli-behringer-responds-to-the-corks...

tl;dr don't support Behringer, please.


The Neutron is an exception - it is one of the few Behringer products that is an original creation and not a cheap clone of a reputed instrument. Also, there is nothing like it anywhere near that price range. I bought a Neutron, but I would feel dirty buying a clone of something that is still in production.

Some background, for those who don't yet know the controversy surrounding Behringer clones: https://www.factmag.com/2017/04/08/behringer-minimoog-synth-...


Yep. I don't feel bad about them ripping off the Moog Model D when your only other option is to buy an authentic recreation at 10-15x the price.


Or downloading official Model D app made by Moog for $14.99

https://apps.apple.com/us/app/minimoog-model-d-synthesizer/i...

It sounds amazing.


That's not an analogue device though, so it's not a like-for-like comparison. Anyone can build a digital Model D these days, but mass-producing a piece of discontinued hardware at less than a tenth of the price is a different ball game altogether.


> Which is noble, except they do this entirely by ripping off other people's R&D and undercutting lots of people including smaller players.

Most smaller players are also ripping off "other people's R&D". The only difference is that they charge 5x or 10x what Behringer charges. An extremely small number of people are doing new stuff.

It's even worse with guitar stuff. There are several Tube Screamer clones that sell for $250-$300, while the original is still in production for $120. The constant complaining about Behringer's one that costs $20 is petty and privileged.

Not to mention Behringer has some pretty cool original designs. The DeepMind comes to mind.

Poor people also deserve to make music. You can bet I'll keep supporting them.


> Poor people also deserve to make music. You can bet I'll keep supporting them.

Get a free software solution.


I'd rather that poor people have an option. There's nothing wrong with Behringer.

Also, free software is nowhere near as good as dedicated hardware or as paid software, IMO.


quack, softsynths work very well.


The ripping off of designs is lame and enough reason to avoid them, but the 'antisemitism' smear comes from a poorly thought-out gag video[1] that's honestly more offensive to the French, if anything.

[1] https://m.youtube.com/watch?v=7mgsIjQXZ7c


I don't really care about the politics of it, I'm just shopping for cheap gear and they're the only ones pushing out quality synths at affordable prices. If Moog or Korg were actually competing in the low-end business there might be a different story, but I couldn't care less when Behringer's stuff sounds so good.


It sounds good because they are ripping designs off and mass producing clones for cheap.

In terms of build quality and feel there is a huge difference between a Moog or Korg and a Behringer.

Behringer gear is made with the absolute cheapest components available and that becomes obvious as soon as you touch it.


So they're the MFJ of the audio world?

I'm fine with that.


This is not really a competitor to the Behringer semi-modular analog synths that upset some people, but an inexpensive synth I'd highly recommend is the Arturia MicroFreak.


Back in the 00's, I toured as a sound engineer. Behringer turned up in all manner of touring racks.


To the extent they're doing vintage clones with circuits that are long past any patents, that's how the system is supposed to work. I don't see how anyone can complain about their Model D, for example. The original was released half a century ago.


They also routinely go after journalists and critics, in at least one case by using antisemitic tropes (as you noted).


FPGA kinda almost works like that for digital projects with some light analog, that is you can convert analog processes to a digital form and output using Delta Sigma modulation.


Are accessible FPGAs powerful enough for something like the Intel 8080?


Consumer-purchasable FPGAs can emulate systems up to the SNES, and soon the PS1 (https://www.youtube.com/watch?v=bo1GgF6X-7A). The MiSTer (https://www.retrorgb.com/mister.html) requires a DE10-Nano ($176.50) and needs a $60 128MB RAM module for most consoles.


>a PM signal will have little frequency change if the modulation is slow

This was one of the programming tricks on the DX7. If you set the final carrier to a low frequency you got a nice drifty chorus-like effect.
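The quoted claim falls out of calculus: for y(t) = sin(2πf_c·t + I·sin(2πf_m·t)), the instantaneous frequency is f_c + I·f_m·cos(2πf_m·t), so the peak pitch deviation is I·f_m and shrinks with the modulation rate. A tiny sketch with invented, illustrative numbers:

```python
# Peak frequency deviation of a phase-modulated sine:
#   y(t) = sin(2*pi*f_c*t + I*sin(2*pi*f_m*t))
# has instantaneous frequency f_c + I*f_m*cos(2*pi*f_m*t),
# so the pitch deviation is bounded by I * f_m.
def peak_deviation_hz(index, f_m):
    return index * f_m

fast = peak_deviation_hz(2.0, 440.0)  # audio-rate modulation: wide sidebands
slow = peak_deviation_hz(2.0, 0.5)    # sub-audio modulation: a gentle wobble
```

The same index that produces a full FM timbre at audio rate yields only a slow, chorus-like phase drift when the modulator is a fraction of a hertz.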


It is a shame companies do not donate the actual design documents to a museum of some sort, like the Computer History Museum.

Of course things get lost over the years. I wonder if companies could put their designs in escrow to be released after some number of years so they don't get lost.


This is basically the original logic behind patents. You trade making your invention public for legal protection of your exclusive right to that invention for some number of years.

It's too bad it doesn't work anymore (or maybe it never worked?). Stuff like this doesn't make it into useful patent documentation, while patents are issued for vague and overly broad "inventions" like "storing numbers in a computer".


The DX7 patent is actually pretty good: https://patents.google.com/patent/US4554857A

It accurately describes the architecture of the DX7 and how it's implemented. There's a whole lot that it doesn't explain, but it was very helpful for understanding the chip. (I used a couple of diagrams from the patent in my article.)


There's definitely a certain uniqueness to the legalese of patent titles. At times they're simultaneously so vague, yet so oddly specific. The Dave Smith quote in the footnotes of my DX7 technical analysis (it's linked in Ken's article) provides an interesting insight into how synth companies approached patenting: "If you have a lot of patents and somebody comes after you, chances are one of your patents will overlap enough with one of their patents that you can negotiate a deal so nobody gets hurt. Whereas if you don't have anything to offer and you have nothing in your stable of patents, then you're stuck"

Some of Yamaha's FM synth patents have equally strange names. On the subject of Yamaha's FM patents, it's a shame that there isn't more information available about Norio Tomisawa, the engineer responsible for most of these designs. His name is on all of the FM patents. He played a huge role in the development of modern synthesis.


I don't know why patents even exist at this point. Many times I have seen an obvious thing patented, something anyone sane wouldn't think of patenting. Most likely this is for PR, e.g. "we are using patented technology", but if you worked on software and then learned years later that someone has patented something your software does, it makes you feel uneasy, and there is no defence against that. It's one thing if companies are doing this for PR, but what if "their IP" gets taken over by patent trolls...


Patents can be challenged for being obvious.


If you are a big corporation with deep pockets.


Please forgive any ignorance on my behalf. Does work like this make it any more possible for someone to engineer an FPGA-based replacement? I don't know too much about FPGAs, I just imagine that seeing the arrangement of the gates on the chip makes its functionality a little more transparent, and this a bit more possible.


It would still be a large project to build an FPGA replacement, as someone would need to trace out all the circuitry. I've just scratched the surface. But at a minimum I hope to extract the ROM contents, the data path widths, and so forth, which will allow emulations to be more accurate.


Next level wizardry this, the speed with which this all came together is incredibly impressive, it took about as long to mail the chip as it took for Ken to do all this work.


Did this come out of the recent DX7 thread or was this in the works for a while?


I've been talking to Anthony since about forever, I repair DX-7's to keep them in play and we both had a partial disassembly of the DX-7 ROM (mine SER version and his the regular one). We compared notes, I mentioned I have a dead board sitting here and then we asked Ken if he would be interested in taking a look at the chips.


Yamaha DX7 - Famous Sounds Demo:

https://www.youtube.com/watch?v=BCwn26FePAo


Shout out for the DX9 which was quite a bit less expensive, bringing it within reach of a wider market. My brother had one when we were in high school.


Someone actually made a fan-site for the DX9: https://yamahadx9.com/

As their headline puts it: "The Surprising truth about Yamaha's most vilified FM Synthesizer The DX9"


I always regretted it. Didn’t quite have enough money for the DX7, but I should have held out till I did.


It’s time to go to DX Heaven https://youtu.be/X9L60BUux1k


On the topic of synth chips, does anyone have any good resource talking about why those big retro synths still exist?

There’s got to be a good reason I simply don’t understand for why a computer cannot replicate everything those massive pieces of furniture do.

The one thing I concluded is that their interface, with hundreds of knobs and potentiometers must be very nice to interact with. But can they not just house a computer inside?

I guess my curiosity and ignorance boils down to: why can’t computers generate all the kinds of signals that these big synths do?

Edit: a bulk thank you for all these helpful responses.


In principle they can; in practice, limitations on processing power have caused the sounds they produce to lack the richness of analog. They're getting better though.

The thing about analog is that you get a lot of random variation in the waveforms. A naive computer implementation will lack that variation. A better one will purposely add particular random variations. The really good ones, like Diva (which runs on a regular computer), will run SPICE-style simulations of the actual analog circuitry, but running that at full fidelity is still pretty processor-intensive.

There are lots of digital hardware synths that emulate analog. There are also hybrids, like Novation's Summit and Peak. These start with digital sound generation, but use an FPGA to run at a much higher sample rate (24 MHz instead of the typical 48 to 96 kHz). Then they convert to an analog signal that runs through analog filters, then convert back to digital for effects (reverb and such). The advantage of starting with digital here is that it can start with different waveforms than the ones easily generated by analog circuits.

Further muddying the waters, modern analog synths have components with a lot more precision than in the old days, so they also lack some of the random variation of the old stuff. That's great for staying in tune, but now synths like the OB6, which is pure analog, have added "vintage" features which artificially add the variations again.


Modern software simulations are very accurate for conventional VCO-to-VCA-to-VCF type sounds, but struggle with audio frequency modulation of control voltages. Aaron Lanterman includes an example of this in his excellent lecture series on analog synthesis:

https://www.youtube.com/watch?v=GZz8PJCfK1c&t=452s


That looks like a really interesting course.


One factor is analysis paralysis. On a computer you can have 100 different software synths, which can be daunting.

Having 2-3 quality hardware synths can force you to get things done and not get lost in minutiae.

Of course, there are also modern producers which are very effective with software only.


That is one of the biggest hurdles I have whenever I attempt to play around with fancy synths… lots and lots of options. Very easy to get completely lost in the weeds.


Analog has a lot of detail that digital doesn't. Real circuits are imperfect in all kinds of interesting ways. It's not just randomness, it's specific kinds of non-ideal operation - distortion, non-linearity, noise distributions, and other fine details.

It's trivially easy to make idealised digital models of VCO-VCF-VCA, but they sound sterile and boring. The more imperfections you model, the richer and more interesting the sound gets, but the more processing power you need.

Good models - especially good models of analog filters - are incredibly difficult to design. And even the best aren't as smooth as the real thing, because - for example - with a filter you may have to recalculate a load of coefficients for every sample.

A lot of digital models get in the ballpark now, but very few do better than that. And there's a fair amount of secrecy among DSP coders. There's no mystery about the basics, but the manufacturers like to keep the best models to themselves. So the big computer music tools - like Csound and SuperCollider - have third- or fourth-rate models that are a good few generations behind the latest code.

So it's still hard to get comparable sounds out of pure digital synthesis. The new M1 chips are fast enough for better models, but you'd need about 5X to 50X the processing power - and more research - for no-compromise digital replacements of vintage analog hardware.


I don’t have any resources for you, but I will say that there are a lot of synth emulations out there (for instance, google “dx7 vst”). Developers might be more or less obsessive about emulating the particular quirks of different hardware synths, but I can’t say how significant those details are. I guess it comes down to the listener’s ears.

Some people like having vintage synths because they are collectors, and some like the workflow surrounding them (i.e., the big front panel of knobs and faders you allude to, or the simplicity of plugging in audio and/or control voltage and/or MIDI cables).


I listened to this "shootout" video only two days ago. The different ones tested here all have their quirks. (I've been playing with Dexed as it has all the original features and is free). https://youtu.be/vMvGk_fcIo8


I'm not a sound engineer or anything, but I've noticed that dedicated audio equipment is really fast - there's always zero perceptible latency, even in a complex system that uses a bunch of different boards connected together. It's hard to make very low-latency software because of all the layers of abstraction between applications and the OS and various hardware buffers.


And software buffers! Those are usually much worse than the hardware ones.


    But can they not just house a computer inside?
A lot of modern modules _do_ just have a computer inside - for an open-source example, many of the Mutable Instruments modules use AVR or STM32 controllers:

https://github.com/pichenettes/eurorack


If you look at DAW and plug-in offerings, they have a lot of these in digital format. But you're right that having actual physical controls is very different. And I do think there will be a completely custom audio interface/keyboard/synth in the next couple of years (this is a side project of mine).


For digital synths, yeah, they can be fully emulated afaik. With the actual hardware unit, you get the knobs and keys, sure, but it's basically an embedded device: the OS and synth functions are tightly integrated, which means nothing competing for the scheduler, and latency is practically zero. Plus they come with all the standard inputs and outputs needed for performance over a PA. By comparison a laptop is a bit of a liability. Who knows when some system service is gonna activate and add latency, or cause a freeze, or stall the audio driver, or something. Plus you then have to lug around all the adaptors, cables, and external devices for the I/O that isn't built into a laptop. That matters most if you're a performing artist; in the studio it's not as big a deal.

For analog synths, well, they can also be emulated quite accurately, but the way they produce sound is just fundamentally different from digital synthesis. They're directly manipulating voltage to get oscillation, then shaping that signal to drive the speaker and produce sound. This process can be emulated, but it doesn't produce the same output in a physical sense. It's questionable whether a typical audience would notice the difference, I guess, but there's something quite raw and visceral about using an analog synth.


There is quite a big difference between a digital sound and an analog sound, and even among digital sounds, there is a big difference between a software program making sounds and a digital hardware circuit that is architected specifically for making particular types of sounds.


Thanks! What might I search for to understand this detectable and quantifiable difference (vs. Monster cable style subjective feelings about sound)?


It's mostly that hardware makers invested more time and money into the algorithms to make them sound more pleasant. Many software synths are one-man projects where a single developer has to do it all (DSP, UI, ...).

Hardware makers also have proprietary institutional knowledge, since they can have a couple of people who have researched synthesis methods for 10 years.

They also work with users to tweak the sound and interface to perfection.


For a look at what it takes for a clean room reimplementation of this same synth:

https://www.youtube.com/watch?v=XJ97iXQrqzw


A few points:

a) people like having the original thing, even if it can be replicated. (I think this generalizes fairly far across musical instruments, old ones are always coveted by some, and into many different topics)

b) Limitations of the circuitry etc that affect the sound can be surprisingly hard to precisely simulate. For cult machines that's probably something someone has done, but it's a lot of work.

c) as you said, physical interfaces are important, as are the restrictions of what an instrument can't do, which guide where people go with it

d) people will fight tooth and nail about b) and if a specific simulation is good or not - and often you can assume that the range of how the originals sound is actually also quite large (especially now, with the components having aged a few decades), so you'll always find someone who is sure the software doesn't sound exactly like the one specific device they compare it to.


This is cool shit. Hacker-approved.


The Play example generates horrible sound if you press play and drag the Modulation frequency ratio slider. Using Chrome 95.0.4638.69


In Firefox I only hear a single sine wave of varying amplitude/frequency, not a modulated one.


The Firefox implementation should work now; someone sent me a fix.


> The underlying problem is that multiplication is much harder to perform with hardware than addition, especially with 1980s-era technology. The solution is that the chip uses base-2 logarithms in many places because adding logarithms is equivalent to multiplying the values.

Maybe it would have been obvious to me if I had taken calculus, but this blew my mind.


Look at how multiplication is done with a slide rule.

https://www.sliderules.org/react/aristo_0901_junior.html


A similar trick is used for a different reason in statistical and probabilistic calculations -- for numerical stability rather than performance.

Suppose you have n probabilities p_1, ..., p_n. Each p_i is a real number in the interval [0, 1]. Often you want to multiply probabilities to compute prod_i p_i. Probabilities p_i can be tiny floating point values (very close to zero), and if n, the number of factors, is large, then the product will evaluate to zero due to underflow.

Software that processes probabilities often instead stores log-probabilities. Then the product prod_i p_i can be evaluated as sum_i log(p_i), which is more numerically stable.
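For example, in Python (toy magnitudes chosen to force the underflow):

```python
import math

# Multiplying tiny probabilities underflows to 0.0 in double precision...
probs = [1e-250, 1e-200]
naive_product = probs[0] * probs[1]              # 1e-450 underflows to 0.0

# ...but the sum of log-probabilities stays comfortably representable.
log_product = sum(math.log(p) for p in probs)    # about -1036.2
```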

Where this gets a bit trickier is if you instead want to compute the sum of probabilities, rather than their product -- e.g. to renormalise probabilities so they all sum to 1. If you have encoded the probabilities as log-probabilities, you now need to take the log-sum-exp [1][2], which will cause numeric overflow if done naively, and is also relatively expensive to compute, as it requires n exponentials and 1 log to combine n log-probs. log-sum-exp can be evaluated in a stable way by first doing a pass over the data to compute the max log-prob, then doing a second pass to take the exponential of each log-prob offset by the max log-prob, so each of the terms being exponentiated is at most 0.
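A minimal sketch of that two-pass approach in Python (the function name is mine, not from any particular library):

```python
import math

def log_sum_exp(log_probs):
    """Stable log(sum_i exp(x_i)): shift by the max so every exp
    argument is <= 0, avoiding overflow."""
    m = max(log_probs)                 # first pass: find the max log-prob
    if m == float("-inf"):             # all probabilities are zero
        return float("-inf")
    # second pass: exponentiate offsets, each of which is at most 0
    return m + math.log(sum(math.exp(x - m) for x in log_probs))

# Four probabilities of 0.25 sum to 1, so this is log(1) = 0, and the
# same code handles values that math.exp() alone would underflow on.
print(log_sum_exp([math.log(0.25)] * 4))
```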

There's also a streaming version of log-sum-exp that permits a single pass over the data [3] -- a max is calculated on the fly, and each time a new max is identified, the running sum is multiplied by a correction factor. I'm a bit suspicious that the branches in the streaming log-sum-exp might cause a performance impact when executing, although each exp calculation is so expensive that perhaps it doesn't make that much of a difference.

[1] https://en.wikipedia.org/wiki/LogSumExp

[2] https://blog.smola.org/post/987977550/log-probabilities-semi...

[3] http://www.nowozin.net/sebastian/blog/streaming-log-sum-exp-...
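A sketch of the streaming single-pass variant (my own rendering of the idea, not the code from [3]):

```python
import math

def streaming_log_sum_exp(log_probs):
    """Single-pass log-sum-exp: keep a running max m and a running
    sum s of exp(x - m); rescale s whenever a new max appears."""
    m = float("-inf")
    s = 0.0
    for x in log_probs:
        if x == float("-inf"):
            continue                        # probability zero adds nothing
        if x <= m:                          # the data-dependent branch
            s += math.exp(x - m)
        else:
            # new max found: rescale the old sum to the new reference
            s = s * math.exp(m - x) + 1.0   # the +1.0 is exp(x - x)
            m = x
    return m + math.log(s) if s > 0.0 else float("-inf")
```

The `x <= m` test is the branch I'd expect the predictor to worry about: it's taken almost always once a large sample has been seen.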


> I'm a bit suspicious that the branches in the streaming log-sum-exp might cause a performance impact when executing,

Since you're just checking whether the current sample is larger than the largest sample seen so far, you're very likely to find a "large" sample early that rarely gets updated. From then on, this branch will virtually always be false, and the branch predictor will make this fast. Unless the distribution is increasing (but not monotonically increasing) over time in an unpredictable way; in that case, the branch predictor will fail often and will be slow.

The branch predictor (and cache, for that matter) is a sort of Schrödinger's cat that makes programs both slow and fast at the same time, but you never know which until you benchmark it.


> you're very likely to find a "large" sample early that rarely gets updated. From then on, this branch will virtually always be false

Good point, provided there's a decent number of elements being reduced.

In the application I've been focusing on, many of the batched log-sum-exp reductions are over tiny arrays containing 1 to 4 log-prob elements. There's already branching to guard against the case where all elements are log-prob -inf (aka probability zero). I found it helpful to also branch to special-case the 1-element case, where the reduction is the identity operation, saving both an exp and a log. It's probably the case that there's branching inside the exp and log as well, so it doesn't make sense to get too myopically focused on that single aspect of performance.


It’s not too difficult to see.

E.g.: 2^3 x 2^5 = 2^8

Explicitly writing the powers as multiplications:

2x2x2 x 2x2x2x2x2 = 2x2x2x2x2x2x2x2

So multiplying powers of the same base is just adding the exponents.

Logarithms are the inverse of exponentiation.

Log2(N) counts how many factors of 2 are needed to multiply together to get N.

So Log2(2^x) = x

As per the above example:

Log2(2^3 x 2^5) = 3 + 5
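The same identity in code (a toy floating-point sketch; the actual chip uses fixed-point lookup tables for the log and exp steps):

```python
import math

def mul_via_logs(a, b):
    """Multiply two positive numbers by adding their base-2 logs:
    a * b = 2^(log2(a) + log2(b))."""
    return 2.0 ** (math.log2(a) + math.log2(b))

print(mul_via_logs(8, 32))   # 2^3 * 2^5 = 2^8 = 256
```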


Logarithms are taught before calculus where I grew up.


Calculus? Really? Logarithms are basic highschool math in my country…


Please don't put other users down, regardless of how math is taught in your country. We want curious conversation in which people treat each other well, even if they know less than you do. We do not want point-scoring and putdowns.

https://news.ycombinator.com/newsguidelines.html

Edit: you've unfortunately been posting unsubstantive comments and breaking the site guidelines with this account elsewhere, too. Can you please stop that so we don't have to ban you again?


We don't learn mahr here


Fun fact: in the UK they call it marhs!


Such a shame that grammar wasn't a basic thing back there.


Or tact.


That is some seriously clean routing.


This is ultrasound!



