Reverse-engineering precision op amps from a 1969 analog computer (righto.com)



Recently I became interested in DIY audio gear, and I've spent a lot of spare time on Rod Elliott's ESP audio pages. One project is https://sound-au.com/project07.htm, a discrete op amp. It's been a while since my uni days, so I bought a bunch of PNP and NPN BJT transistors from Digikey and spent a few days at my bench tinkering with the circuit: different gains, Miller caps to set stability and bandwidth, how input and output impedance affect gain, etc. Very satisfying seeing your little mess of transistors come to life and amplify a signal as designed.

One thing it taught me was how the different sub-circuits come together to make a power or operational amplifier: the long-tailed pair (aka differential amplifier), current mirrors, current sources, the Vbe multiplier, class A stages, class AB stages, etc. Another great article, which builds on it, is https://sound-au.com/amp_design.htm. This is where you get to see how these building blocks come together. The same basic topology is present in most amps, be they little op amp chips or big beefy audio power amps: input -> diff amp -> class A gain -> class AB buffer -> output.

Interesting to learn that there isn't much difference between a little TL071 and my Rod Elliott P3A.
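
A quick numeric sketch of why that is (Python; the numbers are illustrative, not measured from either part): both are high-open-loop-gain amplifiers wrapped in negative feedback, so the closed-loop gain G = A/(1 + A*beta) is set almost entirely by the feedback network, not by the amplifier itself.

    # Closed-loop gain of a negative-feedback amplifier: G = A / (1 + A*beta)
    def closed_loop_gain(a, beta):
        return a / (1 + a * beta)

    beta = 1 / 23.0  # feedback divider for a target gain of ~23, typical for a power amp

    for a in (1e3, 1e5, 1e7):  # open-loop gains from mediocre to op-amp-grade
        print("A = %.0e -> closed-loop gain = %.4f" % (a, closed_loop_gain(a, beta)))

Once A is large, the result converges to 1/beta regardless of whether the gain comes from one little die or a board full of output transistors.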


Since we'll never have the luxury of laser-trimmed resistors or matched transistors, an integrated op amp usually outperforms discrete designs, at least for low-power applications. But yeah, it's a lot of fun to build a discrete version of an integrated component from scratch and see how it works.

BTW, NwAvGuy, the audio guru who has since vanished, had some interesting criticisms of the audiophile community's dismissal of op amps: https://nwavguy.blogspot.com/2011/08/op-amps-myths-facts.htm...


Short of laser-trimmed resistors, there's always hand-sorting. Even today, that's how I've seen it done in commercial op amp characterization. Tests like CMRR (common-mode rejection ratio) require precisely matched input resistances, or the result is thrown off by the mismatch.
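
A back-of-envelope sketch of how hard that limit bites (using the standard worst-case approximation for a four-resistor difference amp, CMRR ~ (1 + G)/(4*tol), where G is the stage gain and tol the resistor tolerance):

    import math

    def cmrr_floor_db(gain, tolerance):
        # Worst-case CMRR limit from resistor mismatch alone
        return 20 * math.log10((1 + gain) / (4 * tolerance))

    for tol in (0.01, 0.001, 0.0001):  # 1%, 0.1%, 0.01% resistors
        print("%.2f%% resistors -> CMRR floor ~ %.0f dB" % (tol * 100, cmrr_floor_db(1.0, tol)))

With 1% parts you can't measure past roughly 34 dB at unity gain, which is why hand-sorting (or laser trimming) is unavoidable.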


I played with a large EAI analog computer at my first job, modeling a few simple equations and building an oscillator. They had bought it many years before to model guidance equations for missiles. Maybe one would be useful today for trying out guidance algorithms for quadcopters? In some cases an analog circuit can do the same things as a digital circuit and use less power. Insect-sized flying machines might also benefit from analog guidance circuits.


I wonder how easy and cheap it would be to build a comparable analog computer using modern ICs. I also wonder if analog computers could make a comeback. Training a neural net seems very much like an analog computation to me...


If you want an accurate analog computer, you'd still be paying a pile of money for precision components. (0.01% resistors cost about $10 on Mouser.) And even with modern ICs, you're not getting a Moore's Law scale-up on your op amps. Plus, the interconnect doesn't scale nicely.

That said, there are claims that analog computers will make a comeback: https://spectrum.ieee.org/computing/hardware/not-your-father...


A good neural net simulator would work just fine with imperfect components.
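
A toy simulation of that claim (made-up network and noise figures): perturb every weight by a few percent, the way imperfect analog parts would, and the output barely moves.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 2))  # stand-in "trained" weights
    W2 = rng.normal(size=(1, 8))

    def forward(x, tol=0.0):
        # Forward pass with multiplicative component error of +/- tol on every weight
        w1 = W1 * (1 + rng.uniform(-tol, tol, W1.shape))
        w2 = W2 * (1 + rng.uniform(-tol, tol, W2.shape))
        return np.tanh(w2 @ np.tanh(w1 @ x)).item()

    x = np.array([0.5, -1.0])
    print("ideal:   ", forward(x))
    print("2% parts:", [round(forward(x, tol=0.02), 4) for _ in range(3)])

The network is a smooth function of its weights, so small component errors come out as small output errors, not failures.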


This is basically what a brain is, right?


Far less than you might think. For example, the wide range of neurochemicals would be largely unnecessary if that were the case.

It’s like how a city contains many buildings, but it also contains roads etc.


Different neurotransmitters provide different types (speed, response function) of activation.


The same neuron may release different neurotransmitters based on stimulus, but it’s not universal.

https://www.ncbi.nlm.nih.gov/books/NBK10818/


In my view it has become a cost/performance decision. There are still some cases where analog signal processing is the only way, or the cheapest way, of doing something. There will always be a place for the electronic equivalent of "last mile delivery," at least for getting stuff in and out of the analog domain.

One of my projects at my day job was an analog front end for a digital system, where part of the analog circuit was attached to a sensitive optical detector held at liquid nitrogen temperature. But the majority of signal processing was done in the digital domain.


I agree. A couple of years ago I was asked to design a circuit that could read a voltage, perform a few calculations, multiply the result by another voltage, and create two phase-shifted signals from that product. It was a simple design in that it could be reduced to high-school trigonometry. However, finding an efficient implementation was not trivial.

After spending a lot of time looking at parts and considering multiple approaches, the simplest answer was to do the slower aspects in the CPU and use a 4-quadrant Multiplying Digital/Analog Converter (MDAC) to do the heavy lifting (signal multiplication).

ISTR that doing it all digitally would have required a processor with at least a 40 MHz clock, and probably a separate 200 MHz clock to implement a 1-bit DAC. And I would still have needed extra circuitry, since one of the inputs (and both outputs) could swing +/-10 V.

Not to say that the all-digital approach wouldn't have worked, but the development cost would have been overwhelming for something that probably won't sell more than 200 units or so. By using an MDAC and a couple of opamps, the design was reduced to an Arduino "shield."

Have to say, it works pretty damn nice :-)
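
For the curious, a rough sketch of the division of labor (the bipolar offset-binary transfer function below is one common 4-quadrant MDAC convention; check the actual part's datasheet):

    import math

    N = 12  # DAC resolution in bits, assumed

    def mdac_output(v_in, code):
        # 4-quadrant multiplying DAC: analog input times a signed digital gain
        return v_in * (code / 2.0 ** (N - 1) - 1.0)

    def phase_codes(theta):
        # Slow CPU-side math: turn an angle into two quadrature gain codes
        to_code = lambda g: int(round((g + 1.0) * 2 ** (N - 1)))
        return to_code(math.sin(theta)), to_code(math.cos(theta))

    sin_code, cos_code = phase_codes(math.radians(30))
    v_in = 7.5  # somewhere in the +/-10V input range
    print(mdac_output(v_in, sin_code), mdac_output(v_in, cos_code))  # ~3.75, ~6.5

The CPU only has to keep up with the angle updates; the MDAC does the actual signal multiplication at analog speed.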


Look into field-programmable analog arrays (FPAAs). It's an FPGA for op amps instead of gates.


It's questionable how valuable this would be. Analog computers were just a precise implementation of technology available at the time. To multiply more accurately, you bought more accurate discrete components. Now that we have digital computers, we are not constrained by the physical properties of the components of the computer -- you can use as many bits as you need to get the desired accuracy in your computation, and it doesn't matter if every component in the computer has a calibrated accuracy.
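
To make that concrete (toy example): in software, precision is just a parameter you turn up, with no component anywhere needing a tighter tolerance.

    from decimal import Decimal, getcontext

    # sqrt(2) to as many digits as you care to ask for
    for digits in (8, 32, 128):
        getcontext().prec = digits
        print("%3d digits: %s" % (digits, Decimal(2).sqrt()))

An analog multiplier, by contrast, is stuck at roughly its component tolerance (0.01% at best) no matter how long you let it settle.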

(Work on better manufacturing is still relevant for digital computers, of course. But the work leads to lower-power devices, not more accurate computations. Years ago, we used 5V logic because the average manufacturing tolerances gave acceptable differentiation between 5V and 0V. As manufacturing got better, 3.3V worked fine, then 1.2V, and now even lower. The result is wasting less power.)

There are certainly physical processes at work that limit how "good" a digital computer we can build... the question is whether there is something about manufacturing analog computers that scales better than digital ones, and whether analog computers will become competitive again once we hit the limits of die size and lithography feature size. I kind of doubt it. But I know pretty much nothing about this area.


For simple adder/multiplier-type circuits, analog computers are inferior to digital ones for most uses because of the additive error that accumulates over every iteration of a loop.

I imagine there are some analog circuits that would directly solve complex polynomials or perhaps differential equations without iteration, and those would not suffer the error-accumulation problem. I don't know how common such equations or systems are in scientific or other applications.
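
The accumulation is easy to demonstrate (toy numbers; a 0.1% random gain error per pass, chosen for illustration):

    import random

    random.seed(1)
    tol = 1e-3  # 0.1% error per analog multiply
    true_val = analog_val = 1.0

    for i in range(1, 101):
        true_val *= 1.01                                      # exact digital iteration
        analog_val *= 1.01 * (1 + random.uniform(-tol, tol))  # same op through a lossy stage
        if i % 25 == 0:
            err = abs(analog_val - true_val) / true_val
            print("after %3d iterations: relative error ~ %.2e" % (i, err))

With independent errors the drift grows roughly as sqrt(iterations), while a digital loop repeats exactly. A circuit that solves the equation directly in one shot, as described above, never pays that tax.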


I once came across a paper talking about something similar (https://dl.acm.org/citation.cfm?id=3001164). I can't access it right now and don't remember all of it, but it was about using analog hardware to run a convolutional network for computer vision before converting the camera's output to digital.



Now that analog FPGAs are becoming a thing, I expect someone to replicate one with them in the next couple of years.


Training a neural net with analog circuitry would mean an extra challenge in the reproducibility of results.

That's already a challenge today: tiny changes make for large differences in the performance of the network.


Yes, but reproducibility is not always necessary.

For example, drawing a simple anti-aliased line on a computer produces results that depend (at the pixel level) on the software stack and/or graphics card used. Nobody ever complained (much) about that.

(Then again, it may allow your computer to be fingerprinted more easily).


The thing about transistors is that it's easy to make many transistors on the same IC behave in set ratios of each other (by changing the transistor width-to-length ratios, etc.), but getting an absolute performance metric exactly controlled is fairly difficult (not to say it's impossible, just something you try to avoid because of cost, reliability, and yield).

I'd imagine any hypothetical analog implementation of an NN will be fairly difficult because of that.

Also, achieving linearity cheaply (especially for multiplication) is very difficult.
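
To put a number on the linearity problem: the core analog building block, a bipolar differential pair, has a tanh transfer curve and is only linear for inputs small compared to 2*Vt (~52 mV). Ideal-transistor sketch, no mismatch:

    import math

    VT = 0.026  # thermal voltage at room temperature, volts

    def diff_pair_output(v_in, i_tail=1e-3):
        # Differential output current of an ideal BJT long-tailed pair
        return i_tail * math.tanh(v_in / (2 * VT))

    for mv in (5, 25, 100):
        v = mv / 1000.0
        linear = 1e-3 * v / (2 * VT)  # what a perfectly linear stage would give
        print("%4d mV in: actual/linear = %.3f" % (mv, diff_pair_output(v) / linear))

Output is within ~0.3% of linear at 5 mV but down to about half by 100 mV; getting an accurate multiplier out of that (a Gilbert cell plus trimming) is where the cost goes.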


Doing it with discrete components is of limited use, but there are various companies working on doing it on-chip.

(Don't forget that analog systems have limited gain-bandwidth product and noise immunity.)


Quantum computers are analog devices! And, yes, folks are using them for training neural nets.


> Quantum computers are analog devices!

I've seen this used disparagingly before: quantum is "just analog." Is that really the case? Is quantum computing simply the ultimate miniaturization of analog computing?

Reading the recently disclosed "quantum supremacy" paper from Google[1] that was somehow leaked by NASA, one could be convinced that quantum is "just analog." The paper deals in "resonance", "coupling", "filter", and "attenuator"; it reads like the description of a superheterodyne transceiver: analog RF.

This reverse-engineering story points out that the speed of analog computation comes from the op amps effectively processing in parallel. That aligns with descriptions of qubits also working in parallel.

One thing is certain: classical analog computers are vastly easier to understand. An op amp is simply an electrical function. So now I'm really intrigued: is quantum computing really "just" faster and smaller analog?

[1] The actual PDF has appeared: https://drive.google.com/file/d/19lv8p1fB47z1pEZVlfDXhop082L...


> The paper deals in "resonance", "coupling", "filter" and "attenuator"; it reads like the description of a superheterodyne transceiver; analog RF.

This is the scaffolding for measuring and manipulating quantum states. Quantum computing itself is not "analog" in the original, signal processing meaning of the word: the states are not isomorphic analogies for modelled physical processes.


It's true that qubit states are inherently quantized. However, the control circuitry, couplers or gates, and often the qubits themselves, are analog devices. The "un-sexy" truth is that calibration is one of the most difficult parts of producing a useful quantum computer.


Sure, and any PC has tons of analog circuits too, for example the speaker output. But the calibration and scaffolding in a quantum computer isn't a model of anything, not in the way the op amp board in this article is an analog of an integral.


Digital circuits also process everything in parallel. It's only CPUs (i.e. one particular thing you can build out of a digital circuit) that have trouble with it, really...

Quantum computing is (somewhat) analog in nature, but it's not "just analog". In theory, taking advantage of quantum physics in the computer gives you an improvement in asymptotic complexity (not just a constant factor speedup), one that you can't get any other way.
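
Rough arithmetic on the asymptotic point, using Grover search (~sqrt(N) queries vs ~N classically) as the standard example:

    import math

    for n in (10**6, 10**9, 10**12):
        print("N = %14d: classical ~%d queries, Grover ~%d" % (n, n, math.isqrt(n)))

The gap itself grows with N, which is something no constant-factor speedup, analog or otherwise, can reproduce.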


Nice read! I keep wanting to learn more about the hardware to build something myself. Only simulations so far. Analog computers are absolutely fascinating. I feel like there is a lot of lost lore in that field.


This reminded me of TI's acquisition of Burr-Brown, analogous to Oracle buying Sun. RIP.

Some early tilt rotor aircraft used analog computers for stability control.


Interesting comparison. What exactly did TI do to BB that warrants such a ghastly comparison?

I know BB well thanks to tinkering with their isolation amplifiers for an old data acquisition project. I'm also familiar with them from working on equipment from the '70s/'80s with the really old BB 3451 monoblock isolation amplifiers.


TI is antagonistic to all but the largest customers. Trying to get access to manuals, instruction sets, etc was a pain in the 90s. Devkits commonly cost >$500, even for the smallest of parts.

There is a big memory hole, at least among the people I interact with, around the personalities of the various semiconductor companies: nerdy (Maxim, XMOS), dorky (Green Arrays), serious (ST), irreverent (Fairchild), sloppy and fun (Microchip). Companies like TI are in it for the money; others have a passion for engineering and solving civilization's problems.



