Hacker News
How to Set Up an OpAmp Circuit to Do Complex Mathematics (dummies.com)
74 points by mindcrime 37 days ago | 65 comments

My dad designed Frequency Response Analyzers in the 60's and 70's that used OpAmps to compute Fourier Integrals and then "solve the right triangle". That is, relative to a sine wave it generated, it would take a return analog signal, multiply by sine and cosine waves, integrate both, and then take the square-root of the sum of the squares and also the arctangent of their ratio, to finally output amplitude and phase-shift of the return signal. All as analog voltages; all with OpAmps (and a tiny bit of TTL to turn the integrators on and off). It simply wasn't feasible to do this digitally; it's not just that doing a real-time FFT in the digital sphere was cost-prohibitive at the time, but also the A-to-D converters weren't fast enough to keep up (at the required accuracy).
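For anyone curious, the multiply-integrate-and-solve-the-triangle pipeline described above is easy to sketch digitally (a toy reconstruction, not the original instrument's design; the signal and values below are made up):

```python
import math

def measure_response(signal, freq, sample_rate):
    """Multiply the return signal by sine and cosine, integrate both,
    then 'solve the right triangle': sqrt of sum of squares -> amplitude,
    arctangent of the ratio -> phase shift."""
    n = len(signal)
    s = c = 0.0
    for i, v in enumerate(signal):
        t = i / sample_rate
        s += v * math.sin(2 * math.pi * freq * t)
        c += v * math.cos(2 * math.pi * freq * t)
    s *= 2.0 / n   # averaging recovers the in-phase component
    c *= 2.0 / n   # ...and the quadrature component
    return math.hypot(s, c), math.atan2(c, s)

# A 50 Hz return signal with amplitude 2.0 and a 30-degree phase shift,
# sampled for exactly one second (50 whole cycles):
rate = 10000
sig = [2.0 * math.sin(2 * math.pi * 50 * i / rate + math.radians(30))
       for i in range(rate)]
amp, ph = measure_response(sig, 50, rate)   # amp ~ 2.0, ph ~ 30 degrees
```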

Typical users went from aerospace and automotive ("will our wing or chassis oscillate badly when we hit a thermal or pothole"?) to audio (that's how they would make those Frequency Response charts for speakers) to hard disk drive head servos ("if we send the head from track X to track Y, will it get there directly, or "ring" -- slightly overshoot and then overcorrect for a while?").

One fun detail: eventually they had to encase their OpAmps in special little heaters that would keep them at a constant temperature, to avoid drift as the ambient temperature changed.

You can still occasionally buy one surplus on eBay for a few hundred dollars; they were about $10k new (in 70's dollars).

That sounds _extremely_ interesting for people who are into control theory! Do you happen to have schematics?

I think so; I'll have to look through storage. Drop me a line at {myid} at yahoo.com.

Let me point out that the description here is as understood by a pure-digital person: I didn't seem to inherit the analog gene, and as a teen couldn't grok his deeper explanations ("What do you mean 'the imaginary part of the signal'? We're in the real world; the electrons don't have any imaginary components!").

Imaginary numbers are pretty easy to understand, once you realize a complex number is just a pair of real numbers with a special multiplication rule, and that all "imaginary number" math is equivalent to linear algebra on those 2-vectors.

Roughly speaking, you need two degrees of freedom to represent a circuit: the voltage and the current. Instead of throwing "two numbers" around all the time, we create a complex number.

We call the two parts of the number "real" and "imaginary", but really... they're roughly lining up with "voltage" and "current". We just call them "real" and "imaginary" because that's what all the math guys call them.

A "purely imaginary" number in phasor world is when the current is +/- 90-degrees out of phase with the voltage. (180-degrees out of phase means the current is simply negative, going backwards).

A "purely real" number happens at 0-degrees (positive current) or 180-degrees (negative current). "Purely Real" numbers line up exactly to all of the DC-voltage experiments from beginner-level electronics.

As it turns out, any circuit made up of resistors, capacitors, and inductors can be analyzed using "simple" addition, subtraction, multiplication, and division with complex numbers.


The above is a gross simplification of what's going on. More details can be found here: https://en.wikipedia.org/wiki/Phasor
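As a quick illustration of the "simple arithmetic with complex numbers" claim, Python's built-in complex type does phasor arithmetic directly (the component values below are arbitrary):

```python
import cmath
import math

# Impedances in phasor form: a resistor is purely real, an inductor is
# +j*w*L, a capacitor is 1/(j*w*C). In series they simply add.
def series_rlc_impedance(R, L, C, w):
    return R + 1j * w * L + 1 / (1j * w * C)

R, L, C = 100.0, 10e-3, 1e-6            # 100 ohm, 10 mH, 1 uF
w = 2 * math.pi * 1000.0                # driven at 1 kHz
Z = series_rlc_impedance(R, L, C, w)
mag, phase = cmath.polar(Z)             # |Z| and the voltage-current angle
```

At 1 kHz this particular network comes out with a negative imaginary part, i.e. it looks capacitive: the current leads the voltage.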

For my control systems class several years ago I created a PID controller with just OpAmps. Each portion had a switch to enable/disable to see the results of P+I, P+D, P+I+D etc. The output from each portion could be amplified (controlled by a trimpot) and then was summed (also by an OpAmp) and then went to a pair of transistors to drive a DC motor on a Plexiglas board with a pointer and 0-180 degrees marked out. The input was a potentiometer with a knob to set the desired angle. Pretty fun project. Others in the class decided to do it digitally with a micro controller but seeing it all done with analog components was really cool!
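The analog board described above maps onto the standard PID terms; here is a minimal digital sketch with the same per-term enable switches (the gains and the toy "plant" are invented for illustration):

```python
def make_pid(kp, ki, kd, dt, use_p=True, use_i=True, use_d=True):
    """PID with per-term switches, like the switch-and-trimpot board
    described above; the enabled terms are summed at the end."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        out = 0.0
        if use_p:
            out += kp * err
        if use_i:
            out += ki * state["integral"]
        if use_d:
            out += kd * deriv
        return out
    return step

# Drive a toy first-order "motor" (rate proportional to drive) to 90 degrees:
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(2000):
    angle += pid(90.0, angle) * 0.01
```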

I did something similar with op amps to implement a PID controller. I had a circuit design class in college where each week you would have some new thing to design. One of the assignments was to develop a laser that would find and then track a solar panel. The class had a contest to see which design would track the center of the panel the best. Fun stuff!

You made a servo.

Yes that's true, just wanted to explain the mechanism behind it. Just a note, I don't think most hobby type servos implement full PID control, though many industrial ones do I am sure.

Hobby servos often have controllers far more complex and accurate than PID. Something taking into account the gearbox and motor inertia, input voltage, etc. And not just tuned with high pass, low pass and gain filters.

Is that true? I know there are more optimal control schemes than PID IF you know the parameters of the load, but with a servo you could connect anything to it -- it's not just the gearbox and motor -- so how would it handle that? I would love to see some implementation code or a diagram of a hobby servo controller. Also, my understanding is that PID is optimal IF you don't know about the load, but maybe that is wrong; it's been a while.

My impression was that hobby servos used basically just PI control and were heavily damped so you don't really get overshoot, but again I don't really know.

The cheap, older 'analogue' servos were just PI, but the new 'digital' servos have much more advanced control. They actively reverse power to the motor when getting close to the target to extract regen power from the motor inertia while still getting a faster response. Yes, you can get this from PID, but you can't keep it at full reverse right up to the moment it hits the target without getting nasty oscillations as the voltage changes.

Analog electronic computers actually offered significant advantages in precision over their digital equivalents for much of the mid-20th century, due to limitations imposed by slow clock speeds and small word sizes. Calibrating them was non-trivial, though. https://en.wikipedia.org/wiki/Analog_computer#Modern_era

In case anyone enjoys reading about the history of control systems, Between Human and Machine: Feedback, Control, and Computing before Cybernetics by David Mindell is so much fun to read.


Interesting, especially in light of the "modern" view where the prevailing narrative is that analog computers are (in general) less precise, but faster. And, of course, less flexible, but I think that goes without saying.

I am curious to what extent an analog/digital hybrid can blend the best of both worlds, especially for something where the speed <-> precision tradeoff may somewhat favor speed. Like training neural networks.

Everything is analog when you look at a small enough time scale. "Digital" is an abstraction on top of analog, not a substitute.

It appears that digital is king for modern calculations. However, digital exists only in the ~millivolt to 5V range; it's difficult to get a purely digital circuit to "sense" microvolts or nanovolts.

That's where analog comes in. You can shape and compute on microvolt and even nanovolt-level signals before upscaling them to millivolts so that the CPU can handle the rest.

In general, digital RAM will hold a value more reliably for a longer period of time than an analog storage device (aka: a capacitor). True, RAM is implemented using capacitors (DRAM needs to be "refreshed" as those capacitors leak charge).

It's easy to "refresh" a capacitor if you only care about two states: high and low. If you're analog, however, and care about microvolts / nanovolts, it's impossible to "refresh" capacitors.

After all, what does a capacitor at 5000uV really mean? In digital, you can refresh it back to 6000uV every few seconds. In analog, you don't know if part of the information has leaked out or not.

At the boundary between analog and digital, you typically need to provide an anti-aliasing filter.

If you are measuring a differential signal with a large high-frequency common-mode component, then it also makes sense to perform the subtraction in analog. Otherwise the inevitable small phase shift between the sampling of the two channels in your ADC will end up coupling the common-mode signal into your difference.

While what you say is true, it's amazing how good ADCs are in the modern era.

3 Gigasamples / second: http://www.ti.com/lit/ds/symlink/adc07d1520.pdf

The fastest OpAmp I can find is this 18GHz GBP: http://www.ti.com/lit/ds/symlink/ths4303.pdf

At 3 GHz, the OpAmp would only offer a gain of 6, which isn't really enough for much accuracy.

Granted, the OpAmp is like $5 and the ADC is hundreds of dollars (and the ADC is only 7-bit accurate)... but the digital world has gotten scary fast and scary good.

That's why oscilloscopes, even GHz oscilloscopes, are being made with digital technology today.

Hell, this crazy product raises eyebrows: http://www.digikey.com/product-detail/en/analog-devices-inc/...

26 Gigasamples/s (Nyquist of 13GHz). I mean, 3-bits sucks, but holy crap is that fast. A 26GHz ADC would be able to perform digital-filter analysis on 2.4GHz Bluetooth and WiFi without any aliasing whatsoever.

Like anything, it's a cost tradeoff. DSP can be much cheaper than active analog for many frequency ranges. You can certainly oversample your way to a cheaper antialiasing filter in many applications. But you can't ever eliminate it entirely.

That exists. It's called mixed-signal ASICs. They mix digital and analog components.

An interesting point when using Op-Amps: doing linear operations (addition/subtraction/multiplication by a scalar/integration/etc) is pretty easy (they are "linear devices").

Doing multiplication of two signals, division, exponentiation, etc. is actually a little more difficult. Typically, these are done through log amplifiers[0], of which there are plenty of monolithic versions.

[0] http://www.analog.com/media/en/training-seminars/tutorials/M...
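The trick a log amp enables is the identity ln(a) + ln(b) = ln(ab): take logs, do the multiplication as an addition (a linear op-amp stage), then antilog. A sketch of the math only, not of any particular part:

```python
import math

def log_amp_multiply(a, b):
    """Multiply two positive 'voltages' the log-amp way: log, sum, antilog."""
    return math.exp(math.log(a) + math.log(b))

def log_amp_divide(a, b):
    """Division becomes subtraction in the log domain."""
    return math.exp(math.log(a) - math.log(b))
```

Real log-amp circuits only work over a limited dynamic range and for positive inputs, which is one reason this is "a little more difficult" than the linear operations.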

Back in the '80s my grandpa worked for Northrop flying target drones he had designed the guidance systems for back in the '70s. He hated digital computers -- thought there was no way digital, with all its ones and zeros, could ever be as accurate as the continuous values in the analog computers he built with OpAmps. (He also didn't like seat belts or the UN.)

Well, back then analog circuits were likely faster and more accurate. There are lots of textbooks that show you how to get 6.5 digits (21 bits) or 7.5 digits (23 bits) of accuracy using normal parts with 1% tolerance. These circuits would operate all the way to 100MHz and react to impulses within nanoseconds... even way back in the 80s... and could be mass-produced for only a few dozen dollars.

You know, back in the era when computers were 33 MHz (that's 30 nanoseconds per clock!) with 16-bit adders and cost tens of thousands of dollars.

Today, things are different. We have 2GHz octa-core computers in our pockets with 4GB of RAM. Analog circuits have improved, but not nearly as much as digital circuits.

The primary issue with analog is that we've hit the noise floor. Getting more than 7.5 digits of accuracy on all calculations means suppressing noise roughly 150 dB below full scale. Case in point: 7.5 digits of analog accuracy on a 5V circuit means resolving to +/- ~150 nanovolts.
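A back-of-the-envelope check of those figures:

```python
import math

full_scale = 5.0                            # volts
digits = 7.5
resolution = full_scale / 10**digits        # smallest resolvable step
db_below_full = 20 * math.log10(resolution / full_scale)
# resolution is about 158 nV, i.e. 150 dB below the 5 V full scale
```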

At those minuscule levels of voltage and current, simple heat, airborne static electricity, sound, and physical movement can all disturb your calculations. (Crystal oscillators, for example, are piezoelectric: tiny little microphones that pick up sound and movement and convert them into voltages! Every PCB trace is a potential antenna that may pick up a stray radio signal. Things get hard...)

Voltages are connected to the real world, after all. At some point, the circuit can "feel" you breathing on it (condensation in your breath changes the humidity, which changes the resistance of the surrounding air; extra heat shifts the resistance of various components).

Yeah. I remember in the late 90's early 2000's patiently trying to explain in so many online forums why a simple feedback control with a couple of opamps or even a single transistor was a faster, cheaper & better approach than throwing a microcontroller and digital circuitry at many a n00b's problem.

Today? 90% of those problems would indeed be solved much more easily with a $0.50 microcontroller. Raw speed and price drops pretty much eliminated the benefits of analog circuitry in most applications.

Yeah, today it's the complete opposite. I see so many people on the internet raging about how using an Arduino is stupid when you can do the same thing with standard electronic parts. It's faster and easier to program something than it is to design a circuit, especially when you take debugging into account. Not to mention the fact that microcontrollers are general purpose, so you can have them sitting around ready to go.

It's pretty much impossible to maintain a stock of any other electronic bit you need, other than maybe resistors. I'm not sure why you'd want to either. I have a ton of ICs left over from random projects. I can never find them when I need them, and even if I can find them again, I have to spend a bunch of time re-reading the datasheet.

I've realized that you can accomplish almost anything you'd like with a microcontroller and some mosfets. It might be more elegant to use a ripple counter if you have to drive a whole bunch of elements, but I'd rather just chain a bunch of microcontrollers together.

IMO, you should learn Verilog and use CPLDs for glue-logic scenarios (like ripple counter or whatnot).

Some 74-series chips are still unique and useful. The 74LV4060, for example, is a good stable clock solution. But your typical "AND-gate" and "XOR-gate" chips seem completely obsoleted by CPLD + Verilog / VHDL.

The ispMach 4000ZE costs $1.20 and has a current draw of ~hundreds of uA at 1MHz. Why use a 7400 (NAND gate) ever again?


The main problem I have with CPLDs is that they seem to fall into a space much like FPGAs, where there's very limited open source tooling for them. And I'm kind of an open source ideologue, so I try to avoid anything where there isn't a completely open source toolchain.

Then again, maybe I just haven't looked hard enough.

Can you explain your perspective on open source ideology as it applies to hardware design? I see people complain every so often about lack of open source tools for FPGAs and CPLDs and it doesn't make sense to me.

Most people seem happy to use a chip that performs the exact functions they want even though it's completely closed/proprietary. But they seem to balk at using a programmable chip where they can choose the functionality but the software that lets them do so is closed/proprietary.

It's a combination of things. For starters, I generally feel that openness in terms of computing, and technology in the more general sense, is a Good Thing for the world at large. The more things are open source and unencumbered by various legal mechanisms (patents, whatever), the more people who can use those tools, build on them, innovate, and participate in the further advancement of technology. Think "virtuous circle".

Beyond that, when tools are Open Source it makes life easier and takes out friction for us as individuals. I hate the idea of ordering up a fancy new chip, getting excited about it, and then finding out that I have to pay a zillion dollars to download and use the programming tools. Or I get the tools and find out that there's some weird "field of use" limitation in the Terms and Conditions. Or worse, I lose my last copy of the software, the manufacturer goes out of business, and then later I find a few of those chips in my parts box and realize I will never be able to use them because I no longer have access to the software. Or maybe I still have the software, but it has a bug that nobody can fix because nobody has the source code.

That we still have to use some chips that are proprietary to some extent is something I accept out of pragmatism, but by the same token, I support efforts like OpenCores, RISC-V, etc. that may eventually lead us to reasonably usable CPUs that don't need binary blobs and aren't full of undocumented features, etc. At least one can hope.

For me, it's countering subversion, being able to fix the tool, being able to extend the tool, and increasing labor supply of contributors with free or cheap tooling vs $100k a seat for some.


The main thing is that the low-level details of macrocells vary from company to company. And those companies are protective of their designs, so they don't want people figuring out how their devices really work.

So it is a tough spot. Still, Verilog allows you to move your "code" from company to company (Xilinx, Altera, and Lattice). So it's sufficient.

Thanks for the recommendation. I'll have to try that out. I do have an absolute mess of external clocks sitting around, and a few other ICs that I really like. I like building things, though, and I build them in my free time, and I build them for myself. Tossing a micro in allows me to use my time and my skills better.

Hobbyist micro-controllers also come in through-hole packages. I can solder pretty much anything that doesn't require a hot-plate, but it's a lot easier to prototype with through hole.

For instance, one of the things I'm building right now is a 4x8 array of 6"x6" flip-flop sign elements that I got as surplus from the highway department. Each element has a dual 68 ohm coil and runs at 24 volts. I'm basically designing a one-off. Rather than getting clever, I just drive the coils with a ULN2803 (one of the parts I keep around), drive the ULN2803s with Arduinos, and tie the whole thing together with a Raspberry Pi.

It's kind of a mess, but the arduinos mean I don't have to worry about timing, and I can ssh in to the raspberry pi. It's probably an order of magnitude more expensive and less elegant than something an EE would design, but it works, and it probably won't catch on fire.

One of my professors told me:

> Perfection is always the final sacrifice before the altar of completion.

Nothing is ever perfect. You must always sacrifice perfection before a project is complete.

With the cheapness of Arduinos and Raspberry Pis, throwing a bunch of them together is always a potential solution.


EDIT: I'm not sure a CPLD could have stopped the use of the ULN2803, although maybe it could have replaced the Arduinos. If the Arduinos were good enough however, then no harm-no-foul really.

It's just that a CPLD can implement things like a 27-bit shift register over SPI or I2C really easily (assuming you have some Verilog or VHDL coding skills). So maybe you'd be able to do Raspberry Pi -> CPLD -> ULN2803.

I remember back in Intro to CompE, we did the first project using a handful of 7400s to build a seven-segment display driver, using a bunch of switches to control the input 3-bit value. It required more chips and breadboard real estate than we had, so we were only required to count up to 6. Immediately after that, we got our FPGAs and did the same project, but with a button that would increment an 8-bit counter. The coding took about 10 minutes in Verilog, compared to the hours of logic diagramming and wiring.

Of course those 2 GHz octa-core computers with 4 GiB of RAM have no chance in hell of reacting to impulses within nanoseconds or operating anything at 100 MHz. All sacrificed at the altar of throughput.

Well, that's true. Cellphone chips don't do that.

But let's say... a 1GHz DSP (a specialized chip designed to perform digital signal processing) implementing an optimized FIR or IIR filter can most certainly react within nanoseconds and pick out signals in the 100MHz range.

That's still a general-purpose processor too. Switch to FPGAs or even ASICs (the fastest and most expensive of digital-logic designs), and you can probably get even faster.
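The direct-form FIR filter mentioned above is just a sliding dot product, which is why dedicated multiply-accumulate hardware makes it so fast. A toy sketch (the 5-tap moving average is an arbitrary example):

```python
def fir_filter(taps, samples):
    """Direct-form FIR: each output is the dot product of the most
    recent len(taps) inputs with the coefficient vector."""
    out = []
    hist = [0.0] * len(taps)
    for x in samples:
        hist = [x] + hist[:-1]      # shift the delay line
        out.append(sum(t * h for t, h in zip(taps, hist)))
    return out

# A 5-tap moving average (a crude low-pass) smoothing a step input:
taps = [0.2] * 5
signal = [0.0] * 5 + [1.0] * 10
filtered = fir_filter(taps, signal)
```

On a DSP, each output sample is a handful of single-cycle multiply-accumulates, so a short filter at 1 GHz really does finish within nanoseconds per sample.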


Analog can only get better accuracy by literally creating temperature-controlled cases for various components. (Think I'm kidding? https://www.vectron.com/products/literature_library/OCXO.pdf) Analog hit the noise-floor decades ago.

A dedicated 2-3GHz processor can definitely receive a signal, turn it into an interrupt, and respond to it within nanoseconds. It's the OS that prevents that from happening.

> it's the OS that prevents that from happening.

Not quite. It's more about the difficulty of writing the code. A lot of OSes aren't realtime, which can be part of the problem, but a careful administrator can tune Windows or Linux to be "more realtime".

Consider this: most hardware is set up to DMA to RAM through the northbridge. It never touches the CPU, the hardware just writes the data directly to RAM (indeed: this is a good thing. When the CPU is executing from L1 cache, it isn't touching the RAM anyway).

Then, the CPU gets an interrupt, and then it can read from RAM.

It takes roughly 60 ns alone to access DRAM on a modern processor (https://software.intel.com/sites/products/collateral/hpc/vtu...), plus whatever time it took for the DMA to happen.

NOW the CPU finally has the data in its cache and can start calculating.


To fix this issue, you DMA directly onto the CPU's cache itself.


That takes special device drivers, a lot of optimizing, a lot of thinking about latency.

Obviously I'm talking about code that is only using L1, and touching only registers.

I see your point, but raise you this thought:

Exactly what "received signal" exists inside of a CPU's L1 cache and register?

In practice, any I/O on a modern CPU/microprocessor is going to run through the northbridge and most likely be dumped into DDR3/DDR4 RAM. That's already 60ns of delay minimum that you've introduced into the system, and you haven't even done any math yet!

Building a digital system that responds within nano-seconds is difficult, while a purely passive LRC filter... taught in maybe ~2nd year or ~3rd year Electrical Engineering classes, can indeed respond within nanoseconds!
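To put numbers on that: even a single passive RC section (simpler than the LRC case; values picked arbitrarily) has a nanosecond-scale time constant:

```python
import math

# A passive low-pass RC filter with modest component values:
R = 50.0                             # ohms
C = 20e-12                           # 20 pF
tau = R * C                          # time constant = 1 ns
f_cutoff = 1 / (2 * math.pi * tau)   # -3 dB point, roughly 159 MHz
```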


I guess what I'm saying is... building digital systems that respond within nanoseconds is certainly possible, but it's specialized knowledge and requires specialized CPUs (aka: a Digital Signal Processor). Most programmers don't need to do that, after all (write a damn login page again).

I'm not exactly an expert on the ways of the DSP. I just know people who moved in that direction. It's an intriguing field and takes lots of study to be effective.

I don't think we were talking about DSP... at least when I originally replied, I was making the point that a high-frequency CPU doing single-clock-cycle reads of GPIOs can respond in nanoseconds. I think you are talking about something different...

The GPIOs are pins that are directly attached to the CPU. They are accessible by the CPU without requiring access to cache.

Not denying that it's hard to do on a general purpose Intel CPU with its absurdly high clock rate. But anybody with a fast oscilloscope can tell you that fast microcontrollers have that kind of response time, too (10s of ns), because ops take a clock cycle, and 100MHz = 1ns.

They tried this on an AM335x, a modern ARM Cortex A8 used on the BeagleBone Black for example. Clocked at 1 GHz and doing nothing else other than toggle a GPIO pin - it took 200 ns. This is for a fully integrated SoC. Merely due to various levels of interconnect within the CPU.

Here is the slide for reference:


That's with an OS (right? You left out the necessary context), and it's not a fast processor.

> 100MHz = 1ns.

You've misplaced a decimal point.

1GHz == 1ns.

100MHz is 10ns.

>There are lots of textbooks that showed you how to get 6.5 digits (21-bits) or 7.5 digits (23-bits) of accuracy using normal parts machined to 1% precision.

Do you happen to know the name of any of these books? I would love to play around with something like that.

In Chapter 5 of "The Art of Electronics, 3rd Edition", Horowitz offers an analysis of the 6.5 and 7.5 digits of accuracy attained by the Agilent DMM.


Chapter 5 is all about precision, Chapter 8 is all about noise.

In essence, you run a worst-case error analysis over your circuit. In many cases, 1% errors can be attenuated into smaller errors with good design. If you aren't careful however, then errors grow bigger instead.


On a tangent, but somewhat related note for those software people...

The methodology is kinda similar to error analysis with Floating Point arithmetic btw. You think about where error happens, and whenever possible, try to "squash" the error instead of making it grow bigger.

All double-precision floating point math has a relative error of at most one part in 2^53 (one unit in the last of the 53 significand bits). The question is what you do with that error, and how you can prevent it from creeping up into the significant digits. If you do things like sort your numbers from smallest to largest (in magnitude) before adding them up, and avoid subtraction (or addition of a negative number) wherever possible, you can actually make your errors smaller.

If you're not careful about the order of operations, errors can grow rapidly (catastrophically so when subtracting nearly equal values!). On the other hand, if you are careful about things, the errors stay down in the last few bits.
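The summation-ordering effect is easy to demonstrate with a contrived example:

```python
# Adding many small numbers to one huge number: summing the big value
# first throws every small contribution away; summing smallest-first
# lets them accumulate before the big add.
values = [1e16] + [1.0] * 1000

naive = 0.0
for v in values:          # 1e16 first; each +1.0 is below its resolution
    naive += v

careful = 0.0
for v in sorted(values):  # the 1.0s sum to 1000.0 before the big add
    careful += v
# naive == 1e16 exactly; careful == 1e16 + 1000.0
```

At 1e16, adjacent doubles are 2.0 apart, so each individual +1.0 rounds away in the naive order, while the sorted order accumulates all 1000 of them first.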

You probably realize this, but here's chapter and verse on why analog was superseded...

The thing that makes digital attractive was actually pointed out in the 50s by John von Neumann: every digit you add to the representation buys you another factor of 10 (or 2, in the case of binary) of precision. I can't find a reference but it's in a famous von Neumann paper, and is celebrated these days by Neal Gershenfeld as one of the major things that makes digital tech work.

And on a related note. https://archive.org/details/anacomp

Lately I've become really interested in analog computing and want to explore the possibilities of analog/digital hybrid computers. If anybody has done / is doing anything fun in this regard, I would love to hear about it.

Also, there's a subreddit for Analog Computing if anyone is into that sort of thing. http://analogcomputing.reddit.com

Hear about it, courtesy of nickpsecurity: https://news.ycombinator.com/item?id=10614952

Thanks for posting it. I wanted to but my mobile didn't have it. Those were the best papers I found showing the amazing results you can get with analog computing.

Thanks for originally sharing; hoping to do something with this info and will share.

If you want some funding or impact, an accelerator for a proven deep-learning architecture could be useful. Prior work in neural computation got plenty of results. I just haven't seen many independents try it for deep learning. One wacky idea I had was using analog accelerators for complex evaluation functions in genetic algorithms. The digital chip would just feed each solution to a bank of them, with the fitness result popping out the other side. Another idea was attempting any of that with Field-Programmable Analog Arrays to see what results they'd get. That lowers development costs vs making masks for custom circuits.

Those are awesome ideas, ones I never would have thought of! I was thinking more along the lines of using analog filtering for the convolutional layers of deep neural networks; dropout layers and filter templates for 2-D features are just bandpass filters, which analog filters can approximate easily. That analog FPGA idea is a good one also; there needs to be a way to perform rapid experimentation on these types of ideas. Are there any products/companies you know of in this area?

Your ideas are possibly better just because you already have a chance of building them if they're that close to existing analog schemes. I'd try them first to get a feel for the whole thing.

As far as reprogrammable goes, type what I told you into Google: Field-Programmable Analog Arrays (FPAAs). Or "reconfigurable analog". You'll get companies and CompSci results. Also remember that analog works best/easiest on the oldest process nodes. That, combined with multi-project wafers, can get your ASIC prototypes down to thousands of dollars rather than tens to hundreds of thousands. A middle-ground option is a structured ASIC that's basically like an FPGA programmed with 1-2 masks at fab time. Triad Semiconductor does that for digital+analog with a claimed $400k per design and a few weeks' turnaround. Idk if their analog cells are suitable, though.

Also, for good measure, you might enjoy at least looking at the last general-purpose (programmable) analog computer:


Here is a neural network using analog weights and neurons: https://arxiv.org/abs/1610.02091

> If anybody has done / is doing anything fun in this regard, I would love to hear about it.

There were several hybrid computers sold in the 60's.

You don't even need electronics to make a computer! Here's a 1953 training film for a mechanical fire control computer:


This kind of stuff fascinates me. There's an amazing amount of intelligence you can put into oddly-shaped cams.

Another cool example: solving differential equations with opamps


Is it possible that high precision/performance analog devices might perform some graphics tasks (e.g. ray tracing) faster than digital computers?

Sure -- if you cast light at physical objects and then collect the reflected light with a photo-sensitive component, you can perform real-time ray tracing with arbitrarily good accuracy.

Well, sure. An analog ray tracer is called a camera. Scene setup can take a while, though.

Sounds like a fun weekend project. Any guide on how to actually put this together to check the math?

Get a breadboard, an opamp, some resistors/capacitors and wires. Read the data sheet for the opamp to connect it correctly (connect V+ and V- to +/- 15 V power supplies). Feed in a sawtooth wave and look at the output with an oscilloscope (you won't see anything interesting with a constant voltage for input). If you need a reference for understanding the principles that make it work, use The Art of Electronics, 2nd or 3rd ed.
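Before breadboarding, it can also help to sanity-check the ideal-op-amp math on paper. For example, for an inverting summing amplifier (the resistor and voltage values below are arbitrary):

```python
def summing_amp(inputs, input_resistors, r_feedback):
    """Ideal inverting summer: the virtual ground at the - input forces
    Vout = -Rf * (V1/R1 + V2/R2 + ...)."""
    return -r_feedback * sum(v / r for v, r in zip(inputs, input_resistors))

# Two inputs through 10k resistors into a 10k feedback resistor:
vout = summing_amp([1.5, 2.0], [10e3, 10e3], 10e3)   # -(1.5 + 2.0) = -3.5 V
```

Comparing numbers like these against what the scope shows is a quick way to tell whether the circuit is wired correctly.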

Thank you. I had a hunch that this would require a scope and a signal generator. And I have neither unfortunately.
