Typical users went from aerospace and automotive ("will our wing or chassis oscillate badly when we hit a thermal or pothole?") to audio (that's how they would make those frequency-response charts for speakers) to hard disk drive head servos ("if we send the head from track X to track Y, will it get there directly, or 'ring' -- slightly overshoot and then overcorrect for a while?").
One fun detail: eventually they had to encase their OpAmps in special little heaters that would keep them at a constant temperature, to avoid drift as the ambient temperature changed.
You can still occasionally buy one surplus on eBay for a few hundred dollars; they were about $10k new (in 70's dollars).
Let me point out that the description here is as understood by a pure-digital person: I didn't seem to inherit the analog gene, and as a teen couldn't grok his deeper explanations ("What do you mean 'the imaginary part of the signal'? We're in the real world; the electrons don't have any imaginary components!").
Roughly speaking, you need two degrees of freedom to represent a circuit: the voltage and the current. Instead of throwing two separate numbers around all the time, we pack them into one complex number.
We call the two parts of the number "real" and "imaginary", but really... they roughly line up with "voltage" and "current". We just call them "real" and "imaginary" because that's what all the math guys call them.
A "purely imaginary" number in phasor world is when the current is +/- 90-degrees out of phase with the voltage. (180-degrees out of phase means the current is simply negative, going backwards).
A "purely real" number happens at 0-degrees (positive current) or 180-degrees (negative current). "Purely Real" numbers line up exactly to all of the DC-voltage experiments from beginner-level electronics.
As it turns out, any circuit made up of resistors, capacitors, and inductors can be analyzed using "simple" addition, subtraction, multiplication and division with complex numbers.
The above is a gross simplification of what's going on. More details can be found here: https://en.wikipedia.org/wiki/Phasor
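To make that concrete, here's a minimal sketch in Python (whose built-in complex type does all the work) -- the component values are made up, but the arithmetic is the real thing:

    import cmath, math

    # Hypothetical series RC circuit driven at 1 kHz. All the "simple
    # arithmetic" below happens on ordinary complex numbers.
    f = 1000.0              # Hz
    R = 1000.0              # ohms
    C = 100e-9              # farads
    V = 5 + 0j              # 5 V source, taken as the 0-degree phase reference

    w = 2 * math.pi * f
    Z_C = 1 / (1j * w * C)  # a capacitor's impedance is purely imaginary
    Z = R + Z_C             # series impedances just add
    I = V / Z               # Ohm's law, unchanged, now over complex numbers

    print(abs(I))                        # current magnitude: ~2.7 mA
    print(math.degrees(cmath.phase(I)))  # current leads voltage by ~58 degrees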
My impression was that hobby servos used basically just PI control and were heavily damped so you don't really get overshoot, but again I don't really know.
I am curious to what extent an analog/digital hybrid can blend the best of both worlds, especially for something where the speed <-> precision tradeoff may somewhat favor speed. Like training neural networks.
That's where Analog comes in. You can shape and process microvolt- and even nanovolt-level signals before amplifying them up to millivolts so that the CPU can handle the rest.
In general, digital RAM will hold a value more reliably for a longer period of time than an analog storage device (aka: a capacitor). True, RAM is itself implemented using capacitors (DRAM needs to be "refreshed" as those capacitors lose charge).
It's easy to "refresh" a capacitor if you only care about two states: high and low. If you're analog however, and care about microvolts / nanovolts, it's impossible to "refresh" capacitors.
After all, what does a capacitor at 5000uV really mean? In digital, you can just snap it back to the nominal 6000uV every few milliseconds. In analog, you don't know whether part of the information leaked out or whether the value really was 5000uV.
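A toy model of why, in Python -- the leak rate and thresholds are invented numbers, but the asymmetry is the point:

    # Toy leak model: a stored voltage loses 5% of its charge between refreshes.
    v_digital = 6000.0   # uV; "high" is defined as anything above 3000 uV
    v_analog  = 5000.0   # uV of actual analog information

    for _ in range(10):
        v_digital *= 0.95        # charge leaks away...
        v_analog  *= 0.95        # ...from both capacitors equally
        if v_digital > 3000.0:   # digital refresh: snap back to the nominal rail
            v_digital = 6000.0
        # analog "refresh": snap back to... what? The leak is at the same
        # scale as the signal itself, so there's no known value to restore.

    print(v_digital)   # 6000.0 -- the bit survived
    print(v_analog)    # ~2994  -- the information is simply gone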
If you are measuring a differential signal with a large high-frequency common-mode component, then it also makes sense to perform the subtraction in analog. Otherwise the inevitable small phase shift between the sampling of the two channels in your ADC will end up coupling the common-mode signal into your difference.
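A quick numerical illustration (NumPy, with invented but plausible numbers: a 1 mV differential signal riding on a 1 V common-mode tone, and 100 ps of channel-to-channel skew):

    import numpy as np

    fs = 100e6        # 100 MS/s two-channel ADC
    skew = 100e-12    # 100 ps sampling skew between the channels
    f_cm = 10e6       # 10 MHz common-mode tone, 1 V amplitude
    v_diff = 1e-3     # the 1 mV differential signal we actually care about

    t = np.arange(4096) / fs
    ch_a = +v_diff / 2 + np.sin(2 * np.pi * f_cm * t)
    ch_b = -v_diff / 2 + np.sin(2 * np.pi * f_cm * (t + skew))  # sampled late

    digital_diff = ch_a - ch_b      # subtraction done after the ADC
    leak = digital_diff - v_diff    # residual common-mode coupled into it
    print(np.abs(leak).max())       # ~6.3 mV (2*pi*f_cm*skew): swamps the 1 mV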
3 Gigasamples / second: http://www.ti.com/lit/ds/symlink/adc07d1520.pdf
The fastest OpAmp I can find is this 18GHz GBP: http://www.ti.com/lit/ds/symlink/ths4303.pdf
At 3 GHz, the OpAmp would only offer a gain of 6 (18 GHz GBP / 3 GHz), which isn't really enough for much accuracy.
Granted, the OpAmp is like $5 and the ADC is hundreds of dollars (and the ADC is only 7-bit accurate)... but the digital world has gotten scary fast and scary good.
That's why oscilloscopes, even GHz oscilloscopes, are being made with digital technology today.
Hell, this crazy product raises eyebrows: http://www.digikey.com/product-detail/en/analog-devices-inc/...
26 gigasamples/second (Nyquist frequency of 13 GHz). I mean, 3 bits sucks, but holy crap is that fast. A 26 GSPS ADC would be able to perform digital-filter analysis on 2.4GHz Bluetooth and WiFi without any aliasing whatsoever.
Doing multiplication/division/exponentiation/etc is actually a little more difficult. Typically, they are done through log amplifiers, of which there are plenty of monolithic versions.
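The trick is that a log amp turns multiplication into addition. A rough sketch of the idealized math (positive inputs only):

    import math

    # Idealized log-amp chain: take logs, add (or subtract, or scale),
    # then an antilog stage exponentiates back out.
    def log_amp_multiply(a, b):
        return math.exp(math.log(a) + math.log(b))   # log a + log b = log(a*b)

    def log_amp_divide(a, b):
        return math.exp(math.log(a) - math.log(b))   # log a - log b = log(a/b)

    def log_amp_power(a, k):
        return math.exp(k * math.log(a))             # k*log a = log(a^k)

    print(log_amp_multiply(3.0, 4.0))   # 12.0
    print(log_amp_power(9.0, 0.5))      # 3.0 (square root)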
You know, back in the era where computers were 33 MHz (That's 30-nanoseconds per clock!) with 16-bit adders and cost tens-of-thousands of dollars.
Today, things are different. We have 2GHz octa-core computers in our pockets with 4GB of RAM. Analog circuits have improved, but not nearly as much as digital circuits.
The primary issue with Analog is that we've hit the noise floor. Getting more than 7.5 digits of accuracy on every calculation means suppressing noise to roughly 150 dB below the signal. Case in point: 7.5 digits of analog accuracy on a 5V circuit means being accurate to +/- 150 nanovolts.
At those minuscule voltages and currents, simple heat, airborne static electricity, even sound and physical movement can disturb your calculations. (Crystal oscillators, for example, are piezoelectric: tiny little microphones that pick up sound and vibration and convert them into voltages!! And every PCB trace is a potential antenna that may pick up a stray radio signal. Things get hard...)
Voltages are connected to the real world, after all. At some point, the circuit can "feel" you breathing on it (your breath changes the local humidity, which changes leakage resistance across the board surface, and the extra heat shifts the resistance of various components up or down).
Today? 90% of those problems would indeed be solved much more easily with a $0.50 microcontroller. Raw speed and price drops pretty much eliminated the benefits of analog circuitry in most applications.
It's pretty much impossible to maintain a stock of any other electronic bit you need, other than maybe resistors. I'm not sure why you'd want to either. I have a ton of ICs left over from random projects. I can never find them when I need them, and even if I can find them again, I have to spend a bunch of time re-reading the datasheet.
I've realized that you can accomplish almost anything you'd like with a microcontroller and some MOSFETs. It might be more elegant to use a ripple counter if you have to drive a whole bunch of elements, but I'd rather just chain a bunch of microcontrollers together.
Some 74-series chips are still unique and useful. The 74LV4060 for example is a good stable clock solution. But your typical "AND-gate" and "XOR-gate" chips seem completely obsoleted by a CPLD + Verilog / VHDL.
The ispMach 4000ZE costs $1.20 and draws ~hundreds of uA at 1MHz. Why use a 7400 (NAND gate) ever again?
Then again, maybe I just haven't looked hard enough.
Most people seem happy to use a chip that performs the exact functions they wanted even though it's completely closed/proprietary. But they seem to balk at using a programmable chip where they can choose the functionality but the software that lets them do so is closed/proprietary.
Beyond that, when tools are Open Source it makes life easier and takes out friction for us as individuals. I hate the idea of ordering up a fancy new chip, getting excited about it, and then finding out that I have to pay a zillion dollars to download and use the programming tools. Or I get the tools and find out that there's some weird "field of use" limitation in the Terms and Conditions. Or worse, I lose my last copy of the software, the manufacturer goes out of business, and then later I find a few of those chips in my parts box and realize I will never be able to use them because I no longer have access to the software. Or maybe I still have the software, but it has a bug that nobody can fix because nobody has the source code.
That we still have to use some chips that are proprietary to some extent is something I accept out of pragmatism, but by the same token, I support efforts like OpenCores, RISC-V, etc. that may eventually lead us to reasonably usable CPUs that don't need binary blobs and aren't full of undocumented features, etc. At least one can hope.
The main thing is that the low-level details of macrocells vary from company to company. And those companies are protective of their designs, so they don't want people figuring out how their devices really work.
So it is a tough spot. Still, Verilog allows you to move your "code" from company to company (Xilinx, Altera, and Lattice). So it's sufficient.
Hobbyist micro-controllers also come in through-hole packages. I can solder pretty much anything that doesn't require a hot-plate, but it's a lot easier to prototype with through-hole.
For instance, one of the things I'm building right now is a 4x8 array of 6"x6" flip-flop sign elements that I got as surplus from the highway department. Each element has a dual 68 ohm coil and runs at 24 volts. I'm basically designing a one-off. Rather than getting clever, I just drive the coils with a ULN2803 (one of the parts I keep around), and drive the ULN2803's with arduinos, and tie the whole thing together with a raspberry pi.
It's kind of a mess, but the arduinos mean I don't have to worry about timing, and I can ssh in to the raspberry pi. It's probably an order of magnitude more expensive and less elegant than something an EE would design, but it works, and it probably won't catch on fire.
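For flavor, here's roughly what the Raspberry Pi side of that kind of setup can look like (a made-up one-byte-per-row protocol over pyserial; the Arduino firmware and wiring are assumptions):

    import serial  # pyserial

    # Hypothetical wiring: each Arduino drives one row of 8 elements through
    # a ULN2803 and listens on USB serial for (row, bitmask) commands.
    ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

    def set_row(row, bits):
        """Flip one row of sign elements; each bit pulses one coil."""
        ser.write(bytes([row, bits & 0xFF]))

    set_row(0, 0b10101010)  # alternate flipped/unflipped across the top row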
> Perfection is always the final sacrifice before the altar of completion.
Nothing is ever perfect. You must always sacrifice perfection before a project is complete.
With the cheapness of Arduinos and Raspberry Pis, throwing a bunch of them together is always a potential solution.
EDIT: I'm not sure a CPLD could have stopped the use of the ULN2803, although maybe it could have replaced the Arduinos. If the Arduinos were good enough however, then no harm, no foul really.
It's just that a CPLD can implement things like a 27-bit shift register over SPI or I2C really easily (assuming you have some Verilog or VHDL coding skills). So maybe you'd be able to do Raspberry Pi -> CPLD -> ULN2803.
But let's say... a 1GHz DSP (a specialized chip designed to perform "digital signal processing") implementing an optimized FIR or IIR filter can most certainly react within nanoseconds and pick out signals in the 100MHz range.
That's still a general-purpose processor too. Switch to FPGAs or even ASICs (the fastest and most expensive of digital-logic designs), and you can probably get even faster.
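For the software folks: an FIR filter is just a sliding dot product, which is why DSPs with single-cycle multiply-accumulate hardware chew through them. A sketch with made-up numbers:

    import numpy as np

    # y[n] = sum(h[k] * x[n-k]): each output sample is a dot product of the
    # last few input samples against a fixed set of taps.
    fs = 1e9                            # pretend 1 GS/s sample rate
    taps = np.array([0.25, 0.5, 0.25])  # trivial 3-tap low-pass

    x = np.random.randn(1024)           # stand-in for the sampled input
    y = np.convolve(x, taps, mode="valid")

    # A DSP pipelines one multiply-accumulate per tap per clock, so a new
    # filtered sample emerges every 1/fs = 1 ns here.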
Analog can only get better accuracy by literally creating temperature-controlled cases for various components. (Think I'm kidding? https://www.vectron.com/products/literature_library/OCXO.pdf) Analog hit the noise-floor decades ago.
Not quite. It's more about the difficulty of writing the code. A lot of OSes aren't realtime, which can be part of the problem, but a careful administrator can tune Windows or Linux to be "more realtime".
Consider this: most hardware is set up to DMA to RAM through the northbridge. It never touches the CPU, the hardware just writes the data directly to RAM (indeed: this is a good thing. When the CPU is executing from L1 cache, it isn't touching the RAM anyway).
Then, the CPU gets an interrupt, and then it can read from RAM.
It takes roughly 60 ns alone to access DRAM on a modern processor (https://software.intel.com/sites/products/collateral/hpc/vtu...), plus whatever time it took for the DMA to happen.
NOW the CPU finally has the data in its cache and can start calculating.
To fix this issue, you DMA directly into the CPU's cache itself.
That takes special device drivers, a lot of optimizing, and a lot of thinking about latency.
Exactly what "received signal" exists inside of a CPU's L1 cache and register?
In practice, any I/O of a modern CPU / Microprocessor is going to run through the Northbridge and most likely be dumped into DDR3 / DDR4 RAM. That's already 60ns of delay minimum that you've introduced into the system and you haven't even done any math yet!
Building a digital system that responds within nanoseconds is difficult, while a purely passive LRC filter -- the kind taught in maybe 2nd or 3rd year Electrical Engineering classes -- can indeed respond within nanoseconds! (A 50-ohm, 10pF RC low-pass has a time constant of just 0.5ns.)
I guess what I'm saying is... building digital systems that respond within nanoseconds is certainly possible, but it's specialized knowledge and requires specialized CPUs (aka: a Digital Signal Processor). Most programmers don't need to do that after all (we're busy writing a damn login page again).
I'm not exactly an expert on the ways of the DSP. I just know people who moved in that direction. It's an intriguing field and takes lots of study to be effective.
The GPIOs are pins that are directly attached to the CPU. They are accessible by the CPU directly, without the DMA-through-RAM round trip.
Not denying that it's hard to do on a general purpose Intel CPU with its absurdly high clock rate. But anybody with a fast oscilloscope can tell you that fast microcontrollers have that kind of response time, too (10s of ns), because ops take a clock cycle, and 100MHz = 1ns.
Here is the slide for reference:
You've misplaced a decimal point.
1GHz == 1ns.
100MHz is 10ns.
Do you happen to know the name of any of these books? I would love to play around with something like that.
Chapter 5 is all about precision, Chapter 8 is all about noise.
In essence, you run a worst-case error analysis over your circuit. In many cases, 1% errors can be attenuated into smaller errors with good design. If you aren't careful however, then errors grow bigger instead.
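A tiny worst-case analysis, sketched in Python -- the classic example being a voltage divider, where matched 1% resistors produce only ~0.5% of ratio error:

    # Worst-case corners of a 2:1 voltage divider built from 1% resistors.
    # output = R2 / (R1 + R2); try every tolerance extreme.
    R1, R2, tol = 10_000, 10_000, 0.01

    corners = [(R1 * (1 + a), R2 * (1 + b))
               for a in (-tol, tol) for b in (-tol, tol)]
    ratios = [r2 / (r1 + r2) for r1, r2 in corners]

    print(min(ratios), max(ratios))  # ~0.495 to ~0.505 -- the divider
                                     # attenuates 1% part error to ~0.5%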
On a tangent, but somewhat related note for those software people...
The methodology is kinda similar to error analysis with Floating Point arithmetic btw. You think about where error happens, and whenever possible, try to "squash" the error instead of making it grow bigger.
All double-precision floating-point operations carry a rounding error of +/- half a unit in the last of the 53 significand bits -- roughly one part in 2^53. The question is what you do with that error, and how you prevent it from creeping up into the significant bits. If you do things like sort your numbers from smallest to largest (in magnitude) before adding them up, and avoid subtraction (or addition of a negative number) wherever possible, you can actually keep your errors small.
If you're not careful about the order of operations... errors compound fast (and subtraction of nearly equal values can wipe out most of your significant bits in one step!). On the other hand... if you are careful about things... the error stays pinned down near the rounding floor.
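A minimal demonstration of the sorting trick in Python (math.fsum gives the exactly-rounded reference):

    import math

    xs = [1e16] + [1.0] * 1000          # one huge value plus many small ones

    naive = sum(xs)                     # big value first: every 1.0 rounds away
    careful = sum(sorted(xs, key=abs))  # small-to-large: the 1.0s add up first
    exact = math.fsum(xs)               # exactly rounded reference sum

    print(naive - exact)                # -1000.0: all thousand small terms lost
    print(careful - exact)              # 0.0: sorting recovered them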
The thing that makes digital attractive was actually pointed out in the 50s by John von Neumann: every digit you add to the representation buys you another factor of 10 (or 2, in the case of binary) of precision. I can't find a reference but it's in a famous von Neumann paper, and is celebrated these days by Neal Gershenfeld as one of the major things that makes digital tech work.
Lately I've become really interested in analog computing and want to explore the possibilities of analog/digital hybrid computers. If anybody has done / is doing anything fun in this regard, I would love to hear about it.
Also, there's a subreddit for Analog Computing if anyone is into that sort of thing. http://analogcomputing.reddit.com
As far as reprogrammable goes, type what I told you into Google: Field-Programmable Analog Arrays (FPAAs). Or "reconfigurable analog". You'll get companies and CompSci results. Also remember that analog works best/easiest on the oldest process nodes. That, combined with multi-project wafers, can get your ASIC prototypes down to thousands of dollars rather than tens to hundreds of thousands. A middle-ground option is a structured ASIC, which is basically an FPGA programmed with 1-2 masks at fab time. Triad Semiconductor does that for digital+analog with a claimed $400k per design and a few weeks' turnaround. I don't know if their analog cells are suitable, though.
Also, for good measure, you might enjoy at least looking at the last general-purpose (programmable) analog computer:
There were several hybrid computers sold in the 60's.