This video discusses the history and principles of classical analog computing and then says that analog computers have energy efficiency benefits over digital computers. This can give the incorrect impression that classical analog computers are more efficient than digital computers, which is absolutely not correct. It may be true that modern analog phase-change memory AI accelerators are more efficient than their digital counterparts, but this assertion should be distanced from the discussion of classical analog computers to not give the wrong impression.
People always love to say some old lost tech is better than the modern version
But it's like a one-in-a-thousand shot. Almost all modern tech is better, with regressions lasting a decade at most.
Occasionally there are rumors of some modern medical treatments being worse even though medicine as a whole improves, but mechanical and electrical stuff is much better understood and more controlled.
The exception is a few specific appliances that are seemingly designed to fail. Even then there's no lost tech. I'm pretty confident that if we tried, we could make a much better washing machine today, for much less money, than anything from 30 or 50 years ago.
The ancient tech myth I see the most is the idea that the recipe for "Roman concrete" was lost, and somehow modern engineers can't figure out how to make a superior mix.
Certainly, Roman engineers built incredible unreinforced concrete structures. But this was accomplished through structural engineering techniques designed to keep the concrete compressed (e.g. arches, domes). Modern structures like elevated highways and skyscrapers would be impossible to build this way, and require steel reinforcement.
While the mix the Romans used was slightly different (apparently it contained a bit of volcanic ash and used less water), modern engineers deliberately choose a different mix based on the structure. E.g. larger structures with natural reinforcement, like dams, will tend to use a mix closer to what the Romans used.
The analog computers on the Missouri-class battleships were never upgraded because the kill radius of the shells was larger than the margin of error the analog computers introduced, and because the analog computers required less electricity, which is good for something like a ship. So in this way, yes, they are better (these computers were developed in the '30s). It depends on the application.
> Occasionally there are rumors of some modern medical treatments being worse even though medicine as a whole improves, but mechanical and electrical stuff is much better understood and more controlled.
Such as the iron lung probably being better than the invasive machines we have today.
> The exception is a few specific appliances that are seemingly designed to fail. Even then there's no lost tech. I'm pretty confident we could make a much better washing machine today for much less money if we tried, than anything from 30 or 50 years ago.
Look at the space industry: it actually costs us more (adjusting for inflation) to try to go back to the moon than it did the first time, when we had no idea how to do it.
Or look at tractors: farmers are clamoring for 1980s tractors over the new, expensive 'modern' tractors John Deere is pushing, which break down easily and offer no way to repair them.
People died going to the moon the first time. Better to not go at all, or to have it cost 10x as much, than to risk even one astronaut's life.
Besides, if we just redid what we had, we would not be developing any new tech that could be used on Earth. And it would be less comfortable, a very bad thing if you're still fighting the losing battle to convince anyone that they'd like to be in space for more than a brief adventure.
AFAIK they have cuirass ventilation to replace iron lungs now.
Those new tractors are most likely way more reliable. In practice they may be worse because of artificial limitations put there on purpose, but were it not for those they would almost certainly be better. (Although people would probably still prefer the old ones, because no matter how reliable something is, people seem to like things with a "substantial" feel that they can understand.)
All the commentary I see about the moon program says that the rockets bit of NASA was set up to be a pork barrel for all of Congress just so nobody cancelled the Apollo mission, and that many people are annoyed that this structure is still in place.
Given that, the cost of the latest moon mission is “as much as we can squeeze the taxpayer for” rather than representative of the actual cost. The various new startups worldwide (not just SpaceX) are all much more interesting, though obviously they’ll only get a price comparison when they actually land a human.
It depends on your criteria. E.g. old appliances are bulky, noisier, and suck more juice. You could say modern ones are better, then. But my mother still owns stuff from decades ago that is consistently sturdier, lasts longer, is easier to repair, and has a better UI.
I just bought a food slicer. The motor was connected to the saw through a very soft plastic gear. Its teeth became smooth after a month of use, rendering the machine useless, with no way to buy a new gear on the internet.
This is modern tech at work: you get a cheaper machine, but to get there, the materials used suck. And there is no stock of replacement parts, because it's expensive to keep them around.
They're probably deep in planned obsolescence since most people will use them about 3 times in 30 years.
The more common everyday things like kettles, vacuum cleaners, toasters, etc all seem to have very good options for not much money.
I've never driven a car, but my family seems to need far fewer trips to the mechanic than when I was a kid. Computers definitely seem better in every way, and of course all small electronics like tape players have been phone-ified and seem to be much more reliable.
It may well be that slicers are niche enough that consumer versions are worse.
Still, nylon gears can be very durable (unless ozone gets to them; that seems to kill them).
A truly modern slicer with the same crappy materials would probably use an MCU to predict the gear temperature from motor current and limit the duty cycle, and it would last a long time and perform acceptably well.
Either that, or they'd have some direct-drive scheme for the really nice ones, or maybe even some kind of no-moving-parts linear motor.
Modern power tools do this all the time. They shut down or reduce power for what seems to be no reason or a minor reason, but the computer probably detected some subtle overload condition I wouldn't have. It's a bit annoying, but it makes them cheap and durable enough to not think twice about buying used.
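The gear-temperature idea above, as a minimal sketch: a first-order thermal model driven by motor current, with the duty cycle backed off as the estimate approaches a limit. Every constant here (thermal gain, time constant, thresholds) is invented for illustration; real firmware would calibrate them against the actual gearbox.

```python
# Minimal sketch of MCU-style thermal protection: estimate gear temperature
# from motor current with a first-order thermal model, and throttle the duty
# cycle when the estimate gets too hot. All constants are hypothetical.

AMBIENT_C = 25.0      # assumed ambient temperature
HEAT_GAIN = 8.0       # degC of steady-state rise per amp^2 (made up)
TAU_S = 30.0          # thermal time constant in seconds (made up)
T_LIMIT_C = 90.0      # throttle above this estimated temperature
DT_S = 0.1            # control loop period

def update_temp_estimate(temp_c: float, current_a: float) -> float:
    """One step of a first-order (RC-style) thermal model."""
    steady_state = AMBIENT_C + HEAT_GAIN * current_a**2
    return temp_c + (steady_state - temp_c) * (DT_S / TAU_S)

def duty_cycle(temp_c: float, requested: float) -> float:
    """Linearly back off the duty cycle as the estimate approaches the limit."""
    if temp_c < T_LIMIT_C - 10.0:
        return requested
    if temp_c >= T_LIMIT_C:
        return 0.0
    return requested * (T_LIMIT_C - temp_c) / 10.0

# Toy run: a minute of heavy cutting load.
temp, current = AMBIENT_C, 3.0
for step in range(600):
    temp = update_temp_estimate(temp, current)
    duty = duty_cycle(temp, requested=1.0)
print(f"estimated gear temp: {temp:.1f} C, duty cycle now {duty:.2f}")
```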
The stuff at Wal-Mart usually sucks, but there's almost always some affordable modern version that beats the older tech.
It does seem that 3rd party gears exist though, for some slicers.
Modern refrigerators and washing appliances can be really flaky because of software bugs. I've talked to others who have $3000 fancy stainless steel refrigerators, and they seem vulnerable to weird, hard-to-diagnose problems.
With a clickbaity title like that, I knew this would be Veritasium before I clicked it. And now, having verified that it is, I share this information with you.
One thing about Veritasium is that they do get things wrong. They did one on electric circuits claiming that if you had a circuit that was miles long, with the energy source sitting right next to you, you would get current as soon as you turned on the circuit, because 'electricity doesn't follow the wire' or something along those lines.
This has been disproven by people doing experiments.
Can you share the experiments that disprove it? The one video I saw proved it.
Someone set up some long wires over a farm field, and measured the time to first current flow, which was faster than if the current had to travel the whole wire length. Granted, the full current flow doesn't appear until later, but the first little bit of current is there, which is exactly what Derek said in the Veritasium video
I believe this is the same video. The author theorized (probably correctly) that it was because of residual 'power' on the wire, as it acts like a giant capacitor, among other things; the current that immediately showed up was extremely small and dissipated very quickly. https://www.youtube.com/watch?v=2Vrhk5OjBP8
One could quibble over the details of the experiment, such as what sort of lamp to use, but Maxwell's equations would have to be repealed for the experiment to not work essentially as Veritasium claimed. Check out "electromagnetic induction" and "displacement current".
The problem is that he frames the whole experiment as if the inductive current is a property of the circuit rather than a property of two separate wires being next to each other - he heavily implies the inductance is a result of the current "starting" to travel through the loop, and IIRC never once uses the term "inductance" (which seems odd for a video purporting to promote understanding rather than just be clickbait).
(The fact that the loop is closed can't be relevant, by a simple thought experiment: otherwise it implies you could get FTL signaling by putting a switch at the physically distant end and checking whether the "tiny current" flows when you close the circuit at the source.)
Veritasium may not have mentioned either inductance or displacement current; I don't recall. He did, however, talk about an even more abstract concept which subsumes the effect of both: the Poynting vector. Pedagogically speaking, it was probably a mistake to go there without covering the prerequisites in considerable detail, but that does not, IMHO, make it clickbait: for someone who has some background, this could be food for thought.
Whether the distant end of the loop is closed or not makes a difference eventually, where 'eventually' means once the initial EM wave has reflected from the end and returned to the load.
> Pedagogically speaking, it was probably a mistake to go there without covering the prerequisites
And considering the fact that Derek has a Ph.D. in physics education research [1] that seems like an odd mistake for him to make, unless it was intentional.
As another commenter pointed out [2] Derek's content lately seems to be tailored to the maxim of "Make people believe they're thinking and they'll love you. Make them actually think, and they will hate you." It seems intended to invoke incredulity rather than understanding, presumably because he's decided the former is more marketable - because providing incomplete understanding and phrasing things in a way that's just technically correct enough to invite flame wars boosts engagement and therefore virality.
I remember seeing that video. It was an insta-click for me because I understand very, very little about circuits and electricity. Something that actually bugs me quite a bit.
I understood even less after watching it, I think.
No, you’re wrong. If you hook an ohmmeter to a 10 light-second long piece of LOSSLESS coax, you’ll measure the characteristic impedance (50 or 75 ohms, probably) of the cable for 20 seconds.
I have done this personally with shorter coax and TDR.
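A rough sanity check on that 20-second figure, just multiplying out the round-trip time under the idealized assumptions in the comment above (velocity factor of 1; real coax would be slower, so the window would be even longer):

```python
# An ohmmeter at one end of a lossless line only learns about the far end
# after a full round trip of the signal. Numbers are the hypothetical cable
# from the comment, not any real part.
C = 299_792_458            # speed of light in vacuum, m/s
LENGTH_M = 10 * C          # a "10 light-second" cable
VELOCITY_FACTOR = 1.0      # ideal line assumed; real coax is more like 0.66

round_trip_s = 2 * LENGTH_M / (C * VELOCITY_FACTOR)
print(f"Ohmmeter sees only the characteristic impedance for {round_trip_s:.0f} s")
# -> 20 s
```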
I didn't watch the whole thing but is the ladder ever explained? I didn't understand why he's doing the video on a ladder. Is that part of the clickbait? You have to have a crazy face and be on a ladder, and that's how you make Youtube's algorithm happy?
At this point it seems like a vicious spiral. Someone clicks on a guy on a ladder and all of a sudden you're being recommended guys on ladders, so now everyone needs to get on a ladder to get on the front page.
Youtube runs the thumbnails through a sentiment analysis AI and promotes videos more when they rate with a high likelihood of the bucket of emotions that have been found to correlate with ad impressions.
Something about Veritasium absolutely rubs me the wrong way, and I generally like Youtube education creators, but his videos are borderline unwatchable for me.
Does anyone have an explanation for what it is about his videos that is so grating?
It's some form of uncanny valley type of thing: for every detail or facet or property I might point out, there are other people exhibiting the same nominal qualities without bothering me. But something here is just slightly off in some way, so subtly that I can't say what it is.
I honestly can't say he's doing anything actually wrong. I don't think even the most dumbed down ones are really guilty of being wrong. Not enough that I'd complain about it in a casual yt video.
I can see some people maybe not liking his basic style of exposition. That slightly affected dreamy pondering thing.
One thing I can observe is that another guy who turns me off even worse is the Smarter Every Day guy, and these two appear in each other's videos at least a few times. Or maybe it was just one time, and that one time just bugged me that much :)
I don't know what it is but I frequently see a video from either of these guys where the topic they are presenting looks interesting, but then I do not like their video on that topic.
I think it comes down to the presentation just seeming a bit too affected. And it's not even as bad as seeming fake or insincere. They both actually strike me as perfectly sincere.
It's interesting you mention Destin from Smarter Every Day, because he always came off as a very earnest learner in his videos. Yeah, he's still putting on a show, but it does seem like he genuinely enjoys and revels in the things that he does. It seems to me like Veritasium has a more contrarian thread of: "Oh, you thought the world worked like that? Well you were wrong, and here's a 15 minute video all about how you were wrong." There's an Onion video making fun of Vox that touches on this perfectly: https://www.youtube.com/watch?v=RpkQEq75y18
I guess to summarize, with Destin it feels like you are learning with him and Veritasium feels like you are being lectured.
Funny. Veritasium actually has some videos talking about why he has this approach :)
I saw something similar all the time as a dance teacher: explaining stuff and showing over and over made the students feel like they learned a lot. But they just didn't. It was mostly a total waste of time. I switched to showing ONCE then they had to try. Then I showed again. The speed of learning went up dramatically. The problem I had then was that the students didn't like it!
"Make people believe they're thinking and they'll love you. Make them actually think, and they will hate you."
See, I’ve heard that, but I like being challenged. I like to have to “pause and ponder”, as the excellent YouTuber 3Blue1Brown puts it, but Veritasium is just annoying.
It'd be nice to have a real scientific test! There's a lot of subjective feeling here and nothing very solid. Veritasium at least backs up his position with his own thesis research. I find that more compelling as an argument than a position not backed up by anything at all.
Hey, I also don't like chocolate all that much, and I seem to be some kind of inexplicable alien because of that. So, it's very possible "it's just me" that I don't like good things because curmudgeon. :)
I see the distinction you make there now that you point it out.
To me the videos feel like a sustained form of clickbait. I keep expecting a payoff at the end of the video, but instead he sort of weaves together a grand narrative that nevertheless feels unfulfilling to my analytical mind. Take this video for instance. In the beginning he talks about analog computers and how they differ from digital computers. Then he talks about how neural networks work. Then he explains how neural networks could be better if they used analog values. But he never gets around to proving the title of the video ("we're building computers wrong"). He doesn't even convincingly show that we could build an analog neural network with current technology. His video format is just a lofty premise, a middle full of glossed-over science, and a disappointing conclusion with no hard takeaways.
I did like the impossible to measure the speed of light one, even though it has Smarter Every Day.
Ok, that one has something I can identify clearly: the way Destin (Smarter Every Day) seems to be badly hamming up the whole "this is so weird and hard to visualize" thing, as though to pander to an audience expected to be baffled by the idea being presented. That was Destin though, not Veritasium.
Can't say if it's the same feeling, but to me it's the fact that he plays everything off like he's doing a helpful thing and trying to inform you for your own benefit, but the reality is the motivations are warped and the video only exists because some company paid him money to do so.
I've unsubscribed from this channel and Smarter Every Day because, while I believe the authors feel like they're doing the right thing, they let themselves be so biased/influenced by their sponsors that I cannot watch a video and believe that they're being honest.
Same, but it wasn't this unbearable in the beginning. I think it's partly me thinking there's no way he understands such wide-ranging subjects at such great depth without significant research and prep for the video, which contradicts how casually and confidently he presents everything. I'm used to expecting a more scripted presentation in these cases, which it probably is, but it doesn't sound like it.
Veritasium used to do a lot less A/B testing on his videos, and I guess maybe he just got too big too fast. The thing that bothers me the most, though, is that he went from almost exclusively explaining interesting scientific facts/experiments to a more "aggressive" style where a lot of his videos are about x being wrong and here is z reason why. And I personally just don't like that style, especially when it turns out that there is a lot more nuance than his titles would indicate. I mean, what are the odds that an entire field is wrong and the truth is actually in a 15-minute YouTube video by a very general science channel?
I feel like it's more the assertiveness about how right the information he's sharing is, when not everything he makes a video about is quite so correct, yet there isn't any of this nuance or doubt in his work. It doesn't feel like any of the professors I've spoken to; there's too much confidence. The only thing we can be that confident in is how wrong humans tend to be.
The two things that bother me about Veritasium are his brand of pseudo-intellectualism (smugness, maybe?) and the fact that his videos often don't show the whole picture.
All of his videos that I've seen have a weird thing where he challenges people to explain something, and then he reveals he was talking about something else and their explanation isn't relevant. This is just https://xkcd.com/169/
His videos don't strike me as intending to educate. They strike me as intending to show how much more clever he is than you - and it's entirely that he came up with some weird gotcha that people logically assume he couldn't be talking about because it's nonsense.
It's like watching a smug teenager who never grew out of it. No thanks, I don't need that reminder of how grating I was.
That comic reminds me of all the times I've seen someone pose the Monty Hall problem without mentioning that the host always reveals a goat in the first round.
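That detail is exactly what the puzzle hinges on. A quick simulation of both readings (the "host always shows a goat" version versus a host who opens a random other door and just happened to reveal a goat) makes the difference plain; this is just an illustrative sketch, not anything from the comic:

```python
# If the host always opens a goat door, switching wins 2/3 of the time.
# If the host opens a random other door and merely happened to show a goat,
# switching only wins 1/2 of the time (conditioning on the goat being shown).
import random

def trial(host_knows: bool) -> tuple[bool, bool]:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    others = [d for d in doors if d != pick]
    if host_knows:
        opened = next(d for d in others if d != car)   # host deliberately shows a goat
    else:
        opened = random.choice(others)                 # host opens blindly
        if opened == car:
            return False, False                        # no goat revealed; discard trial
    switch_pick = next(d for d in doors if d not in (pick, opened))
    return True, switch_pick == car

for knows in (True, False):
    results = [trial(knows) for _ in range(200_000)]
    valid = [win for ok, win in results if ok]
    print(f"host always reveals a goat = {knows}: switching wins {sum(valid)/len(valid):.3f}")
```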
I think he is easily one of the smartest people on YouTube and is producing some of the very best videos. He is also a brilliant educator. For examples, his videos on special [1] and general relativity [2] were real eye-openers for me, even though I have studied physics and have seen this stuff explained a million times before.
What I really don't like are the new video titles. At some point he started A/B testing them to maximize his revenue. He even made a video about it [3]. Of course, this selects for the most outrageous clickbait in order to get that sweet engagement. I'm starting to wonder whether he has taken into account how much this is starting to alienate long-term followers.
I'm subscribed to his channel, as he's covered some great science topics in the past. I skipped this video based on the title - it doesn't tell me anything about the content and "feels" more like a plea for the video to be watched than some good science broken down.
I imagine that, over time, this may shift his target audience. But if a new audience is comprised of a demographic he's aiming to reach, and it gets him the views, I can't fault his decision on how he titles the videos, even if they have me skipping them.
I thought it was actually just native advertising for Mythic.
Besides that, I think the topic is pertinent; there's a company local to me in Australia called Brainchip that is doing something similar, I think. Given that NNs are just a bunch of matrix multiplications, it's a promising approach.
I occasionally used to watch Smarter Every Day then he started doing a lot of this stuff, I think one of them might have been for an oil or defense company which I found utterly gross and I've avoided watching the channel ever since.
At the end of the day, specialized hardware, particularly on the analog side (neuromorphic, optical, etc.) locks development into the path of highly uniform feedforward networks by optimizing large matrix multiplications, and it is unclear if this tradeoff is worth it as we still have so much to figure out about which methods will make progress in AI.
First of all, there is no single accepted definition of "neuromorphic" [1]. Still, as a point in favour of the "neuromorphic systems are analogue" crowd: the seminal paper by Carver Mead that (to my knowledge) coined the term "neuromorphics" specifically talks about analogue neuromorphic systems [2].
Right now, there are some research "analogue" (or, more precisely "mixed signal") neuromorphic systems being developed [3, 4]. It is correct however that there are no commercially available analogue systems that I am aware of.
Unfortunately, the same can be said for digital neuromorphics as well (Intel Loihi is perhaps the closest to a commercial product, and yes, this is an asynchronous digital neuromorphic system).
Yes, analog computing can be more efficient and faster. But there are reasons why analog was eclipsed by digital.
1. Flexibility. Reprogramming digital computation is easy and quick.
2. Robustness. One can mass-produce devices that operate in the digital domain and sell them as working as spec'd. But operate them in the analog domain, and you will work with a million snowflakes that all operate slightly differently, giving different computation results. And when temperature changes, the result of your analog computation will inevitably change, too. You can work around that by adding more circuitry, and partly also on the algorithmic end but it will cost efficiency and precision.
Matrix multiplication on an analog device is great, unless you want an exact and reproducible result.
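A toy illustration of point 2 (all numbers invented): the "analog" multiply below applies a fixed per-device weight mismatch plus fresh readout noise on every run, while the digital result is bit-exact on every device, every time.

```python
# Toy model of the reproducibility problem: each analog device bakes in its
# own weight mismatch, and every run adds fresh readout noise. The mismatch
# and noise levels are placeholders chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))      # nominal weights
x = rng.standard_normal(64)            # input vector

def analog_matvec(W, x, device_seed, mismatch=0.02, noise=0.01):
    dev = np.random.default_rng(device_seed)
    W_dev = W * (1 + mismatch * dev.standard_normal(W.shape))   # baked-in device variation
    y = W_dev @ x
    return y + noise * np.random.default_rng().standard_normal(y.shape)  # per-run noise

digital = W @ x                                    # bit-exact, same on every device
a1 = analog_matvec(W, x, device_seed=1)
a2 = analog_matvec(W, x, device_seed=1)            # same device, different run
b1 = analog_matvec(W, x, device_seed=2)            # a different "snowflake" device
print("run-to-run drift:  ", np.abs(a1 - a2).max())
print("device-to-device:  ", np.abs(a1 - b1).max())
print("vs exact digital:  ", np.abs(a1 - digital).max())
```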
On point 2, another way to put it is that digital computers can get away with much, much sloppier components than analog computers.
A transistor in a digital computer (assuming TTL logic) only has to operate around 5 volts and 2 volts; the transistor's behavior in between doesn't really matter.
A transistor in an analog computer has to behave to high precision across its entire voltage range.
Go ahead and price out high-precision transistors and the environmental controls needed to keep them in spec, and you will see why we use digital computers.
The history of the Navy's NTDS air defense system is interesting because they were in a position where they could have gone either way: use known and understood analog computers, or go with the unknown new tech, digital computers.
> we are now getting into a regime of computing where exact reproducibility is not necessary.
I think that is a myth. Predictable, reproducible and explainable outcomes are the holy grail of computing, in particular in AI.
If stochasticity is desired, there are methods to inject it, with precise control of the level of stochasticity and the distribution.
This level of control is absent in analog computing. Device mismatch introduces some randomness, but it cannot be controlled in practice. Instead of adapting the randomness to the algorithm, one ends up adapting the algorithm to the randomness.
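For contrast, here is a small sketch of what "controlled" stochasticity looks like in a digital pipeline: you choose the distribution, the level, and the seed, and you can turn it off entirely. The parameter values are arbitrary.

```python
# If randomness is actually wanted, a digital pipeline lets you pick the
# distribution and level explicitly, and reproduce it exactly from a seed --
# the control that baked-in device mismatch doesn't give you.
import numpy as np

def noisy_forward(W, x, sigma=0.05, seed=42):
    rng = np.random.default_rng(seed)                  # reproducible: same seed, same "noise"
    noise = rng.normal(0.0, sigma, size=W.shape[0])    # chosen distribution and scale
    return W @ x + noise

W = np.eye(4)
x = np.array([1.0, 2.0, 3.0, 4.0])
print(noisy_forward(W, x))            # identical on any machine, on any run
print(noisy_forward(W, x, sigma=0))   # and the noise can simply be switched off
```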
Working with analog computers is a fascinating academic exercise. For practical applications, I doubt we’ll see it compete with digital computation anytime soon.
I said exact reproducibility. Every time you add numbers your neurons likely fire in slightly different ways, but the end result is what is reproducible and useful.
He's not totally correct because, like I said, the way we think of analog computers right now is likely not going to get us to where we need to be. Noise is a huge problem. Our own neurons do not seem to use continuous voltage, for example, and use pulse density coding, at least for some of their operation.
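Pulse-density coding is easy to picture with a toy encoder: a value in [0, 1] becomes the fraction of "on" pulses in a window, so no single noisy pulse matters much. This is just a first-order sigma-delta-style loop for illustration, not a claim about how neurons actually implement it.

```python
# Encode a value as pulse density: the information is in the average rate of
# pulses, not in any exact analog level.
def pulse_density_encode(value: float, n_pulses: int = 100) -> list[int]:
    acc, pulses = 0.0, []
    for _ in range(n_pulses):
        acc += value
        if acc >= 1.0:
            pulses.append(1)
            acc -= 1.0
        else:
            pulses.append(0)
    return pulses

bits = pulse_density_encode(0.37)
print(sum(bits) / len(bits))   # ~0.37, recovered just by counting pulses
```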
I'm a bit worried for that analog-matrix-multiplication-for-AI company. I vaguely remember reading somewhere that the past half century is littered with companies whose value proposition was "our specialized thing does 10x better than digital transistors" and then they predictably just got steamrolled by Moore's law. And although Dennard scaling ended two decades ago, the flops-per-watt and flops-per-second of AI-specialized chips like TPUs has been improving substantially recently [1][2].
You're correct that specialised analog companies have not done well historically. However, we don't find ourselves in exactly the same position in computer architecture/performance as we've been decades before.
There are some (relatively) new ideas that the performance of computers will now be pushed more by dedicated silicon for a dedicated purpose, and by tools. See for example "There's Plenty of Room at the Top" [1], or Hennessy's talk at Google [2].
This of course does not mean that analog computers are suddenly viable, but it does mean that they could potentially fill a niche where they failed previously.
Anecdotally, when looking at jobs for hardware design by the likes of Infineon, STM, Cyient etc. there seems to be a relatively high ask for (senior) analog designers, and a new focus on mixed-technology chips. It might turn out to be a dud still, but it isn't the same situation as decades before.
Fuzzy logic was pretty closely tied to Neural Networks at one point (80s to early 90s?) in terms of which researchers talked about them and even just which books info about them was located in. I think dedicated analog fuzzy logic chips were, as you say, steamrolled by Moore's Law.
(My Zojirushi rice cooker still mentions fuzzy logic, but they must just be implementing the fuzzy "transfer function" with an MCU at this point, right?)
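For what it's worth, that "transfer function" can be tiny when done in software. Here's a minimal sketch of the kind of fuzzy rule table an MCU might run; the membership breakpoints and output levels are made up, since a real cooker's table would be tuned by the manufacturer.

```python
# A few triangular membership functions plus weighted-average defuzzification,
# mapping temperature error (setpoint minus measured) to heater power.
# All breakpoints and outputs are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(error_c: float) -> float:
    """Fuzzy 'transfer function': degrees of membership -> weighted output."""
    rules = [
        (tri(error_c, -5.0, 0.0, 5.0),   0.1),   # "about right" -> low power
        (tri(error_c,  0.0, 10.0, 20.0), 0.5),   # "a bit cold"  -> medium power
        (tri(error_c, 10.0, 30.0, 60.0), 1.0),   # "cold"        -> full power
    ]
    total = sum(w for w, _ in rules)
    if total == 0:
        return 0.0 if error_c < 0 else 1.0
    return sum(w * out for w, out in rules) / total

for err in (-3, 2, 15, 40):
    print(f"error {err:>3} C -> heater duty {heater_power(err):.2f}")
```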
Found it a bit interesting. It goes over the history of AI / neural networks and says these do not need the precision of digital computers (mainly requiring matrix math), and that this can be done faster with analogue computers (using transistors as variable resistors).
In the intro to this video he mentions analogue computers predicting tides. His video before this one[1] goes into detail on that, and it was incredible.
In the 1800s, Lord Kelvin spent years working on tidal rise and fall patterns, applying Fourier Transforms to break them down into ten individual sine waves, then combining those sine waves back together to predict future tides. And built analog computers to do all the involved integration, multiplying and summing. It's a part of the history of computing I'd never heard of, despite hearing quite a lot about the dawn of electro-mechanical computers in the early 1900s.
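The core trick, in a few lines of Python rather than brass and pulleys: the tide is modelled as a sum of sinusoids at known astronomical periods, and the machine summed them mechanically. The constituent periods below (M2, S2, K1, O1) are the standard ones; the amplitudes and phases are invented, since in practice they were fitted to a year or so of tide-gauge readings for a particular port.

```python
# Harmonic tide synthesis: sum a handful of sinusoidal constituents and read
# off the predicted water level. Amplitudes/phases are hypothetical.
import numpy as np

constituents = [
    # name, period (hours), amplitude (m), phase (rad)
    ("M2", 12.4206, 1.20, 0.3),
    ("S2", 12.0000, 0.40, 1.1),
    ("K1", 23.9345, 0.25, 2.0),
    ("O1", 25.8193, 0.15, 0.7),
]

t = np.arange(0, 48, 0.25)     # hours ahead, 15-minute steps
height = sum(A * np.cos(2 * np.pi * t / T + phi) for _, T, A, phi in constituents)

peak = t[np.argmax(height)]
print(f"predicted high water at t = {peak:.2f} h, height {height.max():+.2f} m")
```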
The machines (or later derivatives) are in the Science Museum. One variant uses cones; I imagine they allow a huge range of variation in the parameters of the Fourier transform. Another uses pulleys, but changing out a pulley is a lot more work than moving the point where two cones rub against each other to transfer motion.
They sit alongside a Meccano differential analyser and bits of Babbage's original work.
That video has one of the best explanations of how and why deep learning works.
The argument for analog is weaker. I've tried some stuff with analog computers, and at one time I was into op-amp circuits. The basic problems are noise and inflexibility. However, that may change as people develop ICs that are reconfigurable, like an FPGA.
Using flash memory cells as multipliers by a value that is changeable but not dynamic is a new idea. That only leads to analog ICs that run a pre-trained neural net, though. Running neural nets isn't that expensive; it's training deep neural nets that needs entire data centers. If they can figure out a way to do the whole backpropagation thing in analog, that would be impressive.
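Roughly how the flash-cell trick works, as a toy model: weights become cell conductances, inputs arrive as voltages, and Kirchhoff's current law does the summing for free. The conductance range, programming error, and read-noise figures here are placeholders, not numbers from any real part.

```python
# Each weight is stored as a cell conductance G, inputs are voltages V, and
# the per-cell currents sum on each output line -- i.e. I = G @ V.
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(8, 16))           # trained weights (arbitrary units)

G_MAX = 1e-6                                    # assumed max cell conductance, siemens
G = np.abs(W) * G_MAX                           # magnitude -> conductance
sign = np.sign(W)                               # signs via differential pairs in practice
G_written = G * (1 + 0.03 * rng.standard_normal(G.shape))   # imperfect programming

def analog_layer(v_in):
    i_out = (sign * G_written) @ v_in           # Ohm's law per cell, summed per row
    i_out += 1e-9 * rng.standard_normal(i_out.shape)         # readout noise
    return i_out / G_MAX                        # scale back to "weight" units

v = rng.uniform(0, 0.5, size=16)                # input activations as voltages
print("ideal :", (W @ v)[:4])
print("analog:", analog_layer(v)[:4])
```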
Since the chip is meant for inference but not for training, in theory you only write to the Flash once or twice. In this case data retention is a more limiting factor than wear.
For digital storage the stored level can drift quite a bit before a 1 becomes a 0 or vice versa, but for analog even a small change can be a problem. So I don't know if we can extrapolate the typical flash memory data retention numbers to the expected aging of this device.
He's interested in creating analog circuits that are more efficient than digital ones for signal processing and machine learning.
I programmed an analog computer when I was working in the military-industrial complex long ago, implementing a standard missile guidance algorithm for an IR missile using a dual feedback loop and an adjustable nav ratio. It was kind of like programming with wires to implement math. It had noise limitations on accuracy, in part because of the many wires exposed to fluorescent lights and digital computers in the room. Quite interesting. I think the Analog Thing might make a good Eurorack music-making module! Turn math into music.
I liked some of veritasium's older stuff but it seems to be getting more and more clickbait as time goes on (which is reasonable as it's clear that YT is his job, and YT promotes clickbait above everything else).
A few people said that they "knew" it would be veritasium, but I had guessed it would be Mill computer folk :D
I have to say that the title is definitely “click baity” - I watched the video because I wondered what we were doing wrong. It is focused on how to improve AI when we run out of atom sized hardware. I don’t appreciate the title.
Let's use the LM358DT as an example of a modern op-amp. Just sitting there, it has current sources sinking 100uA of current. At 5 volts, it consumes about 0.7 mA with no load.
So, compare that to a 4 bit ALU (an old one, at that) the 74HC181. At 5 volts it consumes 80uA at idle.
Analog circuits require that transistors have bias current flowing through them at all times to ensure linearity. CMOS digital logic has the transistors either fully on or off, except during switching.
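Putting the quiescent numbers above side by side as static power at 5 V (these are the figures as quoted in the comment, not re-checked against datasheets; actual parts vary):

```python
# Static power comparison from the quoted idle currents at a 5 V supply.
V = 5.0
parts = {
    "LM358 op-amp (analog, idle)": 0.7e-3,    # ~0.7 mA quiescent, as quoted above
    "74HC181 4-bit ALU (idle)":    80e-6,     # ~80 uA quiescent, as quoted above
}
for name, current in parts.items():
    print(f"{name}: {V * current * 1e3:.2f} mW just sitting there")
```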
Going analog isn't as great an option as this video makes it seem.
I feel like it was a bit misleading to mention the large amount of energy it takes to train a model and then show a chip that can only be used for inference, not training. In general, people are optimizing for power use at the edge, not for training.