I can't find the reference now, but I remember reading about some early computer engineers (possibly on Whirlwind?) contemplating the use of a microwave retransmitting station as a form of memory, essentially creating a "mercury" delay line with microwaves in the atmosphere.
This is how desperate people were for anything even remotely affordable which could be tortured to behave somewhat like memory! No wonder Intel made a bundle when they started offering chip memory. The stuff it was replacing was just totally inadequate for the purposes many people wanted to put it to.
Exactly. Electronic arithmetic hardware predates WWII. IBM had an electronic multiplier working. ENIAC was a giant plugboard machine. It's not that people didn't think of stored-program computers before Turing. It's that there was nothing in which to store the program.
IBM built machines with plugboard memory. Relay memory. Electromechanical memory. Punched-card memory. Punched tape memory. Drum memory. Look at the history of the IBM 600 series machines, a long battle to get work done cost-effectively with very limited memory.
Delay line memory was sequential and slow. Williams tubes were insanely expensive per bit. Core memory was a million dollars a megabyte until the early 1970s and never got much cheaper. There was plated wire memory, thin film memory, and various ways to build manually updated ROMs. All expensive.
Then came semiconductor IC memory (1024 bits in one package!) and things started to move.
This is way out of my knowledge domain, so I'm curious why mercury delay loops were used in the first place.
Thus, there is an inherent tension between making the tubes longer (more storage per tube) and keeping them short (lower access times). The same tradeoff applies to every form of "delay" memory, including the wire torsion memory the grandparent comment mentions.
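To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch in C. The constants are ballpark figures I'm assuming (acoustic velocity in mercury is about 1450 m/s; EDSAC-era lines ran pulse rates in the 0.5-1 MHz range), not specs for any particular machine:

    #include <stdio.h>

    /* Back-of-the-envelope tradeoff for an acoustic delay line.
       Assumed ballpark figures: speed of sound in mercury ~1450 m/s,
       pulse (bit) rate ~1 MHz. A longer tube holds more bits in
       flight, but the average wait for any given bit grows with it. */
    int main(void) {
        const double speed = 1450.0;  /* m/s, acoustic velocity in mercury */
        const double rate  = 1.0e6;   /* bits per second                   */

        for (double len = 0.5; len <= 2.01; len += 0.5) {
            double delay = len / speed;   /* one full recirculation, s */
            double bits  = delay * rate;  /* capacity: bits in flight  */
            double wait  = delay / 2.0;   /* mean access latency, s    */
            printf("%.1f m tube: %4.0f bits, %6.1f us average wait\n",
                   len, bits, wait * 1e6);
        }
        return 0;
    }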
One of these computers was the machine at Manchester that Alan Turing worked on:
There were two sources of noise: external noise, from stray electromagnetic fields; and internal noise, caused by leakage of electrons when reading from or writing to adjacent spots. External noise could, for the most part, be shielded against, and internal noise was controlled by monitoring the "read-around ratio" of individual tubes and trying to avoid running codes that revisited adjacent memory locations too frequently -- an unwelcome complication to programmers at the time. The Williams tubes were a lot like Julian Bigelow's old Austin. "They worked, but they were the devil to keep working," Bigelow said.
This phenomenon is very similar to the recently discovered row hammer vulnerability in DRAM, except that it predates row hammer by roughly 65 years.
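For comparison, the access pattern that triggers row hammer is famously small. A minimal sketch of the classic hammering loop (the two pointers here are hypothetical; a real test has to pick addresses that land in different rows of the same DRAM bank, and the cache flushes are what force every read out to DRAM):

    #include <stdint.h>
    #include <emmintrin.h>  /* _mm_clflush */

    /* Sketch of the classic row-hammer loop. x and y are hypothetical
       pointers that must map to different rows of the same DRAM bank.
       Flushing them from cache makes each read re-activate its row in
       DRAM, disturbing the charge in physically adjacent rows. */
    static void hammer(volatile uint8_t *x, volatile uint8_t *y, long n) {
        for (long i = 0; i < n; i++) {
            (void)*x;                      /* activate row of x */
            (void)*y;                      /* activate row of y */
            _mm_clflush((const void *)x);  /* evict so next read hits DRAM */
            _mm_clflush((const void *)y);
        }
    }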
The biography Alan Turing: The Enigma (which I highly recommend) also goes into a lot of detail about the early computers that Turing worked on:
I learned about Turing Machines as a CS student a long time ago, and got the false impression that he was only involved in theoretical pursuits (theory of computation, algorithms for cryptography, etc.). It was only after reading this book that I learned how much he had contributed to the design of actual computing hardware.
> trying to avoid running codes that revisited adjacent memory locations too frequently
And today, it's exactly the opposite: we try to write code that has as much locality of reference as possible so that we can avoid expensive cache misses.
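The textbook illustration is matrix traversal order. A small sketch (the array size is an arbitrary assumption): both functions do identical arithmetic, but the first walks memory sequentially while the second strides across it:

    #include <stddef.h>

    #define N 4096

    /* Row-major traversal: consecutive accesses are adjacent in memory,
       so every cache line fetched is fully used before moving on. */
    double sum_rows(const double a[N][N]) {
        double s = 0.0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Column-major traversal of the same data: each access jumps
       N * sizeof(double) bytes, touching a new cache line almost
       every time -- far more misses for the exact same work. */
    double sum_cols(const double a[N][N]) {
        double s = 0.0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }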
* A CRT RAM thing. It's a RAM that needs constant refreshing and takes advantage of the brief persistence of charge on the phosphor screen.
A-Z is non-sequential. I'm guessing it's something to do with making the character selection logic simpler, but just by looking at the letters I couldn't come up with a definite pattern or rule.
EBCDIC has discontinuities in the same places: I-J and R-S.
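You can see the gaps just by printing the code points. EBCDIC's letters sit in three runs inherited from punched-card zone encoding (A-I under the 12-zone, J-R under the 11-zone, S-Z under the 0-zone, where the digit punches start at 2 rather than 1):

    #include <stdio.h>

    /* EBCDIC letters come in three runs with gaps at I-J and R-S:
         A-I: 0xC1-0xC9,  J-R: 0xD1-0xD9,  S-Z: 0xE2-0xE9
       so 'I'+1 != 'J' and 'R'+1 != 'S', unlike ASCII. */
    int main(void) {
        const unsigned char ebcdic[26] = {
            0xC1,0xC2,0xC3,0xC4,0xC5,0xC6,0xC7,0xC8,0xC9, /* A-I */
            0xD1,0xD2,0xD3,0xD4,0xD5,0xD6,0xD7,0xD8,0xD9, /* J-R */
            0xE2,0xE3,0xE4,0xE5,0xE6,0xE7,0xE8,0xE9       /* S-Z */
        };
        for (int i = 0; i < 26; i++)
            printf("%c = 0x%02X\n", 'A' + i, ebcdic[i]);
        return 0;
    }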
Why can't I have simple hobbies.
It can be worked with safely if you follow the necessary precautions, but unless you are familiar with those, it's better to heed the disclaimer and "not try this at home".
The main issue is that capacitors keep their charge after unplugging, so you might have some that are still dangerously charged.
I remember that old CRT TVs had a big notice inside about discharging them before servicing some parts of the circuit.
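The underlying arithmetic is just the RC discharge curve, V(t) = V0 * e^(-t/RC), so you can estimate how long a bleeder resistor needs. A quick sketch; the component values below are arbitrary assumptions for illustration:

    #include <stdio.h>
    #include <math.h>

    /* Time for a capacitor to bleed down through a resistor:
       V(t) = V0 * exp(-t/(R*C))  =>  t = R*C * ln(V0/Vsafe).
       All values are arbitrary assumptions for illustration. */
    int main(void) {
        const double v0    = 2000.0;  /* initial charge, volts        */
        const double vsafe = 50.0;    /* commonly cited caution level */
        const double r     = 1.0e6;   /* 1 Mohm bleeder               */
        const double c     = 1.0e-6;  /* 1 uF capacitor               */

        double t = r * c * log(v0 / vsafe);  /* ~3.7 s here */
        printf("Bleed from %.0f V to %.0f V: %.1f s\n", v0, vsafe, t);
        return 0;
    }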
The first one especially has saved me more than a dozen times at this point. Yay for required-by-law GFCIs at the house level.
I also recommend getting some Schuko (CEE 7/3 and CEE 7/4) or UK plugs and sockets, even if you're in the US, just so you can avoid the safety nightmare that is the US plug. (Though I'm not sure if you can do them as permanent installs in the US... they're still neat for lab equipment.)
edit: you should have respect for anything above 50V or so; past that point it can be quite dangerous. 120V is way above the point where I start using safety equipment.
But yes, working with codes and guidelines is the best option here.
In many ways mains power would be safer without ground, and a lot cheaper. However, a couple of failure cases are even worse without ground, and they are the kind you only find out about when somebody dies. Thus we put grounds in houses.
edit: also, if you put two hands into a device, it won't matter much whether it's on the other side of an isolation transformer or not. The GFCI might not trip in that case.
That was my point.
The only reasonable advice is to not even touch the stuff unless you’re an expert with plenty of training. Especially since there’s nothing to be gained except satisfying a useless curiosity.
I will gladly touch stuff to change my lightbulb; I'm not going to consult an expert for that. The same goes for swapping fuses and other similarly simple procedures that are reasonably safe if you are "just careful".
Useless curiosity is where the majority of human progress comes from.
Playing with HV can be fun, just build fly swatter-level inverters instead of using a transformer/capacitors big enough to kill you.
A CCFL inverter from an LCD screen is a good start; you can light neon bulbs and generate ozone.
This thread serves as a great example of why it's not a good idea to plaster dire warning labels all over everything on the planet "just to be safe." When everything is dangerous, nothing is.
The issue is that it's a lot easier to build or buy a kilovolt power supply that doesn't have adequate current limiting than one that does. And even one that has it in theory may not have it in practice — that big capacitor across the output? Make sure it isn't just wired directly to the output terminals, because its ESR sure as hell isn't going to be adequate current limiting.
So I think it's reasonable to be wary of kilovolt circuits. Dying is easy, but you only get to try it once.
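Rough Ohm's-law arithmetic shows why the series resistor is the whole game. The 1 kohm body-resistance figure below is an assumed worst-case ballpark that's commonly cited, not a safety spec:

    #include <stdio.h>

    /* Why current limiting matters at kilovolt levels. The 1 kohm
       body resistance is an assumed worst-case ballpark figure. */
    int main(void) {
        const double v       = 1000.0;  /* supply voltage          */
        const double r_body  = 1.0e3;   /* assumed body resistance */
        const double r_limit = 10.0e6;  /* 10 Mohm series resistor */

        double i_bare    = v / r_body;              /* no limiting */
        double i_limited = v / (r_body + r_limit);  /* with it     */

        printf("no limiting:   %7.1f mA (far past lethal)\n", i_bare * 1e3);
        printf("10 Mohm limit: %7.2f mA (below perception)\n",
               i_limited * 1e3);
        return 0;
    }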
While van de Graaff generators are indeed relatively affordable, and indeed some even cost less than US$200, a working microwave oven costs US$60, and a broken one can be had for under US$10. Furthermore, a safe van de Graaff generator is not actually capable of supplying enough current at the high voltage to operate a vacuum tube, while a deadly microwave-oven transformer is; and there are orders of magnitude more microwave ovens available.
That's... not really what I'd call a video amplifier, unless the amplifier itself is also made out of tubes.