Building a vacuum tube computer (ltu.se)
107 points by ChickeNES on Aug 14, 2019 | 46 comments

>but electron speed in copper is about 15-20cm per nanosecond so it is usually not a big factor.

I'm going to be extremely pedantic for a moment, but this is a case where there's a bit of a difference between electricity and electrons. The speed of electricity (aka electrical energy, aka electric field propagation through copper) is 15-20 cm per nanosecond. The speed of the electrons themselves, aka the drift velocity, is far slower and is governed by the current, the free-carrier density, and the conductor cross-section: Vd = I/(n·e·A). For a 91 W TDP desktop CPU at maximum current draw, the drift velocity through the power connector would be about 245 cm per hour. Yes, per hour. For AC circuits the net drift is effectively zero because the electrons vibrate back and forth around their starting positions.
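You can sanity-check that figure from Vd = I/(n·e·A). A minimal sketch in Python, using my own assumed numbers (not stated in the comment): roughly 91 A of current (91 W at ~1 V core voltage) through ~10 mm² of combined copper conductor:

```python
# Drift velocity: v_d = I / (n * e * A)
n = 8.5e28       # free-electron density of copper, per m^3
e = 1.602e-19    # elementary charge, coulombs
I = 91.0         # current in amps (assumed: 91 W TDP at ~1 V core voltage)
A = 10e-6        # conductor cross-section in m^2 (~10 mm^2, assumed)

v_d = I / (n * e * A)                    # drift velocity, m/s
v_cm_per_hour = v_d * 100 * 3600         # convert m/s -> cm/hour
print(f"{v_cm_per_hour:.0f} cm/hour")    # roughly 240 cm/hour, same ballpark as 245
```

With those assumptions the result lands within a few percent of the 245 cm/hour quoted above; a slightly thinner conductor gives exactly that number.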

This blew my high-school-physics mind; is there a quick primer / ELI5 on this somewhere? Are you saying that if I connect a battery to a lightbulb, "electricity" moves at speeds we're accustomed to, but "electrons" will only move from one terminal of the battery, through the bulb, to the other terminal, way way slower than laypeople think? Or did I misread and oversimplify your message? :|

Basically correct. If you could attach a magical GoPro to a single electron in a conductor and apply a voltage, you would see that it zips around at very high speed in various directions across very small distances and only averages out as moving in the right direction. That average velocity is the drift speed. If you magically labeled all the electrons in the lamp's wiring, you would find it contains (almost) all the same electrons it started with. The company just bills you for the energy it took to move those electrons back and forth like the teeth on a sawblade.

You can also think of it like water in a filled tube. If you pump more water into one end the person on the other end will get your signal long before the physical water you put in to create the signal reaches him. You could put a water wheel or propeller in the tube to convert the energy of the moving water into work, and just like the lamp you don't have to wait until the physical molecules of water you pumped in reaches the wheel to start using energy from the moving water.

> You can also think of it like water in a filled tube. If you pump more water into one end the person on the other end will get your signal long before the physical water you put in to create the signal reaches him.

This made it click for me, thanks!

Exactly: the signal propagation speed from turning on the switch arrives at the lightbulb at a high fraction of the speed of light. The actual charge carriers take a lot longer. My own preferred analogy is a bicycle chain: pressing on the pedals transfers energy to the back wheel almost immediately, you don't have to wait for individual links to reach the back wheel. And the disposition of electrons is a lot closer to the rigidity of a chain. Just as with the links, each exerts a force on the next.

On electronics.stackexchange we battle this misconception a lot, e.g. https://electronics.stackexchange.com/questions/245610/is-vo...

For almost all practical purposes you should ignore electrons when doing electronics. They're only relevant in detailed theoretical analysis of semiconductors, or (as per article) in vacuum tubes and related items (VFD displays, CRTs).

I'll add one more example to the others: picture a crowded hallway full of people all trying to move in the same direction. The flow of people is quite slow, but if one person in the back trips and pushes the next person and so on, the domino effect will travel much faster than the individual people.


>In metals, electric current is a flow of electrons. Many books claim that these electrons flow at the speed of light. This is incorrect. Electrons in an electric current actually flow quite slowly; at speeds on the order of centimeters per minute. And in AC circuits the electrons don't really "flow" much at all, instead they sit in place and vibrate. It's the energy in the circuit which flows fast, not the electrons.

> Metals are always full of movable electrons. In a simple circuit, all of the wires are totally packed full of electrons all the time. And when a battery or generator pumps the electrons at one point in the circuit, electrons in the entire loop of the circuit are forced to flow, and energy spreads almost instantly throughout the entire circuit. This happens even though the electrons move very slowly.

yeah, electrical power propagates way faster than electrons drift (and the power flux is primarily outside the wire, but that's a fun story for a different day).

One way to think about it is as a Newton's cradle sorta deal. It's not perfect, but fairly close. Also maybe flicking a jump rope: energy propagates, but the bits of rope don't move from beginning to end.

Before I read your comment, I thought the description sounded like Newton’s Cradle.

Flicking a jump rope sounds more akin to AC to me, though. Maybe, pulsed DC?

What about a train or bumper car analogy (not joking)?

I would really like to see this as a visualization.

Think of a large tube filled with tiny balls. If you push on one end, the other end moves almost immediately, but the balls themselves are barely moving.

The analogy from my physics class long ago: we each hold one end of a broomstick. If I shove the stick, you'll feel it move after a delay set by the speed of sound in wood (different from the speed of sound in air), yet the stick itself physically moves a good deal slower than the force it transmits. If I whack the broomstick you'll feel it vastly sooner than I could hand you the entire stick. The speed of sound in wood is thousands of miles per hour, but I can't physically move a wooden broomstick faster than a few dozen miles per hour.

Bulk material moves slower than forces move... usually. The physics of shock waves and supersonic stuff is interesting.

Thanks for all the replies, that is fascinating. I intuitively understand that for AC it all kinda sorta averages out without much movement. But for DC, I assumed electrons actually moved at a meaningful and impressive velocity on human scales: not as fast as the electricity propagation (I appreciate the analogies and they make sense), but I imagined it'd still be way, way faster than "centimeters per hour" :O

It astonishes me that so much energy can be transmitted / work can be done, with so little actual movement of electrons... I'll follow the links / articles suggested and see what I can glean; thanks all! :)

In a lightning strike the electrons move a considerable distance, but even there the speed is relatively low.

Also astonishing is to calculate the weight, or mass, of the electrons participating in a lightning discharge. So little mass, so much energy.

Absolutely. Electrons are accelerated by the field until they hit something. The average velocity of all the electrons due to the competing acceleration and collisions is the drift velocity and it's very slow.

On the other hand, electric fields move quickly. At some large proportion of the speed of light depending on the particular materials of the wire and some other factors.

You might want to look into "phase velocity" and "group velocity", for example on wiki: https://en.wikipedia.org/wiki/Phase_velocity#Relation_to_gro... .

Won’t even get that far, because it’s alternating current. It changes direction a hundred times a second.

“A battery” is direct current.

I would not target RISC-V. They should review much simpler machines, like the Princeton IAS or the Manchester Baby, and they should try a bit-serial design.


"By June 1948 the Baby had been built and was working.[23] It was 17 feet (5.2 m) in length, 7 feet 4 inches (2.24 m) tall, and weighed almost 1 long ton (1.0 t). The machine contained 550 valves (vacuum tubes)—300 diodes and 250 pentodes—and had a power consumption of 3500 watts."

Also look at super tiny tubes:


Here is a bit serial RISC-V: https://github.com/olofk/serv
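To see why bit-serial designs need so little hardware: a serial adder is just one full adder plus a carry flip-flop, clocked once per bit. A rough Python model of the idea (illustrative only; this is not how SERV is actually written):

```python
def serial_add(a, b, width=32):
    """Add two integers one bit per 'clock cycle', the way a bit-serial
    ALU does: a single full adder plus one carry flip-flop."""
    carry = 0
    result = 0
    for i in range(width):                 # one loop iteration = one clock
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        s = abit ^ bbit ^ carry            # full-adder sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))  # full-adder carry-out
        result |= s << i
    return result & ((1 << width) - 1)

print(serial_add(100, 23))  # 123
```

The tradeoff is exactly the one mentioned upthread: 32 clocks per add instead of 1, in exchange for a tiny fraction of the logic.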

The super tiny tubes are cute. They look like they have to be soldered in though. Could be annoying to replace when they break down? The bigger tubes are socketed.

There are special types of high-reliability tubes used for computers. If I recall correctly, one of the tradeoffs is that they use a different alloy for the wire, which makes the wire more expensive to manufacture because the process wears out the tooling more quickly. I can’t find the article where I found this information, though.

The key to long life for valve (tube) computers was never turning it off.

That’s not really the key. When ENIAC was first run, several tubes would fail each day—and yes, it was turned on all the time. They addressed these problems by adjusting heater currents. In general, there’s a tradeoff here because you don’t want to strip the cathode (which can wear out a tube quickly) and you want good performance from the tube. The on-off cycle effect on filaments is also largely due to surge currents with a cold tube, at least for typical designs, and there are various techniques to limit the surge currents which will extend tube life.

These were socketed too: https://en.wikipedia.org/wiki/Nuvistor

Ya, but could you source 8000 of those today at a reasonable price?

Some of the early tube computers had some really interesting circuits. One that comes to mind is the use of "Kirchhoff adders," which are more-or-less analog adders with easy one-bit DACs and ADCs on the inputs/outputs. I can't remember if this ended up in the "final" IAS machine, but I'm pretty sure it was used in the MANIAC and is described in the "Second Interim Progress Report on the Physical Realization of an Electronic Computing Instrument": https://library.ias.edu/files/pdfs/ecp/secondinterimpro02ins....
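A toy model of the idea, assuming (my reading, not the report's wording) that the adder sums currents at a node per Kirchhoff's current law and then thresholds the analog total back to digital bits:

```python
def kirchhoff_full_adder(a, b, cin):
    """Toy model of an analog 'Kirchhoff adder': three input bits each
    drive one unit of current into a summing node (Kirchhoff's current
    law), and threshold detectors (crude 1-bit ADCs) recover the sum
    and carry bits from the analog total."""
    node = a + b + cin               # analog current at the node: 0..3 units
    carry = 1 if node >= 2 else 0    # threshold at 2 units -> carry-out
    s = 1 if node in (1, 3) else 0   # odd total -> sum bit set
    return s, carry
```

The appeal for tube machines is obvious: the "addition" itself is free (wires joining at a node), and tubes are only spent on the threshold detection.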

Seems like a PDP-8 would be a smarter start. Only 8 instructions, IIRC, and a few 12-bit registers. And it can emulate a PDP-11, so (the emulator) could run UNIX.


BTW, the PDP-11 (and PDP-8?) used 7400-series logic chips, not discrete transistors, so I would say it's already complex enough for tubes.

The original -8 (no slash anything) used discrete transistors and DTL logic.

Going the opposite direction, I always wanted to reimplement the flip-chip modules using SMD technology just for fun. A discrete-transistor SMD PDP-8 that fits on a desk would superficially seem realistic, at least for a large enough desk. Honestly, I just don't want to pay for the connectors or wire up the backplane. SMD components could turn 60s-era "handheld" flip-chip modules into postage stamps, but no technological progress has happened for wire-wrapped backplanes, so my SMD design's backplane would remain exactly the same size and complexity as the original -8, unfortunately. I'd have a giant tangle of wire-wrap wires and a couple hundred postage-stamp-size "nano-flipchips". Probably 50 pounds of interconnect copper and 1 pound of circuit boards, LOL.

This is what's likely to get OP. It's easy to make an adder or a latch, but it takes an entire spool of wire to connect one up to the overall circuit. Lots of work.

Sure, if you use 1950s germanium transistors, RTL and DTL logic is no fun anymore. I always wanted to try RTL and DTL using something more fun, like 2010s era microwave low power amplifier FETs. I would not like to pay for it, but it would be a lot of fun to build a 50 GHz ALU or similar. I have no idea what I'd do with it other than say "wow" a lot, which makes it an ideal hobby experiment.

As far as backplanes go, you could go with FPC cables instead of wire-wrapping. That would fit the new components theme, but would require actual circuit boards for the backplanes.

However, there's a lot of micro-decoding for individual instructions, and you wind up handling microprogramming, which raises the complexity even further.

Maybe they only recently decided to target RISC-V, but the ALU design does not cover all the functions needed. It is missing an implementation of the shift instructions; for those they need a 32-bit, 5-level barrel shifter. Plus they also need a comparison block, which is quite complex on its own. Given the number of years it has taken to get to this point, it could be 2050 before it is working.

You don't need a barrel shifter. You can shift one bit at a time. It's slower, but takes far fewer components.
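A variable shift done one bit per clock needs only a shift register and a down-counter, no mux tree at all. A quick sketch of the behavior:

```python
def shift_left_serial(value, amount, width=32):
    """Variable left shift done one bit per 'clock cycle': a shift
    register plus a down-counter, instead of a multi-level barrel
    shifter. Slower (up to width-1 clocks) but far fewer components."""
    mask = (1 << width) - 1
    for _ in range(amount):          # one clock per bit position shifted
        value = (value << 1) & mask  # shift register advances one place
    return value

print(shift_left_serial(1, 5))  # 32
```

For a tube machine the clock cycles are cheap compared to the hardware, which is exactly the tradeoff being argued for here.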

Capacitor memory addressed with tubes seems a strange choice. DRAM is capacitor memory, of course. Atanasoff, who had a sort of computer in 1939, used capacitor memory, but he had to address it with a rotary drum switch.

Memory was the big problem in the early days. IBM had an electronic multiplier before WWII, and plugboard-programmed machines, but no good memory elements. (Just registers with motor driven wheels and clutches and contacts.) Pilot ACE had a delay line (slow, serial access), the Manchester Baby had a Williams tube (too expensive per bit, but random access), and the EDVAC had mercury tank delay lines (slow, serial access, and toxic). Whirlwind (1951) had the first core memory (expensive per bit, but got cheaper over time.)

Core would be a reasonable choice for a tube system. Addressing is XY, so you need O(sqrt(N)) tubes.
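The O(sqrt(N)) point in concrete numbers: a coincident-current core plane of N locations needs only sqrt(N) X-line drivers plus sqrt(N) Y-line drivers, where a flat capacitor array needs one select driver per location. Assuming a hypothetical 4096-word plane:

```python
import math

N = 4096                             # storage locations (assumed example size)
xy_drivers = 2 * math.isqrt(N)       # X drivers + Y drivers, coincident-current core
flat_drivers = N                     # one select line per capacitor
print(xy_drivers, flat_drivers)      # 128 4096
```

At tube prices, 128 drivers versus 4096 is the difference between a buildable machine and a non-starter.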

Memory was a million dollars a megabyte as late as 1970.

I agree. Most people don't realize just how big a problem memory was in the early years of computers. People think about how to implement the ALU with various technologies, but memory is where things get sticky.

Core memory was much better than the earlier approaches you described, and is really the only way to go for a pre-DRAM computer. Once core memory came along, the other approaches vanished. The IBM tube computers used dynamically-refreshed capacitors for register storage (called a Havens Delay Unit), but I agree that capacitor memory is a strange choice for main memory since it needs O(N) tubes.

> You don't need a barrel shifter. You can shift one bit at a time. It's slower, but takes far fewer components.

I think I've finally found someone to ask this question.

How many transistors and how much die space did it take to implement an 8-bit barrel shifter in the 1980s? I found the computing capabilities of 8-bit CPUs in retrocomputers, like the Z80 or 6809, were actually not too bad. With shift-and-add, a lot of computation can be done effectively, but their biggest limitation is the lack of a barrel shifter. Without one, no constant-time bitshift with a variable step is possible; you have to shift one bit at a time. Performance could be boosted dramatically if a single 8-bit barrel shifter were included, and it would open the avenue to a lot of optimization techniques for graphics as well.

Why didn't they include one? Especially when you consider that the Z80 was an "ultimate upgrade" of the Intel 8080, and the 6809 was the "ultimate upgrade" of the 6800.

Was it technical limitations, i.e. even an 8-bit barrel shifter was still too expensive for an 8-bit chip? Or was it that the cost of the additional shift instruction decoding/processing was much higher than the barrel shifter itself? Or was it the lack of demand, i.e. it could have been done reasonably well at acceptable cost, but there was simply no commercial reason in the 8-bit era to add one?

I examined the ARM-1's barrel shifter on the die [1]. It takes roughly 10% of the chip so it's a fairly hefty investment.

8-bit chips probably didn't have a barrel shifter because they were very limited in functionality. (I didn't realize how limited until I looked at mainframes of the time.) Functions like multiply and divide are more useful than barrel shifting, and those were missing too. Barrel shifting is sort of a frill. As for the Z80, it was very, very tight on space so it's not surprising the barrel shifter was lacking. It didn't even have an 8-bit ALU; it had a 4-bit ALU that was used twice per operation.

[1] http://www.righto.com/2015/12/reverse-engineering-arm1-ances...

> I examined the ARM-1's barrel shifter on the die. It takes roughly 10% of the chip so it's a fairly hefty investment.

> Functions like multiply and divide are more useful than barrel shifting, and those were missing too.

My thought was that the lack of a multiplier/divider could be somewhat compensated for by adding a cheaper 8-bit barrel shifter instead. So according to your analysis, apparently even that didn't make economic sense for an 8-bit chip.

Thanks a lot for giving an authoritative assessment, Ken!

Most of the 8-bit microprocessors I'm aware of -- including the Z80 and 6800 -- only had single-bit shift/rotate instructions, which didn't require a barrel shifter. Variable shift instructions were comparatively rare.

Besides, a 32-bit barrel shifter is considerably more complex than an 8-bit one. It's not just four times larger to deal with the wider operands; it also needs another two levels of muxes, making it closer to 6x as complex.
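The "closer to 6x" estimate falls out of counting mux stages: a width-W log-shifter needs log2(W) levels of W 2:1 muxes each. A quick check:

```python
import math

def barrel_mux_count(width):
    """Approximate 2:1 mux count for a log-type barrel shifter:
    log2(width) stages, each stage one mux per bit of width."""
    return width * int(math.log2(width))

print(barrel_mux_count(8), barrel_mux_count(32))  # 24 160
```

So an 8-bit shifter is ~24 muxes (3 stages) against ~160 (5 stages) for 32 bits, a factor of about 6.7.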

> most of the 8-bit microprocessors only had single-bit shift/rotate instructions

Yes, they don't have a barrel shifter because they don't have variable shift instructions. But unfortunately, that doesn't answer the actual question: why didn't they have them, then? As I've already asked, was it technical/cost limitations, or was it simply the lack of demand/commercial reason to add one?

Probably more: we bought 1.5Mb of core for our B6700 in the mid-to-late '70s for ~US$1.25

RISC-V has pretty big registers and a lot of them, so you will need more tubes than you otherwise might...

I'd probably say a 6800/6805 is about the right complexity for doing in tubes and not breaking the bank

there's an embedded subset with half the number of registers

That's still huge for the technology. The tube machines were on the order of a hundred or so gates; an RV32E is around 10k-15k.

(and people have talked about doing 16-bit subsets)

