An 8-tube module from a 1954 IBM 705 mainframe: it's a key debouncer (righto.com)
137 points by fanf2 on Jan 7, 2018 | 56 comments

As for how the inverter works: when the input (pin 2) is high, current through the tube pulls the plate (pin 1) low. Conversely, when the input is low, the electron flow is blocked and the resistors pull the plate high.

Note that "high" and "low" here are quite different from the usual digital logic conventions of high=+V, low=GND; in this case, high would be near GND while low is a negative voltage. The circuit manual PDF gives low=-30 and high=+10. A lot of other early transistor logic families operated with such "unusual" (for today) signal levels too.

The other notable characteristic of tube circuits is the high resistor values --- this is because they operate at low current (in the mA range) but high voltage (hundreds of volts). Contrast this with transistor logic, which is relatively high current and low voltage.

One amusing component I found in the tube module was "Vitamin Q" capacitors.

Those are actually very desired by audiophiles, so it's odd to see them in a digital circuit. They're paper-in-oil capacitors and "Vitamin Q" was Sprague's trade name for the proprietary oil they used.

> A lot of other early transistor logic families operated with such "unusual" (for today) signal levels too.

One wacky thing in IBM's transistor computers is that they would alternate NPN and PNP gates. This avoided an extra transistor in each gate to shift the voltage level back. So you'd end up with one gate using +12V/0V logic levels and the next one using +6V/-6V.

This technique was used in three different transistor logic families, each with their own voltages. So there were 6 different transistor logic levels, plus other miscellaneous logic levels (e.g. 48V relays). IBM's old computers were a crazy collection of different voltage levels.

> transistor logic which is relatively high current

Huh? The gate resistance of a FET is hundreds of megaohms, so the input current is measured in nanoamps. That doesn't seem like high current to me.

That's the static resistance, but when it's switching there is current to charge/discharge the gate. Don't forget leakage currents too, which are quite high in modern CPUs --- they consume ~100A at ~1V when in active use.

Sure, but that is amortized over billions of transistors. If you had billions of vacuum tubes the current draw would be vastly higher.
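The rough magnitude of that switching current is easy to estimate with I ≈ N·a·C·V·f (transistor count, activity factor, gate capacitance, supply voltage, clock frequency). The numbers below are illustrative assumptions, not measured values for any particular chip:

```python
# Back-of-the-envelope estimate of CMOS dynamic (switching) current:
# I ~ N * a * C * V * f. All parameter values below are rough assumptions.

def dynamic_current(n_transistors, activity, gate_cap_f, v_supply, freq_hz):
    """Average gate-charging current in amps: charge moved per cycle
    times the number of cycles per second."""
    charge_per_cycle = n_transistors * activity * gate_cap_f * v_supply
    return charge_per_cycle * freq_hz

# Assumed: 1e9 transistors, 10% switching per cycle, ~1 fF/gate, 1 V, 3 GHz
i = dynamic_current(1e9, 0.1, 1e-15, 1.0, 3e9)
print(f"{i:.0f} A")  # 300 A -- same order as the ~100 A figure above
```

Which supports the point: per transistor the current is tiny, but summed over a billion of them it dominates.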

Transistors in TTL logic use mA of input current because they're bipolar, not FET transistors.

In classic TTL the input currents are also very asymmetric, e.g. for a high level you mustn't draw more than a few dozen µA out of an input, for a low level you have to draw something like 1.5 mA or so.

This is different from current-steering based logic (all kinds of ECL, including CML/SCL), where the current in the circuit stays the same but only takes a different path depending on state. Supply current is largely independent of circuit state with these.

That's true, but we're talking about computers here. Computer chips don't use TTL, they're all CMOS, which uses FETs (MOSFETs to be precise).

CMOS didn't exist when the first transistorized computers were made.

Ah, right. I thought the OP was referring to modern CPUs.

Quite different from a modern key debouncer circuit like http://zipcpu.com/blog/2017/08/04/debouncing.html .

The modern FPGA circuit first passes the input through two latches to synchronize it to the system clock, and then has a digital counter to add a delay to rapid bounces. Here the delay is analog, and there is no clock input: the output digital signal is still completely asynchronous.
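The counter scheme can be modeled in software; this is a hypothetical sketch of the approach described above (synchronized input, counter that must saturate before the output flips), not the actual HDL from the linked post:

```python
class CounterDebouncer:
    """Model of a clocked debouncer: the output only changes after the
    (already synchronized) input has held a new value for `hold` clocks."""
    def __init__(self, hold=5, initial=0):
        self.hold = hold
        self.out = initial
        self.count = 0

    def clock(self, raw):
        if raw == self.out:
            self.count = 0          # input agrees with output: reset timer
        else:
            self.count += 1         # input differs: count stable cycles
            if self.count >= self.hold:
                self.out = raw      # held long enough: accept new value
                self.count = 0
        return self.out

d = CounterDebouncer(hold=3)
# A bouncy press: a 1,0 glitch, then the input settles at 1
samples = [1, 0, 1, 1, 1, 1]
outs = [d.clock(s) for s in samples]
print(outs)  # [0, 0, 0, 0, 1, 1] -- glitch rejected, flip after 3 stable clocks
```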

I wonder if they needed a second module to synchronize the signal and avoid metastability issues, or maybe that doesn't matter at the low clockspeeds they were using?

That depends on the environment you are working in. A digital debouncer in an FPGA (or PLC, for that matter) will involve some kind of digital timer, as you cannot easily build a simple RC low-pass filter in such an environment.

In discrete logic, debouncer designs with an RC filter quite similar to this tube circuit are still widely used. To some extent, modern designs are even closer to it: the Schmitt trigger is realized as the input of some MCU/FPGA and the only external part is the RC filter, in contrast to traditional discrete-logic solutions, which usually placed the filter between two Schmitt trigger gates.

By the way, adding such an external RC filter (or even just a capacitor across the contacts) is a quite cheap and quick fix for worn-out rotary encoders on various devices when the existing (usually software) debouncing logic stops being sufficient.

In a similar vein, old arcade games might debounce their coin detectors with two latches synchronized to, say, HSYNC/128 -- effectively giving a debounce of a few msec.
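Assuming NTSC-ish video timing (HSYNC ≈ 15.7 kHz, a guess about the hardware, not stated in the thread), the resulting sample window works out as follows:

```python
# Debounce sample period for latches clocked at HSYNC/128, assuming the
# NTSC horizontal rate (an assumption about the arcade hardware).
hsync_hz = 15734
sample_rate_hz = hsync_hz / 128
period_ms = 1000 / sample_rate_hz
print(f"{period_ms:.1f} ms between samples")  # 8.1 ms between samples
```

With two latches the effective settle time is one to two sample periods, so on the order of 8-16 ms.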

> The modern FPGA circuit first passes the input through two latches to synchronize it to the system clock, and then has a digital counter to add a delay to rapid bounces.

What about debouncing with a low-pass filter, followed by a Schmitt trigger?
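A minimal simulation of that approach, with made-up filter and threshold values (a one-pole low-pass feeding a two-threshold comparator):

```python
import random

def debounce_rc_schmitt(samples, dt=1e-4, rc=1e-2, v_hi=0.7, v_lo=0.3):
    """Simulate an RC low-pass followed by a Schmitt trigger.
    samples: raw contact readings (0 or 1); thresholds are illustrative."""
    v = 0.0                            # capacitor voltage
    out = 0                            # Schmitt trigger state
    outs = []
    alpha = dt / (rc + dt)             # one-pole low-pass coefficient
    for s in samples:
        v += alpha * (s - v)           # RC filter step
        if v > v_hi:                   # hysteresis: separate up/down
            out = 1                    # thresholds reject small ripple
        elif v < v_lo:
            out = 0
        outs.append(out)
    return outs

# 200 samples of a noisy press: bounces for the first 20, then solid 1
random.seed(0)
raw = [random.randint(0, 1) for _ in range(20)] + [1] * 180
clean = debounce_rc_schmitt(raw)
print(clean[0], clean[-1])  # 0 1 -- starts low, ends high, no chatter
```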

"This signal goes through two inverter circuits, creating a sharp output."

The two inverter-tubes share a common cathode resistor. This provides positive feedback from the second inverter to the first. The circuit is called a Schmitt trigger:


And today you get 6 of them in a small package, like in the 74xx-derived families (74AHC14, for example).

And that's when you're doing a not-so-streamlined circuit and can't have them inside your ASIC or something; if you can, it gets even smaller.

Footnote 12 in the article goes into details on the Schmitt trigger.

The article mentions that government and business rented these machines. Does anyone know what the actual use cases were? I wonder how they justified the enormous cost to a Dilbert type manager in the 50s. Wow.

Actual use cases of the 705:

Texaco: accounting, technical and research applications. The accounting applications are integrated crude oil, integrated gas and gasoline, wholesale marketing, payroll, supply, and distribution. The technical and research applications are producing geophysical, petroleum engineering, civil engineering, refinery simulation, crude evaluations, plant process studies, pipe stress analysis, and determination of maximum allowable operating pressures. Calculations related to crude stills, fractionation, absorption and stripping are also performed.

U. S. Army, Pentagon: military personnel accounting, civilian personnel accounting, and organizational accounting.

AT&T Long Lines Dept: circuit provision, traffic load studies, accounting for operating and construction activities, message analyses (by mid 1960), pricing and billing private line customers (by late 1960), and plant trouble results - message circuits.

National Security Agency: the system is used for data processing. [no details :-) ]

These are just a few uses from a detailed 1961 report: http://www.ed-thelen.org/comp-hist/BRL61-ibm0705.html

Same as they do now... money!

One of my mentors was a statistician who got into IT while transitioning a large state labor statistics department to computerization in the 70s. They replaced 2000 clerks and tabulating machines with 1 mainframe and a 4 year project. Ditching the building lease and tabulator maintenance paid for the project!

I can see, in the 70s, being able to justify it on the assumption that the project would succeed if implemented. But that's a full 20 years after this machine: completely different technology, and probably an order of magnitude lower cost per performance unit. So the question is, what workloads were so high-value as to justify the risk that the project would fail, plus the enormous cost per compute unit?

Payroll and general ledger. It’s hard to fathom how many billions of dollars are spent on payroll.

"Engineered primarily to handle business data, the 705 could analyze millions of bits of data to determine the optimum location for a retail store; simulate the entire operation of an oil refinery; handle a huge billing operation in minutes; furnish inventory production control reports; or make up a 50,000-employee payroll with millions of deductions.":


One of my flatmates (roommates to you Americans) works for a payroll company.

It takes them over an hour to run the payroll for 5000 people due to terribly written SQL queries. It's like we've gone backwards.

Where and when did we go wrong?

They were already using IBM tabulating machines in the 1910s (not general purpose computers but similar enough from some perspectives) so it wasn't such a new concept

One of the picture credits says it shows Farmers Insurance's first computer. I would guess it had both accounting and actuarial applications there.

I vaguely recall reading there were only a handful of companies offering computing at a scale, and none had the breadth of IBM. So, monopoly?

An antitrust suit was filed under the Sherman Act on 17 January 1969. It went to trial on 19 May 1975. It was eventually withdrawn on 8 January 1982 (what a coincidence).

I was kind of amazed at the poor soldering across the board on this module, I wonder what the failure rate was on these things in actual use. I suppose the high heat load kept the joints flexible at least.

When I look closely that work actually looks fine. Solder flow is reasonable and I don't see cold joints. The only true WTF was where a resistor had been ripped out for whatever reason. Also it's unclear whether they had flux-core solder yet and they are essentially soldering in 3D. (Former electronics assembler here.)

I did a bit of cursory research and it would appear that flux-core solder was invented around the mid-20th century, so there's a very good chance that they weren't using flux-cored solder.

According to Kester, the solder company, they were formed in 1899 to make flux-cored solder. The 1933 Allied Radio catalog clearly shows Kester rosin core solder.

Rosin core predates that computer by half a century, at least. Kester was formed in 1899 just for that purpose, according to them. I was just looking at the 1933 Allied Radio catalog, and rosin core solder is prominent there. Not sure why the doubts; radio building was at a high point in the early 20th century.

Thanks for that pointer, all I could find was a vague wikipedia article. Obviously soldering was well established. The specifics of what was available weren't clear.

I'm sorry, but are we looking at the same pictures? The soldering looks more than OK. For example, the solder used is just enough that it takes a pleasant concave shape on the terminals.

A lot of old point-to-point wiring and soldering does typically look kind of terrible compared to modern-day electronics, probably because it was assembled by hand by workers just trying to crank stuff out as fast as possible.

Reminds me that my Dad told me he worked at Ferranti in the early 50's building "logic modules". I noticed one on display at the CHM in Mt View [1] and sent him a picture. He said he thought it looked the same as the units he built back then. His soldering was always pretty top-notch though. Ferranti was a military contractor so had some experience with reliability.

[1] http://www.computerhistory.org/revolution/early-computer-com...

Been soldering/desoldering that very type of equipment for many decades; it is perfectly normal, and what you would have seen in commercial products of the time, from TVs to computers to even military, IIRC.

Soldering looks just fine to me. Some flux residue, but not a problem here. The oxidization didn't come from the factory, and wouldn't be a reliability concern.

Oxidization over the years? It may not have looked as tarnished when built.

Are any 705s running today? I feel like the cost of keeping it up would be prohibitive. I’m also curious as to what tubes are used here.

I don't think there are any original vacuum tube computers still running. There are working replicas of the Colossus, ABC, SSEM and (almost complete) EDSAC.

As far as tubes, the 700-series circuit manual [1] shows mostly dual triodes: 6211, 5687, 5965, 6350, 6072, 6528. Also 6136, 6197 pentodes. And probably a variety for special cases (e.g. power supply, core).

[1] http://www.piercefuller.com/library/700circ.html

There certainly isn't any still-running 705 that serves a useful purpose. In fact, they were probably all replaced by 7080s and then 360s in the sixties, as the 7080 is software compatible and specific 360 models have some support for running 7080 software. Also, the software that ran on these machines is probably simple enough that it got rewritten for more modern machines cheaply.

A few for mining Bitcoin, I think :)

I hadn't heard of Eccles and Jordan before this article; I had previously believed that the flip flop was invented by Eckert and Mauchly. Good to know!

5965 and 6211 are mentioned in the article; there were probably a few more types, though.

Ha! I didn't know what a debouncer was, and then I read this blog and wondered if there's still a version of them in modern hardware and then watched this video https://youtu.be/D23FbGFrVuo?t=699 which specifically discusses the way the default DOS keyboard driver has a built in debouncer.

What a weird coincidence in one day.

What the video calls "debounce" is just an autorepeat delay. And on AT and PS/2 keyboards it is not done by any kind of PC-side software (neither driver nor keyboard controller firmware), but by the keyboard itself (in the late 90s/early 00s there were even PS/2 keyboards where you could set the rate manually with a key combination). At the protocol layer, autorepeat works by repeating the key-down event without key-up events in between, so if you are interested in the state of a key without the autorepeat, you can simply ignore key-down events for keys that you already see as down.
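The trick of ignoring key-down events for keys already seen as down can be sketched as:

```python
# Track physical key state and drop autorepeat events. In the AT/PS2
# protocol, autorepeat is just a repeated "down" with no "up" in between.

def filter_autorepeat(events):
    """events: (key, 'down'|'up') pairs; yields only real state changes."""
    held = set()
    for key, kind in events:
        if kind == 'down':
            if key in held:
                continue            # autorepeat: key already down, ignore
            held.add(key)
        else:
            held.discard(key)
        yield (key, kind)

evs = [('a', 'down'), ('a', 'down'), ('a', 'down'), ('a', 'up'), ('a', 'down')]
print(list(filter_autorepeat(evs)))
# [('a', 'down'), ('a', 'up'), ('a', 'down')]
```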

The real debouncer hardware is also in the keyboard itself for all standard external keyboards ever used with PC.

On the other hand, there were simple keyboard interfaces which involved reading out the whole keyboard as if it were one big shift register, with debouncing then done on the host side in software; two examples off the top of my head are Symbolics machines and Wyse terminals. The interface for (S)NES controllers (really for Nintendo controllers up to the GameCube) is similar, but I'm not sure whether debouncing is done by the controller hardware or not.

I hated the PC keyboard repeat rates until I started using X-Windows and found 'xset r rate 160 80'. Warning: don't typo those numbers or you might have a fun time restoring a sane default :-)

Debouncing is a pattern used in UI software development. Here’s an implementation.
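For illustration, a typical trailing-edge UI debounce, sketched in Python (the UI versions are usually JavaScript, but the idea is identical; all names here are made up):

```python
import threading
import time

def debounce(wait):
    """Decorator: run the wrapped function only after `wait` seconds have
    passed with no new invocations (trailing-edge debounce)."""
    def decorator(fn):
        timer = None
        def wrapper(*args, **kwargs):
            nonlocal timer
            if timer is not None:
                timer.cancel()              # restart the delay on each call
            timer = threading.Timer(wait, fn, args, kwargs)
            timer.start()
        return wrapper
    return decorator

calls = []

@debounce(0.05)
def on_input(text):
    calls.append(text)

# Rapid-fire calls, as from keystrokes: only the last survives the quiet period
for t in ("h", "he", "hel", "hello"):
    on_input(t)

time.sleep(0.2)
print(calls)  # ['hello']
```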


I never would have guessed that debouncing was first used to tame physical keys bouncing on contacts.

Every keyboard has one, every microcontroller reading pushbutton inputs will have one. There is one in your mouse.

The impression I got from the DOS video is that the debouncing function may have just been moved over into software?

The guy in the video is confused. He says

> The default keyboard handling on a DOS PC are meant for typing. So any time you hit a key there is a debounce before the key is allowed to repeat continuously, usually on the order of a quarter second to a half second.

What he is describing is the keyboard driver's autorepeat feature: if you hold down a key for a while, the driver will generate a sequence of virtual keypresses. This is just a software convention; it's also possible to read the state of the keys directly (pressed down or not) instead of treating them in terms of key-press events. Using the word debouncing for this is incorrect.

Debouncing happens on a millisecond time scale, it's due to the key actually physically bouncing.

> it's due to the key actually physically bouncing.

More precisely: it is due to the contacts bouncing. They don't close just once when they touch; they rebound and close again, and this repeats a number of times before they finally settle.

Consider it just one more example of software eating the world. Yes, it is in software now, not in hardware but the function is exactly the same. One of the first assembly programs I ever wrote was a debounce routine for a KIM-1 (it already had one, but that didn't stop me from reinventing that particular wheel).
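A classic poll-delay-recheck debounce routine of that sort, sketched in Python rather than 6502 assembly, with the hardware reads injected so the sketch is self-contained:

```python
def debounced_read(read_key, delay, settle_ms=10):
    """Classic software debounce: sample, wait for the contacts to settle,
    then sample again; accept the value only when both reads agree."""
    while True:
        first = read_key()
        delay(settle_ms)          # wait out the bounce (~ms time scale)
        if read_key() == first:
            return first          # stable: trust the reading

# Fake hardware: the key bounces 1,0 then settles at 1; delay is a no-op here
bounce = iter([1, 0, 1, 1, 1, 1])
value = debounced_read(lambda: next(bounce), lambda ms: None)
print(value)  # 1
```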

That's how it's often done with Arduinos:


What is done in software is the translation from raw scan codes to ASCII, which is done by BIOS code (or overridden by a DOS driver for non-US keyboard layouts) that handles interrupts from the keyboard controller and presents an API on INT 16h.

This code works by tracking the state of the modifier keys (and setting the keyboard LEDs appropriately) and converting key-down events for other keys into keycode/character pairs.

DOS in turn provides function INT 21h, AH=07h, which is a thin wrapper on top of INT 16h: it waits for a key press and returns it as either an ASCII character, or zero followed by an extended scan code (returned on a second invocation) for non-ASCII keys. This DOS function is what is called by readkey() in Borland Pascal/C++.

Edit: it is not as thin a wrapper as it looks, because it does not blindly call into the BIOS and block there, but loops in DOS code and calls INT 28h until a key is ready. INT 28h is intended as a hook for TSRs to either do unimportant background processing (a kind of poor man's multitasking) or call into DOS when it is known to be safe to do so. A particularly interesting thing to do in an INT 28h handler is executing HLT in order to conserve power, as any condition that would cause DOS to exit this loop generates an interrupt.
