It's not ENIAC any more, it's a bunch of DMX blinkenlights mounted on an ENIAC frame. If you're going to do that, what's the point of using the original hardware? That's just destroying it further.
Making the whole machine work like it originally did is obviously not going to happen without, well, the rest of the machine. But we can fit a whole ENIAC in a fingertip these days. What I would have done is attempted to reverse engineer the logic of the available pieces, collected any existing documentation, and then made it do something. Real calculations. Wire it up to a virtual reproduction of the rest of the machine.
Sure, maybe there isn't enough existing documentation to do it faithfully. Maybe it would all be a giant pile of guesses. But at least it would be computing something, even if it's just our best guess of how it was supposed to work, instead of just flashing lights randomly.
It just feels really sad knowing that not a single one of those vacuum tubes is actually performing any logic functions, the way they wired it up.
But it's really just a curiosity to have the physical machine literally running. It's merely a nostalgic artifact. We have photos, stories, documentation, and historians writing books and articles. That's what matters, not the blinky lights, even if they're running 70-year-old "code" on an emulator or miraculously preserved vintage electronics.
I think programming up a Raspberry Pi (or whatever) to run a mainframe emulator and hooking it up to old iron panels is something akin to building animatronic dinosaurs. It's great to do for some representative artifacts, but it can be overdone.
Keeping the IBM 1401 running at the Computer History Museum is fine too. The room even smells great; it's a spectacle for geeks. It's feasible for as long as some old-timers can keep replacing the components that age out. But at some point it's going to be a better idea to unplug the thing, remove the leaky caps, and keep the dust off it.
It will be interesting to see what happens to VMs and containers in the future. Are these things going to be preserved and displayed like a "brain in a jar"? Will people in the future marvel at our convoluted code from 2020, our turgid (but quaint) frameworks? Will our APIs be comparable to crazy Victorian contraptions?
For the types where that's not an option, surprisingly (at least to me), most tubes commonly used historically still have ample new old stock tucked away in warehouses across the US and Europe. There was no major problem finding all the tubes required to recreate EDSAC in the UK: https://www.tnmoc.org/notes-from-the-museum/2013/1/9/product... for example.
If we're talking "first real computer", though, and depending on your definition of "real", the Manchester Baby (1948) was the first stored-program computer. The Colossus and ENIAC machines all needed to be re-wired to change what computations they could make and the Z3's instructions came only from the input tape.
Betty Jean Jennings Bartik, one of the original ENIAC programmers, was leading this work, which was one of the main reasons she received the Computer Pioneer Award in 2008. Better late than never, I guess.
The Manchester Baby was the first computer that was built to be Turing complete from the start.
Sure? Wikipedia says:
> Program code was stored on punched film.
Don't forget the ABC computer (1937-1942), which also pre-dates ENIAC.
The big problem in the early days was memory. ENIAC had nothing writable that could hold a stored program. IBM had electronic arithmetic in R&D before WWII, and that came out as a product after the war, the IBM 603 (1946) and 604 (1948). Those were plugboard programmed machines with electronic arithmetic. Workarounds for memory cost, basically. Everybody involved recognized that with enough memory, they could get rid of the plugboards, but the hardware wasn't ready.
Many memory devices were tried. Delay lines (slow, not random access). Williams tube electrostatic storage (big and expensive). High-speed drums (slow, not random access). Finally core memory was developed, the first thing that was both fast and random access.
Another paradox is that although ENIAC directly caused the modern computer, ENIAC was completely different architecturally from modern computers and missing most of the things you'd think of as necessary. It wasn't stored-program or really Turing-complete (although I think those properties are overrated). It didn't have RAM or even use binary or have instructions or programs in the normal sense. Even its inventors recognized that ENIAC's implementation was the wrong approach. But despite all this, modern computers are offspring of ENIAC.
My view is that ENIAC's important "first" is that it was the first to show the world that computers were both practical and revolutionary, so much faster that it could solve entirely new classes of problems.
As far as the Zuse Z3, the historian view is that the proof that it was Turing-complete "was an impressive party trick, but diverged entirely from the way the machine was designed, how it was actually used, or indeed from anything that would have made sense in the 1940s."
If you're interested in ENIAC, the book to read is "ENIAC in Action: Making and Remaking the Modern Computer".
This had a major impact on how computers evolved over the next few years.
Konrad Zuse's Z1 was first: https://en.wikipedia.org/wiki/Z1_(computer)
Even if you want it to be electrical, not mechanical, then Zuse's Z3 was first: https://en.wikipedia.org/wiki/Z3_(computer)
The prehistory of computing has a bunch of really interesting machines like this (and Colossus, and early ENIAC really) which are technical marvels, but miss at least one of the key things which make a computer really a computer.
People often focus on the electrical vs mechanical distinction, but that's a red herring. If you made a 386 out of Lego, it would still be a computer. Similarly, binary vs decimal is irrelevant.
To me, there are three key characteristics:
1. There is a program comprising a sequence of symbols which are interpreted by the machine (so not Colossus or early ENIAC, which were controlled by plugboards)
2. There are symbols for conditional branches, or some equivalent construct, and the language is Turing-complete, or some practical projection of it (so not the Z3, ABC, or ASCC)
3. The program is stored in a memory which can be modified by the program
Once you have those things, you can do anything. The first two points are the most important, but I think the third is also really significant: it is what allows things like operating systems and linkers to arise. If the first two are the definition of life, the third is the definition of multicellular life.
The program should be in some kind of more-or-less addressable memory, to support conditional branching. That's all.
Most computers today are far more powerful than any from the whole first decade of practical computers. Those ran compilers in as little as 4k words of memory. The output of the compiler was not written to memory; it was punched directly to tape or cards. (On some, it physically could not be written to program memory, even if there were room: no bus connected to there.) The tape was later read into program memory--often onto a spinning magnetic drum--and executed directly, no OS in sight.
If you imagine these were not practical computers, consider that people paid $millions for them; they earned their keep. Have you ever operated a computer doing work worth that much?
This is incorrect. The Z3 is Turing-complete, as Raúl Rojas showed in 1998. It requires some hacking, though.
Moreover, the technique in the paper is purely a stunt. You couldn't do practical branching computation that way. It is definitely cool, but it doesn't change the fact that the Z3 was a calculator, not a computer.
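The stunt rests on branch-free computation: instead of jumping, a machine without conditional branches evaluates every candidate result and combines them arithmetically with a 0/1 predicate. A minimal Python sketch of the idea (the function names and encoding are my own illustration, not Rojas's actual Z3 construction):

```python
def select(cond, a, b):
    """Arithmetic 'if': cond must be 0 or 1.
    No conditional jump occurs; both candidates are already computed."""
    return cond * a + (1 - cond) * b

def absolute(x):
    # Both arms (x and -x) are computed unconditionally;
    # the predicate merely weights them.
    is_negative = (x < 0) * 1  # 1 if negative, else 0
    return select(is_negative, -x, x)
```

This also shows why it's impractical as a programming style: every conditional forces you to compute all of its arms, so nested conditionals multiply the work done rather than skipping it.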
You can always play the "who was first" game if you just add enough distinguishing categories.
The invention isn't much of an invention until the whole thing is electronic. Reading off a tape doesn't cut it. That's why, you'll notice, the entire computer industry bloomed out of the work done on the ENIAC. In other words, if your client had invented facebook, they would have invented facebook.
It's well documented. I once saw schematics. The counters really are decimal, with 10 tubes per digit, set up so that only one at a time is conducting. It's not even binary-coded decimal.
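That one-conducting-tube-at-a-time scheme is a one-hot ring counter: each decimal digit is a ring of ten stages, and each pulse moves the single "on" state to the next stage, emitting a carry when it wraps from 9 back to 0. A rough behavioral sketch in Python (names and structure are mine; this models the counting behavior, not the tube circuit):

```python
class DecadeCounter:
    """One-hot decimal counter: ten stages, exactly one 'conducting'."""

    def __init__(self):
        self.tubes = [True] + [False] * 9  # stage 0 on = digit 0

    @property
    def digit(self):
        # The stored digit is simply which stage is conducting.
        return self.tubes.index(True)

    def pulse(self):
        """Advance one count; return True on carry (9 -> 0)."""
        d = self.digit
        self.tubes[d] = False
        self.tubes[(d + 1) % 10] = True
        return d == 9

counter = DecadeCounter()
carries = sum(counter.pulse() for _ in range(12))
# After 12 pulses: digit reads 2, with one carry emitted at the 9->0 wrap.
```

Chaining such counters, with each carry pulsing the next decade, gives a multi-digit decimal register with no binary encoding anywhere.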
Also 2015 (1 comment) https://news.ycombinator.com/item?id=8855782
We all know what they mean - they mean it was American.
And the US didn't, and here we are today on HN from California.
But anyway, keeping something secret doesn't mean it didn't happen. And it's now public knowledge for anyone who cares to look outside the history of the US.
Of course I'm banned on HN - I'm not a leftist, right dang?
I stand by what I say, and won't appease moderators.
> But anyway, keeping something secret doesn't mean it didn't happen.
My point was that Britain missed out on most of the IT industry growth because of the OSA. Instead that success went first to Boston, then SV.
I do understand that all Britain has left is its past glory, which its citizens continually remind the US about, while we're busy doing.
Very unfair. Speaking as somebody who has regularly commented here against the excesses of the left: while the silent downvoting has sometimes been telling as to the sympathies of some, the moderation has always been scrupulously fair.
Not sure why you needed to turn snide there after I took the time to vouch for your comment and reply to you in good faith.
You can be happy about making a success of current technology, without pretending what other people did before you never existed.
Fun fact: GCHQ still refuses to talk about some of the earliest computing work on Colossus and enigma decryption. I was quite surprised to hear this when I was on a tour of The National Museum of Computing at Bletchley Park.
I hope that if/when they do decide to declassify that information that they'll hand it over to TNMOC.
https://www.tnmoc.org/ - It's well worth a visit, so when they're open again post-COVID, see if you can get down there.
Personally, I don't think the fact that the Z3 and the Colossus were discovered to be Turing-complete long after they were last used should really qualify them for a title based on "first Turing-complete machine." The long time it took to establish Turing-completeness indicates that they weren't designed to be Turing-complete, and that they are is more a reflection of just how low a bar it is to be Turing-complete than the capabilities of the machines themselves. In other words, I would submit that the phrase "first Turing-complete machine" should really be understood as "first intentionally Turing-complete machine."
Another framework that makes sense to understand is the role that the computers had on later development of the field. ENIAC clearly has a massive influence, since it's the one that spawned more recognizable computers as its progeny. The influence of ABC on ENIAC only came out much later (and is still somewhat debated). Z3 had little impact on the field later because it was on the wrong side of WW2. Colossus I believe did influence the Manchester machines, but this link was not known at the time because of the secrecy around Colossus.
* today, Z3 might be considered "first real"
* today, we might be stuck writing code in a fossilized language lovingly called Plankalkül 58 by its inventors, one with so much inertia back then that it stifled all further development in the 1960s, with the result that all of our 2021 computing would be about as fresh as the 2021 B-52.
Substitute ALGOL 58 and that's pretty much what happened. You can't trust those Germans with language design!