A brief history of computers (lesswrong.com)
217 points by zdw on July 22, 2023 | 68 comments



>And from what ChatGPT tells me, it's likely that this would have been an investment with a positive ROI.

Wonderful.

It's interesting to me that, in just a few months, I've already developed muscle memory for checking whether or not things I read online are machine-generated. The first thing I do on any website is search for "GPT", "Bing", and "AI" and stop reading if I find them.

Reading someone's writing is an exercise in trust. If they claim something, I have to be able to trust that they have done enough of their homework to back it up. If they cite a source, I have to be able to trust it says what they claim. Otherwise what's the point? If I can't rely on the author, then reading their writing requires checking everything they've said. Their writing is useless to me since I'll need to do my own research anyway.

If you write something and ask me to read it, you are asking me to trust that you have done the legwork. If you really just typed it into ChatGPT, that's more than just stupid. It's a betrayal.


At least this author states openly that they used ChatGPT. In a couple of months such honesty will be rare.


Not to mention that, in the near future, most texts won't even have an author aside from ChatGPT plus some glue code posting them as "legit" articles, blogs, social media comments, and such.


It's weird to me that you are conflating asking GPT a question related to your article and writing your article. Would you have a similar reaction if he had said "according to google"? There does not seem to be any evidence at all that the author didn't write this entire article, and the fact that they explicitly reference that they consulted GPT on some related point seems further evidence that they _didn't_ have GPT write it. (I think if they _had_ had GPT write it, they would have avoided mentioning GPT at all.)


Not them, but no, I wouldn't have a similar reaction if it were "according to google", because in the context of a blog posted to HN I'd expect it to mean a cursory bit of research, which is way better.


In the very near future (<5 years in my opinion), nearly all search requests are going to be mediated by a GPT-like AI agent. It's already true that Bing search is better for some kinds of searches than a standard google search (which has become increasingly shit over the past few years as Google has lost the SEO war to blogspam). But even if we aren't quite there yet, referencing that you asked a related question of GPT in an article you have written is still not fundamentally different from doing some research on google. GPT will still give bad answers to certain kinds of questions (and while the author didn't say what they asked or what exactly GPT responded, my best guess is that this was a pretty useless thing to ask GPT), but you can also find some real stupid shit on Google.

That sentence was almost more rhetorical flourish than anything else, and while it didn't land particularly well for me personally, I find it _far_ weirder that the commenter I replied to went off about authorship and trust.


GPT and Google results are extremely fundamentally different. I can use various heuristics to evaluate how trustworthy a site I find on Google is, and if I need to I can do any amount of research on the site itself, who wrote it, etc (and if I can't, then that's useful information too).

In contrast, GPT has no indication of where it got anything from, but even that's giving too much credit. It's just stringing words together. Whether it's wrong or right, I have no way of knowing. If I'm reading a website and I know the first thing it says is for sure correct (or incorrect) then I can judge the rest of it appropriately. That's absolutely not the case for GPT, because the correctness of each sentence is relatively (entirely?) statistically independent.

That's only the beginning of it. There are so many more ways that doing actual research is superior; there's just no contest.

I will be horrified if most searches are mediated by LLMs in the future and I will do my best to never use such a search product. (I use Kagi myself and haven't had a problem yet.)


Bing provides links to sources so you can evaluate things yourself. I expect this to be the way most search will work. The AI will mostly just be filtering the search results for you, which Google already does; the AI will just do it better, since, as I mentioned earlier, Google is getting increasingly bad.

I expect even search engines that are not explicit, obvious LLM products like Bing search to be using AI in the background.

I really don't understand your issue here. Your search is already algorithmically mediated. The algorithms are just getting better.

And all of this is of course a tangent on the original point which is that asking an LLM a related question is in _no way_ the same as having an LLM write the article.


>It's weird to me that you are conflating asking GPT a question related to your article and writing your article. Would you have a similar reaction if he had said "according to google"?

Yes.

Though at least with Google, the author still gets a chance to somewhat evaluate the links they are given.


Seems to me the author can evaluate the information given by GPT too.

It's not a one-shot affair, and I feel that assumption is more common than is generally useful.

Consider (from my own use of GPT):

Author queries Google, follows a few links, realizes they really need to take a step back and make new queries.

Author queries ChatGPT and essentially shortcuts past that initial Google learning step I just mentioned above, then follows with a combination of chats and Google searches.

End result is author ends up with a rough understanding and a set of information they can draw from and potentially continue to explore, depending on their needs.

There is also running queries on say, Yandex, Google, DuckDuck to see what each returns. I do that frequently.

ChatGPT mostly makes me a lot more productive. One can jump past that initial set of queries often necessary when jumping into unfamiliar territory.

Evaluating ChatGPT info works the same as using search does! Like doing multiple searches with various terms, exclusions, quotes and such, using multiple prompts can be powerful.

As others have said, the only real issue is trust.

Did the author realize something to the degree needed to speak competently, or just take what they were given and run with it?

These things being used like tools is no big deal. Like any good tool, there comes an obligation to use them correctly and with integrity.


Boole didn’t introduce propositional logic; what he did was come up with an algebra that encodes propositional logic.

Abstract algebra was a new and developing thing back then: the idea that you can generalize from numbers and addition and multiplication to other structures that have something like numbers and addition and multiplication.

Boole found that if you take the two-element set {0, 1} and choose saturating addition as the addition-like operation and normal multiplication as the multiplication-like operation, you get an algebra (specifically a ring) that is isomorphic to propositional logic with its AND and OR operations.
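
A minimal sketch in Python of that two-element algebra, with saturating addition standing in for OR and ordinary multiplication for AND:

    def add(a, b): return min(a + b, 1)   # saturating addition: 1 + 1 = 1, i.e. OR
    def mul(a, b): return a * b           # ordinary multiplication, i.e. AND

    for a in (0, 1):
        for b in (0, 1):
            assert add(a, b) == (a | b)   # matches logical OR on {0, 1}
            assert mul(a, b) == (a & b)   # matches logical AND on {0, 1}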

So the idea that the number 1 can represent true and the number 0 false was Boole’s insight and the foundation of modern digital circuits.


Boole figured out Boolean algebra, but nobody paid attention until Claude Shannon realized we could use Boolean algebra to design digital circuits. In his master's thesis.

Very few master's theses have changed the world, but Claude Shannon's was one of them.


Note: That's not actually a ring. If you use XOR as addition, then it's a ring. You could say it's a semiring, but that's probably not the most useful description of it. A more fitting structure would be an appropriate sort of distributive lattice. Or really... a Boolean algebra! Except I also have to point out that you only included AND and OR; you left out NOT, which is also an important part of the structure.
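
To make the ring point concrete, a quick check in Python: with OR as addition, 1 has no additive inverse (nothing ORed with 1 gives 0), while with XOR every element is its own inverse, so ({0, 1}, XOR, AND) really is a ring (in fact the two-element field GF(2)):

    assert all((x ^ x) == 0 for x in (0, 1))      # XOR: x + x = 0, so inverses exist
    assert not any((1 | y) == 0 for y in (0, 1))  # OR: nothing cancels 1, no inverse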


Yes, good points!


Interesting. Thanks for the comment.

Unfortunately, I'm having trouble understanding though. The furthest I got with math is calculus. I'd love to hear a more ELI5 description of what you said. Something that doesn't require prior knowledge of abstract algebra. (I recognize that's a big/difficult ask. No obligation or anything ofc.)


Aristotle described propositional logic.


No. He described term logic.


And actually Boole is only dealing with two distinguishable states. Any two distinguishable physical states can be used to define binary 1 and 0. But in general, any N distinguishable states can be used to form an algebra. N=2 is just the simplest, most elegant, and what most real computers are based on. I've heard rumors that an N=3 computer exists, aka 'ternary' (EDITed) but I have my doubts. On the math side you can turn the integers into a discrete N-part thing with modulus.


>I've heard rumors that an N=3 computer exists, aka 'ternary' (EDITed) but I have my doubts

Rumours? This is the internet. Rumour to research is a few clicks away.

They did exist: https://en.wikipedia.org/wiki/Setun


[flagged]


Please don't cross into personal attack.

https://news.ycombinator.com/newsguidelines.html


coldtea has never once engaged with me, and now two threads in two days...I've read his comments and I find them of very low value (including this one) and frankly I'd like to discourage him.


In that case you should simply not reply.


The word is ternary, and Ternary Computer is a Wikipedia page you could read if you're interested.


No, Boole was dealing with probabilities. The first half of his investigation on the laws of thought is all ones and zeroes, but the second half admits any value in between.


Ternary based computers are far from a rumor.

They are flipping cool tbh. I wish more was done with the idea. Balanced ternary is beautiful.
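
For anyone curious, a small sketch in Python of what makes it so tidy: the digits are -1, 0, +1, and negating a number is just flipping every digit, with no sign bit or complement convention needed.

    def to_balanced_ternary(n):
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:                # a remainder of 2 becomes digit -1 with a carry
                digits.append(-1)
                n = n // 3 + 1
            else:
                digits.append(r)
                n //= 3
        return digits[::-1] or [0]    # most-significant digit first

    print(to_balanced_ternary(5))     # [1, -1, -1]  =  9 - 3 - 1
    print(to_balanced_ternary(-5))    # [-1, 1, 1]   = -9 + 3 + 1, just the digits negated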


> I'm confused though. Babbage came up with the blueprints for his Difference Engine in the 1820s. I know he never succeeded at actually building it, but he never really had a fair chance if you ask me.

Babbage had a chance to build his difference engine. The problem was that the engineering was a lot harder than he thought it was going to be, and he was a mathematician/economist, not so much a working engineer. The idea that if only he had had a better chance the difference engine would've been successful is simply a misreading of what happened. It didn't help that after the British government poured tens of thousands of pounds into the project, Babbage suddenly decided to start pushing the analytical engine before he had even finished the difference engine. That made it look like these were just wild, cockeyed schemes, when Babbage was supposed to be engaged in a practical, mechanical calculating project (to help reduce the labor expended on computing navigational tables, for example).


A Swedish family, the Scheutzes, finished a difference engine within a few years after Babbage gave up, and sold such machines very profitably for many decades.

https://en.wikipedia.org/wiki/Per_Georg_Scheutz

It wasn't too hard for the time... Babbage just kept getting distracted.


He has been said to have had Asperger's/ADHD (as much as one can diagnose from biographical information, of which there is plenty), so this figures!


> The problem was that the engineering was a lot harder than he thought it was going to be

This is true of many (most?) innovations. E.g. the steam engine: people knew about steam power and built primitive steam engines, but Watt eventually succeeded in manufacturing one that had the right mix of reliability, power, cost, and maintainability to be widely useful. E.g. the jet engine: Whittle conceived of turbine aircraft power in the late 1920s, but didn't succeed in manufacturing a viable engine and putting it in a plane until the end of WW2.


A lot of similar things come into play if you ask questions like "Could the Romans have invented $X?"

Some cultural factors like slavery meant they were less interested in, e.g., labor-saving inventions. And there probably were health and life sciences concepts you could introduce, but you might have limited ability to prove them. But, for the most part, there are technology trees that you can't really shortcut and, even with the right high-level knowledge, it's hard to accelerate things too much.


Steam power was more of a toy for the Romans. Some made dioramas with statues of the gods moved by steam.


I read a good description of the affair in James Essinger's "Jacquard's Web". It came down to the machinist cutting the gears not to spec; for the gear trains to function smoothly they would have needed precision not achieved for decades. The Royal Society was willing to fund the project, but it became a bottomless money pit as Babbage kept sending the parts back to the kitchen, so to speak.


Reminds me of fusion energy promises, or at least "the bottomless money pit". The science is solid, but the engineering challenges delayed even a proof-of-concept experiment literally into the next century.


Memory.

The key issue in early computing, and the one most people miss, is how memory-limited computing was. IBM had electromechanical arithmetic working by the 1930s, and electronic arithmetic working before WWII. Programmability was still plugboards and cams, because there was nothing suitable for storing a program yet.

Take a look at the IBM 601.[1] IBM introduced this in 1931. Add, subtract, and multiply, a few registers, and programs on a plugboard. This was the first commercial programmable calculator.

But only a few registers of memory. That was the hangup. Everybody involved, especially J. Presper Eckert, who went on to work on ENIAC and UNIVAC, recognized that. All the approaches to memory required building wheels or relays or something for each digit or bit. They had arithmetic and control. If only there was some way to store lots of data...

Bulk memory suitable for program storage finally appeared with delay lines, magnetic drums, and storage CRTs. Eckert again.[2] Architecture was not the problem. Hardware was the problem. Computing needed hardware reliable enough that you could have a few thousand gates, and memory reliable enough to not drop bits. As soon as memory had been cracked, and there was somewhere to store the program, stored program computing took off.
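
As a rough mental model (a simplification, not any particular Eckert design), a delay line holds bits only "in flight", so the hardware has to keep recirculating them: read the bit emerging at one end, re-amplify it, and feed it back in at the other. Something like a fixed-length queue stepped once per pulse:

    from collections import deque

    class DelayLine:
        def __init__(self, n_bits):
            self.line = deque([0] * n_bits)    # bits currently travelling down the line

        def step(self, write=None):
            bit = self.line.popleft()          # the bit arriving at the receiving end
            self.line.append(bit if write is None else write)  # recirculate or overwrite
            return bit                         # only this one bit is accessible per pulse

To read or change a word you have to wait for its bits to come around, so access time depends on where the data happens to be in the line, which shaped how early stored-program machines were laid out.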

Memory still had a huge cost problem. Memory was a million dollars a megabyte as late as 1970. But that's another story.

[1] https://en.wikipedia.org/wiki/IBM_601

[2] https://en.wikipedia.org/wiki/Delay-line_memory


>It's not what Godel was hoping for. Godel was hoping to add some support to that shaky Jenga piece at the bottom so that they could continue building their beautiful tower taller and taller, consisting of structures ever more elegant.

Gödel was a Lutheran Platonist who was personally morally opposed to Logicism and his contemporary mathematical program. He was an odd man, really, but he was in no respect a booster of, or a person working to promote Hilbert's program. He was tearing it down very deliberately.


Maybe mention Pascal[1] and Leibniz[2] as important predecessors to Babbage?

[1]: https://en.wikipedia.org/wiki/Pascal%27s_calculator

[2]: https://en.wikipedia.org/wiki/Stepped_reckoner


One interesting question (which is very hard to answer) is how many of the ideas were passed down vs. re-invented, and how much theory influenced practice:

- I think Mauchly and Eckert (of ENIAC) in the 1940's were unaware of Babbage (1810's)

- There was the (in)famous von Neumann paper describing the EDVAC, and the later ENIAC patent lawsuit - https://www.historyofinformation.com/detail.php?id=639 - this page says that "most likely" von Neumann and Mauchly/Eckert developed it together

https://en.wikipedia.org/wiki/Honeywell,_Inc._v._Sperry_Rand....

Hm these sources are a bit vague -- my memory is that "The Dream Machine" was more critical of von Neumann. Basically it ended up that he put his name on work that wasn't really his, or he gave that impression.

i.e. the name "von Neumann architecture" doesn't give the proper credit

- Did they need Turing or Church to build a real computer? Probably not, I guess Babbage also proves that. Computation is just "out there"; it's part of nature; it's been discovered many times.

- That said, I would guess that Boolean logic is the most important theory/math in building a computer, though Babbage didn't have that either !!!


Most written history is "bunk" in that it's a long-after interpretation of events in the context of what we know today. That's why Turing features heavily: he's a colorful character about whom many books have been written and movies made that people watch today. But Tommy Flowers is never mentioned.

The computer wasn't really invented at all. It evolved from earlier things in a step-wise manner. There were computing machines for decades prior. E.g. before WW1 there were sophisticated gunnery computers that could fire shells taking into account the vector velocity of a ship, wind, distance measured optically, and the movement of the attacking ship. Boolean logic was used in telephone switching systems. Boolean circuits already existed both in electromechanical form (relays) and electronics (vacuum tubes/valves). So when Turing decided he needed a machine to do so-and-so calculations on some kind of data, Flowers didn't need to invent Boolean logic nor design Boolean circuits -- those already existed off the shelf. Teleprinters existed. Paper tape existed.


Yeah definitely agree, Taleb has warned us about such teleological explanations.

As far as I remember, Woz's biography is good evidence that you don't need the idea of "boolean logic" to design circuits. That did come after the fact -- the way it's taught, not the way it was invented

I think he just said he figured it all out himself essentially, and often did a lot better than the pros. Some of his claims were suspect to me, but I do think his claimed ignorance of prior work is genuine :)

Shannon did come decades earlier though, so the designs of somebody who was influenced by Shannon probably influenced Woz. It's hard to tease apart, but I agree with "evolution" and "tinkering" as the main explanations.

The entertaining explanations are the ones that tend to stick in our minds, but they're not necessarily true

---

The other example I think of is when I look at Debian -- a lot of it is not at all what the Unix inventors had in mind. Debian/Linux basically looked at an artifact and then came up with their own divergent ideas around them

Likewise Woz probably looked at a lot of computers, but he didn't have much of an idea what the creators were thinking -- just the artifacts themselves


Seems pretty reasonable for someone doing their own high-level research. Notably missing: any references to telegraph and telephone systems.


In terms of causes and forces, the influence of telecommunication on computing is really hard to overstate. In the Internet era, I think some might be unaware that telecommunication definitively came first, and in my view, effectively led directly to computers.

From the electrical engineering perspective, the earliest relay and vacuum tube computers were built out of elements developed for radio and telephone exchanges. Bell Labs developed the transistor with their phone system primarily in mind. Same with high speed digital data circuits. (Digital audio was demo'd in the late 1940s, deployed in the phone network in the late 50s).

It's not just the physical circuits; much of the theory, too. Claude Shannon was trying to optimize subnets of switches in the phone network, when he proved that binary switching logic is equivalent to Boolean algebra and so such systems could be described, manipulated, and optimized symbolically.
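
A toy illustration of that correspondence (my own framing, not Shannon's notation): switches in series behave like AND, switches in parallel like OR, so an algebraic identity is also a circuit simplification.

    def series(a, b):   return a and b    # current flows only if both switches are closed
    def parallel(a, b): return a or b     # current flows if either switch is closed

    # (A AND B) OR (A AND NOT B) == A: a two-branch, four-switch network
    # collapses to a single switch A
    for A in (False, True):
        for B in (False, True):
            assert parallel(series(A, B), series(A, not B)) == A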

Similarly, both frequency and time division multiplexing date to the late 1800s with telegraphy. One of the first uses of vacuum tubes as switching elements was for multiplexing telegraph lines c. 1940 or so. (The terminology from that era is quite charming - modern Wi-Fi might be described as supersonic harmonic multiplexed radiotelegraphy.)


For example, during the 1800s sports results were already being telegraphed and printed onto dotted paper.


> From the electrical engineering perspective, the earliest relay and vacuum tube computers were built out of elements developed for radio and telephone exchanges.

Indeed. A "main frame" was originally the housing used for the relay switches in the original telephone exchanges.


I've done a ton of research on this, and "main frame" in the computer sense originated with the IBM 701, which was built from various frames such as the power frame, the storage frame, and the main frame (which performed computation). There is a direct (but complicated) path from this to the modern meaning of "mainframe". The "main distribution frame" in a telephone exchange was unrelated.


What would be a good, more academic overview of computing history? Do you have any specific book recommendations? I'd love to read a more "citation-based" version.


Not sure about academic history, but in a single volume, this does a good job on early 20th century computer history: https://en.wikipedia.org/wiki/The_Idea_Factory


The First Computers: History and Architectures, edited by Raúl Rojas and Ulf Hashagen

https://mitpress.mit.edu/9780262681377/the-first-computers/


IMO the best overall "soup to nuts" survey of the history of computers is Campbell-Kelly et al., "Computer: A History of the Information Machine."

Specifically on the relation of telecommunications to computers I will toot my own horn and recommend my book, McDonald, "How the Telegraph, Telephone, and Radio Created the Computer."


Yeah, there was a lot of work on logic in both India and the West in the medieval period, which was very influential in later thought; it's just that what we conceive of as the "modern" form of logic only took shape in the 19th century. But reading medieval tracts on logic is fascinating.


Also between the Middle Ages and the 19th century: e.g. the Port-Royal Logic and Grammar were very influential. Cf. https://plato.stanford.edu/entries/port-royal-logic/


I meant "medieval" here to refer to any time between the beginning of neoplatonism and the industrial revolution: quite a large space, but it seems like those are the only two time periods people are generally aware of in the history of thought, since we teach kids that everything between then was the "dark ages."


Wish I had a source but I've read Mr and Mrs Boole were both involved in the study of Indian mathematics, and that "Bool" as a datatype might be more true to its namesake if it included "maybe" or "null"


> But the biggest thing is probably that it made assemblers and compilers possible. Well, I'm not sure if that's strictly true. Maybe you could still have them without a shared memory architecture. But the shared memory architecture made them a whole lot easier.

I think the actual important part is being able to address and manipulate code like it's data somehow, rather than the specific architecture. Having two separate address spaces for code and data doesn't necessarily prevent that, though it's surely simpler with only one.


You can have a compiler or assembler in a Harvard Architecture computer, but having separate instruction and data memories means that you can't generate the instructions and then execute them. You need some scheme to move stuff from one memory to another. Think of a PIC or AVR8, for example, rewriting its own Flash memory which isn't even the same width as the data memory.
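
A minimal sketch in Python of why the shared address space matters (a made-up toy machine, not any real ISA): because code and data sit in one memory, the running program can write new instructions into memory and then jump to them, which a strict Harvard machine cannot do without some special channel into its instruction memory.

    mem = [0] * 32            # one shared memory holding both instructions and data

    # hypothetical 2-word instructions: 0 = HALT, 1 = LOAD_IMM (acc = arg),
    # 2 = STORE (mem[arg] = acc), 3 = JMP (pc = arg)
    def run(mem):
        pc, acc = 0, 0
        while True:
            op, arg = mem[pc], mem[pc + 1]
            pc += 2
            if op == 0:   return acc        # HALT
            elif op == 1: acc = arg         # LOAD_IMM
            elif op == 2: mem[arg] = acc    # STORE: can target "code" as easily as "data"
            elif op == 3: pc = arg          # JMP

    # program: write a "LOAD_IMM 42; HALT" sequence at address 20, then jump to it
    mem[0:10] = [1, 1,    # acc = 1 (the LOAD_IMM opcode we are about to plant)
                 2, 20,   # mem[20] = 1
                 1, 42,   # acc = 42
                 2, 21,   # mem[21] = 42 (its operand; mem[22..23] stay 0 = HALT)
                 3, 20]   # jump to the instructions we just generated
    print(run(mem))       # -> 42

That last step, generating instructions in memory and then executing them, is essentially what makes assemblers and compilers (and later JITs) convenient on a shared-memory design.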


The sections on the super early years were great. A lot of ideas and perspectives in there that I have not heard before. The later sections personally didn't do as much for me, but that's probably only because I am more familiar with those topics.

Thank you to the author for creating this! This style of super personal historical overview is very enjoyable. I like how the author says outright to take everything with a grain of salt and I like how they call out things about the narrative that don't make sense to them.


no mention of Konrad Zuse or the Z1?

https://en.wikipedia.org/wiki/Z1_(computer)


A decent brief history. Author should look into Vannevar Bush. https://en.wikipedia.org/wiki/Vannevar_Bush


Agreed. My history emphasized the Bush/memex -> Engelbart/Online System -> Alto -> Mac and Engelbart -> Berners-Lee/WWW lineages: http://whatarecomputersfor.net/machines-for-millions/


Funny piece, but some parts are terribly uninformed about the actual history. This part irks me greatly:

> "Why is it so helpful to have the data and instructions share the same memory? Well, for starters, it's simpler, cheaper, and it made execution faster because stuff is all closer together. It was also convenient to access stuff in a single address space rather than having separate schemes."

Why, then, are ALL µprocessors today using separate instruction and data caches? For early pre-"von Neumann" computers, memory was an extremely precious resource and you wouldn't waste it on something as frivolous as almost-completely static code. Only once memory became large/cheap enough did the (not-so-surprising) idea of the stored program become practical. The benefits were many: self-modifying code (for good reasons very popular in those days; even the much younger PDP-8 can't be used without it), moving more of the complexity of IO and bootstrapping into code, and eventually, code to make code.

Another point that is glossed over is that none of these technology changes was a slam dunk. For example, in the early days, transistors were FAR from the claimed 1000X faster; they were limited, unreliable, and expensive, and not everyone in the industry believed they would ever take over from "proven technology like radio tubes". Likewise, it actually took a good while before silicon memory replaced core memory. A more recent example: CMOS was originally the slow technology, only suitable for watches and cheap calculators.

Source: countless books on the early history of Z1-3, ENIAC, Cray, Intel, etc.


Not specifically computers, but if you want a very deep dive into the creation of the internet (including some bits about the earliest computers), The Dream Machine is a great and extensive look at the history of the internet through the lens of J.C.R. Licklider's life. It was rather mind-blowing to me in various ways, one of the big ones being that it seems a lot of early computer pioneers weren't only mathematicians and physicists, but also psychologists.


Agreed. It's an excellent book. But perhaps a little long if you are purely interested in computer history and want an introduction in a shorter volume. I recommend these two: https://en.wikipedia.org/wiki/The_Idea_Factory https://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer...


I don't recommend reading this. There are many gaps and a lot of important history missing, including:

1. Computation before ~1800. Abacus, Napier's bones, slide rules, Pascal's calculator, motivations from celestial navigation and astronomy.

2. Modern analog computers, ~1900-1950. The author seems to refer to them as "math machines" and leaves it at that, without exploring much more deeply what they were used for besides calculating firing solutions for artillery. I think the author lacks a solid grasp of how mathematical tables were used from 1614 onwards, and of the fact that analog computers were used to create much more accurate and complex tables, which in turn could be used for more accurate firing solutions. And for other purposes as well, beyond code-breaking.

>"It's hard for me to wrap my head around the fact that early, pre-general purpose computers (~1890-1950s) weren't computers in the way that we think about computers today. I prefer to think about them as "math machines"."

>"But subsequent machines were able to do math. From what I'm seeing, it sounds like a lot of it was military use. A ton of code-breaking efforts during World War II. Also a bunch of projectile calculations for artillery fire."

3. Poor description of the advent of electronic computers.

>"Then in the 1940s, there was a breakthrough.[10] The vacuum tube took computers from being mechanical to being electric. In doing so, they made computers cheaper, quieter, more reliable, more energy efficient, about 10-100x smaller, and about 100x faster. They enabled computations to be done in seconds rather than hours or days. It was big."

It was certainly a breakthrough, but the idea that computers immediately became quieter, cheaper, and more reliable is false. They were much larger, initially, compared to analog computers of the era. By almost any measure, they were also much less efficient with energy, though this may depend on what sort of calculations you are doing - I'm less sure of this.

4. Incomplete and incorrect descriptions of programming languages and the history of digital logic. No mention of information theory, Claude Shannon, or digital circuits.

This is a poor analogy that misleads a reader who is unfamiliar with programming languages; it obscures the abstraction:

>"Think of it like this. It's translating between two languages. Assembly is one language and looks like this: LOAD R1, #10. Machine code is another language and looks like this: 10010010110101010011110101000010101000100101. Just like how English and Spanish are two different languages."

5. Lack of understanding of digital hardware.

The author never describes why or how vacuum tubes and then transistors allowed computers to use logic that is both digital and electronic.

The author jumbles a lot of ideas into one and does not seem to understand the relationship and distinction between the evolution of transistor technology (point-contact -> BJT -> FET -> MOSFET) and the creation of integrated circuits.

>"Before 1966, transistors were a thing, but they weren't the transistors that we imagine today. Today we think of transistors as tiny little things on computer chips that are so small you can't even see them. But before 1966, transistors were much larger. Macroscopic. Millimeters long. I don't really understand the scientific or engineering breakthroughs that allowed this to happen, but something called photolithography allowed them to actually manufacture the transistors directly on the computer chips."

6. Lack of historical context. No mention of the motivations for creating the vacuum tube or transistor: amplification and switching for use in telegraph and phone networks. No mention of the role the US government played beyond the 1890 Census, no mention of continued investments motivated by the Cold War, the Apollo Program, ICBMs, etc. They briefly cover artillery firing solutions and mention code-breaking.

7. Overreliance on LLMs to research and write this.

Hard to take a history which includes this seriously:

>"And from what ChatGPT tells me, it's likely that this would have been an investment with a positive ROI. It'd make the construction of mathematical tables significantly faster and more reliable, and there was a big demand for such tables. It makes sense to me that it'd be a worthwhile investment. After all, they were already employing similar numbers of people to construct the tables by hand."

>"Anyway, all of this goodness lead to things really picking up pace. I'm allowed to quote Claude, right?"


> [In the 1980s] Microprocessors started to replace integrated circuits.

Author implies (in the quote above which occurs after the discussion of the invention of personal computers) that the early personal computers from the 1970s did not use microprocessors. This of course is false: All the early "personal computers" used microprocessors. For example the IMSAI used an 8080, the Apple II used a 6502, and the TRS-80 used a Z80. Microprocessors -- which were never intended to be the basis for entire general-purpose computers -- were repurposed for exactly that application by visionaries like Woz. Microprocessors made personal computers possible.

It would be more correct to state that in the 1980s microprocessors began to replace integrated circuits in mini and larger computers.

A subtle related point is that it would be even better to point out that by "integrated circuit" above the author really means discrete small-scale integrated circuit. All microprocessors are integrated circuits, but not all integrated circuits are microprocessors. Microprocessors are large-scale integrated (LSI) circuits or nowadays very large-scale integrated (VLSI) circuits.


There is a lot that is wrong in this article. Broad overviews are useful to people new to a topic - this would only mislead and confuse people.


> It was certainly a breakthrough, but the idea that computers immediately became quieter, cheaper, and more reliable is false. They were much larger, initially, compared to analog computers of the era. By almost any measure, they were also much less efficient with energy, though this may depend on what sort of calculations you are doing - I'm less sure of this.

Not only against the analog computers of the era. Early vacuum tube computers were significantly less reliable and less energy-efficient than electromechanical digital computers like the Harvard Mark I.


For a more humorous overview of this, take a look at Verity Stob’s account

https://www.theregister.com/2012/12/22/verity_stob_8086_and_...


Not sure what it is about computer history, but it garners a lot of interest among computer science students. Electives on computer history have usually been packed classes.


Yes, computers started out effectively being programmable calculators and (physical) process controllers. They didn't always use binary either, because other representations of data seemed more convenient.



