What the Engineer Should Know About Programming (1957) [pdf] (computerhistory.org)
36 points by luu on Aug 26, 2014 | 6 comments



Some of it is amusing when considered in modern terms -- for instance, the claim that writing in higher-level languages offers a fifty-fold productivity advantage over writing in assembly.

But the bit that I found the most prescient was the following:

Assuming the availability of practical micro-wave communication systems, it is conceivable that one or several computers, much larger than anything presently contemplated, could service a multitude of users. They would no longer rent a computer as such; instead they would rent input-output equipment, although as far as the operation will be concerned they would not be able to tell the difference.

In this fragment, Bemer was almost certainly referring to something like MULTICS, in which the model was that there would be one mainframe per city, and that computing time would be billed like electricity, water, gas, or any other utility.

But I can't help but think of a Chromebook when I read this. A Chromebook communicates wirelessly (even the lowest band it might use, 700 MHz, is still well and truly considered microwave), and although it is technically a computer in its own right, it serves more than anything else as an input-output node for rented time on Google's services or AWS.

Indeed, everything old is new again -- but hearing it from this article put it in a different light for me.


I think the most impressive prediction is something approximating just-in-time compilation:

The actual operation of the computer will be under control of an integrated portion of the processor known as a supervisory routine. In some instances the program will not have been created by the processor prior to execution time, but will be created during a break in execution time under orders of this supervisory routine, which detects that no method is in existence in the program for a particular contingency. Although these supervisors will be on magnetic tape for a while, it is envisioned that they will be buried in the machine hardware eventually, to be improved by replacement like any other component.

This in 1957, when even concepts like "structured programming" had barely been imagined.


In the broader picture, he predicted the cloud computing revolution.


By Bob Bemer, later the "father of ASCII"


This was written three years after the invention of the first silicon transistors at Bell Labs. Why doesn't the "Future Computers" section make any mention of computers becoming smaller? It even goes the opposite way and suggests one giant computer instead of multiple computers.


On September 27, 1960, using the ideas of Noyce and Hoerni, a group led by Jay Last at Fairchild Semiconductor created the first operational semiconductor IC.

http://en.wikipedia.org/wiki/Invention_of_the_integrated_cir...

I'd guess it's because 1957 was a few years too early for ICs, which is where the real shrinkage came from. More powerful computers require more switches, and without ICs that takes more space (and power & cooling).



