On the other hand, why aren't programs all written in machine language, in hex or octal? Why invent assembly language? Why invent macro assemblers? Why invent high-level languages?
Programmers are not unique in this regard. Mathematicians and logicians do not write everything out in English, either. They've developed a highly specialized notation for writing compact and precise descriptions of their ideas.
Furthermore, many layfolk might even say that the language of jurisprudence isn't quite English, despite how it looks. The jargons of many fields, “legalese” among them, serve the same purpose as mathematical notation, and the same purpose as programming languages: ease, brevity, exactness, and precision in their respective domain-specific communications.
You can see a little of all that in the same preface by Abelson & Sussman, which goes on to say:
These skills are by no means unique to computer programming. … We control complexity by establishing new languages for describing a design, each of which emphasizes particular aspects of the design and deemphasizes others. ¶ Underlying our approach to this subject is our conviction that “computer science” is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. … Mathematics provides a framework for dealing precisely with notions of “what is.” Computation provides a framework for dealing precisely with notions of “how to.”
> Because that's how people used to read things when the A&S quote was from, in 1979.
Clearly that's not how people always read things back then, just as it's not how people always read things now. People read programs, sometimes on screens, sometimes on paper, just like they read mathematical formulas. In some cases, programs have been written on paper in a formal language that hadn't actually been implemented, simply because that language was seen as an effective means to communicate them. We usually identify it as pseudocode, ranging from “pidgin algol” to “plausibly python” to the M-expressions of the early LISP manuals.
M-expressions were still used in the LISP 1.5 manual of late 1962, even though, two and a half years after the LISP 1 manual, the LISP system remained incapable of reading M-expressions—the programmer had to translate them to S-expressions by hand before entering them. Appendix B of the 1.5 manual gives the code for the interpreter, along with some rationale:
This appendix is written in mixed M-expressions and English. Its purpose is to describe as closely as possible the actual working of the interpreter and PROG feature.
(It turns out to be possible to get an even closer description with a formal notation for the semantics, as was done for the definition of Standard ML, but such formalism has yet to catch on.)
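To give a flavor of that hand-translation, here is a sketch following the LISP 1.5 manual's conventions (the particular definition of `member` is illustrative; `F` and `T` are LISP 1.5's falsity and truth values):

```
M-expression, as written on paper:

  member[x;y] = [null[y] → F;
                 eq[car[y];x] → T;
                 T → member[x;cdr[y]]]

S-expression, as the programmer had to enter it:

  (DEFINE ((MEMBER (LAMBDA (X Y)
    (COND ((NULL Y) F)
          ((EQ (CAR Y) X) T)
          (T (MEMBER X (CDR Y))))))))
```

The M-expression form, with its brackets, semicolons, and arrows, was meant for human eyes; the fully parenthesized, upper-case S-expression form was what the machine could actually read.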
This emphasis on the importance of notation for the exact expression of thoughts and precise description of “ideal objects” is not particularly new, and it certainly predates the invention of the computer:
… I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. …
I believe that I can best make the relation of my ideography to ordinary language clear if I compare it to that which the microscope has to the eye. Because of the range of its possible uses and the versatility with which it can adapt to the most diverse circumstances, the eye is far superior to the microscope. Considered as an optical instrument, to be sure, it exhibits many imperfections, which ordinarily remain unnoticed only on account of its intimate connection with our mental life. But, as soon as scientific goals demand great sharpness of resolution, the eye proves to be insufficient. The microscope, on the other hand is perfectly suited to precisely such goals, but that is just why it is useless for all others. ¶ This ideography, likewise, is a device invented for certain scientific purposes, and one must not condemn it because it is not suited to others.
(from the preface of «Begriffsschrift» by Gottlob Frege, 1879, translated by Stefan Bauer-Mengelberg).
In 1882, Frege further explained: “My intention was not to represent an abstract logic in formulas, but to express a content through written signs in a more precise and clear way than it is possible to do through words.”
> People don't code as if code was primarily for people to read.
I agree. I am often guilty of this myself, though I usually forget about it until I try to read a program I wrote some time ago and discover that it takes careful study to figure out.
It's a shame, really, because we should be writing readable code. But after reading this statement, I started wondering: how do people code, then? And I was reminded of this little bit from Paul Graham's essay “Being Popular”:
One thing hackers like is brevity. Hackers are lazy, in the same way that mathematicians and modernist architects are lazy: they hate anything extraneous. It would not be far from the truth to say that a hacker about to write a program decides what language to use, at least subconsciously, based on the total number of characters he'll have to type. If this isn't precisely how hackers think, a language designer would do well to act as if it were.
It is a mistake to try to baby the user with long-winded expressions that are meant to resemble English. Cobol is notorious for this flaw. A hacker would consider being asked to write `add x to y giving z` instead of `z = x+y` as something between an insult to his intelligence and a sin against God.