"You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts." Is this not an atrocity, to be fought to the last bullet?"
I think it's an acceptance of reality. There is a phase change when systems become too complex for a single human to ever understand them in their entirety. Dealing with systems at that level is a different beast than dealing with simpler systems where "what you see is all there is".
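To make that concrete, here's a toy Python sketch of what that "basic science" looks like in practice - round() is just an arbitrary stand-in for any library function whose exact behavior you haven't read:

    # "Basic science" on a library: feed it inputs, observe what it actually
    # does, and update your mental model accordingly.
    for x in (0.5, 1.5, 2.5):
        print(x, round(x))      # 0, 2, 2 -- banker's rounding, not "round half up"
    print(round(2.675, 2))      # 2.67, not 2.68 -- binary floating point at work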
"What creativity can there be when your medium is shit mixed with sawdust?"
The same sort of creativity artists possess, working in media that are idiosyncratic. It's a different mindset. I see the reddit-gen programmers talk about things on a completely meta level. To them, MySQL is the hardware. Is it a bad thing? Not necessarily. Some of them hopefully will dig down the stack and be the low-level heroes. But that should (and probably can) only be a small percentage.
Now I guess one could argue that those sorts of heroes are what MIT is supposed to produce, but as has been mentioned, this course is not just for CS students. So the real question is: can the CS heroes of tomorrow survive an introductory course in Python? Well, consider that they have probably been modding games since 10 and hacking PHP at 12, realizing at 14 that they need to learn a 'real' language (C#, Ruby, Python); by 15 I bet they've abstracted out the fact that languages are not all that important - they're just the tip of the iceberg of complexity.

So yes, I think they will survive.
> To them, MySQL is the hardware. Is it a bad thing?
Yes! It is. A programmer's finite mental capacity for dealing with complexity is consumed by an ever-greater load of accidental complexity: inconsistencies in the medium, learning about which teaches you nothing. They are holes in reality. The thing that made modern technology possible is that physics, chemistry, etc. do not have "bugs." A correctly set up experiment will yield the same results when done a week from now, in Australia, and so on. And in the end, there are unifying principles governing it all, which will explain everything that happened in a way that teaches you something permanent. A medium with "holes", like a buggy programming system, is a universe where "science does not work."
This statement is true when the languages are all equally crippled - as they increasingly are, through their reliance on an ever-sinking foundation of 35-year-old rotting OSes.
I believe I categorically disagree with every point you make. You are pissing in the wind if you think you can treat a system with 100M LOC the same way you treated the assembly you wrote for a 6502. They are different beasts, with totally different emergent properties. It's like claiming that you can understand the weather if you just stick to thinking about individual molecules.
[edit:] I believe the xkcd strip can be read two ways. Unknowable complexity equals magic (a riff on Arthur C. Clarke). I strongly believe that what has been wrong with software is precisely our clinging to the idea that it is reducible and manageable. We don't think that way when we work in the physical world; we do rely on science, but it's empirical science, with statistically useful models at appropriate levels of abstraction. We don't rely on quantum mechanics to build bridges.
> you are pissing in the wind if you think you can treat a system with 100M LOC the same way you wrote assembly on a 6502
I would be, if I actually believed that it is possible given current technology. However, I believe that the problem is technological rather than fundamental, and that we can arrive at a solution. Let me give you an example of a problem that was once thought to be fundamental and inescapable yet ceased to be viewed as such. Before the advent of high-quality structural steel, no building (save a pyramid) could exceed five stories or so in height. Towards the end of the 19th century, metallurgy advanced, and the limit vanished. I believe that we are facing a similar situation with knowledge. The conventional wisdom is that the era of "encyclopedic intellects" (and now, human-comprehensible software) is gone forever due to the brain's limited capacity for handling complexity. Yet I believe that the computer could do for mental power what steam engines did for muscle power. I am not alone in this belief - here are some pointers to recommended reading, by notables such as Doug Engelbart:
Additionally: why do you assume that any such system must necessarily contain 100M LOC? Does this not in itself suggest that we are doing something wrong?
> clinging to the idea that it is reducible and manageable
All of technological civilization rests on the assumption that the universe is ultimately "reducible" to knowable facts and "manageable" in the sense of being amenable to manipulation (or at least, understanding). The alternative is the capricious and anthropomorphic universe imagined by primitive tribes, run according to the whim of the gods rather than physical law.
> We don't rely on quantum mechanics to build bridges
I must disagree here. QM is entirely relevant to the science of material strength, which is ultimately a branch of metallurgy, itself arguably a discipline of applied chemistry. In any legitimate science, there really are "turtles all the way down." And when there is a discontinuity in the hierarchy of understanding - say, the situation in physics with the need to reconcile general relativity and QM - it is viewed as a situation to be urgently dealt with, rather than a dismal fate to accept and live with forever.
A computer, unlike the planet's atmosphere, is a fully deterministic system. It is an epic surrender to stupidity if we start treating programming the way we treat weather prediction - as something that is not actually expected to reliably work in any reasonable sense of the word, or to obey any solid mathematical principles. It is a disaster, to be fought.
I think we're talking past each other, though there is still a fundamental disagreement in principle. Of course I believe that science 'works', in the sense that in theory, everything should be reducible to a lower level description. High-level source code reduces to machine code; weather reduces to quantum mechanics. However, there's a critical decision about what level you choose to work at when you attack a problem. While you may know that QM has something to do with why certain steel alloys are strong and light, when you build a bridge or a building you don't work at the QM level.
In the practical world of software development today, we do indeed have tens of millions of lines of code 'under' the application level most of the time. Even on Linux, where every one of those lines of code is theoretically accessible as source, the practical fact is that nobody has the resources to read, comprehend, and manage all of that code during the lifetime of a typical project. Therefore, you are necessarily in a position where you have to deal with most of that codebase as black boxes, which have a statistical likelihood of doing what they are intended to do. You have to program defensively, hoping for the best but assuming the worst. You have to rely on your pattern-recognition skills and your intuition to guide you. You have to treat the chaotic complexity you sit on top of like a surfer riding a wave. You cannot afford to take a fully reductionist approach. It's just not possible anymore, full stop.
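To put a face on 'programming defensively' against a black box, here is a toy Python sketch - fetch_quote() is a made-up stand-in for any third-party call whose source you will never read:

    # Treat the dependency as a black box: retry, sanity-check its output,
    # and fail safe, rather than assuming it behaves exactly as documented.
    def fetch_quote(symbol):
        raise NotImplementedError("stand-in for a call into code you'll never read")

    def safe_quote(symbol, retries=3):
        for _ in range(retries):
            try:
                price = fetch_quote(symbol)
            except Exception:
                continue                  # hope for the best, assume the worst
            if isinstance(price, float) and price > 0:
                return price              # only trust output that passes a sanity check
        return None                       # the caller must cope with the box failing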
The exception to this would be a small, embedded system with a minimal operating core and limited memory. But wait -- that system today is almost sure to be networked up to the bigger, chaotic systems that populate the metaverse. So again, at the level of communications, you have to treat the components you communicate with as black boxes.
If you disagree, give me a feasible scenario in which you could in fact take control of the situation in a reductionist fashion. I propose that any such scenario is a toy problem that will not scale to real-world deployment.
I don't disagree with the necessity for fundamental elements ("black boxes") on top of which to build. My only complaint is with the universally shoddy state of these black boxes. Specifically:
The modern computer is the first time mankind has built a system which is actually capable of perfection. You are more likely to have a spontaneous heart attack and die at any given moment than the CPU is to perform a multiplication incorrectly. All of the "hairiness" which your computer's behavior exhibits is purely a result of programmer stupidity. It is theoretically possible for the entire stack to work exactly as expected, and for each element to seem as inevitable and obvious-in-retrospect as Maxwell's equations.
It is possible to build a system where the black boxes aren't fully black and can be forced to state their actual function on a whim. It is possible to expose the entire state of the machine to programmer manipulation and comprehension in real time:
The Lisp Machines existed, and were capable of everything modern, "hairy" systems are capable of, plus much more. You cannot un-create them. They are proof that the entire software-engineering universe is currently "smoking crack" - and I sincerely wish that it would stop doing so.
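For a faint echo of what I mean in a mainstream language - which only underlines how little we ask of our tools - here is a trivial Python sketch of asking a "black box" to state what it actually does, at runtime:

    import inspect, json

    # Ask a library function for its interface and its actual source, live.
    print(inspect.signature(json.dumps))
    print(inspect.getsource(json.dumps))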
We don't need Lisp to know the state of the machine -- in the Linux case, we have source code. We can _theoretically_ debug everyone else's code and know exactly what is going on.
"It is theoretically possible for the entire stack to work exactly as expected, and for each element to seem as inevitable and obvious-in-retrospect as Maxwell's equations."
Yes, and it is theoretically possible according to the laws of physics for broken glasses to spontaneously reassemble. But they don't.
Have you ever actually used a Lisp Machine? There is an emulator available (ask around). All of these things were actually possible there - not merely theoretically.
Maybe I'm not making myself clear. I realize you can in principle debug everything, know the state of the machine completely, etc. What I am trying to point out is that it is too much information to handle at that level of detail. It doesn't matter if it's written in Lisp or Swahili. And the fact remains that even if you convinced 90% of the next generation of programmers to use Lisp, you would still have 50 years of legacy code lying around in other languages. These are practical facts that can't be washed away with a new (or old) programming paradigm.