Hacker News new | past | comments | ask | show | jobs | submit login
SICP: the end of an era (2021) [video] (youtube.com)
361 points by smitty1e on Feb 4, 2023 | hide | past | favorite | 389 comments



We changed the url from https://irreal.org/blog/?p=11127 to the source it points to. It's worth reading too!


What I took from SICP is something no one else and no other resource has taught me, and it has enabled me to start a long, possibly never-ending journey of learning. I will be forever thankful for having learned the lessons I took from it. I think I would be half the engineer I am today if I had not read and worked through the parts of SICP that I did.

What I use most from it in my software development: avoiding mutation when affordable and feasible (which is often), proper abstraction barriers and modularization, thinking about functions and how to compose them and solve problems with them, instead of going "I heard a noun, I need a class!", how to solve problems recursively (although The Little Schemer did have a hand in teaching that as well), and some kind of idea of minimalism.

Aside from all that, SICP has shown me a different way to learn about math. It has shown me that there is a way in which I understand math better than in university lectures. That way is through computer programming. If I can write the code for it and explain what the code does, I usually can also understand the math.


> there is a way, in which I understand math better than in university lectures. That way is through computer programming.

Do you have examples of mathematical concepts that you developed an understanding of this way?

It is not exactly the same thing but it reminds me of a 1986 interview with David Blackwell[1] in which he went into great detail about how computers helped his own mathematical learning:

  I have a little computer at home, and it's a lot of fun just to play with it. [...] Jim MacQueen was telling me about something that he had discovered. [...] He has an interesting algebraic identity. [...] Also, I had a conjecture that some stronger result was true. I checked it for some numbers selected at random and it turned out to be true for him and *not* true for what I had said. Well, that just settles it.
[1] https://projecteuclid.org/journals/statistical-science/volum...


You may be interested in the ideas in Papert's book Mindstorms and the application of Logo to education through "microworlds". Given rules and systems to work within (or possibly to define themselves), students are given the opportunity to explore the systems and the way things interact. Most people only seem to remember Logo for its turtle graphics, which allows an exploration of geometric and (through the connection of geometry and algebra) algebraic concepts. But there were also microworlds for physics (dynaturtles) and others.

----------

https://dl.acm.org/doi/pdf/10.5555/1095592 - PDF of the book.


> Most people only seem to remember Logo for its turtle graphics

I was taught a tiny bit of Logo in elementary school - just the very basics of moving the turtle, drawing a straight line, turning an angle, and drawing polygons.

The final challenge of the class was to draw a circle. It blew my mind when I figured it out.

I suspect that must be the inception of my love for programming.


Papert says in the introduction that he learned the intuitions behind calculus from playing with mechanical gears.


I really wish schools would offer various ways to approach a concept. In programming, as college students, it's not rare to understand an idea better through the use of different paradigms. It seems an efficient way to tickle the brain.


After SICP Sussman wrote Structure and Interpretation of Classical Mechanics. The text is available free online, and it covers some (advanced) areas of classical mechanics by way of simulation.


He did and it is, but it may be worth noting that the code in SICM is what he calls "Programming for the Expression of Ideas", as expressed at this Lambda Jam talk: https://www.infoq.com/presentations/Expression-of-Ideas/

Another interesting piece he wrote is The Art of the Propagator, which is also freely available. His CSAIL page has more of his publications: https://groups.csail.mit.edu/mac/users/gjs/


There's a talk with slides that includes an overview of several of these projects:

https://www.infoq.com/presentations/We-Really-Dont-Know-How-...


I've been through the book twice, both times with a group (always thinner at the end) and doing all the exercises. It was well worth every minute.

I got a much deeper level of comfort with approaching "hard problems" and it was a bridge to complexity analysis, programming language theory, etc.


Slightly off topic but how did you find a group of people to do this with? I’d love to but I can’t imagine anyone in my social circles/work acquaintances doing this with any book.


Believe it or not, both times were via HN. Someone set up a mailing list and wiki and we met every week.


...and an IRC channel. The wiki wasn't much used, IRC was where everything happened. I think the way to do it today would be a discord server, and you need more people than you think to start; they will drop off fast.


> thinking about functions and how to compose them and solve problems with them, instead of going "I heard a noun, I need a class!"

Are the captured variables of a closure a 'noun' or a 'verb'? In the end they are equivalent; it's the same thing - state by another name.

And recursion is no fun when you run out of stack space.


While in the end captured variables and classes might be equivalent, captured variables seem low-fat, while a class is usually instantiated (otherwise please use a struct, enum, record, namespace, ...) and usually comes with mutable members. Setting them in a non-mutating functional way would probably require creating a whole new instance of the class (object). Often a class is overkill for something simpler hiding in it (like a record or struct), and too many people do not think about making things simpler once they have typed the word class.

A function on the other hand forces one to think of function calls and not some kind of state that was set earlier, splitting time into a before and after of setting that state on an object. A function (when using the term more strictly) will always give the same result for the same input. I get more guarantees about my program than I get when classes and instances of them are mutated.

Recursion can solve problems very elegantly at times. When I use it in other non-TCO languages, I always think about stack depth and consider externalizing the stack as an option.

One mathematical thing that I understood much better through SICP was how Church numerals work and how they could serve as numbers in theory. Another was derivatives, since one writes a symbolic calculation of derivatives in SICP. Then the Newton method for finding zeroes. Basically any such topic that one needs to implement in the exercises of SICP, because one has to become more familiar with how it works in order to implement it.
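To make that concrete, here is a rough Python sketch of the Newton's-method idea (my own illustration, not the book's Scheme): improve a guess x by x - f(x)/f'(x), with the derivative estimated numerically, until f(x) is close enough to zero.

    # Rough sketch of Newton's method for finding zeroes (illustration only).
    def newton_zero(f, guess=1.0, dx=1e-6, tolerance=1e-10):
        def derivative(x):
            return (f(x + dx) - f(x)) / dx
        while abs(f(guess)) > tolerance:
            guess = guess - f(guess) / derivative(guess)
        return guess

    # Square root of 2 as the zero of x^2 - 2:
    print(newton_zero(lambda x: x * x - 2))  # ~1.41421356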


Interesting perspective

https://github.com/MoserMichael/jscriptparse - this is my pet project, it's an educational / shell scripting language. It tries to allow for concise expression, and you have a REPL/shell.

Ideally I want to fit this niche as well, where you can solve these problems in a concise manner (well, maybe slightly more concise than Python or JavaScript, and with a subset of the features that you would expect from a programming language for grown-ups).

Here is the tutorial: https://github.com/MoserMichael/jscriptparse/blob/main/PYXTU...


I like the "What I learned" section. Maybe I should steal that idea for my own projects, even if only for my future self and maybe to put in words any revelations I had during development. One could even think about putting all kinds of thoughts and conclusions into such a document or a separate section of the readme.


Another thing: closures act as anonymous classes, and partial application naturally yields safe linear state sequences.

f a -> b -> c

you get three steps until all information is known and computation can happen

you didn't have to define anything, didn't have to think about it, while in OOP you'd have

    class F:
    
      setA
    
      setB
    
      setC

      logicThatCanBeCalledAnytime
Here you either get random logic with unknown a, b, c state,

or have to think very carefully about what state things should be in,

or design safeguards around every method to ensure things are known... (and AFAIK no mainstream OO language even makes an attempt at making metalevel state transition checks easy to define and enforce..)

The 9th circle of Dante.
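For comparison, a minimal Python sketch of the "f a -> b -> c" idea (names made up for illustration): each call captures one more piece of state in a closure, and the logic can only run once everything is known, so there is no half-initialized object to misuse.

    # Partial application via nested closures (toy example).
    def f(a):
        def with_b(b):
            def with_c(c):
                return a + b + c   # the logic, runnable only now
            return with_c
        return with_b

    step1 = f(1)      # 'a' is fixed
    step2 = step1(2)  # 'a' and 'b' are fixed
    print(step2(3))   # 6 -- all information known, computation happens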


Most dynamic languages including Scheme have unlimited recursion (limited only by total memory).

Once you've gotten used to being able to use recursion, stack depth limits seem as lame as olden-days limits on string length (255 characters was common in Pascal).

Javascript is a frustrating case where unlimited recursion is technically possible in modern engines, but they still often limit it to 10000 or something.

Python limits it arbitrarily, just to force you to use its fancy iteration system.


> Python limits it arbitrarily, just to force you to use its fancy iteration system.

Python has an arbitrary and low default limit because recursion isn't optimized (because stack traces are considered important), and call depth is an efficiency issue that risks blowing up the stack without a limit, so failing fast in the likely-erroneous case of a deep call stack is just sensible given all that. Python also lets you alter the limit at runtime; it's a soft limit, not a hard one. The hard limit is stack space.
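For what it's worth, the soft limit really is adjustable at runtime in CPython:

    import sys

    print(sys.getrecursionlimit())   # typically 1000 by default
    sys.setrecursionlimit(100_000)   # raise the soft limit at runtime
    # The hard limit is still the OS thread's stack: recurse deep enough
    # and the interpreter can crash instead of raising RecursionError.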


People often talk about accidental infinite recursion, but that seems like a rare mistake. I might have done it twice in a 40 year career? It's much rarer than an accidental infinite loop, which (almost) no language tries to protect you against. It doesn't seem worth giving up useful recursion.

Python's "solution" requires changing a global, which is an ugly thing for a library that wants to recurse to do.


You or the code you write must be extremely unusual. I might have blown the stack a thousand times, in many different languages. Probably most of the time I wasn't intentionally recursing at all.


It’s not quite arbitrary. That’s an uncharitable description. Python a) handles function arguments in a way that would make recursive functions awkward to use and b) cares about stack traces being meaningful, so TCO would be a worse tradeoff towards that end as well.

There are trampolines in almost any programming language to implement recursion if necessary, and Python has itertools to build what Scheme calls streams, for arbitrary computation.
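A minimal trampoline sketch (my own toy Python, not any particular library): the "recursive" function returns a thunk instead of calling itself, and a small driver loop keeps the stack flat.

    def countdown(n):
        if n == 0:
            return 0
        return lambda: countdown(n - 1)   # defer the next step

    def trampoline(result):
        while callable(result):
            result = result()
        return result

    print(trampoline(countdown(1_000_000)))  # 0, no RecursionError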


> And recursion is no fun when you run out of stack space.

It's been a while since I read anything about this topic, but they get optimized out by the compiler in most cases, right?


You are probably thinking of tail-call optimization, which is when a recursive function returns the result of calling itself as its last action. These algorithms and functions are called tail-recursive.

Compilers and interpreters can (more easily) optimize tail-recursive algorithms to avoid the normal cost of creating a new stack frame and making a normal function call, which means the recursive algorithm has performance more like that of an iterative implementation.

https://en.wikipedia.org/wiki/Tail_call

https://llvm.org/docs/CodeGenerator.html#tail-call-optimizat...
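To make the distinction concrete, here is a small Python illustration (my own, not taken from the linked references); note that CPython itself performs neither optimization:

    # NOT tail-recursive: the multiplication happens after the recursive
    # call returns, so the stack frame cannot be discarded early.
    def factorial(n):
        if n == 0:
            return 1
        return n * factorial(n - 1)

    # Tail-recursive: the recursive call is the very last action, so an
    # optimizing implementation could reuse the current stack frame.
    def factorial_tail(n, acc=1):
        if n == 0:
            return acc
        return factorial_tail(n - 1, n * acc)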


That is possible if the recursive call is the last thing the function does (tail call optimization). But the recursive call is not always at the end of the function, so this optimization does not always apply. https://en.wikipedia.org/wiki/Tail_call

Now, Python doesn't do tail call optimization out of the box (surprise), but there is a module that adds a magical decorator to work around that: https://pypi.org/project/tail-recursive/

(I actually need to look at how they implement this decorator; it must be some serious hack.)


You can do tail call optimisation when the last thing you do before you return is a call. That tail call is not necessarily recursive, let alone a call to yourself. Whatever it is, its position means your current stack-frame can be reclaimed before you jump.

If you can only optimise calls to yourself, you have a very limited form of tail call optimisation. It's an important point in the context of Scheme, where they made a big deal out of experimenting with continuation passing style. In CPS, all function calls are tail calls, and you want to optimise them all, even though most are not recursive.
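As a rough illustration of that point in toy Python of my own (not Scheme, and illustration only): in continuation-passing style, every call below is a tail call, because the "rest of the work" is handed along as a continuation function instead of being done after a return.

    def square_cps(x, k):
        return k(x * x)            # tail call to the continuation

    def add_cps(a, b, k):
        return k(a + b)

    def sum_of_squares_cps(a, b, k):
        # a^2 + b^2, threading the continuation through each step
        return square_cps(a, lambda a2:
               square_cps(b, lambda b2:
               add_cps(a2, b2, k)))

    print(sum_of_squares_cps(3, 4, lambda r: r))  # 25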


Not usually, no. Some languages support automatic tail-call optimization (when a recursive call is the returned value of a function, you can effectively replace the current stack frame with the recursively-called one), but in general, there are limits to how "sufficiently advanced" real compilers are.

Eliminating recursion might be difficult if multiple functions form a mutually recursive set, or if data structures have to be introduced to compensate for the missing stack (e.g., converting a depth-first search from recursive to iterative form requires an explicit stack, as sketched below).
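A small sketch of that depth-first-search point (my own illustration, with a hypothetical graph as a dict of adjacency lists): the recursion is replaced by an explicit stack that the programmer now manages by hand.

    def dfs_iterative(graph, start):
        visited, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in visited:
                visited.add(node)
                stack.extend(graph.get(node, []))
        return visited

    graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
    print(dfs_iterative(graph, "a"))  # {'a', 'b', 'c', 'd'} (order may vary)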


I never understood the Newtonian mechanics taught in high school physics until I wrote a simulator for it. The code is generally ~100 lines in Python, but you really get a feel for how the equations work.
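Something like the following toy sketch shows the flavor (my own code, not the parent's, assuming simple Euler integration of a projectile): you watch v = dx/dt and a = dv/dt play out step by step.

    g = -9.81            # m/s^2
    x, y = 0.0, 0.0      # position
    vx, vy = 10.0, 20.0  # initial velocity
    dt = 0.01            # time step

    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy += g * dt     # gravity updates the vertical velocity each step
    print(f"landed at x = {x:.1f} m")  # about 41 m for these values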


There is a reason Newtonian Physics was invented and found huge success throughout the 1600s and 1700s. It's such a simple framework that a wide variety of consequences can be obtained by just playing with the equations by hand, even without a calculator. They are just linear first and second order differential equations.

All the questions you asked while writing the code can be asked of the equations, and the consequences worked out almost immediately.


I definitely agree about how it may appear to young programmers that they're casting magic spells. I had the fortunate experience early on of building my own virtual CPU and RAM, then I learned assembly, then C, then algorithms, then OSes, then about Unix. I'm not trying to toot my own horn, I've always been a little slow and struggled through this material. However, learning from the bottom up has granted me a lot of insight into how things work and an ability to learn pretty much anything software development related. SICP has a different approach to "bottom-up" learning than I took but I think it still applies. Maybe with all the tools out there not every programmer needs to learn from the bottom up, but someone needs to learn how everything works to build good tools, and if you can't learn that from MIT, I don't know where.


I disagree. The only reason modern computing can exist is because of layered abstraction. You don't need to understand the details of JPEG to write a computer game. You just cast the `jpeg.open()` spell.

Can you imagine if web developers needed to understand everything about how computers work? You must learn how a full adder works before you can use `+`. Oh, and you'd better learn about resynchronization before you start using that SoC, you bloody amateur!

I imagine everyone draws the "you should know the details about how this works" line juuust below the level that they know about.


> I imagine everyone draws the "you should know the details about how this works" line juuust below the level that they know about.

Yes, this is usually what I see in tech spaces. I believe discovering _where_ that point is is largely moot; instead the focus should be on _having the ability_ to drop abstraction layers. Some day you'll find an O(n!) bug, and it may just be a layer or two below.


I think the other reply to my post is on to something when they say there are many paths to mastery. I have just laid out one path. I don't know what the ideal general learning path is, but I know what worked for me. There is certainly a level where you gain enough knowledge to remove magical thinking entirely and you "learn how to learn", and there's no one way to reach that. I have not seen anyone reach that level by learning only specific abstractions, but I can only talk about my own experience.

I also don't think every programmer needs to follow this kind of path. All I'm saying is someone has to write "jpeg.open()" in the first place.


> All I'm saying is someone has to write "jpeg.open()" in the first place.

Of course, but my point is: where do you stop? Someone had to design a multiplier. Someone had to invent a diode.

The point at which you say "I don't really care about the level of abstraction below this" is more or less arbitrary. Knowing more is better of course, but there isn't really any level that you have to know.


I would just expect MIT, of all places, to go quite low. I suppose everything has to come to an end, and MIT can't have the most intensive course forever. I just can't help but feel it's representative of what's happening everywhere.


Yeah you have a point. I would hope they have a more introductory module available first but I feel like SICP should be an advanced module for people specifically doing compsci. (I have no idea how American unis work; maybe it's already like that.)

I definitely remember when I did an engineering degree at Cambridge they had us doing assembly and typing decimal opcodes into a numberpad on an ancient microcontroller dev kit. I understood it but it sure was tedious and off-putting. And I say that as a typical HNer who has programmed forever and I even ended up doing CPU verification professionally. I imagine everyone else absolutely hated it.


Intuitively I find this view compelling, but I suppose it is somewhat arbitrary. After all, you can start in on integrated circuits and go many layers deeper from there to fundamental physics. And at the same time, we keep moving to higher and higher levels of abstraction with their own complexities to worry about and that's where most will end up working. Maybe the same curriculum doesn't quite make sense.


I think it's very hard to use an abstraction effectively if you don't know why it exists and you think it's magic.

I have seen programmers who don't understand, for instance, how key/value stores are implemented, come up with all kinds of improbable explanations about what might be happening to their code when it goes wrong, because they blame the data structure. Perhaps they even know the old idiom "don't blame your tools", but they've reached the end of their knowledge - something must be going wrong because of that magic data structure. Meanwhile, it's some kind of side effect they missed. Magical thinking. Sure, they eventually learn what to do when something goes wrong with that data structure, but they come right back to the same problem with everything they don't understand. Taking the magic out of anything, even if not always directly used, is extremely powerful.

I didn't say this before but I also have experience down to the level of designing and building circuits. I think the bottom-up model falls apart here because thinking about the physical properties of electronics is much, much harder than just dealing with idealized logic gates. You can learn it later, but its application is extremely specialized in my opinion.

I don't know, maybe I am biased because of my learning path, maybe students don't need to go as deep, but I can't help but feel that if I did it anyone can.


I have always claimed that the best way to understand regexes is to implement them yourself. This goes for hashmaps and other basic data structures as well. In general this advice has been very poorly received.

Going to the lower level (building the digital from the electronic) is to cross a much more hermetic abstraction boundary; perhaps that is why it's less broadly useful.


> building my own virtual CPU and RAM, then I learned...

Sounds like you know exactly where :-)

Being committed to a project or goal for a long time, taking your time with the foundational material, seems to be the most direct path to mastery in any field.


The problem is not the learning path doesn't exist, the problem is finding it and knowing if you should follow it. When you get started, you don't know what you don't know.

That's why I say I got lucky... I somehow found this learning path and decided to follow it without really knowing if it would be useful.

Yes, I think that dedicating yourself to any project for a long time will teach you a lot, but you can also end up with a lot of holes in your knowledge depending on the path that you take.

For instance, the most common mistake I see people make regarding learning programming is focusing too much on specific languages and frameworks and never learning anything that could be called fundamentals.

I suppose that's what this is really about. Fundamentals versus specifics.


The trick should be how to set things up so other people get lucky in the same way.

I could give you the same sequence of projects and goals I had as a curriculum, and it wouldn't work for you.

As an individual, if you follow this path, I think the answer is to continually be going back to fundamentals, because they will never stop being an inexhaustible font of inspiration for new ideas.


You bring up good points. In my opinion it's difficult to put together any kind of significant curriculum, and it's inevitable that it's not going to work for everyone and some people are going to struggle with it. However, there's probably some tipping point where the material just doesn't work for anyone.

I don't know how to account for that. I don't know what the curriculum should be, or how to make it more digestible. I just know that, as a person who hires other programmers and got a taste of both a software development program and my own self learning, I don't like what schools are putting out, and MIT's change in course material seems to be a drastic turn in that direction.


I am coming to the uncomfortable conclusion that organizational habits (hiring and promotion practices, VC preferences for engineering team structures, etc, etc) are more significant than what universities teach (or perhaps, ultimately, the former determine the latter) and we need a movement towards professionalization, and probably some kind of guild-like structure to gatekeep (that is, establish a competency floor) if there is to be any improvement over the status quo. Andrei Sorin's recent massive tome, though a bit of a rant, influenced my thinking quite a bit here.


That's an interesting idea, I'll give it a read. The big problem I see right off the bat is lack of available talent and hiring pressure, but I reserve my judgement.


I'm editorializing a bit, his formulation of the idea is a little different from mine, but enough time in the industry makes it hard not to conclude that something is eroding the professionalism of the field, and I think his ideas about what it is are probably, if wrong, at least wrong in a very interesting direction.


> I had the fortunate experience early on of building my own virtual CPU and RAM, then I learned assembly, then C, then algorithms, then OSes, then about Unix.

I suppose what's missing from that experience is the reality of programming "in the large", where modularizing code and creating well-defined interface boundaries becomes quite critical. The easiest step forward coming from C would be learning something like Rust - or, historically, C++. This is when the broad high-level patterns and approaches that SICP discusses at length in a theoretical way become practically useful.


Maybe I'm not smart enough, but I consider that building a CPU from NAND gates, writing a simple OS for the machine, and writing a compiler for a C subset for it can do a lot more to de-mystify computing. I tried twice to read SICP but I don't get the hype. I guess it's fine to not read it after all.

Disclaimer: I still consider nand2tetris too trivial for the de-mystifying work. A full project on the scale of UTokyo's CPU project [1] is a lot better.

Disclaimer: I only completed the hardware part of nand2tetris so I don't claim that I have done the CPU+OS+compiler stuff. But I do find it a lot more interesting than SICP.

[1] https://www.is.s.u-tokyo.ac.jp/isnavi/practice01-01.html


The approach you describe is excellent, but it builds upwards from the physical foundations of computing.

SICP builds from the mathematical foundations of computing (sort of). The end game is to take students from zero through:

1. Understanding several styles of programming (though most often in terms of "what fundamental operations does this language provide?").

2. Being able to build an interpreter.

3. Being able to build a compiler.

For a two-term intro course, it's extremely aggressive.

I encountered it much too late to benefit much—I'd already been working on compilers at Lisp companies for a couple of years. But as an introductory text, it's a bit like the Feynman Lectures are for physics: probably too hard for almost everyone. But the right people, at the right time, will really love it.


I forgot to include that the last part of SICP is indeed very interesting, and that's the only part that I completed when studying the course in Python. Compiler theory is always interesting, and that's the part of the syllabus I'm looking forward to drilling a bit deeper into in the future.


That would explain why I never got SICP. I can’t follow the math.


In particular, while I can visualize code executing in my head, I cannot just look at a formula and do the same: the formula is too static.

Formulae are a beautiful top-down model of some narrow, idealized slice of reality.

Understanding is built bottom-up.

For example, the raw torment of differential equations could have been clarified if the professor had said something like: "Your car going down the street is a differential equation. Your position is x, your speedometer velocity is dx/dt, and the accelerator pedal position is d/dt(dx/dt). We will now set about abstracting this past all casual understanding."


And then you learn that most differential equations are not explicitly solvable, and the world is actually coded in partial differential equations, anyway.


> In particular, while I can visualize code executing in my head, I cannot just look at a formula and do the same: the formula is too static.

The right lesson to learn from this isn't that math is too hard, it's that you're using the wrong tools. The usual target architecture of math (so to speak) is a rewrite engine, not a register machine. You should almost never be "executing" anything.


Don't sell yourself short. The traditional mathematical guild has put religionists to shame with their penchant for obfuscation and ambiguity. Computing science math on the other hand is rigorous, concrete, and accessible to anyone who can, well, write a program.


It’s not actually. I have short-term memory issues that make holding math in my head next to impossible. Things get transposed or forgotten mid-operation. The only way I can do math is to have everything written down, including formulas. Even then, I may not be able to figure out how to change both sides of an equation. I can’t convert miles per hour to time required to travel X miles, even with paper in front of me. (I wish I could. I love hard sci-fi.)

I don’t have these issue with programming because it doesn’t use my logical memory, it uses visual/spatial memory. I see function calls as a series of lines creating decision trees.

You may be surprised how little math matters in practice.


I think I know what you mean. In high school I struggled most with maths that required non-trivial algebraic operations, and more often than not I'd just make some mistake while working out the steps / answer. Most people are able to get around the issue by holding more steps and doing it in their heads, but somehow I wasn't able to do it. I'd make some trivial mistake when copying the mathematical expressions and end up in a dead end. I wouldn't know whether it was a typographical mistake or whether I'm using the wrong approach.

With programming my compiler tells me when I'm off base, and I get instant feedback with testing. I can save all my intermediate steps and approach the problem with trial and error if needed. I have muscle memory with Vim (it's become my cyborg short-term memory buffer). I often wonder how much better I would be at maths if I had a Wolfram-alpha cyborg device attached to my brain :P


Use vim with Maxima, calling it as if it were an interpreter via :%! instead of using a Maxima session. Map it to a key binding. Or better: set readline to vi mode, and you'll use the Maxima prompt almost as easily as editing a file in vi.


> It’s not actually. I have short-term memory issues that make holding math in my head next to impossible.

Next to the vast plains of mathematics, all of us have inadequate short term memories. This is why we write everything down.

The goal of studying any piece of mathematics is to commit it to long term memory. Not in the sense of memorizing dates and times for history class, but in the sense of muscle memory.

As for “how little math matters in practice”, that view is all too common among programmers. It is not a view shared by engineers or scientists in any other field. Without math we wouldn’t have any of the tools and amenities of modern life. Even the Amish teach their children math.


This isn't what I was referring to. It's not storage but retrieval that's broken.

I can't work out how to convert miles per hour to minutes per mile, even on paper. My brain confuses and transposes the operations necessary to do formulas. I have difficulty working out where to put the variables. This can happen with the instructions in front of me.

There's a certain kind of logical/calculative abstractness I can't process without a lot of effort. A lot of math (and LISP syntax) seems to require this. Matrix operations and accounting don't. (Discrete math was fun!)


> As for “how little math matters in practice”, that view is all too common among programmers. It is not a view shared by engineers or scientists in any other field.

Phrases like this are a red flag. They're usually thinly veiled anti-intellectualism, promising to free you from the oppressive gate-keeping of centuries of built-up knowledge. In reality, it's a pop culture prison.

The same ideas are prominent in guitar, where you "don't need to understand theory or how to read sheet music." Which is true, because you can find chords that sound good together with enough time and patience. But it's also not the full truth, as it is obvious that more classically trained musicians tend to bring a lot more to the table from both a technique and a composition standpoint.


And very often, what appears to be sugary pop music was made by very musically educated people. You can't go very far with power chords only.


One example of the kind of song I think you're talking about is "That's What Love is For" [1] by Amy Grant. I once read someone describe it as being like a quaint little Michael Bolton song (can no longer find the link). But the chord progressions and the way it moves between keys are fairly sophisticated for pop music.

[1]: https://www.youtube.com/watch?v=uLVV2TaI4Wo


You might be surprised how deeply math is connected to what you're doing!

It sounds like you're doing some visual reasoning with decision trees, which means you're doing computer science (understanding the execution of an algorithm) using the same intuitive geometric understanding that is extremely effective in computational complexity theory (drawing the evaluation of a sequence of function calls, for example, we can see that it's going to take up a certain amount of space, possibly growing with n in two different dimensions, indicating n^2 growth by purely visual reasoning). It's been trendy in mathematics for a while to disparage geometric reasoning as non-rigorous (which, to be clear, it can be), and even to regard anything that's not purely algebraic as non-mathematical, which is a shame. You can certainly do mathematical reasoning without it being algebraic, rigorous, formalized, or written down.


If people don't understand the math they are supposedly skilled at using, then it is math that is failing them, not them.


No. Math is math. Your brain must adapt to it. Go play with PostScript and geometry, make some figures, do some trig, try to display some integration curve by hand. Then go back to algebra. The dots in your brain will connect and you will be enlightened.


I had no idea that I also do this. The visual logic thing, that is. I also have short-term memory issues, but those seem to be improving with lion's mane and bacopa. Having a high-resolution large monitor also helps; the more I can fit on a screen, the easier it is to work.

Double edit: if you're interested in learning some math, geometry is basically algebra, visualized. Algebraic proofs didn't really click for me until I took a different math class in college on geometric algebra.


> Having a high resolution large monitor also helps, the more I can fit on a screen, the easier it is to work

One of the most important things I learned at the Tufte seminar I attended was to let the optic nerve do as much work as possible to free up the short term memory for the hard parts. Big screens or big printouts are a great tool.

> geometry is basically algebra, visualized

This isn't still part of basic math instruction? It was literally middle school math where I went to school! We were frequently tested on both the algebraic and geometric solutions for a given problem. That explains a lot really.


It was split out when I took it, it wasn't until I took some math history geometry class in college that it all made sense.


> I don’t have these issue with programming because it doesn’t use my logical memory, it uses visual/spatial memory

It pleases me to be able to tell you that your visual/spatial reasoning is isomorphic to the symbol pushing that you're calling math. You're literally doing the same thing, just seeing it differently. This was a major part of my mathematical education, I guess that's not taught anymore?


Mathematical notation is a human language that nobody wants to teach.

The usage of random symbols confuses people because they think there must be a meaning or reading for each symbol, but the truth is it's all arbitrary, and nobody tells you this.

They don't even teach you the Greek alphabet even though it is essential in college/university. I wonder how many people are weirded out simply because they don't know the Greek alphabet.


> Computing science math ... is ... accessible to anyone who can, well, write a program.

it most definitely is not.

maybe it's the notation or the sudden appearance of it on the page, or something I don't know about, but math on a page becomes a literal wall that I can't overcome when reading.

I can explain any concept I understand well enough to teach others, via analogy, metaphor, or demonstrative example, including algorithms I use when I write software. I can't even understand software I write on a mathematical level; forget describing it or teaching it with math.

math is just out of reach for me. it has been since high school and it will be for the rest of my life.

if I have to describe an algorithm I am using in a program by using math, then it will never be described by me, except via the program source code.

if you try to share an algorithm with me and you use math to define it, you might as well just save your time because you are as likely to extinguish the Sun with a single eyedropper of water as you are likely to communicate with me in any way using math.

I loved math when I was young. I would spend time on weekends making up numbers and then calculating their square and cube roots on paper for fun. My high school math teacher hated calculus and he taught us all to hate math in toto when we were in that class. he sucked the enjoyment of math out of everyone effortlessly and completely.

I will not be rejoining the mathematically literate in this lifetime, and it is something that I desperately want to learn.

to say that math is simply accessible to any given person is just plain incorrect.


You’ve convinced yourself that you can never do it. It may interest you to know that the greatest mathematicians also feel that they are groping and may never accomplish what they are trying to do.

Two comments: notice that the equations written in physics textbooks always use single letters for variables - possibly with sub/superscripts.

This is to keep the notation compact. If they used long variable names as programmers do, the essence of the relation described would disappear in the clutter.

That’s why math notation is as compact as possible.

(APL is an example of a programming language that tries to emulate a mathematical style.)

Second, try to look for the shape of an equation and what it’s trying to tell you about the relationship between the variables.


Noting that Olympic runners and the physically disabled both want to run faster doesn't help.

I get really angry when math types just drive by and claim everything is easy if you practice and have the correct mindset. Everyone has something they aren't good at, and that's OK IMHO.


I'm not a "maths type." My passion has always been for programming, not what people think of when you say "math." Of course the Curry-Howard correspondence tells us that programs are mathematical objects, but that's not what comes to mind for most people.

I never said it was easy, I said you can do it. If you're smart enough to write a computer program you definitely have the required cognitive horsepower. Now if you want to only do the things in life that present no difficulty then that's your choice, but for your sake I hope you choose to push your boundaries and grow.

In high school I was taught that if the algebra is hard to understand then look at the geometry, and if the geometry is hard to understand then look at the algebra. Some people are definitely better at one way than the other. Definitely learn where your strengths lie and use them, but don't assume that you're bad at something because you're bad at one of many ways of doing it.

And yeah, you probably don't have the raw brainpower of a von Neumann or a Gauss (I sure don't), but so what? I'll never be a competitive weightlifter but I still lift because I enjoy seeing what my body can do. And I'll never win a Fields medal, Turing award, or probably ever even get a paper published, but I still play around with predicate calculus and other discrete math because I want to see what my mind can do. And as a happy side effect having some physical strength makes me more useful to other people, and knowing some mathematical reasoning does too, by making me better at writing correct programs.


thank you.


You protest too much. You surely know it is not out of reach, but you would have to find your own way back into the joy of it and no-one else can tell you what to read or where to start.


it's out of reach.

easy for you does not equate to easy for me.


You wouldn't have gotten into math the first time if it was just easy; we like it because it's hard.


The process you’re describing is demystifying computers, not computing.

It’s the path of: “Here are basic building blocks, here’s the machine from them, here’s simple things you can do with it, here’s a bare metal language for it, here’s a more advanced language. OK, you’re ready for a course about doing something non-trivial.”

It’s a totally valid approach, but the approach of computing first goes: “Here’s data, here’s operations on data, here’s more advanced composition of those operations, here are some languages created out of those operations, we can finally do compelling stuff. OK, you’re ready for a course about the details of how machines enabled this process.”


And that's why you get computing graduates who are still baffled by concepts like pointers, arrays, and sequential execution of statements. They can cargo-cult their way through some basic coding exercises and they can talk about computers, without really understanding what's happening when you type "x = y + z;"


I've spent considerable time studying both approaches, SICP-like and nand2tetris-like (and I'd include a third, OS-centric approach as in MIT's xv6 courses; under SICP-adjacent I would also include programmes based on Forth and self-bootstrapping Smalltalk). One is the bottom-up approach from the CompSci point of view and the other from the ElEng point of view.

The general bottom-up approach is dead or moribund, as is the idea that an educated professional should know more or less all the relevant aspects of his or her discipline to a decent standard.


Petzold's Code book is also highly recommended as a bottom-up approach, although it focuses more on hardware than software.


Yeah that's also a good one, but I think whoever completed nand2tetris needs to go for some university level textbook + lab material for the next level? Like a proper book for computer architecture, some reading for FPGA and HDL, and two books, one for OS and one for compiler.


Nand2tetris is much more advanced than Petzold. But yes, there’s a lot more to learn after that.


I recently finished reading it and highly recommend. I was unable to put it down and read half of it in one sitting. The title is indeed a bit misleading though, it's really about electronics and hardware.


> The title is indeed a bit misleading though, it's really about electronics and hardware.

That's what code is made of.


The software part of nand2tetris is quite a bit harder than the hardware part. I didn’t bother to buy the book for the first part, but I needed it for the second part.

SICP doesn’t really teach hardware, and it has you build a sophisticated functional-language compiler, rather than the imperative toy Java of nand2tetris.

They are both important.


In college, SICP was the singular course that changed my mind about computing. Before, a math teacher tried to teach me C/C++ and I just didn't get why anyone would be interested in computer programs.

SICP is more like how a mathematician would teach computer science. Thus for people with an AP math and/or physics background entering university (such as at many EECS or ECE departments that put CS together with Electrical Engineering majors), the LISP-style formalism of functional programming and recursion / mathematical induction make a lot more sense. It's a gateway to the more theoretical parts of computer science as well.

Whereas if someone is less motivated by mathematics and more motivated by wanting to write the next app or webpage, then a SICP style course would be a lot less meaningful. On the other hand, if I had started college with a course on C/C++ or even Java at the time, I would've disliked it and likely gotten turned off from studying computer science forever.


I read SICP and never got the hype. Seems like there is a certain personality type who really takes to SICP but not everyone learns best that way. I found it much more intuitive to learn about computing from the hardware up rather than from the abstract, mathematical way of thinking SICP uses.


Sincerely no snark intended. But this comment and another about "bottom-up" via Assembly and C kinda show that maybe you've missed the point.

We can blame SICP itself for that maybe, but it's all in the name: Structure and Interpretation of Computer Programs. It's abstract because it's not about actual physical computing, nor is it trying to take you to the same place as learning computing from the hardware up; it is meant to be "bottom-up" too, but via other, mathematical means.

It's trying to teach you to reason about computer programs by their structure: that they aren't just a list of instructions.

Reading Shakespeare won't teach you grammar, but it will help you be a better reader and writer in its own way!


Yes, I understand the motivation for SICP. There are similar trends in mathematics where some people like an abstracted, axiomatized presentation and others like something more intuitive. I'm in the latter group, as are many others. I don't think either way is superior in itself, nor do people who learn from one approach or the other turn into better programmers.


I don't think we can have one without the other. For example in Petzold's Code book there is a chapter about Boolean Algebra. In a following chapter, we then vastly simplify our previous circuits of logic gates, replacing them with other simpler ones. This happens after proving they are equivalent with our new found algebra talents.

Imagine your computer being 3x (estimated?) slower and hotter without this advancement from the "propeller-head" pure-thought-stuff people. There are undoubtedly wins like this across the whole industry.


A lot of books start at the low level and work their way up to higher abstractions. SICP goes the opposite direction: it starts close to math and then moves down towards the machine. So maybe you would enjoy the book more if you started from the last chapter [1] and worked backwards.

[1] "Computing with Register Machines": https://mitp-content-server.mit.edu/books/content/sectbyfn/b...


There are SICP lectures on YouTube (I think they were classes given to HP engineers?) that I find pretty entertaining. The flavor offered by the lecturers makes it more worth it and can introduce you to some interesting vocabulary.


In the OP video, Sussman explains that SICP was one of four courses at MIT that taught students how to combine small H/W and S/W components to build computers and programs from the ground up. SICP built atop those primitives -- chips and assembly that compose functions -- and introduced the students to the primitives of computational models, from concrete procedural to abstract functional, in which lambda functions lead to higher order theoretical models of computing that will be encountered in EECS courses that follow... the gang of four.


I didn't get on with SICP the first few times, but loved Tanenbaum's works and eventually the Dragon Book.

Everyone's path through these esoteric texts looks different, but all these books are magic.


I wouldn't recommend the Dragon Book to anyone who wants to learn how to write compilers, especially for the first time.

I'm not entirely sure I can recommend Wirth's Compiler Construction, but I definitely like his simple approach.


I somehow don't like the simple approach; I feel like I'm learning a quick trick, but my brain cannot encompass the whole idea. The Dragon Book (at least the 40% I've read) inspired me more.


I guess everyone is different. I loved the Dragon Book, and I bought quite a few books on the subject. It was the one that helped the most for me.

But I had been programming, self taught, for years at that point.


It was around 2001 and I was 16 or so. Still a classic though


I really liked his point about how you should find people who actually want to teach the course. In my experience at school, too many professors didn't actually like teaching and therefore taught a very perfunctory curriculum. The courses that I enjoyed were the ones where the professor had an explicit approach to the course that was based around their perspective vis a vis the material. For a course that's focused around teaching programming, you need a professor who has a perspective on programming. I remember talking to the head of undergraduate CS, who was frankly more of an applied mathematician than a computer scientist, and being frustrated because she fundamentally didn't get what it meant to teach programming. She complained that students wouldn't read the textbook, which was a rather boring Java textbook. She also didn't seem to understand that an intro course needs to be an introduction to thinking about computer science problems, to modeling and solving problems with programming, not just a literal introduction to the act of writing code.

So unless you have a professor who is willing to devote a lot of time to education, which is not necessarily encouraged, and who has these opinions on programming, on computer science education, then you won't have a good intro CS class. Which really means you won't have a good CS program.


I see the end of the SICP approach (not just SICP itself) as the canary in the coalmine for the end of University education itself for technical fields.

Killing SICP is the admission that the artist/creator-engineer is dead and the technician/plumber-engineer is the only engineer that matters for the economy.

IMO University itself is failing that very utilitarian litmus test: if what you need is people with piece-wise qualifications who are as easy to swap out as humanly possible, then there are already existing models of education providing just that, better than University degrees do.

People are paying top dollar for stuff that can be done much better with perhaps a combination certifications and online programmes. Much more efficiently, cheaply, and with just as much access to the same information, teaching manpower etc etc.

If I were a teenager now rather than back in the 90s, I honestly wouldn't know what to do. Degrees remain important as CV items and have a lot of vestigial traction, but they're terrible value: their price has shot up while their nominal utility has plummeted. You are forced to make a very risky call as a teenager about where things will be in 4-5 years' time and what you expect your discipline to look like in the medium term.


> Degrees remain important as CV items and have a lot of vestigial traction

I really don't think that's true anymore, except for getting started. But once you start, the degree bears only some weight when aiming for higher roles that hold political status/value.

I agree with the rest of your post, but I also think there should be a wider distinction between a degree in computer science, software engineering and "computing".

E.g. in Italy we have a degree for all three: computer science leans more heavily on the side of theory and math, software engineering is mostly an engineering degree with more programming-related exams, and computing (which we call "Informatica") is the one that leans most on the practical side of programming while having far fewer exams on the theoretical and engineering side.


Here in Spain it's "ingeniería informática", which is what it means: a bit of EE, huge on CS, and a good chunk of math. No software engineering. The software trade degree is another thing, and legally it can't be called an engineering degree.


Don't miss Sussman's talk linked from the post: https://www.youtube.com/watch?v=OgRFOjVzvm0

This stuff resonates with me deeply. The reality of the modern computing world is a clusterfuck that no single person understands, especially because, like Sussman said, parts are entirely secret/proprietary anyway. Therefore I think an engineer should be skilled in a) removing the bits of that clusterfuck that are not necessary to their problem and b) getting stuff done without a complete understanding of the whole system. (b) sounds scary as fuck, but I think that's the (sad?) reality we live in in the software and hardware world.

Embrace the cluster; embrace the fuck.


So should the new course be called ECEF?


Abso-fucking-lutely.


This HN topic should link to the actual video on youtube rather than a random blog post discussing it.

The unreliability, unpredictability, and incomprehensibility that Sussman describes is only getting worse as we shift to ML-based everything.


It could be argued that the industry does not want people to know too much of the foundations, because the knowledgeable are harder to sway with fads and superficial arguments or keep under control. They want programmers to become fungible "resources", while something like SICP espouses the complete opposite philosophy.

That said, I think SICP preaches a bit too much about abstraction, although that's something a lot of programming texts do, and overabstraction is quite endemic in the industry even without SICP.

The fact that SICP also became a meme --- which you'll notice if you search for images of it --- may be another reason why it seems to have a cult following. The whole "anime girls holding programming books" meme started with SICP.


> They want programmers to become fungible "resources",

Capitalism is a graph search problem linking supply and demand. Don't imagine otherwise, for one second.

It's genuinely, EXTREMELY, easy to hire C++ programmers with knowledge of assembly. There are buckets of them lying around.

> industry does not want people to know too much of the foundations

Foundations of Engineering are:

Evolution of Useful Things: How Everyday Artifacts-From Forks and Pins to Paper Clips and Zippers-Came to Be as They Are.

Skunk Works: a Personal Memoir of My Years at Lockheed.

The Unwritten Laws of Engineering.

They're not the foundations of computer science.


Are you agreeing here?


Who do you think is "the industry", and why would they prefer to hire PhDs if they are afraid of smart educated people?

How can you write that paragraph trashing "industry" for not liking SICP, and then immediately segue into agreeing with them, and accusing them of being too much like SICP, while pretending you are superior?

How can you make any sense out of what you wrote?


The industry includes large organizations of middle managers, who prefer to hire PhDs because they have demonstrated near super-human abilities to submit themselves to the whims of overbearing bureaucracies.

The bit about overabstraction I read as a side-swipe at OOP and (perhaps?) a personal preference that maybe starting at the pinnacle of abstraction isn't the best way to present the fundamentals.


I went through SICP much later and enjoyed all the intricacies and discoveries, while at the same time realizing that it would have little impact on how I go about day-to-day problem solving. The video explanation very much aligns with how I see programming has changed. Now we have ecosystems and library managers and perform composition at higher levels.

SICP still has great value to those interested in programming language development or making/using DSLs and compilers, but this is a minority of the general software development/engineering audience.


My impression is that engineers who've gone through the SICP have a much better overall sense of the structure of things and how they should be built.

Having worked with a lot of SICP-trained and non-SICP-trained engineers from MIT, I think I would prefer to work with the former. That said, it could completely be other factors.

Computer Science's recent ascension to "prestige degree" status over doctors, lawyers, etc. has had and will continue to have a lot of negatives for our industry.


I'd like to take that as data but there are too many potentially confounding variables.


In the comment I agreed as much.


But phrasing it as 'prefer to work with SICP-trained' belies that; it might better be expressed as "were in CS at MIT between 1980-2000". Edit: sorry, meant as clarification, not a dig.


Oh, I didn't really take it personally. And you're absolutely right, there's certainly a kind of ageism undercutting my statement.

I'll try to counter that a little with the observation that experience isn't always everything.


The trend of using Computer Science programs as software development vocational training is pretty terrible, in particular at an institution like MIT.

The word 'science' is in there for a reason, and SICP was excellent at teaching people foundational comp-sci knowledge. Replacing this with Python because of a great library ecosystem is like replacing linear algebra in a maths course with numpy.


An analogy is that computer science is like if universities had an "optical science" program. The professors in this field would be interested in cutting edge research on microscopes, cameras, and telescopes. The problem is that most of their students aren't actually interested in that but want to get a mainstream job as an optician helping people figure out what glasses they need.


But "optical science" actually is taught, in the physics and engineering programs.


    students aren't actually interested in that
    but want to get a mainstream job as an
    optician
The ophthalmologist is more specialized than the optometrist.

Eye surgery has replaced the need for glasses in some cases.


Funny you should pick out the word "science", given that Sussman in the video @4:20 says it's "more like science, you grab this piece of library, and you poke at it..."


He says "like science" but he doesn't say that it is like physics. Sounds more like highly risky field xenobiology with the grabbing and poking.


My interpretation would be that the empirical process of the scientific method is what came to mind, as opposed to the more mathematical theory of computation.


Sussman went on to write a textbook that investigates classical mechanics starting from the Lagrangian and proceeding by way of simulation experiments in Scheme. I think the distinction is not between science and non-science, but between the "analytical-synthetic" process he mentions and the approach of understanding a living system, hence the preponderance of ecosystem metaphors (Python has the better ecosystem than Java, etc). It might be interesting to think about a science of library ecosystems (NPM, etc) but mostly this probably boils down to "exposure to the bad or inapplicable decisions of others".


> SICP still has great value for those interested in programming language development

Programming language development and implementation turns out to be a far more broadly useful skill than it sounds, because, e.g., lots of input processing turns out to be reasonably isomorphic to implementing a programming language with semantics defined by the input format and use case.
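
Not anyone's actual code, just a minimal Python sketch of that isomorphism, assuming a made-up "filter expression" input format: even something this small ends up being a tokenizer, a grammar, and an evaluator, i.e. a tiny interpreter.

    import re

    # Hypothetical input format: arithmetic filter expressions like "1 + 2 * (3 + 4)".
    TOKEN = re.compile(r"\s*(\d+|[()+*])")

    def tokenize(text):
        tokens, pos = [], 0
        while pos < len(text):
            m = TOKEN.match(text, pos)
            if not m:
                raise ValueError(f"bad input at position {pos}")
            tokens.append(m.group(1))
            pos = m.end()
        return tokens

    def evaluate(tokens):
        # Grammar: expr := term ('+' term)* ; term := factor ('*' factor)* ;
        #          factor := number | '(' expr ')'
        def expr(i):
            value, i = term(i)
            while i < len(tokens) and tokens[i] == "+":
                rhs, i = term(i + 1)
                value += rhs
            return value, i

        def term(i):
            value, i = factor(i)
            while i < len(tokens) and tokens[i] == "*":
                rhs, i = factor(i + 1)
                value *= rhs
            return value, i

        def factor(i):
            if tokens[i] == "(":
                value, i = expr(i + 1)
                return value, i + 1  # skip the closing ')'
            return int(tokens[i]), i + 1

        value, _ = expr(0)
        return value

    print(evaluate(tokenize("1 + 2 * (3 + 4)")))  # 15

Swap numbers for field names and arithmetic for filtering semantics and you have the shape of a lot of "input processing" code.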


I heard a joke some time ago that I can't stop thinking about: every program is either a compiler or a database. I still haven't found a counterexample.


> every program is either a compiler or a database

or both!


Or it's just a CRUD website.


That's a user input to SQL compiler


A load balancer? Operating system kernel?


A load balancer is a compiler. Instruction scheduling and register allocation are probably the most similar tasks.

An OS kernel is a database, or rather a DBMS. Query planning fits well.

Arguably, a load balancer is a primitive query planner. Perhaps DBMSes are compilers?


It seems that these arguments can also be applied in reverse, so it's quite a loose interpretation, but I do see the logic in them.


Programming hasn't changed, people teaching it have. There used to be Cobol and SQL back in the day too, they just didn't teach it at MIT.


I disagree, programming has changed greatly.

Today programming is mostly about picking existing components and gluing them together. Those components are themselves mostly gluing together of other components, to many levels. While it is still possible to write code from scratch, that is usually not a productive use of time.

SICP is an awesome book, but it is probably better for a pure Computer Science course after you have had other courses.


> While it is still possible to write code from scratch, that is usually not a productive use of time.

The thing you use to “glue components together” is as much code as the components themselves are, and the techniques used in connecting them run the full gamut of programming skills.

SICP may not be the best pedagogical tool available today, because the space of options in programming pedagogy has changed, too. But the skills it teaches are no less relevant.


Creating things from scratch is still definitely a thing, particularly for high performance code. Sure, there are plenty of data structure libraries, but squeezing out that last bit of performance often requires something bespoke.

For example, I recently wrote a custom CSV parser because I needed something that did no heap allocation but without mutating the original character array (using explicit string lengths).
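
Not the parent's actual code; just a rough Python analogue of the indices-instead-of-copies idea (the no-heap-allocation constraint really belongs to a lower-level language, and quoted fields aren't handled here, which is what the reply below gets at):

    def csv_field_spans(line, sep=","):
        # Return (start, length) pairs pointing into the original string,
        # without copying fields or mutating the input. No quoted-field handling.
        spans, start = [], 0
        for i, ch in enumerate(line):
            if ch == sep:
                spans.append((start, i - start))
                start = i + 1
        spans.append((start, len(line) - start))
        return spans

    row = "12,345,6789"
    for start, length in csv_field_spans(row):
        print(row[start:start + length])  # prints 12, 345, 6789 on separate lines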


This is a ton of fun! If you have the double-quote escaping then you have to mutate the data in place, but if it's just a table of numbers you don't.

Usually CSV parsing doesn't matter, but when it does, it matters a lot.


Your comment gave me flashbacks to FORTRAN data files with fixed width data descriptors.


I would put it like this: consistent-small-gains programming is about gluing components together. There are plenty of non-incremental (big-impact-or-flop) programs that couldn't have been made by just gluing components together. E.g. BitTorrent, AlphaZero, and seL4.


What's the practical difference between gluing stuff together and creating it from scratch? When we code in C, we're gluing the standard library with others, in a way.


In some ways SICP is like a medicine for left-pad-like sicknesses. By teaching how to do things oneself, it makes one less dependent on the simplest things being done by others. This in turn has the potential to avoid dependency hells and bloated software. That, in my opinion, is more needed than ever, especially in some programming-language ecosystems and the states that some of them are quickly approaching.


This made me sad. Sussman is saying -- almost literally -- that MIT just threw in the towel on teaching actual understanding of how things work and instead went with treating engineering artifacts as magic black boxes that no one, including the people who created them, actually understands, and that this is OK. It is not just that no one does understand them, no one can understand them, and so we're stuck with them because if no one can understand them, no one can improve upon them.

Which is, of course, a self-fulfilling prophecy.


I heard a story once, that for a while France was teaching math to kids from first principles. Like trying to teach about number sets before diving into other topics. It was hard and abstract. They stopped.

Instead they do what most people do: they teach concepts with a lot of papering over of the details. Then, once people have a stronger grasp on things, they go into first principles. It’s easier to understand commutativity if you can do addition already!

Similarly, you can teach people black box stuff at first, then dive into details. It’s ok to use a high level language before understanding how to run stuff on a machine.

Let’s not forget something important: Python and Lisp are basically abstracting away the important machinery anyways! Sussman has said it himself, computer science is about computers the same way that astronomy is about telescopes.

And of course it’s not like SICP is the only class all these people are taking. One intro class. Meanwhile people get to do fun and engaging shit while learning. And they will learn more stuff later!


Sure, but we're not talking about elementary school here. I think Python is a fine language for teaching first graders about programming. But this is MIT. If MIT students aren't expected to understand how things work under the hood, what hope is there for the rest of humanity?

What I lament here is not the fate of 6001 specifically. It probably was outdated and in need of a refresh of some sort. What I lament here is the rationale that Sussman gives and apparently buys into, that the new approach is acceptable, and it's acceptable because treating things as magic black boxes is an inherent feature of modern engineering. No one understands, no one can even be expected to understand, and that's Just The Way It Is, and it's OK.

Well, no, it's not OK. People can and must be expected to understand because the magic black boxes aren't actually magic. They were created by mere mortals not that long ago, some of whom went to MIT and read SICP. If SICP is dated (and it is) the thing to do is to update and improve it, not to discard it.


Isn’t this literally the first class you take? Like, there are 3.5 years left to learn all the other shit? This is a class for people with little to no programming experience. And it will include many people who will not be programmers (shocking, I know).

“Give people tools to do shit” does not preclude “learning all the details”. And it’s not the last class people take!


It also expects that you have considerable prior knowledge, just not about programming itself (for example, at least some exposure to calculus is expected, as that's what some early examples use).

The early lessons of SICP are more like teaching what's 2+2 in my opinion (substitution model of function application, for example).
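
For anyone who hasn't met it, a quick sketch of what the substitution model looks like, transliterated into Python rather than SICP's Scheme:

    def square(x):
        return x * x

    # The substitution model traces evaluation by substituting values for names:
    #
    #   square(2 + 3)
    #   square(5)    # evaluate the operand first (applicative order)
    #   5 * 5        # substitute 5 for x in the body of square
    #   25
    assert square(2 + 3) == 25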

In terms of practicality of the content, I'd say SICP was at least as bad as most intro lectures I've seen, and honestly it should stay, but be combined with something like companion labs with "batteries included" builds to play with things.


In the OP video, Sussman mentions that SICP was one (the first) of four core courses in EECS that introduced essential concepts in programming, circuits, signals and systems, and computer architecture. I think all four were completed by the end of year two and were prerequisites for all junior/senior EECS courses thereafter.

It's hard to imagine the EECS program does not still require this material to earn a BS in CS or EE, but requiring four prerequisite courses probably interfered with non-EECS students who wanted to take junior/senior level EECS courses but did not want to have to run the "gauntlet of four" before doing so.


The problem is that MIT used to be the high road for engineering and now it is a prestige degree and grants access to various roles etc. They gave in to the bureaucrats because it was inevitable that things would go that way anyway. At least they put the bullet in SICP and had a hand in the new curriculum.

Now there is a generation of software engineers worried about losing their jobs to ChatGPT, because they have resigned themselves to the fact that ChatGPT can probably glue things together better than they can, long before they do the necessary work to understand ChatGPT from first principles. But this is branded as elitism, highly anti-bureaucratic.


Unfortunately the view that we don't understand our own tools, while premature, will come true soon enough.

Ask somebody at DeepMind exactly how AlphaGo beat Lee Sedol. At the end of the conversation, you will get a blank look. Don't even bother asking what inspired SD or DALL-E to generate a particular image.


Sounds like you are thinking of “New Math”, which was tried in many Western countries including the US: https://en.m.wikipedia.org/wiki/New_Math

In many ways SICP was from the same intellectual tradition as New Math.


As someone who loves SICP and first principles, I had a new math curriculum and loved it. Being able to derive the laws of arithmetic is usually a lot more useful than just knowing some particular instance of calculation.

To be sure, I also did math contests in high school to bump up my speed at math, and in software will do timed speed tests (self devised on certain types of small tasks) also to bump up my speed in a particular language.


You do know how the "first principles" came into place, right?

They had arithmetic first. People counted chickens, sheep, coins first. Then, mathematicians invented more abstract rules that incidentally were able to derive arithmetic, and called them "first" principles. Chronologically they came last.


I love axiomatic set theory and the history of maths and science. I love knowing how technology developed over the last few thousand years, Egyptian land measuring and trade accounting etc., and as much quantum field theory as I can understand (not that much, it's very hard). I like knowing about DNA and the history of animal husbandry. Deriving calculus and complex analysis from ZFC gives us tremendous intellectual power as far as the essentially and inherently true. Understanding how our culture has been lifting the veils of ignorance through science and experiments and cultural evolution and so on is also important.


Do you have a source for France teaching mathematics from first principles?


IIRC the quote is from Dijkstra, not Sussman?


Sorry, I believe I heard Sussman say it (and they probably correctly attributed it); you’re likely right.


You can read some of the story of France's experiments with Bourbakism in elementary education here, under the section about pedagogy:

https://en.wikipedia.org/wiki/Andr%C3%A9_Lichnerowicz

tl;dr: New Math began in France, and they were the first to recover from it.


Sounds apocryphal to me.


I think it’s a question of where you start.

In the late 90s, computers and operating systems were already so complex that you did not start a CS curriculum by understanding how they really work. You took those for granted to get to the interesting topics first and then later went for the details.


Exactly this.

As a fulltime functional programmer... I cannot ask university graduates to learn functional programming to the depth that SICP teaches.

Even the "Scala redbook" is 50-100 hours of work.

Junior graduates have so, so, so much information to cover.

Bashing a "good" development workflow/flow state into them. Communicating with correct nouns. There's SO MUCH work for them to do. Plus, output they're paid to produce.

Functional Programming, manipulating higher order functions, is useful for reasoning about truly complex concurrency.

Hard to justify the time investment otherwise.

There's SO much work to do in software development.


SICP is better for full-time programmers with some experience than it is for recent HS grads, however motivated they might be.

As I understand it, no freshman class ever made it through the full text anyway, so it was always understood that you can go further later, and come back to it, similar to how much undergrad calculus is taught.


And yet the best stuff is in the last chapter.


The part where you write a virtual machine followed by a Scheme compiler for said VM? It's so cool.


So true.


And what exactly were "the interesting topics"?

I was a grad student in the late 80s and early 90s, and I remember having classes that took this approach. It took me years to recover and repair the damage. Understanding how a real OS works is not actually that hard. It's actually easier if you start with that than what I went through, which was a bunch of abstract academic bullshit that left me with no idea how it was supposed to make contact with reality.


The best way to understand how an operating system works is to build a simple one yourself. But you can't build one before you have learned how to program.

I believe it's a bit similar case with many other topics nowadays. Having some basic Python skills unlocks opportunities in other fields, such as machine learning or robotics.


In the late 90s? Surely OOP, code re-use through encapsulation and inheritance, the relational model (though on the way out), and making XML of all the things should have been some of the "interesting topics" at this time.


This is what everyone is saying, Sussman and MIT included... But it is not true. It is just how Windows is stitched together, plus no sources. And now systemd in Linux - it was just the first big application of cgroups, and no one else was doing serious work with them. Red Hat was really the leading force.

Everything is built from pieces and the pieces are explainable. Occasionally you have a hard algorithm, but you can still understand what it does, because it does exactly what a simpler algorithm for that problem does.

Btw. there is not that much material in SICP - some tricks like lazy function implementation and symbolic math. From that point of view it's a good introductory course to compiler building :)


// engineering artifacts as magic black boxes ... //. Alas, that ship has sailed a long, long time ago. As far as the average programmer goes, CPUs have been black boxes since before they were born.

There are just too many people adding too much value to keep up with everybody. Programming is changing radically (as it does every 10 years or so anyways) and I'm actually impressed that Sussman has the vision and courage to move on.


Actually, I'm not at all convinced that a lot of value is being added. Yes, there is a lot of change, but it's far from clear how much of that change represents actual progress. Instead, you get a lot of latest-and-greatest which, upon sober reflection turns out to be bullshit, to be replaced by another round of latest-and-greatest. Lather, rinse, repeat, in a cycle that takes about 5 years or so. CORBA. UML. Agile. OO. And a lot of the churn is because too few people actually understand the fundamentals, and so very few people recognize the bullshit for what it is until it has consumed vast amounts of resources.


Wait till GPT-10 comes around and the primary purpose of CS jockeys is to use it to write programs that make absolutely no sense but they just work.


Happily, the insolubility of the halting problem guarantees full employment for software engineers until the end of time, because "just work" is undecidable.


my junior devs have been doing that on my teams for years anyway


Sussman is wrong. Happens to the best.


I suspect Sussman told only the more palatable part of the story. Students pay for those degrees, so they have a big say in what they buy. It's likely that both EECS majors and non-EECS majors who took EECS courses believed the four course EECS basic prerequisite sequence no longer served their needs as well as it once did. Profs have only so much power to shape minds, especially if the minds are paying over a quarter of a million dollars for the only product the profs are selling.


The very way he tells the story practically lampshades all the behind-the-scenes intrigue that will never see the light of day.


This is sad, but not because of what you say.

I think the course has merits. BUT treating engineering artifacts as magic black boxes is the ONLY way we can manage complexity. Attempting to engineer something highly complex while trying to understand everything is a fool's errand.


> treating engineering artifacts as magic black boxes is the ONLY way we can manage complexity

No, it isn't. And this is exactly the kind of throwing-in-the-towel that I'm talking about.

The other way to manage complexity is to include it as one of the considerations when designing a system. You can design for simplicity just as you can design for any other desirable feature (speed, cost, power consumption). Complexity is not an unavoidable consequence of the laws of physics, it is a consequence of poor choices and complacency.


It is the only way.

Imagine if you were designing an operating system: GNU Linux.

There is no one person in the entire world who understands all of GNU Linux as a whole. Most people understand what they need and everything else is an abstraction. Sussman was right.


It's not the only way. The alternative is to recognize when complexity is a consequence of extraneous structure versus the things actually needed for the system. Rube Goldberg devices are complex, and the bulk of their complexity is spent (though intentionally humorously) to accomplish simple tasks. Many, if not most, real world software systems are the same way. They're Rube Goldberg devices but done accidentally rather than intentionally, through the accrual of many decisions (both good and bad) over years and decades with no or insufficient effort to distill and simplify them.

Treating everything as "magic" black boxes as you want is tantamount to sweeping dirt under a rug. It doesn't address the problem, it just creates a wrinkle for you to trip over later.


>Treating everything as "magic" black boxes as you want is tantamount to sweeping dirt under a rug.

No, it's not. Why do game developers often not write an entire game engine from scratch, and instead use out-of-the-box solutions like Unity or Unreal?

Because they cannot hope to build an entire game engine that is as complex as existing commercial solutions within the available time and budget.

This isn't just software engineering. All forms of engineering work this way. Not just engineering either. Society works this way.


Sometimes you don't need something as complex as existing commercial solutions.

But you are correctly repeating the standard wisdom of the day, which is that first-principles understanding is inefficient (because people don't have time for it) and the best thing is to treat software as pluggable units.

> All forms of engineering work this way.

As far as I know, engineers still learn physics.


I mean is the standard wisdom wrong? The standard wisdom is right.

The way I see it is this: SICP is good to know, but as an introduction to programming, Python and applications are more relevant to the modern age.

>As far as I know, engineers still learn physics.

So? Human society is overwhelmingly made up of specialists. Engineers who learn physics doesn't change the fact that within engineering, there are specialties.


The standard wisdom is absolutely right.

The standard wisdom tends to keep the bureaucratic beast in control, not you. That is why it is the standard wisdom, and not the outsider minority opinion. That is why increasing your professional competence, year over year, with SICP and similar courses of study, is anathema to any engineering culture that has been totally taken over by bureaucracy, which is a majority. That is why there is a consulting industry serving Uncle Bob's ideas that comp-sci is not actively harmful to know about but certainly not something you need to be encouraged to learn. According to this school, anyone making a claim that "every programmer should study X" would make their co-workers uncomfortable, and this is the kind of toxicity that must be managed out of a modern (that is, bureaucratically-controlled) company.

The standard wisdom is not for you if you want to look back on your career and say that you did meaningful work, that you tried to advance your field, even if you failed, or that you worked on one of the hard problems of your time. If you want to look back and say that you navigated FAANG and retired early, then the standard wisdom and making yourself amenable to the super-organism may be just the ticket.

SICP, by the way, isn't something good to know, because it's not something you know. It's something you do, and the experience (all the exercises) will rewire your brain, unless you've already had sufficient experience so that those patterns are already wired in, in which case you may find it kind of trivial and beginner-ish. The difference is whether you're oriented towards things like writing compilers as scary and best left to specialists, or whether you regard them as basic tools of any programmer, which is more empowering for you, not for the bureaucracy, which diminishes in purpose as individual variance in competence grows.

The point about engineering is that there's no movement saying that engineers should stop studying physics, and consume physics knowledge in special packets prepared for the use of the "engineering industry" by physicists. Yet this absolutely fantastical idea is how the work done by computer scientists is supposed to be distributed to software engineers. They need not study the fundamentals, because they need not understand them, just consume the artifacts produced by those who do. Most of the low standards of modern software can be traced directly to this attitude.


>The point about engineering is that there's no movement saying that engineers should stop studying physics, and consume physics knowledge in special packets prepared for the use of the "engineering industry" by physicists. Yet this absolutely fantastical idea is how the work done by computer scientists is supposed to be distributed to software engineers. They need not study the fundamentals, because they need not understand them, just consume the artifacts produced by those who do. Most of the low standards of modern software can be traced directly to this attitude.

Do whatever you want. Nobody is stopping people from studying multiple fields. The main point is that the human brain has limitations. In general the majority of people can master only a few specialties. This is the MAIN reason behind the "standard wisdom".


> As far as I know, engineers still learn physics.

That's interesting, do they learn about all the properties of newly discovered quarks and derive Newtonian physics from quantum theories? Because if that's what they do, it's new to me.


I believe engineering majors at any prestigious institution will be up to date on every one of the currently-known quarks, yes. If your point is that it's not the same as a physics major, that's also true.


Linux is a bad example, complexity-wise, being based on the behemoth of terrible design that is Unix.

Look at Alan Kay's work with VPRI to see what a few brilliant minds can accomplish when looking at things from radically new perspectives.


Yeah, agreed, but sadly VPRI blew up for an unknown reason and the work was never completed or properly published.


You think elegance in design can negate complexity? Complexity has no ceiling.

Good design can only lower complexity. Complexity itself cannot be negated. I can point to a million more examples, here's one:

Red Dead Redemption 2 the video game. Name one programmer who understands every single system in that game.


Those magic black boxes need to be implemented too. Relying on the strategy "someone built them in the before times so we don't need to worry" has severe failure modes.


As if the techniques in SICP were the only way to build those black boxes?

Do you honestly think someone who just learns python really well without touching SICP can't build a library?

They can. It's just that they can't build the library in the idealized, Lego-like modular way that SICP is promoting. This style is definitely better, but it is in no way required.


I think this is a reasonable position, but the reason people are downvoting you is it's a defeatist position, and there's reason to be more optimistic.


It feels good to be naïve.

It boggles my mind how, after years of learning about complexity theory, people who learned CS cannot comprehend the fact that maybe this wonderful world is perhaps more complex than a single person could comprehend.


Well that's certainly a perspective, but you don't know what you don't know.


I just regurgitated Sussman's position from the article/video...

Sussman co-wrote SICP.


Your words were "the only way" and "fool's errand"; what I actually see in the clip is that the course was due for a refresh, both the original professors quit, the administration had no idea what to do after that, and the department head had to teach it while they figured it out; meanwhile committees were formed and in the end Python won because ("for reasons no-one understands") it had all of the libraries.


Sussman said exactly what I said. He quit because engineering changed; the course was no longer relevant.

Perhaps the words are too harsh. But is there any other way to build something more complex than you can understand? Using pre-existing libraries is the only way.

Additionally, if you tried to, say, build a triple-A game without using any libraries, would that not be a fool's errand?


He says "although still important intellectually, it was no longer relevant to the kinds of things people needed." What people? The students? Working programmers? Companies? The university administration? Notice how he tells the anecdote about quitting "I suppose Hal and I basically walked into the department head's office sometime in the end of 97 and said, 'We quit. On this. Okay? Figure out what to do'.", and then the department couldn't figure out what to do.

The story here isn't that a course that had been the same for over a decade was out-of-date, the story is that the organization didn't know where to go from there, they still don't know, and the people that were originally passionate about teaching the class were of retirement age, and both the profs and the textbook were retired from the class at the same time. If you want to read more into it than that, then we're reading the tealeaves of university administration decisions that not even the people making them are often in a position to understand.

How do you know that building something more complex than you can understand is necessary, or is a good idea? Feynman said "What I cannot create, I do not understand," which seems like a better approach to me.

I recently ran across Jonathan Blow's effort to create a new programming language, because he noticed that C++ is bloated to the point of being unfit for purpose, and supposed that he could do better, despite being just one person (or perhaps, because of that). As I understand it, he's interested in working on games with the engine that he's writing in his new language. So the jury is still out on that fool's errand. Of course, he's still using the standard libraries provided by the OS, and he is after all writing an engine (and not just a standalone game) so your point stands.


Bro, let's say the world embraced FP and it was the dominant paradigm and the techniques and patterns of SICP were taught in EVERY computer science curriculum.

The change Sussman described that happened in engineering WILL STILL occur. He stated it didn't just happen with programming. It happened across the entire stack from the hardware all the way to software. What happens in this parallel universe? Most people won't spend the majority of their time using the techniques they learned in SICP... instead they'll be trying to figure out how to get all these arcane libraries to work.

>How do you know that building something more complex than you can understand is necessary, or is a good idea? Feynman said "What I cannot create, I do not understand," which seems like a better approach to me.

How is what Feynman said a technique? He stated a fact about himself. If he can't create it, he can't understand it. So I'm pretty sure Feynman doesn't fully understand how an airplane works because he's never created one. This is not a technique, this is a fact. That being said, pilots ALSO don't understand how airplanes work. But that doesn't mean pilots can't use an airplane; in the same way, I can use a library.

>I recently ran across Jonathan Blow's effort to create a new programming language, because he noticed that C++ is bloated to the point of being unfit for purpose, and supposed that he could do better, despite being just one person (or perhaps, because of that). As I understand it, he's interested in working on games with the engine that he's writing in his new language. So the jury is still out on that fool's errand. Of course, he's still using the standard libraries provided by the OS, and he is after all writing an engine (and not just a standalone game) so your point stands.

It's a bit of a fool's errand depending on how you look at it. If he had a boss and was working for a company, he'd be fucked because it'd take too long. Let's be real... he's more doing that for fun. If your goal was to fly an airplane, do you start off by building one? The only type of person who would do this is someone who does it as a sort of "fun" project.

We all know C++ is a gigantic piece of garbage and we all can sort of visualize the better way in our imagination, but it doesn't change the fact it is still the most practical choice for many applications. Speaking of C++, Rust was (and may well be) a replacement for C++ started by Mozilla.

While Rust was a great contribution to the community, it ended up being pretty bad for Mozilla from a business perspective, so they dropped it. For Mozilla, yes, it was a fool's errand. For the engineers working for Mozilla to develop Rust, it was a smart choice to work on cool shit on someone else's dime.


First of all, pilots absolutely do know how airplanes work, and they make you study the aerodynamics and pass a course before you get your first hour of flight time, even as a student going up.

Feynman understood how airplanes work better than I understand how the internet works, and that's reasonably well.

The way it's a technique: first understand it, then you can try to create it.

Did Feynman understand commercial jets well enough to build one from scratch? Certainly not. But if dropped off by time machine a decade before the Wright brothers, could he have saved everyone some time and gotten something off the ground, given similar resources? Yes.

When pilots don't understand how airplanes work, you get MCAS, and we can all agree that can't be allowed to happen again (and was not entirely the pilot's fault, as they were lied to about the systems, in the service of getting one by the regulator).

Rust was good for Graydon Hoare, as he got his ideas about safety out into the world. Later he burned out. Mozilla is one of the champions of mismanagement and squandering of every kind of technical and non-technical opportunity, and somehow the Rust juggernaut became its own thing that's quite extraordinary and hard to characterize. I'm sure there are a lot of stories there. As to the people choosing Rust today for non-systems programming work, because of fashion, that doesn't seem to be good for anyone, but opinions obviously differ there as well.


>First of all, pilots absolutely do know how airplanes work, and they make you study the aerodynamics and pass a course before you get your first hour of flight time, even as a student going up.

But they can't build an airplane. This follows from your quote of Feynman.

>Feynman understood how airplanes work better than I understand how the internet works, and that's reasonably well.

Feynman can't build an airplane. Again, you quoted him admitting that if he can't build it, he doesn't know how it works.

>The way it's a technique: first understand it, then you can try to create it.

Except no pilot ever uses this technique. They fly a plane, but they mostly never build one.

>Did Feynman understand commercial jets well enough to build one from scratch? Certainly not. But if dropped off by time machine a decade before the Wright brothers, could he have saved everyone some time and gotten something off the ground, given similar resources? Yes.

What does this have to do with anything? Basically, the point is that the logic you're following from that quote doesn't apply. A pilot CAN know how a plane works, so can Feynman, AND they both don't have to build one to know. The whole point of the plane and pilot thing is to show you the flawed logic.

>When pilots don't understand how airplanes work, you get MCAS, and we can all agree that can't be allowed to happen again (and was not entirely the pilot's fault, as they were lied to about the systems, in the service of getting one by the regulator).

They don't have to build an airplane to know about MCAS do they? That's the point here. They don't even have to know how to program. That is essentially what MCAS is: a program. Instead pilots deal with an abstraction. They have a very rough idea of how it works. Sort of like how programmers deal with libraries, but they don't have to build the libraries from scratch.

>Rust was good for Graydon Hoare, as he got his ideas about safety out into the world. Later he burned out. Mozilla is one of the champions of mismanagement and squandering of every kind of technical and non-technical opportunity, and somehow the Rust juggernaut became its own thing that's quite extraordinary and hard to characterize. I'm sure there are a lot of stories there. As to the people choosing Rust today for non-systems programming work, because of fashion, that doesn't seem to be good for anyone, but opinions obviously differ there as well.

Good for the community. Epic business mistake. A fools errand for Mozilla who basically conducted a charity for the community.

>As to the people choosing Rust today for non-systems programming work

Performance is a metric worth going after even if you're not directly calling any system calls. Honestly it's not even that hard or far away from a non-systems programming language.


Conducting a charity for the community is Mozilla's mission, so that part worked out. How good it will be for the community remains to be seen, at least it is probably replacing C++ on a lot of new projects.

Nobody's saying you have to build an airplane to know how it works, the point is you have to know how it works to either build it, or fly it effectively. You won't know why it stalls if you don't know what makes it stay up.

The point of the time travel story is that Feynman understands it to the level that would let him recreate the technology from scratch in a society without powered heavier-than-air flight, not that he can compete, John Henry-like, with a Boeing assembly line. That's a good enough level of "can create" to reasonably say you understand something. Nobody outside of Boeing has the built-up engineering expertise to replicate their assembly line, and any equally advanced design would have to be evolved forward over many years.

The problem with MCAS is that the abstraction combined with a lack of specific awareness of the issue made it unclear to the pilots what was going on exactly, or what that meant. Abstraction isn't the problem, but abstraction that masks relevant parts of what's going on (like how many distinct sensors are going into a reading) has to be treated with appropriate respect.


Mozilla's mission wasn't to conduct charity to the point of self-annihilation, which is what would've occurred had they not spun off the Rust division. Non-profits cannot sustain themselves without positive cash flow. Just look at Ikea, they're a non-profit.

>Nobody's saying you have to build an airplane to know how it works, the point is you have to know how it works to either build it, or fly it effectively. You won't know why it stalls if you don't know what makes it stay up.

You said it. You quoted Feynman saying it. So if you didn't mean that then you made a mistake here. Ok then. I agree with the rest of your post.

However, my main point still stands: the systems we build as humans are too complex for a single person to understand. In order to deal with this complexity, most of programming involves investigating and testing libraries that abstract it away. It is more important to know this than it is to construct highly modular FP-style code.


It depends what you want to do. If you want to work on problems that we know how to solve and that we have solved many times, then the npm xenobiology skill set is sufficient; if you want to do things we haven't done and don't know how to do, it isn't. I've acquired knowledge by poking at libraries and databases and OSes, and those are sunk costs. Apart from the instrumental value at the time, most such knowledge has had no lasting or further use to me. I've acquired other knowledge and skills from books including SICP, and those have continued to pay lifelong dividends with compounding interest over decades.


Domain knowledge can shift, but not always. npm and JavaScript are here to stay. Same with most of the Linux API.

It is true that the fundamentals that SICP teaches will be around forever, but the fact of the matter is, if I'm an employer I'd rather hire a specialist in some arbitrary ephemeral domain relevant to my business than hire someone who is a generalist.


You're in good company, that's an extremely popular perspective. Those of us who are more generalist often have to find our own ways. The rewards are usually worth it.


There's probably a million lines of code in your self driving car today. You might think knowing high school physics helps you become a better driver, but you're delusional if you think you know how your modern car works.


I certainly do not own or want a self-driving car, at least not until they work!

High school physics certainly does help you understand momentum, which is the cause of all crashes.


The majority of drivers don't need to understand physics to drive. It doesn't help.

Instead humans rely on visual intuition, which everyone has with or without learning physics.


>The point of the time travel story is that Feynman understands it to the level that would let him recreate the technology from scratch in a society without powered heavier-than-air flight, not that he can compete

see my remark above


Planes flying, along with siphons, are a bad example to ask theoretical physicists about - most of them wouldn't be able to give you the right answers. Feynman was closer than most because he did think about things like what would happen if a rotating sprinkler sucked in water, etc.


All you need is very rudimentary fluid dynamics, some vague idea of attack angle, and the conservation of momentum with a compressed column of air under the plane. I think most theoretical physicists could come up with that.

The siphon one is funny because many people will start with air pressure, which is wrong. I'm wondering if you have some experience asking people these questions?


Yes, first and second year undergrads when I taught intro labs and diff eq.

My understanding is there is a rather tortuous explanation involving viscosity (for lift and wing shape)


Like DOOM?


> Sussman is saying -- almost literally -- that MIT just threw in the towel...

The truth would have been more embarrassing: he just got sick of all the fucking parentheses.


Before I read SICP I had used OOP, functional programming, and lazy evaluation to varying degrees.

In school and independent study I learned how to use those concepts to write programs, but I didn’t really have a clear conceptual understanding of what they really are or how they work under the hood.

I read SICP to learn more about Lisp and was surprised to learn a lot about all of those concepts as well. It’s all explained very clearly and there are code examples you can toy with to wrap your head around them.

I found it such a gratifying book to read through. And I really regret spending time learning Haskell before reading it, so much needless head scratching.


I find it funny that the author dislikes "casting magic spells", and yet cherishes the book which states that we conjure the spirits of the computer with our spells.

What I don't find funny is that people keep suggesting SICP instead of a much more digestible How to Design Programs — https://htdp.org/


I only looked at the beginning of HtDP, so I cannot say a lot about it, and maybe the fact that Scheme was sort of new to me when I worked with SICP also had a part in it, but: I found SICP beautifully written as well. That John Locke quote at the beginning? I thought it sounded like a philosophical quote, and then I saw the name under it: "It really is by a philosopher! By John Locke! WTF!" and it fits the content of the book so perfectly.

I did not get the same feeling at the beginning of HtDP.


The difference is whether you are the one conjuring the spirits by writing your own spell, or whether you are the one casting the spells that others have npmed.


This change is necessary because young people aren't interested in the fundamental abilities of computers, instead they're only interested in the applications. If you grow up playing video games and using social media then you won't appreciate programming if you're made to start from the bare metal. There is a certain kind of person who's interested in this, but most people instead are better motivated by demonstrations that relate to them. Robotics is cool regardless and allows the kids to "smell blood" early on, it lets them be dangerous, and then maybe one day some of them will be motivated enough to slow down and understand the old principles that underlie all of this.


> This change is necessary because young people aren't interested in the fundamental abilities of computers, instead they're only interested in the applications.

I don't think it's fair to spin this on lack of interest. My take is that young people are smart and they are goal-oriented, and their goal is to actually provide value instead of wasting time with irrelevant low-level details that matter nothing and are practically meaningless.

And they do it just like the generation before them did. No one bothers with knowing how to put together opcodes once they get around to using compiled languages. Some people do quite well for themselves in the software engineering field without touching a compiled language even once, and thus can't be bothered with subjects like linking, how to create and consume a static lib, rpaths, finding symbols, etc. You can have a high-paying career in software engineering and never be bothered with the difference between a stack and a heap. Popular tools like the StatsD daemon are written in Node.js, and no one can be bothered about it.

So why pretend that knowing how things work from the metal up is relevant to get an understanding of how things operate? I mean, most services don't even operate on metal, but on interpreters running on containers launched from virtualizations.

To each its own, but let's not fool ourselves into believing that in a world of countless layers of abstraction it's relevant to dive into the lowest of levels.


Yeah, you can always ship a shitty SaaS or plug libraries you don't understand into each other and still make a lot of money. But you honestly cannot do much more than that if, say, you don't know the difference between the stack and heap. Statsd is like ~4000 lines of code, which is absolutely tiny; it's a trivial codebase. You can't write fast JavaScript if you don't know how arrays are represented in memory. You can't optimize SQL queries if you don't know how a database works. Hell, you can't even create good code in general if you don't know how to compose functions properly.

The countless layers of abstraction may be useful, but without them you are basically stumbling through your day and hoping everyone else can be smart enough to let you continue to not learn anything.


> Yeah, you can always ship a shitty SaaS or plug libraries you don't understand into each other and still make a lot of money.

What you're failing to understand is that the SaaS or plug libraries are not shitty. They might even be better than anything someone who reads SICP like the bible can possibly put together. Why? Because they focus on the right level of detail and abstraction instead of wasting time and effort on irrelevant and useless details. And they deliver it far faster.

I repeat: the StatsD daemon is written in Node.js. It's a >100MB Node.js application to receive, aggregate, and send UDP packets. The StatsD daemon is perhaps the most popular metrics sidecar out there, and no one cares that it's not a 1MB C app.
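
To be fair to that point, the receive/aggregate/flush core really is tiny. This is not StatsD's code, just a minimal Python sketch with made-up names, handling only counter metrics:

    import socket
    import time
    from collections import defaultdict

    FLUSH_INTERVAL = 10.0  # seconds (arbitrary)

    def run(host="127.0.0.1", port=8125):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        sock.settimeout(1.0)
        counters = defaultdict(float)
        last_flush = time.monotonic()
        while True:
            try:
                data, _addr = sock.recvfrom(1024)
                # Expect lines like "page.views:3|c" (counters only, for brevity).
                name, rest = data.decode().strip().split(":", 1)
                value, _type = rest.split("|", 1)
                counters[name] += float(value)
            except (socket.timeout, ValueError):
                pass
            if time.monotonic() - last_flush >= FLUSH_INTERVAL:
                print(dict(counters))  # stand-in for forwarding to a backend
                counters.clear()
                last_flush = time.monotonic()

    if __name__ == "__main__":
        run()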

> Statsd is like ~4000 lines of code, which is absolutely tiny; it's a trivial codebase.

Don't you get the point? It's a Node.js app that took a few lines of code to put together to handle UDP packets. Writing the same thing in C would not be hard, and it would be far more efficient, but it would be entirely pointless, because a slapped-together Node.js application is already good enough performance-wise. What does this say about the cargo cult beliefs of SICP fundamentalists?

> You can't write fast JavaScript if you don't know how arrays are represented in memory.

Sure you can. You just hear that arrays are faster and use those, not bothering with any irrelevant low-level detail. And who exactly advocates using JavaScript for a UDP server and mentions performance needs with a straight face?

> You can't optimize SQL queries if you don't know how a database works.

Don't you get the point? The point is that you don't need to waste your time bothering with performance tuning if your goal is designing a working system. Bothering about irrelevant details like that is a waste of time. Decades ago someone smart already stated that premature optimization was the root of all evil. The first rule of software optimization is "Don't". And here you are trying to argue that being bothered about low level details is relevant for the sake of optimization? Don't.


You don't realize that Scheme was a garbage-collected high-level language at a time when C would have been the norm. This is like having 40000 npm dependencies today: convenient and very practical.

The smart person was Knuth, who said that 97% of the time you shouldn't think about optimizations, but that you shouldn't be complacent and should look out for the 3%. This is the same Knuth who gives all his code examples in MIX, the assembly language he created for this purpose.


The problem is not one thing in particular, but the composition of these things, which are getting more and more bloated, because no one applies the fat-reducer to them. I don't know what statsd does, but the point is that when using it from another program, you suddenly have all that baggage of dependencies and insufficiencies as part of your program. This trend continues until things are horribly bloated. The best place to see this trend is modern websites that merely show some static info but require the viewer to download megabytes of JS bloat, and otherwise only show a white screen.

When composing fat, most of the result will be fat as well. Ultimately we do not gain much from our more powerful machines, because at the same time we make it so that, for the same functionality, they have to process much more.


I didn't say there was anything wrong with statsd itself, just that it's not a particularly complex application. You can produce a working, useful (to some people) web application/website/whatever people call this shit nowadays but it will not be very fast if you don't optimize anything.

Slack is a web app that is incredibly slow, but still useful: yes, I can send people messages and it doesn't immediately crash, but it also takes up ~300MB of RAM on a fresh tab, which is completely absurd. I still use it because I have to but if it was optimized by people who had a clue what they were doing, it would be significantly faster, and use less memory.


Of course it's not strictly necessary to learn the various levels of abstractions in computing. But some people just do it out of curiosity and wanting to know how things work from the ground up. These people typically tend to be smarter and better, but it's true that not everyone needs to do this to have a good career and make lots of money.


> But some people just do it out of curiosity and wanting to know how things work from the ground up.

And that is perfectly fine.

Some people also get a kick out of learning Esperanto or Klingon, but that doesn't mean those are critical subject areas that you need in order to excel in any professional domain.


I agree with you completely, and this was what I tried to convey. You have misunderstood me.


You misunderstood history.

"young people aren't interested in the fundamental abilities of computers, instead they're only interested in the application".

Young people back then were the same, but back then applications were far simpler and were built using the fundamentals of computers.


I also agree with this. You have also misunderstood me.


As said young person.

The issue is:

I truly, truly, love programming and working with computers.

I truly, truly, cannot see myself writing software in a systems programming language, ever, for the rest of my career. I'll only work in GC languages.

I love reasoning about code, logically. I'm studying compilers/mathematics in night school, that's how much I love it.

But... I learnt Rust, learnt assembly. I set up a Raspberry Pi Pico as a hardware debugger as a fun project. That knowledge's usefulness is... tenuous. I appreciate the assembly mental model, but it's only a rough mental model.

I truly, truly, cannot see myself using non-garbage compiled programming languages for any of my future project ideas.

Knowledge of the underlying machine is irrelevant. Same as knowledge of the electrical resistance threshold of your transistors before a bit flip.

Computing is changing, fast, as we ascend layers of abstraction at an extraordinary pace. Most knowledge is highly specialized to your level of abstraction.

SICP/functional programming HAS to justify itself as relevant to each level of abstraction.


This is the right perspective, and you should continue to pursue your own projects using the languages that make you most productive. Don't study what does not interest you.

The problem is if you change majors to engineering, the first courses are not going to teach you what is exciting and flashy, they are going to teach you the very boring, mostly unmotivated maths that you need later to build bridges that won't collapse. SICP represented the engineering approach, by starting from first principles (more logical than mathematical, but appealing for similar reasons) and then proceeding to an understanding of interpreters, compilers, register machines, and so on. Giving up on SICP meant giving up on the engineering approach which produces programmers, to the software engineering approach, which leaves out the hard parts of engineering and produces software users, who plug together libraries that are written by others.

> Knowledge of the underlying machine is irrelevant.

As you look back on your assembly experience later, you'll realize this is false.


I agree.

I simply think, the assembly model will be replaced.

There will eventually be a new, "better," set of principles. For reasoning about library gluing.

> Knowledge of the underlying machine is irrelevant.

> As you look back on your assembly experience later, you'll realize this is false.

Better to clarify: truly deep knowledge of the underlying machine is irrelevant.

At this point, I'd say I'm immensely richer from "thinking" about assembly. Nowhere else would I have truly learnt about CPU cycle times to copy data from register to register to call stack, and back. The concept of "time to copy data" is highly present in networking and other fields.

Important to know that "C" and other high level languages abstract over it.

I'm not entirely sure my knowledge of JTAG debugging/Rust borrow checker is ever going to be substantially useful. The time/benefit equation of doing all that work was... less than ideal.

In hindsight, maybe an afternoon of assembly copying data from place to place, seems like "all" the big picture learning I needed?

Then again. Maybe I invested far more time than undergraduate courses spend on the topic, regardless.


On the JTAG points, I (sadly) don't have enough hardware hobbies under my belt to appreciate that pain. On the Rust borrow checker front, I too share this same sunk cost and have the same lingering regret. Uniqueness types have been done before and better, but it's interesting for systems programming, and already better than C++. Just don't do applications work in it, unless you would pick C or C++ for the same job.


I don't like "shaming" people for not learning "fundamentals", and I believe people should learn whatever they want to at their own pace, but since you asked --

Sometimes you run into bugs (or leaky abstractions) that force you to go down a couple layers. Perhaps you run into a bug in the C/C++ code of your language's compiler. Sometimes a dependency (written in C of course) of your stack gets a nasty CVE and you need to figure out the implications (often being able to read the patch will help). Ever heard of "Meltdown" and "Spectre"? Those were bugs in CPU design which require understanding of how low level operations work.

In junior roles nobody will bother you with these questions, but senior devs are expected to know how to understand and deal with these issues if relevant, even if they're normally using a super high level language, because they don't exist in a vacuum, they depend on these low level things to form the whole "stack" so to speak.

In terms of education in schools, time is limited, so there's obviously a need to choose the most relevant topics to focus on. Learning fundamentals has always been at least occasionally useful though. I mean, you could probably do 99% of your job without knowing this stuff and you'll probably even have a great career. Personally, I just hate to feel ignorant about the things I do for a living...


> I truly, truly, cannot see myself using non-garbage compiled programming languages

Unfortunate typo.


For me, an older dev, I felt that knowing assembly and other "not useful" techs had a general positive effect on my skill as a dev.

Does that still resonate at all?


I had a similar trajectory, but would not describe that learning as useless or irrelevant. That knowledge informs my every decision, and understanding hardware or algorithms is why I'm sometimes able to find a better technique or abstraction for a project than other folks. Plus it's just more satisfying.


Yes, this fits my understanding, and know that I'm not judging you or people like you. I understand this because I'm the same way, but from my own era.


In truth, I don't hold my view strongly.

Software is a YOUNG field.

An undiscovered ocean of knowledge absolutely awaits us.

Far more knowledge remains to be found. I'm far more focused on "what haven't I learnt."


This change is necessary because young people aren't interested in the fundamental abilities of computers

Respectfully, then these young people shouldn't be attending MIT. You go to a top university to learn about the fundamental principles; if you just want to build stuff by assembling components, go to a trade school.


Each generation has different motivations, regardless of our ideals. They can still ultimately produce many people who will care about the same things you do.


That's not how tertiary education works.

University is where students choose the classes they want to take.

Students are 100% responsible for their learning. They vote with their feet.

Faculty must JUSTIFY why the knowledge they teach is relevant to undergraduates.

To students, and increasingly, to the faculty's external advisory board.


To the administration, yes. Not to the students (except perhaps morally).

The University decides what you are required to learn to get the credential, the wider society determines what the credential is worth, and influences the institution to produce more or fewer graduates of the various specializations.

The students get to pick which institution they go to and which credential they will try to get, and then they must learn the material that is required. The computer science department is not trying to get students to "vote", in fact their interests generally align with having fewer, but more promising, students.

The institution, to the degree that it desires revenue, generally desires more graduates, because each pays tuition, but it wants those graduates to be as highly qualified as possible, so as not to tarnish the brand.


I agree. Elite schools (like MIT) have traditionally grounded their curricula on a core set of essential principles, akin to "great books". At MIT, those were calculus through diff eq, physics, and engineering basics like statics and dynamics.

However, since around 1990, I suspect high school AP courses have routinely overlapped with those core courses. So dropping MIT's four-course intro to EECS probably became unavoidable after so many freshmen with AP credits expressed frustration that taking those four EECS courses ("the MIT way") duplicated courses they'd already taken for credit in high school.


That is probably true. I think this is always a problem for elite schools, because you will always get the really motivated students who started a subject early and had a genuine interest in it, and these kids will never need what you are teaching in the introductory courses. The SICP approach was deep enough that anyone could find something to learn in it, if the teachers were willing to let the students run ahead at their own pace.

They also could have taken it as an inspiration to really make those courses advanced introductions to basic topics again (starting from counting, for example), and show that the AP course still had to leave out one or two interesting things, and how wide each of those fields really is.


SICP is the opposite of starting from the bare metal.


Back in the mid '90s when I studied CS/EE (in Sweden), it seemed like LISP (and SICP) in undergrad education was primarily used as a tool for bringing everyone down to the same base level, so that the 50% of students who had already spent ten years programming iteratively wouldn't have a gigantic advantage.

I dropped out after two years partly because of this. I was learning so much more at my part time job - it was insane how effective that job was at teaching me programming. It set me up for life. I can't even summarize it without it sounding made-up, I just realized.

(This problem has nowadays been fixed at my "alma mater". I think they're doing Python now.)


My experience is the opposite: I only appreciated the marvel of SICP (at least that's how I experienced it at the time) because I already knew other programming languages. Something "clicked" that didn't click before. I have the feeling that students with no previous programming experience didn't feel this.

If that's true, it also makes sense to start with Python and move SICP to later in the program.


I know two people who worked through SICP. One of them had so much fun with it they worked through it twice, cover to cover, in their free time. In both cases they reported to me that SICP was quite a revelation to them, since Scheme is about as simple a practical programming language as one can get, other than maybe stack languages like Forth. That is, while [un-|simply-]typed lambda calculus and Turing machines are simpler models of computation, I wouldn't exactly call them practical. In one case they said that it gave them real purchase on what computation really means. That is, what is it really about when one strips off all the type theory, build tools, and dependency hell. My sense, though, is that for most people SICP is better appreciated after a couple of years of experience doing computer programming.


If the fundamentals of the course change, shouldn't that be a new course? If the previous course existed, would that still be worthwhile for some students even if most move to the new course?


There are negative consequences to bifurcating your students like that. Practically speaking, what you want is a set of students with common experiences for the first half of undergrad, which serves as the foundation for the second half of undergrad.

You really want to think of the curriculum of the program as a whole, rather than focus too narrowly on individual courses.


Because real software engineering is less about building things up from first principles and more about combining and integrating pieces of software that already exist. Sorry, but it's the way of the world.


It is often true, but also see http://steve-yegge.blogspot.com/2007/06/rich-programmer-food...

I was recently with a group of DevOps folks who had written 100k lines of terraform code. It worked great: nice improvements for consistency of cloud deploys, decreased time to implement, and all that.

The problem was when terraform would introduce breaking changes in new versions. We had so much code to fix that the group just wanted to stay back at version N - 1. But I knew where that would end: in five years they would have buggy, unsupported, old terraform needing some big heroic project to upgrade, with risk and so on.

So I wrote a simple Python script that was a recursive-descent parser/transformer patterned on expat, with callbacks. It was able to transform all the code and bump the terraform version to the latest version.

It took a while to write, but it was tremendously fun and gave us a powerful tool for transforming, and therefore owning, the 100k+ LOC of terraform, which otherwise threatens to become a congealed, unmaintainable blob.

Even writing custom replacements for off-the-shelf services can have big payoffs, in efficiency or in operational ease. There are two ways in which the body of existing software is full of good solutions. First, there are a ton of turnkey solutions, from Hadoop to NoSQL to Airflow to the other fifty-nine Apache projects, Kafka, etc. etc. etc.

Second, modern languages have so many excellent facilities for building highly scalable, cheap, and reliable software that is, shall we say, designed precisely to meet a given customer need. The custom solution can be faster, better, and cheaper than the turnkey open-source system, which usually has a high cost of complexity and a high cognitive burden for ops to understand the configuration space and so on.


So who are the people building the pieces of software that ordinary developers are combining and integrating? The next Linus Torvalds, Bjarne Stroustrup, John Carmack, etc. is going to need to know the first principles like the back of their hand to build the next generation of operating systems, languages, and platforms. That's "real software engineering"; what most developers do is merely coding.


Those people are the ones who can't be stopped, and they will continue to exist as long as people continue to pursue understanding for its own sake.


The curriculum isn't being removed from existence, it's just being removed from the introductory classes. There are probably more people making their own OS from scratch in 2023 than in 1993 anyway--we have many more people with access to computers, and tons of free resources online for people who want to learn computer architecture, operating systems, assembly language, etc. This is not some kind of hypothetical supposition that "there must be people out there"; I know for a fact these people are out there because I keep meeting them, and some of them are damn young.

The idea of genius men making stuff from first principles is just some kind of Hollywood schlock version of programming anyway. It's the stuff you see on NCIS when they make an episode about hackers. All the good stuff has been a team effort, even if it starts out as a solo project. Maybe with a few exceptions like cURL.


Yes, a lot of us just link black boxes together. But what about the people who make the black boxes? How smart are they?

Maybe we all do not have to know first principles. But somebody does.

And if MIT won't teach first principles, who will?


This reminds me of an Asimov story:

https://en.wikipedia.org/wiki/Profession_(novella)


So people (MIT students, the elite) shouldn’t learn Real Analysis because Wolfram Alpha?


I think you're missing the point. Everyone is free to learn real analysis, and it's ok if it's a part of the standard curriculum.

But if you're aiming to provide the best education to future professionals so that they are able to excel at their field, you'd serve them far better if you taught them how to effectively use Wolfram Alpha.

The truth of the matter is that the bulk of the work in software development, and what makes more economic sense, lies in growing a system through accretion and adopting higher-level components developed and maintained by third parties.


You're describing a "trade school", not MIT. Which is fine too if that's what you need, but it serves an entirely different purpose.


> You're describing a "trade school", not MIT.

Nonsense. MIT engineers don't waste their time thinking about assembly instructions when they are tasked to design a distributed system. Competent engineers know how to switch to high-level concepts where they make sense.


Heh, no, not nonsense.

The difference between trade schools and universities is in the depth of fundamentals. Always has been, still very much is.


> The difference between trade schools and universities is in the depth of fundamentals.

It's definitely nonsense. You're failing to understand that today's "fundamentals" are not the parlour tricks you have in mind. You're in a better position to get ahead in providing software development services if you're able to put together a customized script that launches traefik, nginx, rabbitmq and a few nodejs services fully dockerized than if you're able to tell what opcodes your compiler generated out of C code.


Now you're describing sysadmin work. You don't need university for that. University is for getting a universe of exposure to ideas, to learn fundamentals to their depth. You need to study some fundamentals to be able to create those kinds of programs like nginx or rabbitmq.


But there are still highly specific tasks out there that require knowing stuff like what opcodes your compiler generated out of C code. Not every job or team is building a web app with nodejs and the like.


Excel at what? You need to build a strong foundation in order to innovate and to discover new stuff.


> Excel at what? You need to build a strong foundation in order to innovate and to discover new stuff.

I agree. The problem is that you have a gross misconception about what "strong foundation" represents in this day and age.

I've known guys who fail to stop and think about "foundations" and instead use the word as a pretext for gatekeeping and a form of ladder-pulling. It's the kind of character who thinks that playing trivia games is a good way of assessing whether a candidate is a competent engineer.


As always, the question is: a strong foundation in what? And why?


Obviously the AI researchers in MIT need a strong foundation in GPU and parallel computing design and semiconductor technologies in order to innovate. Only those in trade schools `import pytorch` /s


It's more like saying mechanical engineers shouldn't learn much (or anything) about abstract algebra, because now we have (newly born) mechanical engineers doing mechanical engineering instead of mathematicians doing it.

You don't need to do the theory in order to understand the practice anymore.


Maybe so, but it's starting to look like AI can take over a lot of that combining and integrating.


Even more, the biggest mistakes are usually made by not "using what is standard"; any kind of programming from first principles is usually a huge mistake. People generally aren't smart enough to build something from scratch in a way that beats what's already been tried and tested.


When I was just getting started as a programmer, everyone was using Apache. Fortunately people who do stuff don't spend their time listening to the naysayers on hn.


To make an informed choice about whether to apply a first-principles solution, you have to understand said first principles. Otherwise it's just an excuse for incompetence.

And to internalize the principles you have to at least use them in a didactic setting.


I was talking to our junior developer one day. He was working on a Delphi project of some kind.

I asked him how it was going, and he said it was going well. But that he was confused about some things.

I asked them what they were.

He said, paraphrasing, "What's the difference between disk and RAM?".

On the one hand, it's great that someone without such fundamental understanding was able to accomplish something. On the other hand, it was a bit disturbing.

I've run into several folks who do not understand first principles, and sometimes they are combative when I explain how I "know" something "can't work" or "didn't happen". I'm sure we've all heard those things. "What about <absurd thing>?" "No. Just...sigh...no."


Though the difference between disk and RAM is collapsing. It used to be ('90s) bog-standard advice never to write to disk in the call path of a customer request. Now there's hardly anything that isn't writing to disk all the time; but not to spinning platters, just SSDs.

So maybe their confusion is less bad than it seems at first? Maybe?


While the numbers are changing, there are still 2-3 orders of magnitude between RAM and SSD latency: https://colin-scott.github.io/personal_website/research/inte...


And a similar gap between cache and RAM, but a software engineer unclear about the finicky business of cache optimization would not seem as dumb as the person in the original post.


I think there are (at least) two learning modes. One is to teach a bunch of implementation up front and save the "why this all works" for years down the line. Some brains like building up from basic principles. My brain works the second way. I didn't "get" math until well into my adult years, when I started with foundations (a la New Math) and everything else just clicked into place.

The way we generally teach math in the US (I can't speak globally) is sort of the worst of all worlds. Kids spend inordinate time crunching through arithmetic algorithms without any particularly deep understanding of what all the symbol manipulation means. Well, in 2023 you are going to let a computer do all of that number crunching. There is very little intellectual or practical advantage in grinding out fussy long division problems by hand.

I get that we sort of tried starting with set theory and a nation of people who said “what nonsense this isn’t how I learned math” rebelled to the degree that New Math is now just a joke punchline. But we are still teaching math as if computers/calculators don’t exist in the early years and that’s just laziness.


Computer science is like Maths. Do you need to know real analysis and epsilon-delta proofs to do calculus? It just amounts to gatekeeping if you are using an abstract, higher-level course as an introductory class. Introductory classes should be fun and approachable. The Harvard CS50 is a good example - https://m.youtube.com/watch?v=YoXxevp1WRQ&t=101


If the problem with SICP (with both authors teaching it) was that too many people were failing out, I would expect they would have mentioned that.

The Harvard course welcomes non-majors and is part of a liberal education, it's not comparable to an introductory course for majors at a competitive engineering school.


> To me, it seems more like casting magic spells. Younger engineers using those libraries—almost always without understanding how they work—are just casting spells without any understanding of the magic behind them.

Isn't casting spells one of the key metaphors of SICP, the wizard book? Concepts as objects, spirits controlled by their names, and abstraction barriers preventing a consumer from needing to know implementation details?


Yes, and SICP was the book taking you through the abstraction barrier. Toto showing you the Wizard of Oz, behind the curtain.

The point of the bureaucratic approach is that how you use the spells together is more important than individual skill in making your own spells. This is more comfortable to bureaucrats than outright talk of wizardry, where clearly a wizard can do things that a bureaucrat cannot even explain.


I read some of SICP in my second semester of CS undergrad, and in hindsight it provided a really nice summary of stuff to come later, like synchronization.

Also, the exercises are very well done, and some of the harder (yet doable) ones made the uni workload seem incredibly easy in comparison.

But personally I'm not really on the Lisp bandwagon, not out of dislike but because the syntax feels too unnatural compared to C, and because of the lack of practical applications.


One of the worst things is that the great online course for SICP (6.001 iCampus) is now also retired, and MIT is not willing to recover it. I remember seeing a message that their server crashed and they were not able/willing to recover it, but that seems to have disappeared as well.

It's really sad as this is the course that actually made me love programming.


SICP to me is more about LISP, though. If it used Pascal, or any "basic" programming language, then it would be practical for further generations.

Why? Because you'd want to implement things yourself, without any "advanced toolkits" from LISP.


As a Zoomer who grew up with Python being a dominant teaching language I don’t think I’ve even heard Pascal mentioned. I’ve even taken classes in QBASIC (which was replaced with Python…)


I lost interest in that course when they said: "So we need blah blah from LISP because LISP is bleh bleh". Lol, why MUST I use LISP to learn basic concepts? The authors got something wrong from the beginning, I think.

Basic concepts should be taught and built from SCRATCH!


The book doesn’t stop at basic concepts. A Lisp-like language was chosen because it allows code to be manipulated like data (homoiconicity). If you aren’t interested in the interpretation of computer programs part of SICP, then this book isn’t for you.

If you are interested in the ICP part but Scheme doesn’t work for you, check out Crafting Interpreters. It’s truly excellent and you can read it for free at the author’s website.


I don't see how those two things are related, though: LISP vs. the interpretation of computer programs. That's the issue here.

See? The book title makes no mention of LISP; it's there to teach the interpretation of computer programs, right? Why do readers need to learn something else (like LISP)?


Probably because computer programs need a concrete representation in order to actually have a structure, so we can get to the business of saying things about them and interpreting them.

Something other than Lisp can be chosen. (I seem to recall the MIT course is based on Python now.) Using something else would just add a lot of bulk to the book about parsing; you'd need chapters on automata theory leading up to lexical analysis, parsing techniques and so on. The output of the parser would be Lisp, and so then the structure and interpretation part would begin there.


They answer the “why Lisp” question right at the start:

> If Lisp is not a mainstream language, why are we using it as the framework for our discussion of programming? Because the language possesses unique features that make it an excellent medium for studying important programming constructs and data structures and for relating them to the linguistic features that support them. The most significant of these features is the fact that Lisp descriptions of processes, called procedures, can themselves be represented and manipulated as Lisp data. The importance of this is that there are powerful program-design techniques that rely on the ability to blur the traditional distinction between ``passive'' data and ``active'' processes. As we shall discover, Lisp's flexibility in handling procedures as data makes it one of the most convenient languages in existence for exploring these techniques. The ability to represent procedures as data also makes Lisp an excellent language for writing programs that must manipulate other programs as data, such as the interpreters and compilers that support computer languages. Above and beyond these considerations, programming in Lisp is great fun.
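To make that passage concrete, here's a tiny sketch of the "procedures as data" point in plain Scheme (hedged: the environment argument to eval follows the R5RS/R7RS style, and the exact incantation varies a bit between implementations):

    ;; A procedure described as ordinary list data.
    (define square-code '(lambda (x) (* x x)))

    ;; Because it's just a list, we can inspect it and build variants of it...
    (car square-code)                              ; => lambda
    (define cube-code (list 'lambda '(x) '(* x x x)))

    ;; ...and turn either description back into a live procedure.
    (define cube (eval cube-code (interaction-environment)))
    (cube 3)                                       ; => 27

That list-in, procedure-out round trip is exactly what the book leans on later when it has you build interpreters and compilers.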


I get the feeling that the next gen of programmers (if the species isn't extinct) won't have much to learn from SICP ... and I'm saddened by that. In particular, trying to introduce some programming to kids seems like a herculean task since they've seen gaming nirvana already - like how do we deal with "when will I be able to make a game like Minecraft?" when 2D sprites and gaming logic do not spark any interest?

It's like a computer is a laboratory for doing mathematics and mathematics needs to move from proofs towards empirics to be of interest ... vague thoughts, but yeah.


Universities should be focused on teaching programming, not languages. Scheme is excellent for this. Instead of switching to Python, they should start out with Scheme and switch to Python at some later point.


Sounds like they are trying to compete with bullshit coding camps.

Computer science education is not about doing something fashionable but about teaching the right fundamental, non-sexy skills.


Why no batteries-included Scheme?


Starting in 2000, I decided to help build out Scheme to "batteries included". (Initially, portable Scheme, and then eventually I focused on what's now called Racket.) https://www.neilvandyke.org/racket/

What's "batteries included" changes over time. In 2000, I needed batteries like Web scraping and crawling, various parts of a Web application server, CSV import, multimedia file handling, PostgreSQL access, embedded API docs, embedded unit tests, package tools, so I made batteries like that. (I also did some later non-open-source work of more specialized and sometimes fancier batteries.) (Racket already had portable desktop GUI.)

The batteries I need today, I'd have to roll up my sleeves and build for Scheme, since they include things like: Wasm targeting for in-browser front-ends, polished Android and iOS apps (ideally doubling as desktop GUI apps, and mobile-friendly Web apps), GPU/TPU targeting, good interfaces to cutting-edge ML libraries, various data science tools integrations. I'd be happy to build these new batteries, except grinding Leetcode and copying&pasting Stack Overflow would pay the rent much better.



Sadly, no. For one thing, despite several attempts, there's not really any good mobile deployment story for Racket at the moment (especially for iOS).

The Racket team appears to be more interested in (primarily) education and (secondarily) tinkering with programming language internals than making something that can be used to produce polished, packaged, distributable binaries (and let me stress here that this is perfectly fine... it's not a criticism, just an observation. Different people are interested in different things, and there's not anything at all wrong with that).

Things that would have to change in order for me to consider Racket fully "batteries included":

1) Capable of building binaries for all major platforms (including iOS), including stuff like code signing.

2) A standard GUI and widget library that, again, works on all major platforms. The days when cranking out a command-line binary was good enough are long gone.

3) At one time there were some licensing issues that precluded use on iOS. I think that this may have been resolved by switching to the new Chez-based system, but am not 100% sure of that.

There are probably some other things.

Let me stress once again that I think Racket is great! It's just not something that can easily be used for commercial software development in its present form.


In that case almost nothing is batteries-included. You can't turn a native iOS app into a web app. So the iOS SDK isn't a batteries included development platform.


> So the iOS SDK isn't a batteries included development platform.

That's right. It's not. And?


If you have a requirement of GUI apps and reject the iOS SDK since it's a single native platform, and reject things that only build to the web since they lack the platform integration that native apps have, you're left with a bunch of stuff like Flutter, React Native, Qt, and HaXe. None of those have ecosystems as great as each individual platform's: the web, iOS, Android, Mac, and PC.

If you decide to settle for a limited ecosystem like React Native and others, I think it would also be fine to settle for Racket, which as you say, doesn't provide tools for building apps with iOS and Android's native UI components (though non-native UI components can be packaged into an installable mobile/desktop app and the LGPL is no longer an issue since Racket re-licensed almost everything).

If you decide you gotta have GUI apps and you gotta have a full ecosystem, going with XCode would be one option. Stanford chose that. I would have liked to think that MIT with their past involvement of open source wouldn't choose such a walled garden. However, this expectation of mine has eroded as I've watched scandals unfold at MIT. If you choose the web, that's clearly great for commercial software, but the students may lose out on important trends like AR & VR. Developing directly for Android might be a good compromise, since at least it's possible to run a fully open source Android device.

Here's an example of what I'm talking about with React Native's ecosystem being limited: https://www.twopicode.com/articles/this-is-why-we-stopped-us...

Edit: I see that the market's mostly rejected web-based installable mobile apps, though the mobile web is certainly a big thing. I would argue that the market has also mostly rejected React Native and Flutter. It works, but um, yeah... https://www.thecodeside.com/2021/08/03/why-flutter-is-not-yo...


There's clearly a continuum here. The XCode ecosystem is more batteries-included than Racket, though it's not fully so (at least by my definition). For one, it does run on mobile devices (albeit only Apple mobile devices) and it does have standard GUI capability. Racket has neither of those capabilities.


> For one thing, despite several attempts, there's not really any good mobile deployment story for Racket at the moment (especially for iOS).

And what about Python? Is there a good mobile deployment story there? I'm not trying to be snarky, it's been a while since I did any serious Python work. I'm genuinely curious.


Is there anything for mobile aside from Swift, Flutter, or React Native?


At one time there were numerous packages that let you run JS-based code across both mobile and desktop (e.g., Cordova). These seem to be mostly moribund nowadays, though.

I think there are a couple of options for running Python on mobile.


Has Racket somehow taken over the commercial software world when I wasn't paying attention?

How many Racket apps in the Mac App Store? How many Racket apps on Steam? How many on any app store? I know there have been one or two, but would be surprised if there were more than a couple of dozen.

How many non-hobby web sites are using Racket as a back end? Any?


Your point no. 2 is a rather high bar.


I don't see why. Stuff like Qt has been around for a very long time.

Note that I didn't say it had to be a native GUI.

https://en.wikipedia.org/wiki/List_of_platforms_supported_by...


I got into Scheme because of Guile.


Is it possible to follow the SICP book using Guile?


I did that a bit more than a decade ago and everything worked except for the "picture language" section from chapter 2, but that's a very minor part of the course. There may have been an occasional definition I had to add to get code to work, but it was certainly nothing difficult.


Look into Chicken Scheme. The language offers a repository of "eggs", which cover all kinds of functionality, from SRFIs (akin to Python PEPs), to object systems, to library wrappers.


Like Gauche? It is kinda slow, unfortunately; even by Scheme standards.


Gerbil?


The docs for Gerbil are horrific. I tried using it the other day and couldn’t even work out how to call a process.


Huh wow, I find this video a bit shocking. I took a course from GJS in 2020 and in my recollection his answer to this same question was a somewhat oblique and resigned "Well, the best language doesn't always win."


Sounds nearly the same to me, obscured by an added layer of organizational retconning.



People should learn assembly first. Lisp is not how computers really work. You will learn some terrible habits that will kill your RAM by sticking to Lisp functional principles instead of procedural principles.


The original video is from 2016.


Feels like we as a society just want to skip to the end. Fundamentals are how we got here and we still need them to keep growing.

An example related to a lot of the debate I'm seeing in these comments: anytime anyone talks about wanting to build their own engine to build a game, the majority of replies boil down to basically "you're an idiot, just use an existing engine." There is nothing wrong with using one that already exists, but we need to stop discouraging people from/pushing people away from understanding the core ideas, and start making it easy to find that path and explore it.

We need future generations who are equipped to start from first principles and explore in directions prior generations never thought to if we are to continue to advance.


Granted I’m a cynic by nature but I think that’s just the design.

We have an elite class that is the first-principles-thinking in-group. They imagine and build the future. They enable the next group to build the consumer-level tools that let this upper middle class (the startup founders who build yet another clone of a clone, aka nothing novel, engineering VPs, etc.) employ the rest (Uber drivers, warehouse workers), who need to stay low to do the work.

A massive drone class who does what they’re told to keep the hive going.


This is an incredibly dark thought, but I don't know that I can discard it out of hand and that makes me sort of sad.


Growing up without running water and electricity and then growing up in the US has been a mind boggling journey. In and out groups are all I’ve known my whole life. :/


Andrei Sorin's book (it's available free online) might be interesting to you. It's quite long but if you skip around in chapter 7 (on software) you might find some ideas worth keeping around.


This is sadly everywhere now, e.g. "data scientists" are just people trying to do a statistician's job without any formal background in probability theory.


That's exactly how I feel about this fake term. The job description should be something like "data analyst". Very few people doing that reach the level of scientist. You need to publish useful research and get feedback on it to call yourself a scientist.


I have a friend who got a real degree in data science and we talk sometimes about the people on reddit who ask the questions that boil down to EXACTLY what you are talking about. I just don't track that space as closely so mostly hear about it second hand.


Well, it really depends. Do you want to build a game, or do you want to learn about building games? For the former, you're better off using an existing game engine. For the latter, building a game engine would help tremendously. If you simply wanted to build a game, and that's the main goal, you're wasting your time reinventing something that has already been done repeatedly, is reliable, is readily available, and already does what you need.


I covered this in my comment. I didn't say using engines is bad, I said discouraging people from wanting to build them out of hand is.


Most people asking how to “build a their own engine to build a game” should be discouraged from that. They’re probably either not thinking deeply about the trade-offs or are not in a position experience-wise to do so. Just because 0.5% should be encouraged doesn’t make the common response that you see wrong as a default.

(Someone who shipped a game on a custom engine 25 years ago; would not do the same now.)


> Most people asking how to “build a their own engine to build a game” should be discouraged from that.

Why? You didn't even ask why they want to build their own engine. If the purpose is to learn the underpinnings, then they should not be discouraged from doing so. This telling people they should be discouraged from building engines is just stupid, and unfortunately very common, based on gross generalizations of what "most" people "want". Why not ask what the person's motivations are instead of assuming they are a brain-dead weekend code-camp motherfucker whose only goal is to make an "app"?


It’s right in their sentence. They’re saying they want to end up with a game.

If someone came to you asking “how can I write a word processor so I can write a letter?” what would you advise them?

“I want to build a new game engine to learn how to do it.” “Right this way; here’s how I’d approach starting that long journey.”

“I want to build a new game engine so I can write a game.” “You should probably skip the engine step and go straight to making the game.”


Give me a few million dollars to live off of first, and then I can pursue many deep dives for their own sake. Until then, I will maximize ROI by combining things to deliver value fastest and furthest. Money gives freedom.

-- This is (rightfully so) the mindset of non-academics who don't want to squander time / opportunity.


There's an "intellectual immune response" effect, when a group that asserts that you don't need to know something meets someone demonstrating the passion to learn it.


Yup, we need those people. But how many others WOULD have done the thing if they weren't browbeaten with "you are an idiot if you do this"?


It's a way of solving a (tribe-wide) coordination problem, because if enough people do the thing, the balance shifts and it's not acceptable to ridicule anymore. The opposite of being curious about something and wanting to study it, is wanting not to know it (for example, how I feel about kubernetes). This can easily turn into not wanting others around you to know it.


A few unfortunate things about software development practice lately being mostly plugging together components:

* People make huge things out of often poorly-chosen components and frameworks.

* People 'engineering' the system don't actually understand how it works.

* Much/most of your time is spent figuring out a pile of library/framework bureaucracy that someone else created. Not even useful concepts that you are smarter for learning, just figuring out some muddled version of an old model, the particular forms of made-up incantations around it, and discovering and working around the shortcomings of its design and implementation. Often you find the thing you're using is actually poorly designed and implemented, and you might feel stupider for having learned its bureaucracy.

* This plugging together of components can be done well, and sometimes is, but it's also a little too convenient for developers who think software engineering is like turning in homework assignments in school. In which the goal is to get your homework (sprint task) past the grader (PR approved), and that's the end of it (what's analysis, planning, and architecture?). So you do Web searches of Stack Overflow (and maybe some Copilot open source laundering), to slap together some components, and no one even seems to expect you to know how it works. Where's the pressure to do software engineering?

* Perhaps some of the churn of these components is due not to technical/business merit, but to the fact that putting the latest possibly-rising component on your resume can improve your marketability in an employment environment oriented around frequent job-hopping. It's a symbiotic relationship between people who spit out "new" components to plug together, and people who twist projects to put the new thing on their resume.

That's not exhaustive; just a few off-the-cuff that are annoying this weekend. :)


> People 'engineering' the system don't actually understand how it works.

The problem is that the complexity of systems is growing beyond the capacity for people to understand in the first place, and the solution is to make it possible to do meaningful work in systems where people only understand a part of it at a time.

In a way, software engineering is catching up with the rest of the world, which already operates that way. When you build a skyscraper or a bridge, maybe you have a structural engineer who understands how concrete and steel work, and you have a geotechnical engineer who understands how soil and sand work.

My personal observation is that programming has long attracted people who like the kind of absolute certainty that you can have in smaller programs, where the behavior of programs is known to a mathematical certainty. If you're careful, you can build larger and larger programs this way... but at some point, the program or system is too large, and you are forced to think about things in statistical terms. You have to accept that your program has bugs in it. Some people who are otherwise brilliant programmers are simply too uncomfortable with the jump from the certainty you have in small programs to the known uncertainty you have in large programs & systems.

It used to be you could make a decent living working within the certainty of small programs, but nowadays you are going to have to plug things together.


Programming in the large does not necessarily involve uncertainty. A CPU microarchitecture or an OS are large, complex systems, but we expect them to perform reliably. Well-understood modularity boundaries are of course quite important for this. It's only when we expect systems to do things that they weren't designed for at the outset (such as keeping private data from being accessed via side-channels, if you're designing a modern CPU) that real problems exist.


>microarchitecture or an OS are large, complex systems, but we expect them to perform reliably. Well-understood modularity boundaries are of course quite important for this

We expect them to mostly approximate correct operation but know that there are bugs at pretty much every level. See Spectre, rowhammer, dirty pipe, etc.


> Programming in the large does not necessarily involve uncertainty.

Can you provide some reasoning to support this, or elaborate? This seems obviously false to me, maybe I don't understand what you're trying to say here.

When we have goals like wanting to keep private data from being accessed via side-channels, there are some major problems preventing you from being certain about it. You are probably not going to be certain that you know which data is private and which data is not private. You are probably not going to be able to enumerate all possible side-channels--new ones keep getting discovered. Then, suppose you've found a way to mathematically express what data is private and what side channels are and what it means to leak data, do you expect to somehow apply formal methods to the entire damn system?

Maybe someday. Not today.

Today, you apply defense-in-depth strategies. Mitigations upon mitigations upon mitigations. You try to keep any one bug from bringing down the whole system. You do analyses of different types of bugs and see how they affected the system, and figure out ways to make the system more resilient.


There is a radical idea discovered in the 70s, that you can just buy computers and run your own code on your own machines. Rumour has it that some still practice this ancient art.


If you're happy with the experience of using computers in the 1970s and 1980s, you are of course free to continue using computers that way. Meanwhile, all the people who want to build a "quick web app" are building on top of an enormous tech stack which they don't understand. Your application will be affected by everything from the way that your TCP packets get routed around the world, to the arcane rules which your browser uses to decide when to repaint the page, to the byzantine rules for text segmentation and script-specific shaping engines used to let end-users interact with "plain text" content.


The question is whether this is necessary. I agree with you that people are doing all of those things. It's not, strictly speaking, necessary, it's just a popular choice.


A CPU does error checking and handles cases where it ends up in a state that was not intended. OSes do that in spades, and we wouldn't accept an OS that didn't embrace that it has bugs and that stuff goes wrong.

At that scale, the question isn't whether there's behavior we didn't understand; it's how best to deal with it.


> The problem is that the complexity of systems is growing beyond the capacity for people to understand in the first place

This is the argument, but it's a question of whether this is a choice we are making as a society and an industry: to not understand, and to choose technologies that abrogate the possibility of understanding, or to choose understanding, and focus all our efforts on enhancing our ability to understand. To build bicycles for the mind, or bricks for software engineers to stack.

It's a keen observation about the people who like certainty, but as systems get bigger these people can also often reason mathematically even about the uncertain cases, in the statistical terms you mention. There are even more people who would rather navigate politics than mathematics. Internal politics becomes more important in relative terms if everybody can agree that understanding the fundamentals is a waste of time.


>* People 'engineering' the system don't actually understand how it works.

This is not unfortunate. This is literally the only way to manage complexity. You cannot hope to understand everything.

I think the course in its old form is good. But what Sussman says is legit.

Software engineering is now more of a scientific exercise of testing existing structures than it is an axiomatic construction from primitives.

In fact this is how ALL of society works. Specialists each construct their own universe and nobody truly understands everything as a whole.


With more understanding one has more potential for optimization. And I don't mean performance or memory; I mean conceptual optimization: reducing the solution of a problem to the minimum number of concepts needed and finding the correct set of concepts to use in the solution. Once such a set is found, one might not need to think more about their details. The beauty is that one could! Provided that those concepts are implemented in the same clean, minimalistic way, understanding them is possible as well.

When one does not understand the details and instead just pokes around in the code "until it works", one will forever be blind to that kind of optimization, and the solution created tends to have leaky abstractions and to become even less composable when trying to use it to solve future problems. Reusability is reduced and we keep reinventing things over and over, without ever getting a clean version out of it. This is part of why a lot of the software we have is so brittle and resource-hungry. We do not bother with proper understanding enough and cluelessly throw together balls of mud.

EDIT: On second thought: software engineering should not be like the sciences in the sense that we poke the "system" and see what happens. Why? Because the sciences only do this because the world existed before humans did. We did not make it and we are inside of it. We have no choice but to poke and try to figure it all out. With computers this is not so. We have built these machines. We can and should understand what we make them do.


>With more understanding one has more potential for optimization. And I don't mean performance or memory. I mean conceptual optimization. Reducing a solution of a problem to the minimum amount of concepts needed and finding a set of correct concepts to use in the solution

Except what I'm saying is that the human brain has biological limits on how much and what it can understand. Eventually a system can become so complex that the only way it can be developed is through the efforts of multiple people specializing in specific components.

>When one does not understand the details and instead just pokes around in the code "until it works", one will forever be blind for that kind of optimization and the solution created tends to have leaky abstractions and become even less composable when trying to use it to solve future problems.

For a sufficiently complex system this cannot be avoided, simply because the system is too complex to be fully understood by a single person. I am 100% talking about conceptual optimization.


I get what you are saying, and it is definitely true that systems can become too big to keep in mind completely. However, with proper, non-leaky abstractions it should be possible to reason about the system at each level of abstraction without having to think about the lower level. The thing is, one could then descend to the level one below the one being looked at, if needed, and find out that a simpler primitive could be used to cover more ground.

For example: exceptions can be implemented using continuations. A return statement can also be implemented using continuations. Both can be implemented using the same concept. At the layer of exceptions and return statements, they seem like different things, but we can look under the hood and see that we could use continuations, a single concept, to express both if we wanted. With such optimizations one can reduce the number of concepts one needs to keep in mind and make the system more understandable.
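A minimal sketch of that idea in Scheme (call/cc is standard; the throw/catch here is a toy with no unwinding or nesting discipline, just to show both features falling out of the one primitive):

    ;; Early return: call/cc hands us "the rest of the computation" as a
    ;; procedure; invoking it jumps straight out, like a return statement.
    (define (first-negative xs)
      (call-with-current-continuation
        (lambda (return)
          (for-each (lambda (x) (if (negative? x) (return x))) xs)
          #f)))

    (first-negative '(3 1 -4 1 -5))          ; => -4

    ;; A toy throw/catch built from the same primitive: the captured
    ;; continuation is the escape hatch back out of the protected thunk.
    (define (try thunk handler)
      (call-with-current-continuation
        (lambda (k)
          (thunk (lambda (e) (k (handler e)))))))

    (try (lambda (throw) (+ 1 (throw 'boom)))
         (lambda (e) (list 'caught e)))      ; => (caught boom)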


Well, theoretically one could instead expand the engineers' capacity for understanding, but that seems to be harder.


What do you mean, expand? All brains have an upper limit in memory and capacity to understand things.

If you exceed this limit, there's nothing that can be done short of science fiction cybernetic brain augmentation.


Do you think that, as an industry, we're in danger of hitting this theoretical limit?


In danger? We already hit the limit, and we're waay past it. Tons of software we produce is so complex it already has no hope of being fully understood by one man. Look at the operating system you're using to read this!

It's not even a software thing. Society itself is segmented into specialists. Carpenters specialize in wood, they don't need to understand blacksmiths and vice versa for blacksmiths.

Why did society segment itself this way? Because the body of knowledge and skills of each trade is so complex that few can fully master more than a single trade... This occurs simply because our brains aren't smart enough.


As an industry we are capable of producing software that is completely incomprehensible to anyone, yes. That is not the argument. You haven't demonstrated that this is a good idea, or that there isn't a better way this software could have been created.

The division of labor does not have to be an infinite regress, in fact it cannot, which was obvious to Smith. How many different trades should replace the one that SICP or other fundamentals-first approaches can prepare you for?


You wouldn't be able to reply to me if it wasn't a good idea. The amount of software involved just in transmitting your words to me and others is already too complex for anyone to completely understand. It's not just software; most of the technology you use in your life is already too complex for you to understand. Modern civilization is built on abstraction. If you think there's a better way, then you'd be arguing against modern civilization, which is a different conversation altogether.

The thing that replaced SICP is not multiple trades and skills. There is one skill that functions as a replacement. Rather than constructing programs from primitive axioms, you employ another skill.

Science and investigation. You poke and experiment with a library until you bend it to do what you want. It's a hard course to teach or even design, though. How do you teach science/investigation vs. the principles of first-order logic? The latter is straightforward; the former is not.

Still, even though it's more elegant and straightforward, the latter subject is much, much less relevant to what a modern software engineer does on the job.


I think before making any broad statements here I'd want us to be clear about what it means to completely understand something. I was once asked the classic interview question about what happens when you type a domain and hit enter in a browser, and my answer (which I attempted to tailor to every level of understanding in the room) was good enough that the co-founders met me at street level and I had an offer before I could get 20 steps away from the building. Could I tell you what's going on on a line of code picked at random from the browser I'm using, or any random library it depends on? Much less likely. But do I understand, for example, why and how the DNS works? Yes.

Arguing against (parts of) modern civilization is the only thing that might improve it.

SICP is about abstraction, that's where the book starts.

Science and investigation is hard to teach, indeed, but if you start with a platform that is inscrutable and unreliable, you're not going to be doing as much science.

Look, I'm not saying poking at libraries, or programming by way of SO, NPM, and ChatGPT is bad. I'm just saying that if you think that's the scientific way, and the way advocated by SICP is less scientific, then I think you should re-check the steps of your reasoning, because I'm pretty sure that's not true.


>SICP is about abstraction, that's where the book starts.

Abstraction is a simple concept. Everyone understands it. SICP goes deeper than that, but it doesn't explicitly say it. What SICP is about is abstraction from the ground up: every entity in your program is essentially a modular and reusable unit.

This is opposed to procedural programming where a list of procedures isn't modular due to mutating state. Though even in the traditional style of programming, abstraction can still be done. You can encapsulate both the procedures and mutating state into modules. This concept is well understood.

>Science and investigation is hard to teach, indeed, but if you start with a platform that is inscrutable and unreliable, you're not going to be doing as much science.

Unreliable? They used Python. Python is pretty reliable, more so than C++. Python is definitely inscrutable, but not much more inscrutable than Scheme. What you can do in Scheme you can do in Python. What you can do in Python, you can't necessarily do in Scheme. It's in the restrictiveness of FP languages that the modular composition pattern starts to arise.

>Look, I'm not saying poking at libraries, or programming by way of SO, NPM, and ChatGPT is bad. I'm just saying that if you think that's the scientific way, and the way advocated by SICP is less scientific, then I think you should re-check the steps of your reasoning, because I'm pretty sure that's not true.

The way advocated by SICP is not scientific. It's more logical and mathematical in its approach. You start off with primitives (axioms) and you build theorems from the primitives.

Modern programming is the other way around. Top down. You have a complex library with shitty, arcane documentation, and you have to constantly try things out with it and poke at it to get it to work properly. This is indeed science. You make a hypothesis and you test it; the result of the test falsifies or confirms your hypothesis. It is extremely hard to formulate an efficient hypothesis and sometimes hard to come up with a way to easily test it.

On the other hand, FP-style programming, while beautiful, is significantly easier than trying to decode a black box.


SICP starts with abstraction, and gets there via closures, which are an abstraction that encompasses and goes beyond everything that OOP can do with regard to encapsulation. You can do as much mutation as you want in Scheme; SICP uses this to explore some ideas. It does take a primarily SSA or non-mutating approach, but it's a far cry from hairshirt-wearing pure FP. I mean, the book has you simulate a register machine. It's not afraid of mutation.

Python is fine, and if they used the Scheme-like subset of Python, it'd be a similar book. But they picked it (the committees that Sussman mockingly describes as doing the things committees do) because of the libraries. So the idea is that you'll be doing "science" on libraries, which replaces (in the 90s) having students read man pages (look how out-of-touch Sussman already sounds here! nobody reads man pages anymore; now SO is preferred, it being better to get a solution than to consult a reference).

Sussman is not praising the experimental approach, where you need to probe your libraries like a science experiment before you can use them, where now you need to do science before you can do science, he is mocking it.

People using shitty libraries with shitty documentation, and building systems on top of these foundations, are doing science that didn't need to be done, solving problems that didn't exist before those choices were made. It's frustrating work that gets in the way of real work, that is, work that engages with the complexity inherent in the domain. It's an inefficiency that is self-imposed, not so much by the individual programmer, but certainly by the industry as a whole deciding to go in this direction.


>Sussman is not praising the experimental approach, where you need to probe your libraries like a science experiment before you can use them, where now you need to do science before you can do science, he is mocking it.

He neither praised nor mocked it. He just stated that it's the status quo, and he quit teaching SICP because it was no longer relevant. If he mocked the science approach, I'd imagine he'd have continued teaching the course.

>People using shitty libraries with shitty documentation, and building systems on top of these foundations, are doing science that didn't need to be done, solving problems that didn't exist before those choices were made. It's frustrating work that gets in the way of real work, that is, work that engages with the complexity inherent in the domain. It's an inefficiency that is self-imposed, not so much by the individual programmer, but certainly by the industry as a whole deciding to go in this direction.

This is the only direction it can go, because as frustrating as it is, it's BETTER than building from scratch. You can't hold a job without using libraries, this I guarantee you; if you're building all your shit from language primitives then you're not going to be employed for long.

>SICP starts with abstraction, and gets there via closures, which are an abstraction that encompasses and goes beyond everything that OOP can do with regards to encapsulation. You can do as much mutation as you want in Scheme, SICP uses this to explore some ideas. It does take a primarily SSA or non-mutating approach, but it's a far cry from hairshirt-wearing pure FP. I mean, the book has you simulate a register machine. It's not afraid of mutation.

SICP doesn't go beyond OOP at all. It's all abstractions that can be done in OOP languages. Overall, SICP teaches restrictive FP patterns to promote a math-like programming style. Yes, it does cover mutation, but that's not the main theme.

Closures aren't a big deal either; OOP languages have lambdas now, and the pattern is more or less isomorphic to object instantiation.
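
To make that concrete, here's a rough sketch in Python (the names are just illustrative, nothing SICP-specific): the closure and the object carry state in essentially the same way.

  # Closure version: the state lives in the enclosing scope.
  def make_counter():
      count = 0
      def increment():
          nonlocal count
          count += 1
          return count
      return increment

  # Object version: the same state, held in an instance attribute.
  class Counter:
      def __init__(self):
          self.count = 0
      def increment(self):
          self.count += 1
          return self.count

  c1 = make_counter()
  c2 = Counter()
  assert c1() == 1 and c1() == 2
  assert c2.increment() == 1 and c2.increment() == 2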


He's not saying that SICP was no longer relevant, but that the people (e.g. administrators) wanted something else.

You'd be surprised what people can do without libraries.

SICP covers patterns, like the `make-account` example in chapter 3, that can be used to do things that might be difficult in OOP languages. It doesn't go into much of that, because it's not what the book is about. Anyway, whether you regard the SICP style as FP-like and restrictive is a matter of perspective; to me it's neither.
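
For anyone who hasn't read chapter 3, the idea is roughly that an account is nothing but a closure over a balance that dispatches on a message, no class machinery required. A loose sketch in Python (from memory, not the book's Scheme and not its exact code):

  # A loose rendering of the flavor of SICP's make-account:
  # the "object" is just a closure that dispatches on a message.
  def make_account(balance):
      def withdraw(amount):
          nonlocal balance
          if balance >= amount:
              balance -= amount
              return balance
          return "Insufficient funds"
      def deposit(amount):
          nonlocal balance
          balance += amount
          return balance
      def dispatch(message):
          if message == "withdraw":
              return withdraw
          if message == "deposit":
              return deposit
          raise ValueError(f"Unknown request: {message}")
      return dispatch

  acc = make_account(100)
  assert acc("withdraw")(30) == 70
  assert acc("deposit")(10) == 80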


No, watch the video. He quit, and then the head of the department ended up teaching it for 10 more years. The department did not force him out or force him to change.

I know tons of great things can be done without libraries. But logically speaking, programming is simply too complex for most people not to use libraries in their jobs.

SICP teaches restriction, and that's better. You want to avoid mutating state in your code as much as possible; if you can't avoid it, you need to scope it and segregate it (see the small sketch after this comment for what I mean). This is the ideal technique for maximizing modularity and code reuse.

What I'm saying is this: although it's an ideal, most programmers just end up stringing together a bunch of procedures and it still works. It's just not modular. More important for the programmer is being able to figure out how to utilize library code. Elegant code is less relevant nowadays. That's why many programmers can get away with not even knowing about SICP.
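
Something like this is what I mean by segregating mutation (a tiny made-up sketch, the names are invented):

  # Unsegregated: shared mutable state that every caller pokes at directly.
  totals = {}

  def record_sale(item, amount):
      totals[item] = totals.get(item, 0) + amount

  # Segregated: the same mutation, confined behind one small interface,
  # so the rest of the code can stay pure.
  def make_ledger():
      totals = {}
      def record(item, amount):
          totals[item] = totals.get(item, 0) + amount
      def report():
          return dict(totals)  # hand back a copy, not the mutable dict
      return record, report

  record, report = make_ledger()
  record("apples", 3)
  record("apples", 2)
  assert report() == {"apples": 5}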


We both watched the same video. From my perspective, I see a subtext, coming through very clearly, of the administration having its own (dysfunctional) process and getting a suboptimal outcome. I mean, he admits the department still doesn't know what to do. He doesn't say that the bureaucracy went its own way and did something incoherent, because he doesn't need to. Ask any grumpy old professor to watch that video and say whether or not he's griping about his administration, and they won't even need to watch the video; it's just assumed. It's not important that you agree with me here, I'm just sharing my perspective. You can also watch the video and see that he's just repeating the departmental story, which is the first-order meaning of what he's saying, and it's not wrong.

They taught that class from 1980 to 1997. Nobody can say it didn't have a long and unexpectedly successful run. It's understandable if they were legitimately sick of teaching the same material.

Agreed on your other points.


You're reading too much into it. They literally taught the SAME course for 10 years after he left? That action doesn't even MESH with your logic. MIT and the admin WANTED to continue SICP. It's Sussman who wanted to stop.


No, they taught it from 1980-97 with the original instructors, then for ten years the department head taught the introductory course (from unspecified materials, possibly different), then they went to Python with another set of new teachers. Hard to know from the outside what the admin wanted.


Did you get as far as discussing what's happening with regard to the electric field near the gates of the transistors as they're switching or did you, perchance, skip a few levels of abstraction that you or the interviewers don't understand especially well?


I kept my answer to about five minutes, so no to the transistors, although I started with a switch closing in the mouse or keyboard, so I was able to give some sense of a lower level.


Did you go into debouncing?


No, I don't think so, but DNS caching was mentioned, which is a somewhat similar optimization.


I hear this kind of complaint a lot, but in general it seems to be railing against a strawman.

Which systems, specifically, do you see being built in these ways?


I think this is, and bear with me, because of the infiltration of radical individualism since the 1960s.

Donald Trump can fake his way to the presidency, and anyone can build a SaaS and fake their way to money with a loose collection of APIs. Hustle culture outweighs solid fundamentals and rigor.

Superficial, yet effective. Lacking rigor but marketable. Become a developer in 4 weeks (seen lots of that on YouTube). Go to a boot camp and learn JavaScript. Fake it until you don't need to make it. Don't learn about weights and activations, just load it off HuggingFace and make an amazing landing page that ignores accuracy.

It's not a world where SICP fits. This makes me sad.


I don't know... LLMs are just functions from text to text. Seems like the same skillset is still valuable. You just have more amazing functions at your disposal.


SICP was cancelled 15 years ago, around when Obama was elected.

The people who build the powerful functional apps you use every day are nothing like Donald Trump.


You totally missed the point


SICP blew my mind the first time, but I think it's most useful when you already have some hands-on coding experience (as I did). It's why we don't start teaching math with set theory.


Yeah, I taught myself how to program for about 1-2 years, but I didn't really learn what it meant to be a programmer, or the ideas underpinning programming languages, until I fully dug into SICP and watched the classic 1980s videos of Sussman's class.

But I don't think I would have appreciated it without my prior exposure to programming. That said, I've also watched some MIT 101 math classes, and they also seem like courses better suited for people who already know/appreciate the basic content (beyond typical high school math). So maybe MIT selects for kids smart enough to grasp it.


Not so much about smarts but about exposure. Most of the kids who get into MIT for math didn't start doing proofs at MIT, but in maths olympiads and such at a much younger age. Still, even given the formidable qualifications of the MIT undergraduate CS class, they probably overshot at the time with SICP. (I also read SICP for the first time well into my programming career, and completely agree with your assessment.)



