Scheme is a good language for SICP because it's simple. You can build a Scheme interpreter as a class project. You can analyze it formally. Etc.
Python is a good language because it's readable and writable. But it doesn't work for SICP: the language is too complex for that purpose, and it also intentionally omits things critical to SICP (like tail-call elimination).
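A small sketch of what that omission means in practice: even a function written in tail-recursive style still burns a stack frame per call in CPython, so the depth is capped rather than being turned into a loop.

    import sys

    def count_down(n):
        # A tail call in form only: CPython does not eliminate it,
        # so every recursive call still consumes a stack frame.
        if n == 0:
            return "done"
        return count_down(n - 1)

    print(count_down(100))          # fine
    print(sys.getrecursionlimit())  # typically 1000
    # count_down(10**6)             # would raise RecursionError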
Calling this book "SICP in Python" would be like taking your favorite poem, translating it into a different language, and finding that the translators wrote a completely different book, with a different theme, just to make it rhyme and keep the rhythm. Something different with the same name.
Yep, Python is probably one of the most human-readable and writable languages out there.
There are some dark corners, like the site above illustrates, but it is pretty easy to avoid them.
(It is still not a good fit for SICP, but that’s a different conversation)
You can only avoid dark corners as a writer. As a reader, you may have to peer into dark corners.
Most of the issues given on the linked-to page are not simple issues of readability; they are real pitfalls. A lot of the examples are actually readable. You will not easily avoid every single one of those pitfalls if you're coding in Python, even if you lint the code.
It turns out listarg is bound to a list that is not freshly instantiated each time the function is called (with no corresponding argument), unlike listlocal. The default expression is evaluated at the time the function is defined, not at call time; the value is stashed somewhere and reused to initialize listarg by default.
I learned about this from ... running pylint3 on some code which found a buggy use of such a list.
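The shape of the bug is roughly this (the function name is made up for illustration; listarg and listlocal are the interesting parts):

    def accumulate(val, listarg=[]):
        listlocal = []        # fresh list on every call
        listarg.append(val)   # the SAME list object across calls with no argument
        listlocal.append(val)
        return listarg, listlocal

    print(accumulate(1))  # ([1], [1])
    print(accumulate(2))  # ([1, 2], [2])  -- listarg kept its state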
This is probably done that way for performance, because Python doesn't have true literals. [1] is more like (list 1) in Lisp: it's a constructor that has to be executed, producing a newly allocated object. It is not like '(1), which is just a literal object that can be embedded into the compiled program image. Python literature incorrectly refers to [] as a literal, which is bad education: a disservice to newbies, who deserve to understand what a literal is. The fact that you can do "x = []" and then safely append to it proves that it's not a literal, because "literal" is an abbreviation of "literal constant", which is also something newbies should be taught.
Students of CS must absolutely learn the crucial difference between variable initialization and assignment. Python conflates the two.
    x = 42

    def fun():
        x = 43  # defines and binds a local x
This was not even fully fixed in Python for a long time; you can assign to the global one with a global statement (and, since Python 3, to an enclosing function's variable with nonlocal). The concept is bad here and damaging to newbie brains.
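A small sketch of the distinction, using nothing beyond the global statement:

    x = 42

    def shadow():
        x = 43       # creates a new local x; the global is untouched

    def rebind():
        global x
        x = 43       # now this really assigns to the module-level x

    shadow(); print(x)  # 42
    rebind(); print(x)  # 43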
After taking a course which used SICP, I was so inspired I wrote a functioning Scheme interpreter in a couple of evenings - in C, with no external libraries. It was pretty limited (no tail recursion), but could run examples from the book.
That is not going to work with Python - even just parsing the program requires a whole bunch of extra knowledge.
Except that 90% of Python programmers fail to answer simple questions like:
Given:
    def extendList(val, list=[]):
        list.append(val)
        return list
What do the following print:
    print(extendList(1))
    print(extendList(1))
    print(extendList(2))
    print(extendList(3, []))
I do not blame them. This sort of behavior is error-prone. You can make sure such code never reaches production with code reviews, but it would also be great not to have such things in the language at all.
For what it’s worth, I’m not a Python programmer and I got that correct.
The answer is:
    [1]
    [1, 1]
    [1, 1, 2]
    [3]
It relies on knowing something about how Python applies default arguments.
I’ve only written about a hundred lines of python in my life, so possibly I just got lucky - still, I would have thought an actual Python programmer should get this?
It is different from how Ruby and Javascript handle default arguments. I'm surprised Python does that, since I would expect function arguments to be reset to their defaults each call. That's a major side effect.
It's pretty odd behavior, yeah. It's easy to work around (set the default to None and then set the actual default in the function body), but I don't know if I've _ever_ seen anyone want it to behave as it does now.
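The workaround in question looks something like this (reusing the quiz's function for illustration):

    def extendList(val, lst=None):
        if lst is None:
            lst = []     # a fresh list on every call that omits the argument
        lst.append(val)
        return lst

    print(extendList(1))  # [1]
    print(extendList(1))  # [1]  -- no state shared between calls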
It does make sense to evaluate the default value for the parameter at the point of definition, for several reasons.
First of all, the default value does not have to be a literal value, and if you wanted it to be evaluated at call time the function would need to capture all of the default values in a closure.
Under the sane semantics implemented in every major language other than Python, if you want to capture a snapshot of something at definition time, you can put it into a variable:
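Something like this sketch (borrowing wildly_changing_variable from the example below):

    wildly_changing_variable = 0

    stable_snapshot = wildly_changing_variable   # the snapshot is taken here, once

    def foo(arg=stable_snapshot):
        # Even with call-time evaluation of defaults, arg still sees the snapshot,
        # because stable_snapshot itself is never reassigned.
        return arg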
Don't touch stable_snapshot and everything is cool. This is the rare case. Of course
    def foo(arg = wildly_changing_variable)
means we want the current value.
I don't understand your "closure" comments; either way, things are being lexically closed. It's a question of when evaluation takes place, not in what scope or under what scoping discipline.
In fact, Python's treatment requires the implementation to have hidden storage that is closed over; the equivalent of my stable_snapshot variable has to be maintained by the implementation. The definition-time evaluation has to stash the value somewhere so that it can use that one and not the current value.
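For what it's worth, CPython exposes that stash on the function object itself:

    def foo(arg=[]):
        arg.append(1)
        return arg

    print(foo.__defaults__)  # ([],)     -- the stashed default, built when def ran
    foo()
    foo()
    print(foo.__defaults__)  # ([1, 1],) -- the very same list, now mutated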
You could easily generate that behavior manually though if the default were the other way, and it would target the common case instead of the rare case.
> the default value does not have to be a literal value
The default isn't a literal value when it is [], by the way.
If [] were a literal, then this would not be safe or correct:
    def fun():
        local = []
        local.append(3)
The fact is that whenever [] is evaluated, it produces a fresh list each time. It's a constructor for an empty list, exactly like set() for an empty set. It just has slicker syntactic sugar.
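You can check the fresh-allocation claim interactively:

    print([] is [])        # False: two evaluations, two distinct lists
    print(set() is set())  # False: same story with the uglier spelling

    a = []
    a.append(3)            # safe precisely because a names a brand-new object
    print(a)               # [3]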
Python just calls that a literal because it looks like one.
Looking is being, in Python. Except for all the pitfalls.
Are those statements executed one after the other? In any case, it's definitely a pitfall, one that has been fairly widely publicised, with checks for it embedded into various tools/IDEs.
TBH, the magic sauce of SICP is Scheme. Even if it used a language other than Python (Common Lisp, Go, Rust), it would simply miss the point. The idea is that you start with minimal (abstract) language constructs, progress through various problems by expanding the language on top of those minimal constructs, and finish by building a minimal language on top of that language. You basically go metacircular.
Python (Go, Rust, ...) isn't minimal or abstract enough to achieve that. For example, most of the SICP problems can be solved nicely in Python, but if you are going to add an evaluator, you'll need to pull in the parser/ast modules, which are a story of their own.
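To see the contrast: in Scheme the program already is a list you can walk, while in Python even a toy expression means reaching for the standard-library ast module (rough sketch; the exact dump varies by version):

    import ast

    tree = ast.parse("square(x) + 1", mode="eval")
    print(ast.dump(tree))
    # Expression(body=BinOp(left=Call(func=Name(id='square'), args=[Name(id='x')], ...),
    #                       op=Add(), right=Constant(value=1)))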
If you're looking to learn cool stuff, I highly recommend finding a copy of the original SICP in Scheme and working through it. It will expand your mind to new ideas.
I was going to drop a link to my favorite pdf version (with improved typesetting and graphics), but sadly the download link appears broken :( https://github.com/sarabander/sicp-pdf
This relates heavily to the "what language should be taught in schools" argument. I usually answer "Java". And it's not even that I particularly like Java, it's just that the discussion often neglects that the course involved is all about classes, getters and setters, inheritance and so on. It's nonsense to shoehorn this into many other languages.
But why is the course “all about classes, getters and setters, inheritance and so on”? It sounds like a Java class before the language is even chosen. No wonder the best choice ends up being Java.
When I took CS in school, these were not the central themes. On the last day of class the teacher showed us a short program in this funky new language called “Java” and we all had a good laugh at how it tried to make everything about objects, even where it made no sense. Even if you’re only trying to teach object-oriented programming, I can think of better languages.
I think it really depends on if you're teaching CS or programming. If I was designing a CS curriculum I would almost certainly start with something like Haskell, Lisp or Scheme. If I was designing a programming curriculum I would almost certainly start with Java, C# or Python.
At the end of the day your average programming job at your average company is “all about classes, getters and setters, inheritance and so on”, and if you want to prepare students for that, you should probably focus on that.
Dijkstra once said: "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
I find much the same to often be true for many Java programmers. Programmers who start out with a course all about classes, getters and setters, inheritance and so on often end up with their minds wedged such that they never learn to program properly.
Java has one way to do everything. That's reasonable if you're running a large IT department and want programmers to be interchangeable between projects. That's a horrible way to have people understand the richness of computation.
You want new students to learn at least two ways to do abstraction and to structure code, and ideally, to learn many. Then, if they do Java for a random bank or something, they'll see where Java is on the spectrum. If you start with Java, you end with a closed mind.
Dijkstra was empirically wrong about that (generations of perfectly good programmers were taught BASIC first in schools) and about most of the other witty and self-congratulatory quotes in that collection: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/EW... Perhaps the most amusingly wrong one is the claim that FORTRAN, which sits at the core of the scientific Python stack, is "hopelessly inadequate for whatever computer application you have in mind today [i.e., 1975]: it is now too clumsy, too risky, and too expensive to use."
Dijkstra was good at many things, but there's no need for an argument from authority when we have actual data and not just opinions made to sound stronger because they are phrased more aggressively.
Or to put things more accurately, he exaggerated for humor. It's a communications style. He was wrong if you read it too literally.
If you read him less literally, the basic point he was making was right.
If you start out in a language like BASIC, plenty of programmers never make it to the other side. And it applies much more to Java than to BASIC. With BASIC, everyone who goes into programming will move on at some point (there aren't BASIC jobs out there), and they'll be forced to learn something different. With Java, plenty of people learn it, work whole careers, and never know any better. That doesn't just prevent them from coding in Ruby on Rails or whatever else -- it makes them worse Java programmers too. They understand WHAT, but they don't understand WHY. They can't reason about things like abstraction from first principles.
You want to start out with a broad view of computation. From there, you then want to narrow. Java is okay for a junior-level course on OOP, but it's really, really lousy for a first exposure to programming, or a freshman course.
As a fan of Python, my reaction to seeing this course was: "oh cool, a course on interesting things in a language I'm comfortable with".
I was then somewhat discombobulated to see the Python bashing in the comments.