This is the textbook used for the introductory CS course (CS61A) at Berkeley. The course material is available at http://cs61a.org (The course uses Python 3, Scheme, and SQL). There are some neat projects (students are working on building a Scheme interpreter now)
Given Python's limited lambda implementation and Guido's distaste for FP, I find it odd that Python should be chosen to teach SICP, let alone replace Scheme as a teaching language.
There's a lot of pressure to teach languages that are widely used in industry, which is why so many students come out of university knowing mostly Java. There's also a lot of bias against "old" languages, and for newer, trendy languages.
Teaching SICP in Python is just a further development of that trend.
Software engineering is a very trend-following, path of least resistance, bandwagon-jumping profession. If "everyone" is using language X, that's where most engineers (and managers) want to be. Universities are just satisfying that need.
It's a wonder that Scheme lasted in universities as long as it did. It'll be interesting to see how long Python lasts.
There's always Clojure. Best of both worlds, i.e. Scheme-ish with a Java host. We had a SICP Clojure group here in London last year, and I think someone has written an adaptation of SICP in Clojure. If it were any other text I'd have nothing to say, but SICP without the Scheme/Lisp doesn't make sense. If universities choose Python or Java as their teaching language, they should also switch to texts based on OOP.
Why doesn't it make sense without any LISP? I read only the first chapters and found the concept of composition immediately applicable in C, and as a C programmer I would probably write the same way in Python.
Remember, all of those "old" languages, when they became notable or influential, were younger than Python is now.
Scheme, when SICP was written, was only 15 years old. It was young and trendy at the time.
C was less than 20 years old when I learned it as part of a required course for my CS degree. I've no doubt that part of the reason was that it was widely used in industry.
C++ was the trendy thing by the time I left school. It was less than 10 years old.
Perl was 10-15 years old when it was used as the "Swiss Army chainsaw" for a lot of the first era of web development.
While this appears to solidify your observation that software engineering is very trend-following, I listed them to point out that Python seems to be the oldest of these languages at the time it made the jump. It's surely older than most of the students and even the TAs for the course.
Python has been a common teaching language for at least a decade now; it's just that this particular course has now switched to it as well. It still took 15 years for Python to build that momentum, though.
You won't see a CS course taught in Go or Rust anytime soon.
Lambdas are anonymous functions so an inner def is not the same thing. The fact that you have to "adapt" Python to get non-crippled lambdas suggests it should have been the last choice for teaching a text like SICP.
The reason lambdas are powerful -- one of the only reasons -- is that they are closures. Being able to reference variables outside the function is the power, not whether it has a name. A label is just a convenience. (Or in this case, an inconvenience.)
If my understanding is flawed, I'd like it to be corrected, though.
Higher-order functions (funargs) do not equal first-class functions (lambdas). It happens to be that any language which supports first-class functions must also support higher-order functions, but the reverse is not necessarily true.
That does not match my understanding of first class functions, nor Wikipedia's.[1] Python's inner defs are not anonymous functions (which is what people usually mean by lambdas) but they are definitely first class - they can be returned from a function, stored in a data structure and so on. I agree with the grandparent that the need to label the closure is just an inconvenience, not a disqualification.
You can't have anonymous lambdas that aren't first class (how would you reference them?) but you can have first class functions that are not technically anonymous.
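A trivial sketch of my own to illustrate (the names here are made up): a named inner def in Python can still be returned, stored, and passed around like any other value.

    def make_adder(n):
        # A named inner def is still a closure over n...
        def add(x):
            return x + n
        return add                # ...and it can be returned like any other value,

    add3 = make_adder(3)
    ops = {"inc": make_adder(1), "dec": make_adder(-1)}   # stored in data structures,
    print(add3(10), ops["inc"](10), ops["dec"](10))       # and passed around: 13 11 9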
From TFWikipedia: "Some programming language theorists require support for anonymous functions (function literals) as well."
I suppose I wasn't aware that that wasn't a universal requirement, but count me in with that group. I think it's useful to make a distinction between "higher-order" and "first-class" for this very reason (i.e., we can talk about languages that do make a distinction without resorting to overloaded terms).
I mean, consider a language where this is the case:
> "hello," .. " world"
==> "hello, world"
> 2 + 2
==> SYNTAX ERROR
> def a = 2
==> #<number "a">
> a + a
==> 4
Would you consider such a language to have first-class integers?
Sort of on a tangent, but while I'm reminded of it: as far as Python's "lambdas" go, they're brutally gimped compared to Python's notion of a function, since what is allowed (and even required) of a function body in Python is much different than what is allowed of a "lambda" body.
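To make that restriction concrete (a contrived example of my own): a lambda body must be a single expression, while a def body can contain statements.

    # Fine: a lambda body is a single expression.
    double = lambda x: x * 2

    # Not allowed in a lambda: statements (assignments, loops, try, etc.),
    # so anything like this has to become a named local function instead.
    def describe(x):
        label = "even" if x % 2 == 0 else "odd"
        return "{0} is {1}".format(x, label)

    print(double(21), describe(7))   # 42 7 is odd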
> Would you consider such a language to have first-class integers?
Yes, I still would, because integers can be bound to variables, passed to and returned from functions, and stored in fields of objects.
I would think that the language had a silly syntactic restriction -- which is the same thing I think about Python -- but it wouldn't materially change how I would write programs. It's just that, before any expression using an integer literal, I would have to name the literal.
Well, same here. If you want to use a function too complicated to fit in Python's restricted lambda expression syntax, you have to use a named local function. I agree that it's silly, and I'd never design a language like that myself, but again it wouldn't materially change how I write programs -- it just means that what would have fit in one expression would now, in some cases, require multiple statements.
Trying to draw a fine distinction between "higher-order" and "supporting first-class functions" doesn't appeal to me; I think of the latter as the definition of the former. (I'm sure I would have trouble remembering which was supposed to be which.)
But again, I have no problem with criticizing the design decision -- just not using these terms :-)
Here's the thing, though: when some element of a language is subject to silly restrictions that other first-class elements of the language are not subject to, that element is, by definition, second-class. The reason being that you can't treat it the way you'd treat any other first-class element.
I see where you're coming from, but still, after the function value is created, there's no restriction on its use. Contrast what some call second-class continuations, which can be invoked only once -- a much more severe restriction.
I could see saying something like, Python functions are semantically first-class but syntactically second-class. Or maybe we should just call them "business-class" :-)
(BTW your comment downthread about referential transparency is spot-on.)
And I also see (now) where you're coming from. But it's hard for me to accept the "once the value has been created" exception, because in my mind the construction of a value is as important as any other operation on that value.
Anyway, I'm not going to argue this any further -- I think we understand each other's perspective and we can pretty much resolve to chalk this one up to the (rather unfortunate, imo) lack of precision that's so common to terminology in computer science (see also: any debate whatsoever about types ;) ).
It's no secret that Guido has a little bit of a distaste for functional paradigms. From the man himself:
"About 12 years ago, Python aquired lambda, reduce(), filter() and map(), courtesy of (I believe) a Lisp hacker who missed them and submitted working patches. But, despite of the PR value, I think these features should be cut from Python 3000."
> This is the textbook used for the introductory CS course (CS61A) at Berkeley. The course material is available at http://cs61a.org (The course uses Python 3, Scheme, and SQL). There are some neat projects (students are working on building a Scheme interpreter now)
I'm a seasoned programmer who, after years in the industry, really enjoyed SICP once I discovered it. I found it very nicely put together in the sense that it managed to teach a simple LISP and treat programming in a deeper and more academic/sciency way, not just the regular "here's how you make a blog with whateverDB".
I'm currently looking into expanding my very basic Python knowledge, and am looking for books/courses on the subject.
Would you say this course here is good for learning Python, or would you rather recommend something else to experienced programmers?
Picking up the language vs. picking up what idiomatic code looks like, what lives where in the standard library, which third-party tools/libraries are commonly used, etc., are quite different things, though.
This is a general programming intro course that just happens to use Python; I don't know where you got the idea that it will emphasize the Python ecosystem. Will it make you a better Python programmer? Probably, but you won't gain much if you already really did SICP. And since you are a "seasoned" programmer, you will have to sit through all the what-is-a-variable, what-is-a-function material all over again.
Overall, SICP in Python 3 is first rate. I highly recommend it to anyone wanting to improve their sophistication with Python and programming in general.
The exercises and demonstrations in each section quickly build from an elementary introduction up to powerful examples accompanied by clear explanations. I especially like the explanatory diagrams.
Thank you to the authors for your craftsmanship and for making it available on-line.
I was thinking of taking "Principles of Reactive Programming" by Martin Odersky after taking his other class, "Functional Programming Principles in Scala", on Coursera. After looking at CS 61A, I am thinking of taking it instead. How long did it take you to complete the course? Also, as far as I understood, you can use the grader with the "--local" flag to check the correctness of your program; is that the case?
Guys I just have to give the obligatory shout out to Prof Hilfinger. The guy is legendary. My life has been made worthwhile to have been initiated into the deep mystic art of CS by him.
Berkeley CS bends to common usage, instead of holding to its own pride? Scheme is such a simple but deep language, and is really good at introducing all the concepts. It is aged, like a good single malt whisky. Now they try to serve fruit punch?
I love Scheme as much as anybody else, but I’m not convinced that it should be the first introduction to computer science that people have. I think that the goals for an intro class is to get people to enjoy programming, not expose them to the raw beauty of Scheme/FP. I sure as shit didn’t appreciate what Scheme offered the first time I learned it.
The way I see it: the language rules are simple, and anyone can understand them all in one sitting; yet the language gives you all the materials you need to build the different concepts in CS from scratch. The latter point is important to both teaching and learning.
Goodness gracious, no. At least it sure wasn't as of the last time I TA'd a class that was taught using it (admittedly during the previous millennium). In a typical group, approaching half the students - smart students, this was a fairly selective school - dropped or failed. A huge chunk foundered on figuring out how to do useful things with lists (cons/car/cdr and friends are elegant, but they are also weird). Another huge chunk struggled with let/let*/letrec. And closures weren't very fun for many students, either.
The one neat thing I've noticed about Scheme is that, if you get Scheme, then you will have a very solid grasp of how to compose abstractions. I wouldn't be too quick to infer a causal link there, though. It might be that Scheme makes people better at CS. It could just as easily (given those failure rates, possibly more easily) be that Scheme is a filter for identifying people who have a pre-existing knack for the academic side of CS.
I completely agree with you regarding the simplicity of Scheme! But that’s exactly why I don’t think Scheme should be people’s first introduction to programming. You need to give them friendly abstractions[1] that make them realize what is possible with computing, and only once they get those abstractions, break them down, and show them how the magic is actually implemented.
My opinion isn’t pie-in-the-sky theorizing: I know plenty of intelligent people whose first exposure to programming was Scheme, which scared them off into thinking that “they’re not smart enough for programming” or some bullshit like that. When they later tried a language like Python, they weren’t scared off—they were hooked.
The whole argument in favor of using Scheme as an introduction to programming seems highly ideological, and not at all informed by how people actually learn how to program. The fact that so many online resources (Codecademy, Coursera, etc.) successfully introduce people to programming using high-level languages is evidence that it works.
[1]: I know that all abstractions are leaky, but in the context of an introductory class, they’re not leaky enough.
Yes, this is one path to getting someone hooked on learning CS, which works for a small percentage of students (myself included). However, most learners are motivated by other ideals beyond intrinsic language purity. This academic book highlights dozens of research studies that go deeper into this fascinating topic:
http://www.amazon.com/Learner-Centered-Design-Computing-Educ...
The biggest utility of Scheme for introductory programming is the same utility that you get from any functional language, really: referential transparency. Transparency enables what SICP calls the "substitution model" of evaluation and application. The nice thing about that, in regards to pedagogy at least, is that it permits programmers to approach evaluation as a simple algebraic term-reduction system. Sit and evaluate an expression by substitution (there's only a handful of rules), and the similarity to solving and reducing algebraic equations will be obvious. This means that students can leverage the existing intuition they've developed for elementary algebra to quickly grok evaluation.
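For instance (a minimal sketch of my own, in Python rather than Scheme), a call to a side-effect-free function can be traced by nothing more than substitution:

    def square(x):
        return x * x

    def sum_of_squares(a, b):
        return square(a) + square(b)

    # With no side effects, a student can evaluate the call by rote substitution,
    # exactly like reducing an algebraic expression:
    #   sum_of_squares(3, 4)
    #   -> square(3) + square(4)
    #   -> (3 * 3) + (4 * 4)
    #   -> 9 + 16
    #   -> 25
    print(sum_of_squares(3, 4))   # 25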
Of course, once side-effects are introduced, that substitution model breaks down, and a new one must be picked up. But by then, the students ought to have become reasonably comfortable with the ideas of programming.
I often hear smug Lisp weenies bragging about how simple the syntax is, which allows instructors to spend little time on syntactic rules and get at the meat of programming. But I'd argue that the simplicity of the semantics in the absence of side-effects is even more important. Any student with an eighth-grade[1] understanding of elementary algebra will be able to understand program evaluation quickly and easily.
As to why Lisp specifically when any referentially transparent language or sublanguage thereof could provide the benefits of the substitution model just as well: try implementing a complete and fully-functional metacircular interpreter in something other than Lisp, as a literate program, in under 40 pages. Now do it again for a lazy version, a nondeterministic version, and a relational (a.k.a. logical, as in logic programming, e.g. Prolog) version. Some argue that a metacircular evaluator really has no business being part of an introductory course, but in my experience it's actually a good way to finally formalize the actual model of evaluation, while at the same time demonstrating that interpreters, compilers, etc. aren't magic black boxes, but actually relatively simple programs (though when I teach the material I like to point out that they can get pretty complex, depending on source/object languages and optimization... let that be a lesson to them!).
[1]: This is the name for the educational status of typical twelve-to-thirteen-year-old students in the US.
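For a sense of what's involved, here's a deliberately tiny evaluator sketch in Python -- my own toy, nowhere near the book's metacircular evaluator, with every corner cut (no error handling, no strings, no proper environment chains):

    import operator

    def tokenize(src):
        # Split a tiny Scheme-like program into parenthesis and symbol tokens.
        return src.replace('(', ' ( ').replace(')', ' ) ').split()

    def parse(tokens):
        # Read one expression off the front of the token list (mutates it).
        token = tokens.pop(0)
        if token == '(':
            expr = []
            while tokens[0] != ')':
                expr.append(parse(tokens))
            tokens.pop(0)          # drop the closing ')'
            return expr
        try:
            return int(token)
        except ValueError:
            return token           # otherwise it's a symbol

    GLOBAL_ENV = {'+': operator.add, '-': operator.sub,
                  '*': operator.mul, '<': operator.lt}

    def evaluate(expr, env=GLOBAL_ENV):
        if isinstance(expr, str):          # symbol: look it up
            return env[expr]
        if isinstance(expr, int):          # number: self-evaluating
            return expr
        head, rest = expr[0], expr[1:]
        if head == 'if':                   # (if test conseq alt)
            test, conseq, alt = rest
            return evaluate(conseq if evaluate(test, env) else alt, env)
        if head == 'define':               # (define name value)
            name, value = rest
            env[name] = evaluate(value, env)
            return None
        if head == 'lambda':               # (lambda (params ...) body)
            params, body = rest
            def closure(*vals):
                local = dict(env)
                local.update(zip(params, vals))
                return evaluate(body, local)
            return closure
        func = evaluate(head, env)         # application: evaluate operator and operands
        return func(*[evaluate(arg, env) for arg in rest])

    evaluate(parse(tokenize("(define square (lambda (x) (* x x)))")))
    print(evaluate(parse(tokenize("(square 7)"))))   # 49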
Scheme as an idea goes way back, but the version of Scheme used by SICP 2e is about as old (young?) as Python is -- both made their initial appearances in the very early 90s. And just like Python hasn't stood still in those 25 years, neither has Scheme, having had three further updates to the definition and quite a lot of research in areas such as hygienic macros and composable continuations, among other things, which make appearances in various implementations (but especially Racket... those guys have a habit of implementing pretty much every paper about Scheme...)
As weird as it sounds, Scheme may be too, uh, inaccessible to a certain kind of folk who grew up around, say, the Java tradition of doing things. I personally spent some time on the other side of the extreme - couldn't be bothered to touch anything C-like with a 10-foot pole. Now that I am starting to warm up to more "traditional/public/common" languages like Python, I think this book will be very valuable to me.
Two intro CS courses were provided: 61A, in Python, and 61AS (cs61as.org), which was self-paced and taught in both Racket and stk Scheme. Unfortunately there wasn't enough student interest or available TAs to continue providing 61AS. I was fortunate enough to take it the last time the full 4 unit course was offered.
I was kind of hoping to see Python used in the functional programming section. I was playing around with a small project in Python and decided I wanted to re-implement a really simple context-free grammar I had done in a functional programming class. It was slightly annoying; Lisp seemed like a better choice than Python. But it looks like they use Scheme.
It seems to be pretty good at map, filter, any, all, reversed, reduce, zip, list comprehensions, dict comprehensions, lambdas, and closures.
Granted, it would be unusual to see all, or even most, of those in one place, but they see enough use that calling Python bad at functional programming is a little odd. I'd say it's pretty good at functional work in the small while somewhat ill-suited in the large.
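A contrived sample of the kind of small-scale functional work I mean (my own example):

    from functools import reduce

    words = ["carrot", "apple", "banana", "avocado"]

    # Small-scale functional style: no explicit loops, no mutation.
    lengths = list(map(len, words))                     # [6, 5, 6, 7]
    a_words = [w for w in words if w.startswith("a")]   # ['apple', 'avocado']
    total = reduce(lambda acc, n: acc + n, lengths, 0)  # 24
    by_len = dict(zip(words, lengths))                  # {'carrot': 6, ...}
    all_lower = all(w.islower() for w in words)         # True

    print(lengths, a_words, total, by_len, all_lower)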
It's really not good. Python is an imperative language, and if you dig a little you can find why [0]. The reduce function has been moved to the functools module [1].
If you try to write in purely functional style in Python, you will have a bad time.
This is silly. You're pointing out some historical reasons why some functional features have been removed, nerfed, or moved out of __builtins__, but none of this is a discussion of problems you've come across with actually writing Python in a functional style.
I've written tens of thousands of lines of code in a functional style in Python, and while it's different from Scheme and takes a little getting used to, Python is a very capable programming language for functional programming.
It does lack tail call optimization, but I think you'd find that even in Scheme, most times where you would write a recursive function, you should be writing something with one of the higher-order functions. 99.9% of recursive functions contain a buggy hand-rolled implementation of map, reduce, or filter. Scheme is a more academic language, so it makes sense for them to include this feature, which gives language users the same power as the language creators. But Python is intended for professional development and as such, it makes sense to force users into a better style.
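To illustrate what I mean by a hand-rolled higher-order function (contrived example of my own):

    from functools import reduce
    import operator

    # A recursive function that is really just a hand-rolled reduce...
    def total(xs):
        if not xs:
            return 0
        return xs[0] + total(xs[1:])

    # ...versus saying the same thing with the existing higher-order function.
    def total_hof(xs):
        return reduce(operator.add, xs, 0)

    assert total([1, 2, 3, 4]) == total_hof([1, 2, 3, 4]) == 10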
> Yes, Erlang is such a toy language[1], it's never used for "professional development" at all[2].
From your sarcastic tone it seems like you take issue with something you think I've said, but since your post is unrelated to anything I actually said, I can only guess that you've inferred some meaning that isn't there.
The point is, the idea that it's somehow "unprofessional" to write a recursive function is an absurd assertion. Many, many functional languages beyond just "academic" ones use recursion as a standard idiom, and using that to dismiss Scheme is itself an inflammatory statement especially on a site that is literally running on top of a Scheme derivative.
Sometimes, it's just the clearer way to get the job done. I realize it is these days considered "un-Pythonic" to consider there to be more than one way to do something, but that is in fact one of the reasons why I no longer use Python. The attitude of its creator and its community have shifted against creative solutions and I have no use for that attitude in a technology field.
I tried to gently inform you that you had misinterpreted what I said. That was your opportunity to pause and think, and realize that I did not say what you thought I said, but you missed it.
> The point is, the idea that it's somehow "unprofessional" to write a recursive function is an absurd assertion.
...which is why I didn't say that. What I said was, "[E]ven in Scheme, most times where you would write a recursive function, you should be writing something with one of the higher-order functions. 99.9% of recursive functions contain a buggy hand-rolled implementation of map, reduce, or filter."
There are certainly cases where writing a recursive function is cleaner, and I would never say it's "unprofessional" to write a recursive function. However, I've debugged enough Scheme to know that if you're writing a recursive function you're probably reimplementing one of the core higher-order functions, and in doing so you're increasing your chance of bugs. You're also making your code less composable. If you're not experienced enough with Scheme to have recognized this fact yet, you're probably not experienced enough to recognize exceptions to the rule.
> Many, many functional languages beyond just "academic" ones use recursion as a standard idiom, and using that to dismiss Scheme is itself an inflammatory statement especially on a site that is literally running on top of a Scheme derivative.
Scheme is my favored language for personal projects. I write more Python because there's more work available for Python, but when it has been my choice I've chosen Scheme. The reason I've written so much Python in a functional style was because I learned that style from Scheme. So if you think I'm dismissing Scheme, you're hilariously wrong. My statement isn't inflammatory; you became inflamed all on your own.
> I realize it is these days considered "un-Pythonic" to consider there to be more than one way to do something, but that is in fact one of the reasons why I no longer use Python. The attitude of its creator and its community have shifted against creative solutions and I have no use for that attitude in a technology field.
Cool. I thought I was the one being inflammatory.
It seems that you saw me say, "Scheme is a more academic language" and decided to take offense to that. Well, news flash, Scheme was designed at the MIT AI Lab. It's an academic language. That doesn't mean it's a "toy language" or that it's only an academic language; I didn't say either of those things. It's also ridiculous that you needed to bring Erlang into the conversation to defend Scheme: Scheme has plenty of history of professional use at both HP and Sun and is perfectly well respected in its own right.
Please try to learn something from this experience.
Python list/dict comprehensions are awesome. The only other language I've seen that has comparably awesome comprehensions is Haskell which (opinion ahead) offers easier access to more power at the cost of some readability upfront.
Sadly I've seen functional operations like map/filter/reduce etc discouraged in (commercial) projects I've worked on, though I guess there's a bit of an argument there for debugability (make it a word?) since hunting down side effects may be more onerous?
I work with a lot of people who have never drunk the functional koolaid, professionally. They have to do an extra mental step with map or filter - "oh map, that's where the first argument is applied to each item in the second - I think - or perhaps the second argument is applied to the first, right, Stack Exchange". More explicit is clearer, then. 'Clear', like beauty, is subjective. But productivity isn't. And whatever my personal feelings, the productivity of the team wins.
On the second point. I agree. Python's lambdas are a code-smell, for me. Which I don't say lightly, I'm a Scheme/Lisp guy deep down. But they fundamentally work against the aesthetics of the Python language, imho. A local function declaration is much less fragile.
But I would say that as code complexity increases, in my experience, comprehensions don't last long either. They've a narrow range of applicability before you get a big block of spaghetti code. Then it is much better to define a local function and, essentially, 'map' it (even if that 'map' is done as comprehension). Of course, YMMV.
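Roughly what I mean, with a made-up example: once the per-item logic stops fitting on one line, pull it into a named function and keep the comprehension as a trivial 'map':

    orders = [{"price": 30.0, "qty": 2, "coupon": "SAVE10"},
              {"price": 12.5, "qty": 1, "coupon": None}]

    # The per-item logic no longer fits comfortably inline...
    def line_total(order):
        subtotal = order["price"] * order["qty"]
        if order["coupon"] == "SAVE10":
            subtotal *= 0.9
        return round(subtotal, 2)

    # ...so the comprehension stays a one-liner over a named, testable function.
    totals = [line_total(o) for o in orders]
    print(totals)   # [54.0, 12.5]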
100% agree that lambdas fight python's aesthetics. On the expression complexity front I can definitely say it's easy to slowly increase their complexity until they're no longer readable. As a side note, comprehensions in Python (2 at least, not sure about 3) leak scope a bit which can lead to some really funky bugs if you're in a loop and absentmindedly use a loop variable in a comprehension.
Yes, that was a pain. Particularly because it was only true of list comprehensions, not dict comprehensions or generators:
Python 2.7.10 ...
>>> a = 3
>>> [a for a in range(10)]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a
9
>>> a = 3
>>> list(a for a in range(10))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a
3
Guido called it Python's 'dirty little secret'. It is fixed in Py3 though.
In my experience, the more complicated processing ends up being composed of smaller units (that are testable), and then map/filter remain the clearest way of representing the iteration. Using lambdas for non-trivial processing is just as bad as throwing that same complicated processing into a list comp.
Some Lisps have similar comprehensions. Common Lisp has `iterate` (listcomp) and arguably `loop` (listcomp). Clojure has `for` (genexp) and `(doall (for ...))` (listcomp). A very common Clojure library (Plumbing) implements dictcomps via for-map, although you could do Python 2.6-style dictcomps yourself via `(into {} (for ...))`.
Just because it has those functions doesn't make it good at them. Have you ever seen a Pythonic program converted into functional style? It looks horrendous.
If you define "functional style" as using immutable data structures, higher-order functions, avoiding side-effects and using referentially transparent variables, I think Python programs look fine, and I try to write all the code I can this way (although practicality beats purity).
If you define "functional style" as writing Lisp or Haskell programs using Python syntax, I agree they look horrendous.
> and what do you do when every third party library you interact with uses mutable ones?
That's situational; which library are you having a problem with? I haven't really had that problem (okay, sometimes the tooling around web frameworks do stupid things, but that's not common).
Note that just because something is mutable doesn't mean you have to mutate it. Yes, Python doesn't as widely enforce non-mutation as Haskell, OCaml, etc., but that doesn't mean that suddenly all the Python programmers lost their minds and started mutating all the things. We're not savages.
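A trivial made-up example of what I mean -- nothing forces you to mutate just because you can:

    scores = [70, 85, 90]

    # Mutating style would be scores.append(100); the non-mutating style
    # builds new lists and leaves the original alone.
    updated = scores + [100]
    curved = [s + 5 for s in scores]
    print(scores, updated, curved)   # [70, 85, 90] [70, 85, 90, 100] [75, 90, 95]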
This is why I asked, "Which library are you having a problem with?"
You've claimed that third party libraries and your colleague's functions will mutate the data structures you pass them, but this has very rarely been my experience.
Besides tuples, I mostly treat the standard data structures as if they were immutable, without relying on them actually being so :) As for third-party libraries, very few of them actually mutate structures passed by my code, in my experience; if they do, I simply make a copy before passing it.
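Something like this (the library function here is hypothetical):

    def sketchy_library_call(items):
        # Stand-in for a hypothetical third-party function that sorts in place.
        items.sort()
        return items

    scores = [3, 1, 2]
    result = sketchy_library_call(list(scores))   # hand it a copy, keep `scores` untouched
    print(scores, result)   # [3, 1, 2] [1, 2, 3]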
Those who have mastered principles don't have to memorize particulars.
Nevertheless I still maintain that the classic CS61A by Prof. Brian Harvey is a gold standard. It teaches the most important big ideas, such as that code is data and hence the whole OO paradigm is merely a DSL over structured data with named slots, with a protocol to follow (inheritance).
It teaches the superiority of the declarative over the imperative approach - one defines what shall be done, not how it can be done.
It also teaches lazy evaluation, so one doesn't parrot nonsense about monads and Haskell.
It teaches what generics are, and that everything could be defined as an ADT with corresponding predicates (which represent the mental categories we learn from our environment).
Mr. Harvey is a gentle intellectual with charm and that characteristic lack of arrogance and of the tendency to show off that modern narcissistic personages exhibit nowadays.
I can't stand guys like Hickey or Tellman or whoever it is.
Previous Hacker News Discussions:
https://news.ycombinator.com/item?id=3491142
https://news.ycombinator.com/item?id=3141996
Full disclosure: I'm a TA for the course right now.