>How many people reading this are 50 years old? Lisp is 50 years old — and for the most part, the lifespan of a programming language is closer to the lifespan of a dog than to that of a person. Only one other language (Fortran) is that old and still in use. Why has Lisp survived? Not because it’s useless. People still use it because you can write working code in Lisp way faster than you can in most so-called “practical” languages.
Sorry to burst this bubble, but tons of languages created in the same period or 10-15 years later are still in use.
Fortran, which he already mentioned. COBOL is still in use (much, much more than any Lisp, just not in shiny new startups). Forth is still used here and there. C, of course, is about 12 years younger but still around 40ish. Pascal is still used (in far greater numbers than Lisp). Heck, even BASIC still has tons of fans, either in VB disguise or in the various commercial BASICs around.
With this in mind, Lisp being "50 years old and still in use" is much less impressive -- since one other 50-year-old language, and several 40- and 30-year-old languages, are still in widespread use. We use lots of 3+ decade old languages -- so it's not like we only use "new" languages and Lisp is the exception.
Hell, even Python is 24 years old now; it's not exactly a spring chicken in programming language terms... this observation rather supports the idea that Lisp is mostly used by old professors who think 1991 is still very recent :P
One of the differences is that Lisp became relevant quickly at least in part because the digital universe was so much smaller and correspondingly so was the community of digerati. Python lived in relative obscurity throughout the 1990's and early oughts before becoming an overnight success. The same could be said for Ruby.
>Python lived in relative obscurity throughout the 1990's and early oughts before becoming an overnight success.
I doubt Python became an overnight success. If it did, it's more likely to have been the "I became an overnight success in 10 years" kind. Python has seen steady growth in usage over many years. I'm also aware that it has come into the limelight somewhat more recently, but that doesn't take credit away from the steady growth that happened before, or from the community responsible for it -- because these things do not happen by themselves or just due to hype.
I guess an important distinction here is popularity by choice rather than by circumstance -- many BASIC, COBOL, Pascal, etc. programmers might prefer to use another language.
Then again many lisp programmers may be in the same situation, e.g. AI or emacs users ;)
I think continued popularity is overall a less interesting metric than actual demonstrations of a language's benefits -- as far as we can manage those, given how incredibly difficult it is to demonstrate an actual quantitative advantage for _any_ language.
>Perhaps he meant the Fortran-style programming language, e.g. Cobol, Algol, C, Pascal, etc.
No, I think he meant Fortran, the language specifically.
And he is right, it IS still in use, from NASA to NumPy. When I talk about "bursting this bubble" I mean his claim that "only one other language" is that old and still in use. Several decades-old languages are ALMOST as old and still in use.
>It's stretching belief to call VB and Altair Basic the same programming language. Or did you mean the Basic-style programming language?
No more of a stretch than calling the original, 1950s Lisp the same language as Common Lisp, Scheme or Clojure.
Obviously I (and the author) are both talking about X-style languages.
Not to mention Objective-C or Smalltalk. The former is the de facto language for programming the iPhone (although they have started to replace it with Swift).
Tons of people still use Borland's tools, and there are lots of legacy enterprise apps maintained in them.
On Windows there are also several high-profile apps written in Pascal. One such I remember reading about recently is Fruity Loops, one of the most popular (though entry-level) DAWs.
In the TIOBE rankings (not definitive, but indicative), Pascal has 2 entries in the top 20, as Delphi/Object Pascal and plain Pascal -- far ahead of everything from all the CLs combined to Clojure, Scheme, Lua, Groovy, Erlang, etc.
I think what makes Python unique and successful as a programming language is a strict pragmatism (a rejection of any ideology) and rather conservative choices in which features to add (that is, don't add everything that someone suggests; wait until we can see it's a good idea and it cannot be readily composed from existing parts).
Scheme, OTOH, has made some ideological choices. The whole notion of a programmable programming language that comes with the Lisp tradition (which includes s-expressions). The emphasis on teaching and small libraries. The attempt to somewhat model mathematics in the numerical tower. These design choices, while interesting, go against practicality.
I am a big fan of mathematics, Common Lisp, and Haskell. There is so much beauty and elegance. And yet, when I need to get something done quickly, I will just use Python, because it drops mathematical purity without prejudice, just like a physicist working on an impossible problem.
> is a strict pragmatism (rejection of any ideology)
What? Python is very pushy about its ideology of "there should be only one way to do it", which is the exact opposite of Lisp's complete lack of ideology: "do it any way you like, turn Lisp into any language imaginable".
> rather conservative choices in features to add
Which is only ok if your language is extensible, while Python is most definitely not anywhere near an extensible language.
What I mean is that Python tries hard to find a compromise solution between various ideologies. Whether or not that is an ideology I would leave up to Bertrand Russell; I think it isn't.
Regarding extensibility, again, there are people who argue against it and hold the opposite opinion to Lispers. Python tries to find a middle ground, conservatively. It's not super extensible like Lisp or Forth, but it's still quite extensible compared to many mainstream languages.
These are the people who don't know any better, arguing out of ignorance, not from some valid ground.
> Python tries to find a middle ground, conservatively.
How exactly? I do not see any extensibility features in Python, none at all. Some dirty hacks with rewriting the compiled bytecode at runtime do not count.
> These are the people who don't know any better, arguing out of ignorance, not from some valid ground.
That's very unfair. Let's take Lisp-style macros. There are good arguments against macros in programming languages -- for example, keeping the number of primitives small and the core language simple. I don't agree with these arguments (I am pro-macros), but it's not just ignorance.
> I do not see any extensibility features in Python, none at all.
Not sure what you mean, but macros are not the only way to extend a programming language.
One particularly common method is called "functions" (if you have ever coded in assembler, you will know what a luxury that is). Then you have classes, operator overloading, metaclasses, decorators, the native C interface, an AST parser in the standard library..
Not sure what more, aside from macros, you could want regarding extensibility. Python is more extensible than C and Java, which are the most common programming languages in use.
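To make that concrete, here's a minimal sketch of two of those mechanisms -- operator overloading and a decorator -- extending the language from plain user code (the Vec/logged names are just made up for the example):

class Vec:
    # Operator overloading: teach '+' about a user-defined type.
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):
        return Vec(self.x + other.x, self.y + other.y)

def logged(f):
    # A decorator: wrap new behavior around any existing function.
    def wrapper(*args, **kwargs):
        print("calling", f.__name__)
        return f(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

v = Vec(1, 2) + Vec(3, 4)  # a Vec(4, 6)
add(1, 2)                  # prints "calling add", returns 3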
> Let's take Lisp-style macros. There are good arguments against macros in programming languages. For example, keeping the amount of primitives small and core language simple.
Wait, that's the argument for including Lisp-style macros as a primitive feature of the language: by doing that, you reduce the number of primitives needed, keeping the number of primitives small and the core language simple.
With macros, you can have fewer primitive control structures in the core language, and build other control structures that are desired on top of the existing ones + macros.
So, while there may be arguments for keeping macros out of programming languages, the one you've presented is, instead, pretty much the main argument for having Lisp-style macros.
I don't want to get into this discussion, because, as I said, I disagree with these arguments; I just wanted to point out they exist and are valid.
Anyway, I already outlined the argument in another of my comments, where I talk about set theory and integrals -- the "axiomatic" primitives are a different bunch than the "useful" primitives. So trying to keep a language small may lead to different decisions depending on which primitives you want to keep.
So for instance (as a simple example), map and filter can be built with recursion, but why bother learning how to do it if I can just call them from the standard library? (Or, in the case of Python, use a generator comprehension.)
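For illustration, a toy sketch of that (not how you'd write it for real):

def my_map(f, xs):
    # Scheme-style recursive definition of map
    if not xs:
        return []
    return [f(xs[0])] + my_map(f, xs[1:])

my_map(abs, [-1, -2, -3])       # [1, 2, 3]
list(map(abs, [-1, -2, -3]))    # the same, from the standard library
[abs(x) for x in [-1, -2, -3]]  # the same, as a comprehension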
Another example would be the complaint below that Python doesn't have proper closures. Valid complaint or not, Python has classes, and you can pretty much build one from the other.
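A rough sketch of that equivalence, using the counter example that comes up further down -- the closure's captured variable simply becomes an instance attribute:

class Even:
    # Class-based stand-in for a closure over x
    def __init__(self):
        self.x = 0
    def __call__(self):
        self.x += 2
        return self.x

count = Even()
count()  # → 2
count()  # → 4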
Uhm, no, so far not a single valid non-straw-man argument against macros.
It's important to mention that macros do not affect the rest of the language design choices in any way whatsoever. The underlying language can be literally anything; macros can be added on top of any imaginable combination of design decisions.
In the case of a Python-like language, see Converge, for example.
> It's important to mention that macros do not affect the rest of the language design choices in any way whatsoever.
I think you're wrong. It affects how people use the language, how the community develops around it, and other things. Perhaps if people used macros as a sort of last-resort thing it wouldn't matter, but in my experience, they don't. It's probably most visible in the Forth community -- every implementation of Forth is subtly different from the others, and by design! Chuck Moore even said that the Forth standard goes against the very idea of Forth.
I now work on a large codebase in a language with macros (SAS), and while they're powerful, they also cause problems; code where any calculation can take place during compilation is sometimes harder to read, and it cannot be easily translated to another language, for example.
The truth is, the programming community moved away from macros, which are powerful and easy to implement, to other abstractions, less powerful but more specific to the problems people usually face (such as OOP). Many languages from the 70s and 80s had macros, but today it's not so common. This transition is a little bit similar to the transition from imperative to functional style, which will probably happen in the future.
> It affects how people use the language, and how the community develops around it, and other things.
It affects the use (in a very positive way), but does not affect the design.
But what is true is that a majority of people using macros do not understand how to do it properly and do not follow the right discipline. What is needed is to promote such discipline instead of spreading lies that "macros are dangerous".
And no, there is a very strong trend back to metaprogramming at the moment: Julia, C++, D, Clojure, Rust, all that stuff. All the other methods of implementing abstractions (OOP and the other pathetic things) have a very pronounced maximum of the level of abstraction, while there is no limit when metaprogramming is used to implement hierarchies of eDSLs.
> It affects the use (in very positive way), but does not affect the design.
It does, because the usage was one of the design criteria for Python. That's what I mean by "Python being conservatively non-ideological"; they don't want to add features that are controversial in the programming community.
> But, what is true, is that a majority of people using macros do not understand how to do it properly and do not follow the right discipline.
Maybe; maybe you don't want to accept that there are good reasons against. I accept it, even if I disagree; you can do the same.
In any case, an opinion on how people should program (e.g. use macros) is an ideology. One ideology can, in practice, be better than another, but it is still an ideology. Python tries to avoid the controversial ones and only adds those that have significant consensus.
> What is needed is to promote such discipline instead of spreading lies that "macros are dangerous".
You should be a little more honest with yourself and admit there is a grain of truth in it. Things of great power are almost always dangerous. Heck, even the Lisp community is divided on whether or not macros should be "hygienic".
> there is a very strong trend back to metaprogramming at the moment: Julia, C++, D, Clojure, Rust
First of all, I think macros and template metaprogramming are different features. So unless D or Rust have macros, you end up with Julia and Clojure.
Second, I am talking about languages used in the industry; in the 80s, macros were in widely used languages such as assembler, C, PL/I, Common Lisp, SAS, M4, Forth... It's simply not what we see today. It may change in the future, but given the general trend in programming languages to add constraints (smartly) rather than remove them (I mentioned FP), I think it's unlikely (just to remind you -- I actually like macros, at least in CL).
Finally, you can do (the equivalent of) template metaprogramming in Python (mostly, templates are a way to circumvent the type system, which is already a done deal in Python). You can also define DSLs in Python pretty easily, especially with __call__() methods -- see the sketch below. The bottom line, for me, is that Python has lots of small niceties (mostly in the standard library) that collectively outweigh the advantages of macros.
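As a toy illustration of the __call__() point (all names here are hypothetical), a tiny query-building DSL where you chain calls and then invoke the object:

class Query:
    def __init__(self, conds=()):
        self.conds = conds
    def where(self, cond):
        # Each call returns a new, extended query object
        return Query(self.conds + (cond,))
    def __call__(self, table):
        # Calling the object renders the final "program"
        sql = "SELECT * FROM " + table
        if self.conds:
            sql += " WHERE " + " AND ".join(self.conds)
        return sql

q = Query().where("age > 30").where("active = 1")
q("users")  # SELECT * FROM users WHERE age > 30 AND active = 1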
> maybe you don't want to accept that there are good reasons against
I'm a scientist. Before I accept something I need to see it first. I have not seen a single non-straw-man argument against macros yet. Not a single one. Either vague scaremongering without any specifics, or attacks on something that is totally unrelated to the macros.
> Things of great power are almost always dangerous.
Macros combined with the proper discipline are by orders of magnitude less dangerous than pretty much any other programming practice in existence. That's the whole point - our aim as engineers is to eliminate complexity. Macros are the ultimate way of doing so.
> So unless D or Rust have macros, you end up with Julia and Clojure.
Rust has proper, Lisp-style macros. D's metaprogramming is Turing-complete, but yes, quite a bit behind -- now even less powerful than modern C++.
> C, PL/I, Common Lisp, SAS, M4, Forth
Of this list, only CL and Forth had proper macros.
> that collectively outweigh the advantages of macros.
Nothing can outweigh the (literally) infinite list of advantages of macros. With macros, every single semantic property you can imagine is available to you at no cost. Without macros, you get only the limited set of things that the language designers decided to give you.
I would argue not. These paradigms actually encompass other models, and in that sense they are pragmatic, conservative choices (in a sense, "minimal").
You can program functionally in Python. It doesn't really get in the way of your doing that, which cannot be said of functional languages; they are often designed to prevent you from programming imperatively.
Dynamically typed languages can be understood as having only one type, so it's in a sense a type system that encompasses all the other type systems. For fans of typing, Python added optional type annotations, again, as a very conservative choice.
Interpreted - again, it's probably easier to write an interpreter than a compiler. Once you get into compiling, you have to make some more choices, which Python conservatively forfeits.
That is not to say that these choices aren't cultural (for example, most people are more familiar with infix notation, although even here there may be pragmatic reasons -- exactly the right amount of parentheses). Just that it tries to accommodate various tastes and be conservative about it.
Über-fast JIT virtual machines, maybe. Even bytecode interpreters are likely simpler, because while you still have to compile to bytecode, it is generally simpler and higher level than x86 or ARM assembly. Direct interpreters (based on the AST) are slow, but dead simple.
What makes you think interpreters are more complex?
> Direct interpreters (based on the AST) are slow, but dead simple.
Still far too complex and, what is worse, very convoluted. You cannot split such an interpreter into a series of very small, easy-to-understand steps; it must do everything at once, in a single huge recursive visitor. Compare that to a simple compiler, which rewrites one distinct aspect of the language at a time.
Another awful property of interpreters is that you can't strip the runtime away from them; it must be woven into all parts of the interpreter and cannot be abstracted away. A compiler, by contrast, can be isolated from all runtime issues, which can be implemented separately (or simply reused).
I don't think compilers are easier to write than interpreters, but you should try to find the source code for some of the basic compilers from the eighties. The difference between (lex, parse, execute) and (lex, parse, generate assembly) is very small if you are willing to forego optimizations. A compiler can even use the exact same data structures as the interpreter to store variable names and their values, and use subroutines to look up and store values.
Fast? No, but it will be faster than that direct interpreter (some BASICs of the '80s scanned a linked list to find the target lines of GOTOs and GOSUBs; if you manage to get rid of that, a tight loop can get a lot faster very quickly).
I've written a few interpreters during my (almost) 30 years of programming; never a compiler. The thought of 1) machine code generation and 2) optimizations scares me every time.
Don't do it. Compile into a higher-level representation and reuse an existing machine code generator (LLVM, even plain C), or compile into a higher-level VM with a threaded-code implementation (Forth and the like). Although even a full compiler with a native codegen is still much simpler than an interpreter.
> 2) optimizations scare me every time
As if you don't have to optimise in an interpreter. Not to mention that even an unoptimised compiler would produce much faster code than your average "easy" interpreter.
Compilation is nothing but a sequence of trivial term rewriting passes. Each can be as simple as you want, doing one trivial thing at a time. Easy to understand, easy to write in a fully declarative way, easy to reason about, easy to debug and maintain. You don't even need a Turing-complete language to do it.
And an interpreter is always a convoluted mess, leaking multiple abstractions between layers (if you're lucky enough to even get any distinct layers at all). I always wondered how people can even compare interpreters to compilers.
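To illustrate the claim with a minimal sketch (over a tuple-based AST of my own invention), here is one such pass -- constant folding -- doing exactly one trivial rewrite, bottom-up:

def fold_constants(node):
    # Rewrite ("add", ("const", a), ("const", b)) -> ("const", a + b)
    if not isinstance(node, tuple):
        return node
    node = tuple(fold_constants(child) for child in node)
    if node[0] == "add" and node[1][0] == node[2][0] == "const":
        return ("const", node[1][1] + node[2][1])
    return node

tree = ("add", ("const", 1), ("add", ("const", 2), ("const", 3)))
fold_constants(tree)  # ('const', 6)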
>>> x = 1
>>> def f(y): return y + x
...
>>> f(2)
3
EDIT: Note that my example works even if the definition of x and f appear within the definition of another function. If closed-over variables must be mutable then Haskell doesn't have closures!
Closures define the set of variables whose lifetime extends beyond the function they are defined in, because of their use in another function.
For instance:
;; Function that returns a function that counts even numbers
(define (even)
  (let ((x 0))
    (lambda () (set! x (+ x 2)) x)))

(define count (even))
(count) ; → 2
(count) ; → 4
In Python:
def even():
    x = 0
    def add():
        x += 2
        return x
    return add

count = even()
count()  # → UnboundLocalError: local variable 'x' referenced before assignment
That said, Python does support a less convenient style of closures — it works if the variable is a container (such as a list, a dictionary or an object).
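A sketch of that workaround applied to the same counter -- mutate a cell inside the container instead of rebinding the name:

def even():
    x = [0]          # a container in place of a rebindable variable
    def add():
        x[0] += 2    # mutates the list; x itself is never rebound
        return x[0]
    return add

count = even()
count()  # → 2
count()  # → 4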
Python supports closures even when the variable isn't a container. For instance:
def add_double(n):
    x = n * 2
    def add(y):
        return x + y
    return add

f = add_double(1)
f(2)  # -> 4
What Python versions prior to 3 didn't support is rebinding a variable from an enclosing scope -- not because of any fundamental feature of the implementation, but because there was no syntactic way to differentiate between assigning to a variable in an enclosing scope and creating a variable in the current scope. So x += 2 is short for x = x + 2, which Python interprets as creating a new variable x in the current scope and then trying to access that variable before it's been assigned.
Python 3 introduced a syntax to specify that you wanted to assign to a variable from an enclosing scope, the 'nonlocal' keyword. So your example would be:
def even():
    x = 0
    def add():
        nonlocal x
        x += 2
        return x
    return add

count = even()
count()  # → 2
count()  # → 4
> Python has had support for lexical closures since version 2.2. ... Python's syntax, though, sometimes leads programmers of other languages to think that closures are not supported. Variable scope in Python is implicitly determined by the scope in which one assigns a value to the variable, unless scope is explicitly declared with global or nonlocal.
Just been reading up on this. Python has sort of read-only closures, where the free variables are stored against the function. You can't rebind a variable inside the closure like you can in JS. Personally, I don't mind -- I'd rather have a better idea of which scopes are changing which information -- but I don't think they're considered "true closures."
Recently I've been musing on the aphorism "Quantity has a quality all of its own". I think something similar applies here too. Python goes to such extreme lengths to be (seen as) pragmatic that I can't help but conclude that "Pragmatism has an ideology all of its own".
Pragmatism is an ideology, not the rejection of any ideology. (Perhaps more generally, pragmatism is an umbrella name for a class of ideologies which are all about maximizing some objective utility function, they differ in what utility function they are maximizing.)
Python wasn't my first programming language. And I think I gave a fair chance to other languages. But there come these moments where you are like "oh, this would be really simple in imperative code". Ideology gets in the way; to claim the opposite is just kidding yourself.
I probably gave the most chance to Common Lisp. It's a very pragmatic language too; unfortunately (largely for historical reasons), the standard library didn't keep pace. It's not just a matter of what you can do, it's also the design. Python's standard library almost always puts the user first and purity second.
For instance, Python would never add four versions of a function like remove-if, remove-if-not, delete-if, delete-if-not. That's mathematical purity (in if vs. if-not) and leaving the onus of choice on the user (in delete-if vs. remove-if) instead of conservatively picking a reasonable default.
At the same time, the Common Lisp library lacks things like a function to strip whitespace from a string; you can do it, but Python has it as a single function. It's just hundreds of little things like these that make Python more pleasant to work with.
I genuinely want to like Lisp more than Python (I am learning Clojure right now, which is on par with Common Lisp when it comes to pragmatism). But sometimes it is just a little more frustrating, because of ideology and not putting the user first when it comes to API design.
If you do not declare your ideology upfront you will still end up having one, only implicit and ad hoc.
This does not apply to Python, however. It does have an explicit, if not entirely serious, statement of its ideology in The Zen of Python. (For those unfamiliar, read http://c2.com/cgi/wiki?PythonPhilosophy. It has The Zen and some discussion.) Python takes its ideology pretty seriously, too, as evidenced by how much the words "pythonic" and "unpythonic" are used in discussions involving the language. In discussions around, e.g., Java you will find no terms of that sort.
I already explained that below, but let me elaborate. (But beware, I am not a big fan of the Zen of Python, precisely because I think it's misunderstood.)
The thing is, these rules do not really give much guidance on how to design a programming language. They don't hint at whether or not the language should be functional, or should have a small library, or should have classes, and the bazillion other things that programming language designers tend to argue about.
We can pretty much agree on these rules. And that's the point: these rules are all about making tradeoffs.
Maybe you will like Hy. It's a Python import hook that allows you to load files with a syntax similar to Clojure into Python. It also has an interpreter, so it can work as a standalone language or just be dropped into Python projects. And it's not just the syntax; you can of course use Lisp macros!
The notion of a programmable programming language is an implementation of the mathematics of computation. It's as ideological as addition. COBOL allowed self-rewriting code... probably because all software [at the right level of abstraction] is a mechanism for a computer to rewrite its instruction set.
A programming language that does not allow self-rewriting code can only come about in one of two ways: a lack of cleverness on the part of its designer, or a willful decision to disallow certain types of computations. Willful decisions are driven by systems of beliefs and values. One person's beliefs are an ideology to someone who doesn't hold them.
In my opinion, Python's practicality stems from its library ecosystem and that ecosystem is an accidental artifact of the epoch in which it was born and grew. The internet and open source made sharing of libraries practical. The idea that Java would be the one true language in enterprise and the economics of MatLab in academia and the public failure of Lisp as a startup in the 1980's, all helped.
But "Python" is fungible with "Javascript" in any it-just-lets-me-get-things-done argument without too much problem. More people use Javascript to just get shit done than any other language.
Python is so ideological it is not even funny. It has a crippled lambda because Guido couldn't find a syntax he liked for it. It doesn't have tail call elimination because Guido is under the mistaken impression that it inhibits backtraces. The Zen of Python spouts "explicit is better than implicit" as a marketing slogan to cover up a design wart in Python's lolscope: it only had local and global scope. Python scope is still broken[0], btw. But don't worry, it is not a bug because it is documented. (╯°□°)╯︵ ┻━┻. The only reason Python passes as a language with good design is because people compare it to PHP or Javascript.
How is the numerical tower impractical? Having to check for overflow/underflow after every addition is what is impractical in my book.
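For anyone unfamiliar, a quick illustration of what the tower buys you; Python borrows parts of it, e.g. arbitrary-precision integers and exact rationals:

from fractions import Fraction

2 ** 100                         # 1267650600228229401496703205376, no overflow
Fraction(1, 3) + Fraction(1, 6)  # Fraction(1, 2), exact rational arithmetic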
> I am big fan of ... Common Lisp ... There is so much beauty and elegance.
Idk where people get the idea that Common Lisp is 'beautiful and elegant' (in opposition to practical, I may add). It is eminently practical. Have you seen loop and format (which, btw, Python has adopted with some glaring _practical_ omissions like list iteration ~{~})? The whole "anything but nil is true" rule, which allows functions like digit-char-p to return a number instead of t, is another practical design choice. Do you think there is anything elegant in modifying the readtable? I could go on and on. CL is a practical language that, apparently due to ESR's nirvana bullshit quote, people who haven't used it have misconceptions about.
But back to the point: Scheme's design (ideology if you want) is to pick features that synergize so as to allow discarding others, since they can be expressed with the previous features/concepts. You don't need iteration, as it is a special case of recursion. You don't need iterators; you have map, which is just a function the user can define, etc. And this is relevant to teaching computation, as it allows the teacher to waste as little time as possible teaching a language and lets him focus on CONCEPTS. The amount of Scheme knowledge needed to finish SICP can be taught in 15 minutes (it doesn't use macros). All the difficulty lies in the concepts it deals with. Getting the language out of the way allows you to cover a lot of ground; for example, in SICP one implements a virtual machine and a compiler targeting it.
Now if you want to argue that nowadays the student needs to learn how to pipe software together and has less need for concepts, that is another thing altogether[2]; but saying that Python is practical (as opposed to ideological) and that Scheme's 'ideological' choices hamper the student's learning is bullshit.
> The only reason Python passes as a language with good design
...because the ability to compromise is good design, in a sense.
I kinda agree about the scope, though I don't think it's a big deal. Sometimes having a hierarchy is overrated, and two levels of names is good enough.
But you will find design mistakes in all languages. Every designer errs. Python doesn't actually have that many warts, because its designer is quite conservative.
> Idk where to people get the idea that Common lisp is 'beatiful and elegant'
Maybe from the paper that introduced Lisp, of which CL is a descendant? I am aware of CL's warts, but these are a result of Lisp's success rather than of its original design goals.
> Scheme's design (ideology if you want) is to pick features that synergize so as to allow discarding others, since they can be expressed with the previous features/concepts
I understand. But the reason for that is that people consider it beautiful when you start from a small subset of features. It doesn't, however, reduce the need to know the more practical concepts. Like in mathematics: the fact that we can build everything from set theory doesn't mean we don't need to know about integrals.
Some universities skip the basics (set theory) and teach only the practical concepts (integrals). Perfectly fine in my book.
> Now if you want to argue that nowadays the student needs to learn how to pipe software together
I have said it on HN before: it depends on your goals. If you're teaching future programmers/computer scientists, sure, starting from Scheme or even lambda calculus is better, because you can assume people will do the real work in other languages. But if you teach a general non-programmer audience to program (like scientists), why not teach them just one language, which they can use in practice?
Yes, it's working as designed, and I see how it can be considered broken, but it's not due to a scope issue. The "except X as x" is specified (as of Python 3.x and PEP 3110, https://www.python.org/dev/peps/pep-3110/) as doing a "del x" at the end of the except clause, and that was done for garbage collection reasons.
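You can see the implicit del in any Python 3 session:

try:
    raise ValueError("boom")
except ValueError as e:
    print(e)  # boom -- 'e' is bound inside the clause
print(e)      # NameError: the clause ended with an implicit 'del e'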
I've found that in the past couple of years Go has fallen into that pragmatism bucket for me. If I need to throw something together quickly, it's a good tool for the job thanks to its simple syntax, decent tooling and significant standard library.
I was actually enrolled in the first Python version of this course. At that time it was taught by a Google employee, not by Brian Harvey (author of the OP and past professor of this course).
For me personally, the biggest benefit of Python was the large amount of accessible resources online. There were multiple ways to learn something, rather than banging my head against SICP.
Incidentally, writing a Scheme interpreter was one of the last projects of the course.
For writing real-world code, what you want is aggressive optimization, and access to libraries for up-to-the-minute solutions to real-world problems.
And of course, this mindset is _exactly_ right in the context of a course for teaching programming. As much as we might fantasize about it, computers are made of metal, not lambda calculus :)
I know more about Racket (which is very similar to Scheme). The compiler does very aggressive optimizations and inlines many of the lambdas. You can write nice, understandable code with lambdas, but under the hood many of them are removed and the actual running code is similar to the imperative version.
In Haskell there are a lot of abstractions and the compiler is also very aggressive, but I don't know the details.
"After all, facts are facts, and although we may quote one to another with a chuckle the words of the Wise Statesman, 'Lies--damned lies--and statistics,' still there are some easy figures the simplest must understand, and the astutest cannot wriggle out of." Leonard Henry Courtney, 1895
Also, those are "toy programs" not "microbenchmarks".
There appears to be a website [1] for the self-paced, Scheme version of the course referenced (61AS), although it doesn't appear to be formally run as a MOOC. There's also a well-organised site for the scheduled, Python version [2].
I wish the lecture videos for 61AS were online, but sadly I can't find them. I think 61AS is done in Racket now, which would make it even more practical.
"Kicking and screaming", to be fair, is a bit of a literary exaggeration. But Rossum did not particularly want to add functional features to the language and only did so at the behest of his users. Lambda as well as map/filter/reduce were "a significant, early chunk of contributed code", not something he intended the language to have from the outset[1]. More recently, he has been trying to deemphasize these features in favor of list comprehensions.
Moreover, Rossum is opposed to proper tail calls and even recursion in general[2]. He also doesn't like folds, and generally thinks you should just express that sort of logic as some sort of loop. (I forget exactly where I read that opinion, but it could easily have been [1].)
He's a staunchly imperative programmer, and the design of Python shows this off consistently except for an initial accident of first-class functions and closures (of a sort)—a design he likes because it enabled other, non-functional language features like new-style classes.
"Kicking and screaming" might have been an exaggeration, but it captures his overall attitude towards functional programming pretty well.
There are many things he didn't intend from the outset, so that by itself isn't so significant. Some other things he didn't intend were the type/class dichotomy and the inability to raise an exception from a comparison. Two changes that came from external users include David Ascher's rich comparisons and Samuele Pedroni's proposal to use the C3 method resolution order.
> "Steven Majewski might not agree with me, but in Python you are better
off using "for" loops (the overhead of calling a function for each
element is quite substantial)."
You can see that's still true with CPython:
% python -mtimeit -c 'x=range(100)' 'd=[]' 'for a in x: d.append(a*a)'
10000 loops, best of 3: 22.1 usec per loop
% python -mtimeit -c 'x=range(100)' 'map(lambda a: a*a, x)'
100000 loops, best of 3: 18.2 usec per loop
% python -mtimeit -c 'x=range(100)' 'd=[];ADD=d.append' 'for a in x: ADD(a*a)'
100000 loops, best of 3: 12.7 usec per loop
% python -mtimeit -c 'x=range(100)' '[a*a for a in x]'
100000 loops, best of 3: 9.91 usec per loop
FWIW, Python has made at least one language change in order to improve performance. In Python 1.x the following was possible:
def f(x):
    from math import *
    return cos(x*sin(x))
Python 1 used a dictionary for locals, while Python 2 introduced a pre-allocated array of variables for locals, so a lookup could be done via a simple index offset.
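A quick way to see the fast-locals array on a modern CPython: the bytecode addresses locals by index (LOAD_FAST/STORE_FAST) rather than by dictionary lookup.

import dis

def f(x):
    y = x + 1
    return y

dis.dis(f)  # shows LOAD_FAST x / STORE_FAST y, i.e. index-based access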
In the same email, van Rossum writes:
> "To be honest, I wish I hadn't introduced lambda, map, filter and reduce -- they support a style that is inconsistent with the rest of Python. Unfortunately, in the sake of backward compatibility, I can't take them out. So enjoy them if you have to. But don't get too thrilled!"
That same email also says:
> Ah, the infamous "first class objects" argument again. Really, I don't understand all the fuzz about lambda, and I don't see why its introduction made functions any more first class than they already were in Python. They are entirely syntactic sugar for local function definitions. Everything you can do with lambda you could always do with local functions, at the cost of one local temporary identifier -- surely no big deal!
This is interesting because you mentioned "initial accident of first-class functions". Why do you think first-class objects are an "initial accident", and not a deliberate decision?
When you have a project concerned with readability, adding new features carries a huge community cost in the long run, because it's one more thing you have to learn to deal with as you go across projects.
One of the wonderful things about Python is that you can dig into the code of most Python projects and not have to worry much about completely different idioms requiring a complete context switch to understand how they work.
He is not really being black and white; he is doing a service by preventing language bloat à la C++ as much as possible.
It always struck me as a reaction against Perl's "Tim Toady" approach, encouraging a more idiomatic and unified programming strategy to ease comprehension and teaching (IIRC that was the origin of Python, or at least of ABC).
This is exactly what makes Python a bad language. Lack of expressive power results in unnecessary complexity in the code. It's much easier to comprehend short, compact code written in familiar, domain-specific idioms than to dig through a convoluted pile of low-level leaky idioms like loops, list comprehensions, generators, classes, recursion and all that irrelevant crap.
Just wanted to add my voice: this matches my perception exactly. I used to really like Python, but I've come to the conclusion that there's a certain point, in terms of the complexity of a Python program, where everything just blows up and it's next to impossible to keep it from becoming a tangled mess. This point is reached incredibly soon -- something like five classes with some inheritance and some interface/virtual/abstract class. Ugh. Now I avoid Python as much as possible, much preferring C++ or Java, which I think work incredibly better for larger programs.
I used to think I needed both, but since picking up Haskell I don't miss for loops at all and I rarely need to use explicit recursion either--now I almost exclusively use higher-order functions like `map`.
When I started learning Python, I always used map and filter because they were just how I thought about problems. But there is a certain bit of elegance to being able to write
[process(child) for child in node if child is not None]
in lieu of
map(process, filter(notNone, node))
The functional style makes better use of conceptual primitives, but the imperative style is generally better for documenting.
for child in node:
    if child is not None:  # Comment explaining decision.
        process(child)     # Comment explaining more stuff.
Oh, I agree. I mentioned it because it is the preferred style in Python -- probably because it is so readable. But my points still apply to the loop version.
I'm actually a bit scared of using recursion in production (at a web company). It has its place, of course, but gratuitous recursive code that could have been written without it gives me pause. It's too easy to miss an edge case and end up with a stack overflow, and that can bring down a whole server farm.
changeset: 1369:89e1e5d9ccbf
branch: legacy-trunk
user: Guido van Rossum <guido@python.org>
date: Tue Oct 26 17:58:25 1993 +0000
summary: * compile.[ch]: support for lambda()
so that's only two years after it was first released, and before it was 1.0. The code was user-contributed, but given the timing and the recollections at http://python-history.blogspot.com/2009/04/origins-of-python... I can't agree that the historical record is compatible with "dragged kicking and screaming to lambda."
What a shame it would be if he'd removed it. PySpark now makes great use of lambdas, and I'd strongly argue that having to name every function argument would be strictly worse.
The loose consensus in Python is that lambdas are best when they fit on a line, which is what the examples at http://www.mccarroll.net/blog/pyspark2/index.html do. Otherwise, make a named function.
I think arguments like this are one of the main reasons why van Rossum has kept lambda in the language.
At Oxford University in the UK, students are taught exclusively Haskell in their first year -- a language designed by a committee of some of the world's best computer scientists. An institution that teaches Python under the banner of computer science certainly loses prestige, in my opinion.
Python's scientific library situation is much, much better than Haskell's at the moment. But I think that speaks much more to the strength of the Python community than to the weaknesses of the Haskell one.
By comparison, I think that the scientific library situation in Ruby (a language that's fairly similar to Python in many characteristics) isn't really better than the Haskell one.
You can doubt as much as you want; it's just a fact. But I'm not saying it's good, either. As much as I like Python as a scripting language, I don't think it should be used as widely as it is now.
Concerning Haskell, it still lacks some pragmatism for solving practical problems, such as installation and cabal hell.
There are countless parsing libraries, compilers and cutting edge research done in Haskell. My guess is that you are thinking of NumPy and similar wrappers over older C/Fortran code. A computer scientist would seek to design something better, perhaps by exploiting algebraic properties not considered by NumPy. Universities teaching Python is a relatively recent phenomenon, perhaps to appease industry.
EDIT: cabal came from industry and was based on ideas from the Java, Python, Ruby communities. The Nix packager better embodies the ideas of Haskell.
Yes it has lots of wrappers over old C/Fortran libraries and is moving into the MatLab/R space. But does that make it good for teaching computer science?
Well, what do we mean when we talk about teaching computer science? Most subjects have fairly broad, high-level, survey-style classes freshman year. Even the X-for-majors classes tend to cover huge swaths of the field with a lot of "you'll cover this more in EPS325" hand-waving. The purpose is not to lay a foundation of first principles but to make sure you know enough that you can see connections between topics in later classes, that nothing is too surprising, that you understand what you're in for.
For non-majors it's usually so you are "well rounded" or, more importantly (as with English or math), have enough facility in a field that you can use its tools to further your work in your own field.
Python might not particularly add much over Scheme for the first situation -- but if schools wanted to follow Dijkstra's conception of CS, students wouldn't end up touching a computer until senior year, let alone Scheme|Python|Pascal|Whatevs, and Python isn't distinctly worse for explaining big-O and data structures/algorithms (yes, recursion support sucks a bit, but not so much that you can't teach it to freshmen). Python does, however, aid the second situation tremendously, because when students leave CS110|152|whatevs they will go back to their primary field and not have to muddle about learning a new language.
Yes, Python is fine for teaching the imperative algorithms needed for a graduate to pass his/her Google interview. But computer science and SICP is/was so much more than that. Even if these institutions want to be vocational, typical software development roles do not require recalling famous algorithms; they require gluing code together. To be good at this, you need a sound grounding in abstraction and composition -- the mathematics of program construction. Python will not help people learn this.
Chemical engineering is more than s/p electron shells, balancing redox reactions and listing at least one property of each column of the periodic table, but that's all the more you're going to get out of Chem 1+2.
Yes, computer science IS more than that, which is why a B(S|A) in CS is a 4-year degree. SICP isn't a complete CS education either. That gluing code together is the typical software job is, if anything, an argument against Scheme/SICP.
It helps that Haskell was based on Miranda and the committee deferred to Miranda whenever they could not agree on something. This is opposed to the typical committee solution of adding both ideas or a sub-par compromise.
And one that teaches Haskell loses prestige with employability-minded people -- but then again, Oxford graduates don't much care for those things, privileged upper class as they are.
What students cannot understand is that after passing the classic CS61A in Scheme, languages such as Python or Julia could be picked up in a few hours, with occasional consulting of the reference manual for particular details. Erlang in a couple of days.
Well, for Java (a dogmatic religion) one has to be thoroughly brainwashed, so there is CS61B for that.
It's interesting that the PG article is cited. I've always had mixed feelings about it.
On the one hand, I agree that wanting to use Lisp for a startup is probably a good indicator of technical strength (yes, I'm saying that people who pick Lisp as their first language are smarter than average).
On the other hand, startup financing is all about hyper-growth potential, and it would seem to me that basing your technology stack on Lisp could be quite the hindrance. The job market is already tough, and finding people with a deep understanding of Lisp to start working on an (assumed) nontrivial codebase isn't easy.
I guess Clojure and "allowing" (or migrating to) Java could be an option?
Then again some hypergrowth startups don't need many developers (WhatsApp)
When they were starting their company there was no Python/Ruby/PHP to choose over Lisp, and Java wasn't as mature. Lisp was actually quite the RAD choice at the time.
Can't believe you air heads are seriously comparing Scheme to Python. Scheme is a nightmare, it's only good for academics who have nothing better to do and the 1% of your projects where functional programming may actually be beneficial to what you're trying to do.
As a developer, I tend to be agnostic/indifferent towards programming languages now. Essentially, you can use any programming language to do the job. The difference lies in the developer's comfort zone and the libraries.
For example, I prefer C# over VB.NET for an ASP.NET/MVC backend. However, for several practical in-house projects, I've found Excel+VBA far quicker than an IIS/ASP.NET project.
So it all depends on the context, your audience and the usability of the project. We should not let technologies drive our projects, but rather our business users and current trend patterns. For example, I'm seeing a lot of JavaScript frameworks being used to deliver responsive, device-independent web applications, such as SPAs (Single Page Applications).
Granted, it takes a while to learn a language's API, but essentially all of them do similar things. The advantage lies not in the language alone, but in the entire framework/suite that it comes with.
We are living in times where knowing programming languages is not as important as knowing frameworks/libraries. There are heaps of them out there that require the programmer to spend more time writing integration code to "glue" them together, or configuring them. We no longer need to write the low-level mundane code, but can focus on delivering high-quality, rich software applications.
Yes, I'm the same way, but you really can't do anything clever until you know a programming language well. You could implement anything using the basics of a language, but for it to be a good implementation, you need experience with the language to fully exploit its power.