Hacker News
Why MIT switched from Scheme to Python (wisdomandwonder.com)
219 points by vorador on Mar 24, 2009 | 128 comments

I think there's something else here, implicit in Sussman's comment, that's important. MIT was founded on a philosophy of practicality, and everything else is secondary. If you couple that with the belief that fundamental computer science is the most efficient way to enhance practical software engineering, Scheme was a wonderful choice. Java and C++ may have been more directly practical to software engineers, but they are not good for the field or for people pushing its limits - you spend too much time mucking around with their paradigms and idiosyncrasies.

Python (and, frankly, a number of the scripting languages-turned-mainstream) combines this clarity of computer science with a practicality that Scheme never had. If you can convey 95% of the basic ideas in Python, and you can also open the door to learning how to deal with 3rd party code, you'd be a fool not to. It was never about programming purity anyway, so there's no reason to mourn the passing of Scheme. It's progress.

> If you can convey 95% of the basic ideas in Python

But you cannot.

Some of the ideas in SICP can nearly be conveyed in Python (like lexical scoping, closures, data types, streams). You'll use built-in language features in Python (classes, generators) instead of implementing the functionality. I think that implementing these things is important if you want to really understand them.
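For instance, the stream idea can be built from nothing but closures. A minimal sketch (SICP's streams also memoize the delayed tail, which this omits):

```python
# A SICP-style stream: a head plus a thunk that produces the rest on demand.
def cons_stream(head, tail_thunk):
    return (head, tail_thunk)

def stream_car(s):
    return s[0]

def stream_cdr(s):
    return s[1]()  # force the delayed tail

def integers_from(n):
    return cons_stream(n, lambda: integers_from(n + 1))

def stream_take(s, n):
    out = []
    for _ in range(n):
        out.append(stream_car(s))
        s = stream_cdr(s)
    return out

print(stream_take(integers_from(1), 5))  # [1, 2, 3, 4, 5]
```

Using a generator instead would be the idiomatic Python, which is exactly the point: the built-in feature replaces the implementation you'd otherwise have to understand.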

An example, cons, car and cdr from nothing at all:

    (define (cons a b) (lambda (f) (f a b)))
    (define (car p) (p (lambda (a b) a)))
    (define (cdr p) (p (lambda (a b) b)))
You can do this in Python in theory, but you can't throw this into a Python course.

Other ideas, like interpreters, require a lot of effort. Try writing a metacircular interpreter!
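For a sense of what that involves, here is a toy evaluator for a tiny Scheme-like subset, written in Python (a sketch only; a real metacircular interpreter is written in the language it interprets and handles far more):

```python
import operator

# Toy evaluator: expressions are Python lists standing in for s-expressions,
# symbols are strings, numbers are self-evaluating.
def evaluate(expr, env):
    if isinstance(expr, str):                 # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):            # number: self-evaluating
        return expr
    op = expr[0]
    if op == 'quote':                         # (quote x)
        return expr[1]
    if op == 'if':                            # (if test conseq alt)
        _, test, conseq, alt = expr
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if op == 'lambda':                        # (lambda (params ...) body)
        _, params, body = expr
        return lambda *args: evaluate(body, dict(env, **dict(zip(params, args))))
    if op == 'define':                        # (define name value)
        env[expr[1]] = evaluate(expr[2], env)
        return None
    # application: evaluate operator and operands, then apply
    f = evaluate(op, env)
    return f(*[evaluate(a, env) for a in expr[1:]])

env = {'+': operator.add, '*': operator.mul, '<': operator.lt}
evaluate(['define', 'square', ['lambda', ['x'], ['*', 'x', 'x']]], env)
print(evaluate(['square', 7], env))  # 49
```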

  def cons(a, b):
      return lambda f: f(a, b)

  def car(p):
      return p(lambda a,b: a)

  def cdr(p):
      return p(lambda a,b: b)
You can do this in practice as well as in theory. It's about as elegant (hard for me to judge, as I have a strong pro-Lisp bias).

Aside from the lack of tail-call safety and performance issues with writing Scheme-ish code, I don't see why you couldn't implement those things in Python. The main difference is that in a real-world situation you typically wouldn't, because there's already a library or a language feature.

The problem is not that you can't do this in Python, but that this is out of place in a Python course (you don't use linked lists in Python, for example). SICP shows that you need only a very small number of primitives to get a full programming language. You don't get this elegance with Python.

This may not be important if you teach programming to people who want to use programming languages, but it is important for people who want to advance the state of the art in programming (i.e. CS students).

Maybe it's better to teach Scheme in a separate class for CS students.

Keep in mind this is a freshman-level course. You don't have to - and shouldn't - cram every important CS concept into an intro course.

I agree. And this course probably is a lot better than the course I got (C++, ugh). That may be the reason I'm biased...

It's also counterbalanced by there being an obviously standard, reasonably portable distribution of Python. You go to Python.org, and there it is.

It took me a while to find a Lisp that I liked, and it certainly wasn't a decision I was prepared to make while I was still trying to learn the basics. (I eventually settled on Chicken Scheme.)

Your definitions do not work.

  foo = (1,2,3)
  bar = (4,5,6)

  print cons(foo, bar)
  print car(foo)
console output:

  <function <lambda> at 0x00B3F430>
  Traceback (most recent call last):
    File "./test.py", line 14, in <module>
      print car(foo)
    File "./test.py", line 5, in car
      return p(lambda a,b: a)
  TypeError: 'tuple' object is not callable

They do work; they just don't integrate with the built-in types. Why should they?

Try instead:

  car(cons(1, 2))
    -> 1

  cdr(cons(1, 2))
    -> 2

  car(cons('a', cons(2, 3)))
    -> 'a'

You need to remember that 6.001 is an /introductory/ programming course. Presumably most of the people taking it at MIT have done some codework before : ] but an intro. course is an intro. course. The question of what language to use is a pedagogical one, not a question of thoroughness.

But the above examples are taken straight from SICP. Wasn't that the textbook for 6.001 before they switched to Python?

Which sort of wanders back around to the question: is that still the best way to start teaching, as Sussman talks about?

I ask myself, "how would I teach my son to program?", and for all my copious academic and practical experience, "start with re-implementing car and cdr from lambda functions" isn't even remotely in the running. Sure, I'd introduce such things earlier than most people and SICP is still firmly in the running, but for that to be an introduction in the general case? Not so sure.

There's definitely an argument to be made that robotics is in many ways a much more realistic starting point. While some people may find the idea that we should actually start out by making a case to the students that their major is cool and important and gives them the ability to affect things in the real world vaguely repulsive, I'd say that it's an idea at least worth considering. The student who can make the leap from lambda, car, and cdr to realizations like that right in the introductory course is so exceptional that you're not going to hold them back anyhow.

Yeah, I think I actually agree with the overall point. I got my first experience of programming using BASIC to get various Sinclair computers to paint stuff on the screen and make funny beeping noises. I'm all for instant gratification.

On the other hand that was when I was 9. By 12 I was learning Lisp, and by college age I'd written my own little Lisp compiler.

I've always had this image of MIT students that doesn't quite mesh with them needing a gentle introduction to programming, but maybe that's not really the case?

> I've always had this image of MIT students that doesn't quite mesh with them needing a gentle introduction to programming, but maybe that's not really the case?

If it leads to more (non-CS) students taking a programming class, I'd call it a big win. Other fields can benefit greatly from programming skills. I'd almost go as far as calling it a new form of literacy.

Absolutely I'd call it a new form of literacy. The most impressive applications of computer science always happen outside the realm of computer science disciplines.

"I've always had this image of MIT students that doesn't quite mesh with them needing a gentle introduction to programming, but maybe that's not really the case?"

6.001 from my experience does not in any way imply a student's focus on computer programming or software development. I've heard horror stories about people in completely unrelated majors ending up in that class and having their brains thoroughly melted. In my personal opinion, Python will be considerably more humane in that department.

There are probably a lot of physics, math and engineering students taking intro to programming that haven't written a line of code before.

I'd assume most, if not all, computer science students have, but there's still a difference from learning to bang out some code in high school that does neat things, vs. starting back at the beginning and approaching it as, well, computer science.

The way my university covered it was doing Pascal for the intro courses (this was the mid-90s) and then in third-semester "Programming Languages" we worked mostly in Scheme implementing language fundamentals. That seemed to be a reasonable balance.

6.001 presumed the students had prior exposure to college physics and calculus. If your son has this level of foundational knowledge, then SICP probably would be the most efficient way to bring someone up to speed on programming. If they have not attained that level of education, then this isn't the best entry point to computer science. Predicate logic and set theory are probably a better starting point. Or if you're feeling particularly cruel, Java. :-)

My concern is that "practicality" is often just a code word for "resembles Java or whatever else I was comfortable with so I can continue on doing what I've always done before." (With the result that the student learns less than if they were required to do something truly different.)

Reminds me of this post that came up on HN not too long ago:


I think here, "practicality" means "has a library for controlling robots". Also a good numeric library, a good matlab-like graphing library, good 3D graphics, good real-time performance, easy C++ embedding, etc.

Rather than complaining about how everyone else is a bonehead for not using Lisp, write some libraries to make it useful for more things in the real world.

Also, Python does pretty well for real-time control. At Anybots we use it to run walking balance feedback loops at 100 Hz in the same process as a GUI and logging and it never misses a tick. PLT Scheme (used on HN) frequently pauses for 10+ seconds to GC. While there are theoretical concurrent GC systems for Lisp, none of them seem usable in real implementations.
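The usual structure for such a fixed-rate loop (a generic sketch, assuming nothing about Anybots' actual code) schedules against absolute deadlines, so that timing error in one tick doesn't accumulate into the next:

```python
import time

def run_control_loop(step, hz=100.0, ticks=100):
    """Call step() at a fixed rate, sleeping until each absolute deadline."""
    period = 1.0 / hz
    deadline = time.monotonic()
    for _ in range(ticks):
        step()                        # read sensors, compute, write actuators
        deadline += period            # absolute deadline, so no drift
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

run_control_loop(lambda: None, hz=100.0, ticks=10)  # ~0.1 s of 100 Hz ticks
```

Whether a tick is ever missed then depends on the runtime's worst-case pauses (GC, the GIL, the OS scheduler), which is exactly the property being compared here.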

Libraries aren't the issue. I use Chicken Scheme for a few things, which has plenty of libraries and a fantastic package system called eggs. Sure, other languages like Perl and Python have more libraries now, but that's not what made them popular.

What made them popular was the design of the core languages. I started using Ruby because it was faster to type and easier to remember how to do things than Perl, not because it had more libraries, which it didn't.

The problem with Scheme is that the core language isn't terse enough, and so it gives the impression that it will be hard to do things. And that's correct. Scheme requires much more typing and is harder to remember than Ruby.



Ruby:

  h = {}
  h[:foo] = "hi"
  h[:foo] #=> "hi"
  h[:bar] #=> nil
Scheme (Chicken):

  (define h (make-hash-table))
  (hash-table-set! h 'foo "hi")
  (hash-table-ref h 'foo) ;; "hi"
  (hash-table-ref/default h 'bar #f) ;; #f
That, in a nutshell, is why people don't use Scheme. The people who made Perl etc. popular hate typing, and the core language of Scheme forces you to type a ton. There are other problems like really slow string functions, things that should be core being in SRFIs, libraries being called "SRFI 18" rather than normal things like "threads" so that it's extra hard to tell where a function is coming from, where its documentation is, or what you have to import to get it. Lack of libraries (if indeed that's a problem for you) is downstream of the core problem, which is that Scheme has a lot of clumsy design features that discourage would-be library writers from using the language a lot.
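For what it's worth, Python's core is as terse as the Ruby here, which is consistent with the point that terse core languages are what win users:

```python
h = {}
h['foo'] = "hi"
print(h['foo'])      # hi
print(h.get('bar'))  # None - dict.get returns a default instead of raising
```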

This isn't a Lisp problem, it's a Scheme one. A Lisp with a better core and enough users would get lots of libraries eventually, just like any language.

I think Scheme is good for teaching though, and would rather learn it in a course than Python.


Clojure:

    user> (def h {:foo "hi"})
    #'user/h
    user> (h :foo)
    "hi"
    user> (h :bar)
    nil
The main difference with the Ruby is the immutable by default data structure, which is why I replaced the assignment with a literal definition of the map. In any case, I think this demonstrates how Clojure has stolen some of the good ideas from Ruby and Python in addition to stealing many of the best ideas from other Lisps.

That is a great demonstration that Scheme's problems are not Lisp problems. I'm hoping that we're about to see a lot more experimentation with new Lisp dialects.

> Libraries aren't the issue

You say libraries aren't the issue, but the discussion is about a blog post that said: "And why Python, then? Well, said Sussman, it probably just had a library already implemented for the robotics interface, that was all."

I'm sure your experience is grounded in a general reality. But the post you disagreed with was about a specific case: i.e. the one described in the blog post.

From the point of view of a person at an instant in time, libraries are a concern. But when we're talking about Scheme being displaced by Python at MIT, where Scheme was developed 30 years ago, we really ought to ask what's been happening to Scheme during that 30 years.

The problem, at MIT and elsewhere, is that hackers don't like hacking Scheme enough. Python has more libraries than Scheme now, but that's not the main thing that made more people use it. People use it because they prefer the language. This is something that Sussman and others at MIT should be trying to understand, but it doesn't strike me that they are trying to understand it. They're just saying "Python has a better robotics library -- oh well!" as if this were something completely random that they have no control over. That doesn't seem like the sort of attitude that produced Scheme in the first place.

My point of contention with the comment I replied to is the idea that fixing the problem is a question of writing more libraries for Scheme. Writing or interfacing to libraries for numerics, 3D graphics, etc. is a trivial task compared to making a language hackers love.

Now, maybe hackers' distaste for Scheme is Scheme's fault, or maybe it's hackers'; but in any case, the issue is with the core language, not the libraries. And based on my personal experience with Scheme I would say that enough of the fault lies on Scheme's side that it's worth fixing.

The prospect of Scheme as a potential mainstream programming language went down with the rest of the Lisp ship in the late-1980s AI winter - the same one that sank Symbolics and anyone who was remotely Lisp-guilty by association.

It is no accident that it survived almost exclusively in non-saltmine sanctuaries like MIT, where industry herdthink held less sway. In the linked article, Sussman directly admits that he is unable or unwilling to fight back against the intellectual decay that has consumed non-academic programming.

He has decided to "go with the flow" - namely, the flow of industrial sewage into students' heads.

Ease of programming isn't the issue, at least not when it comes to writing libraries that must interface with C code (as a robot-interfacing library is likely to). Most Schemes and Lisps can interface to C by means of a few simple declarations, while Python requires the C code to accept and return Python's internal data structures (and convert them to a form that can be used by C code), and requires it to register those functions that are to be made visible to Python.

Scheme (PLT):

  (require scheme/foreign)
  (define my-c-lib (ffi-lib "libmylib")) ;; loads libmylib.so
  (define my-c-func (get-ffi-obj 'my_c_func my-c-lib (_fun _long -> _long)))
  ;; The C code does not need to take Scheme into account--
  ;; Existing C libraries can be interfaced with Scheme
  ;; with a trivial amount of effort.
C (to be linked against Python code - Python can load the resulting shared library as if it were written in Python). Notice that all this code does is interface Linux's htonl function to Python:

  #include <Python.h>
  #include <netinet/in.h>

  static PyObject *
  py_htonl(PyObject *self, PyObject *args) {
      long my_arg;
      if (!PyArg_ParseTuple(args, "l", &my_arg))
          return NULL;
      return Py_BuildValue("l", htonl(my_arg));
  }

  static PyMethodDef methods[] = {
      {"htonl", py_htonl, METH_VARARGS,
       "Demonstrate Python's C interface\n"},
      {NULL, NULL, 0, NULL}  /* sentinel */
  };

  static char module_doc[] = "Demonstration";

  PyMODINIT_FUNC
  initmylib(void) {
      Py_InitModule3("mylib", methods, module_doc);
  }

Rather than complaining about how everyone else is a bonehead for not using Lisp, write some libraries to make it useful for more things in the real world.

Thank you. Cannot stress this enough. (Making said libraries portable helps, too!)

Just curious, which library are you referring to when you say "good 3D graphics"? I'm writing a graphics engine using PyOpenGL, but that's just a minimalist wrapper over the standard OpenGL API that's available for dozens of languages.

Yes, I use PyOpenGL in GTK (with GTKGLExt). People have said good things about Gazebo, but I haven't used it.

Most of the 3D stuff is performance-critical, so I typically prototype in Python and then convert to C++, which for graphics code isn't too painful. The Boost.Python interface makes it pretty easy.

Also, the Cairo graphics interface is wonderful for drawing charts, animated robot stick figures, and many other visualizations. It has a full Postscript rendering model with alpha blending.

It looks like PLT Scheme has an OpenGL binding too, though I haven't seen it used. It looks very low-level; for instance it has separate methods named gl.Vertex3d, gl.Vertex3dv, gl.Vertex3f, gl.Vertex3fv, gl.Vertex3i, gl.Vertex3iv, gl.Vertex3s, and gl.Vertex3sv.

Nice. Yeah, the GL graphics code is pretty easy to convert to C++ when performance matters. It's wonderful to quickly prototype graphics code in Python with full error checking.

Also, if you haven't used it yet, check out Py++. It takes a lot of the pain out of hand-authoring a Boost.Python code file. I'm using Py++ to automatically generate Boost.Python bindings for the C++ library FCollada. (FCollada is a library for accessing 3D modeling data exported into the .dae COLLADA format.)

Thanks for the tip about Cairo.

Also, glVertex3d, 3dv, 3f, etc are part of the standard GL API. The 'd' suffix indicates 'double', 'f' indicates 'float', 'i' indicates 'int', 's' indicates 'short', and the 'v' suffix indicates that the function accepts a pointer rather than 3 separate components passed by value. Of course, the glVertex* API has been deprecated because VBOs are far more efficient.

Just curious again, why do you use OpenGL for Anybots? Do you create simulations before building new robot prototypes?

My point was there's no single 'gl.Vertex3' call that takes native Lisp numbers. Thinking about double/float/int/short should be abstracted out below the Lisp level.

We use OpenGL to show real-time renderings of the robot, derived from the gyros and joint angle sensors. We have an analysis tool that shows, either in real time or post-run analysis, renderings synchronized with high-speed video of the robot and scrolling strip charts of every important variable over time. I always see much more looking at slow-motion videos & renderings afterwards than looking at the real robot.

OpenGL rendering also turned out to be useful for extracting data from CAD models. Our CAD software (SolidWorks) can export STL files (a list of polygons) and we need to extract center of mass and moment of inertia and such. Doing that without visualizing it is a nightmare.

We have done dynamics simulations (using ODE), but they haven't had much predictive value.

Ah. Very cool.

Are you computing center of mass / moment of inertia yourself, or using a 3rd party library to do it? PhysX is free, and I believe it can do those calculations for you fairly automatically. (In PhysX, a 'scene' is composed of 'actors', and each actor is described by a set of shape descriptors. A shape descriptor can represent a box, a sphere, a capsule, a convex hull, etc. So a laptop actor could crudely be approximated with two box shape descriptors (one for the screen, one for the body) connected by a joint. Given those shape descriptors, the library calculates the actor's center of mass / moment of inertia / etc. If your CAD software simply exports a list of polygons, you'll need to decompose those into a set of convex hulls using convex hull decomposition techniques: http://codesuppository.blogspot.com/2006/04/approximate-conv... ) You also might have better luck running dynamics simulations with PhysX than with ODE. It comes with a large number of code examples to get you started: http://developer.nvidia.com/object/physx_downloads.html

I use a very simple and exact O(N) algorithm to calculate mass properties of polyhedrons without computing convex hulls or anything:


Maybe 80 LOC. I'm not sure why anything more complex is needed.

PhysX does seem promising.
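Presumably the O(N) algorithm is along these lines: treat each face as a signed tetrahedron with the origin and sum. A sketch, assuming a closed triangle mesh with consistently outward-wound faces (the actual code isn't shown in the thread):

```python
def mass_properties(triangles):
    """Exact volume and center of mass of a closed triangle mesh.

    triangles: iterable of ((ax,ay,az), (bx,by,bz), (cx,cy,cz)) tuples
    with consistent outward winding. One O(N) pass, no convex hulls.
    """
    vol = 0.0
    cx = cy = cz = 0.0
    for a, b, c in triangles:
        # Signed volume of tetrahedron (origin, a, b, c): dot(a, cross(b, c)) / 6
        v = (a[0] * (b[1] * c[2] - b[2] * c[1])
             + a[1] * (b[2] * c[0] - b[0] * c[2])
             + a[2] * (b[0] * c[1] - b[1] * c[0])) / 6.0
        vol += v
        # Tetrahedron centroid is (origin + a + b + c) / 4; weight by signed volume.
        cx += v * (a[0] + b[0] + c[0]) / 4.0
        cy += v * (a[1] + b[1] + c[1]) / 4.0
        cz += v * (a[2] + b[2] + c[2]) / 4.0
    return vol, (cx / vol, cy / vol, cz / vol)

# Unit right tetrahedron: volume 1/6, center of mass (1/4, 1/4, 1/4).
tet = [((1, 0, 0), (0, 1, 0), (0, 0, 1)),   # slanted face
       ((0, 0, 0), (0, 1, 0), (1, 0, 0)),   # z = 0 face
       ((0, 0, 0), (1, 0, 0), (0, 0, 1)),   # y = 0 face
       ((0, 0, 0), (0, 0, 1), (0, 1, 0))]   # x = 0 face
print(mass_properties(tet))
```

The interior cancellations of the signed volumes make this exact for any closed polyhedron, convex or not, which is why nothing more complex is needed.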

Yeah, that works. I mentioned computing convex hulls because I wasn't sure whether your CAD program exports "manifold" meshes, that is, closed polygonal meshes where each edge is connected to exactly two faces. When a mesh is manifold, it is possible to compute whether a point in space lies within or outside of the mesh, for example. It is also possible to compute the mesh's mass properties. In the video game industry, artists create models with techniques that are much less precise than CAD techniques, and so non-manifold meshes are very common, which makes them difficult to analyze in physical ways. The solution to that problem is to decompose the mesh into a set of convex hulls, then analyze the convex hulls.

The other reason I mentioned convex hull computation is because PhysX can only simulate dynamic actors that are composed of boxes, spheres, capsules, and convex hulls, IIRC. Arbitrary polygon meshes aren't supported for dynamic actors, because they are very difficult to simulate in realtime. So if you wanted to use PhysX for dynamics simulations, you would probably need to decompose your model into a set of convex hulls.

Good luck -- if you need any assistance with PhysX, feel free to contact me at shawnpresser@gmail.com.

"You also might have better luck running dynamics simulations with PhysX than with ODE." I worked on the simulator at Anybots, and I'm pretty much convinced a different physics engine isn't going to magically make things better (in terms of using simulations to predict real robot behavior). The problem is more about getting every little physical detail of the actual robot -- and its interactions with the environment -- into the model. The particulars of how collision, joint constraints, and dynamics are numerically computed isn't what makes the problem really hard.

Yeah, after giving it some thought, it would be very tough to repurpose a realtime physics simulator like PhysX or ODE to be useful to Anybots. But that doesn't necessarily mean that it isn't practical to create a simulator suited for rapid prototyping of robots.

(I'm just brainstorming ideas in the rest of this post; I'm not sure whether they'll work, but it's a fun problem to think about.)

Perhaps it would be useful to write your own simulator that incorporates data about how the components of the robot behave in the real world, to get a more accurate simulation. By that I mean, the problem is that the robot model has so much complexity that it's difficult to simulate accurately. So instead of creating a mathematical model for how a compressed-air actuator behaves, for example, gather real-world data about how it actually behaves in the robot, then incorporate that data into the simulator. I mean, if the robot is walking slowly on a flat surface, then its individual components behave in the same ways each time the robot takes a step forward, right? Therefore it seems like the behavior of the whole robot can be reproduced and predicted in a simulator, since the behavior of each component is known (under controlled conditions, like when the robot is walking slowly). Brute force the problem by using large amounts of real-world data, is what I'm getting at.

It's not that PhysX or ODE aren't good enough, it's that they only solve part of the problem. We did do some empirical analysis to get the pneumatic models.

"Brute force the problem by using large amounts of real-world data, is what I'm getting at."

Absolutely, that is Trevor's viewpoint, and I agree. Turns out to be somewhat difficult in practice. It's hard to even know what all the parameters should be, much less do all the work necessary to tune them. And all the while, the robot itself is changing. Joints get loose; servos get replaced. It's a constant battle.

There is a feeling among hard-core robotics folks that this level of simulation is either impossible or not worth the trouble. I tend to disagree. I think we got pretty far with the Anybots simulator; we did manage to evolve a neural network that balances on the simulator, and also balances the real robot. It's definitely a 'hard' problem to get a simulator good enough to be usefully predictive.

For Python, there are also a number of game libraries around which you might be able to repurpose. Pyglet, Pygame and PyOgre are three which spring to mind.

Depending on exactly what you're trying to do, of course...

Panda3D, developed by Disney and given to CMU for research. One real MMO, Pirates of the Caribbean Online, was built on it.

Panda3D is quite good. It has many basic building blocks for 3D games. When I used it, the docs were OK though a bit sparse in places, but you could deduce much from what was auto-generated.

When it's not a code word for that it's often a code word for something bad. Not always, but often.

People defending what's right can say one should do it because it's right. But people defending something they know is basically wrong can't say one should do it because it's wrong. So they look around for another word, and "practical" is usually the first they find.

In this particular case of Gerry Sussman choosing Python over Scheme, I don't think he's using "practical" to mean "I know it's wrong, but I'm comfortable with Java so Python seems more familiar".

I think "practical" often means choosing your battles. Sussman's choosing to fight the battle of explaining some syntax to his students rather than the battle of implementing a real-time robot control library from scratch in a language where no implementation supports real-time very well. Sun-Tzu would approve.

I was fortunate enough to attend this ILC session last night. Some people here are reading way too much into what Gerry Sussman said. I don't think Lispers should castigate Gerry for MIT's decision to use Python in its new introductory EECS curriculum, nor should Pythonistas praise him for giving in to "practical" concerns.

It's not evident from Andy Wingo's blog post, but just to clarify, Gerry didn't choose Python over Scheme: a committee did. Gerry said that when they had their revelation about the state of computer programming in 1997, he and Hal Abelson decided to stop teaching 6.001, and they eventually stepped away from the course altogether. (I believe Gerry returned to teach the class for the final term it was offered, but he didn't get into that last night; I could be wrong.)

Gerry went on to say that 6.001 essentially languished until sometime around 2007, when a committee got together and redesigned the introductory EECS curriculum to be what is now 6.0[012]. He didn't say whether he was on that committee, but I got the impression he was not. He appeared to be speculating when he offered the reason why Python was chosen. It was an offhand remark, something to the effect of (and I'm paraphrasing here), "And why Python? I suppose there were already good libraries for controlling these robots."

It sounded like Gerry explicitly chose not to fight any battles at all on this issue, and simply walked away resigned to the fact that computer programming, and engineering in general, are much different now than they were when he, Hal and Julie wrote SICP.

Two more things:

* I don't recall Gerry using the word "practical" even once while reciting his short anecdote.

* Gerry continues to teach with Scheme, not Python, in 6.945, his advanced topics course.

It sounds like a healthy attitude. It's not like Scheme and Python are radically different languages.

Yeah, sure, I'm not saying it is in this case. Just that it's one of those words to watch for.

We're talking about the user interface to computational machines. There is no abstract "right" or "wrong" and no context-insensitive "practical" or "impractical."

He wanted to do a course about robots, and according to him, the interface to the robots was Python, which was otherwise an adequate language.

If you want to make a case to the contrary, it should be rooted in the specifics of the situation.

    (define (deriv f)
      (define (f-prime x)
        (/ (- (f (+ x dx)) (f x))
           dx))
      f-prime)
jeez, I wish it was easy to do that in Python. Python's main limitation, IMSHO, is lack of homoiconicity. Yes, I said it!

I hate to be the guy who's always saying the same thing, but that is easy in Python.

  dx = 0.00001

  def deriv(f):
      return lambda x: \
             (f(x + dx) - f(x))/dx

  In [11]: def cube(x):
     ....:     return x*x*x

  In [12]: deriv(cube)(5)
  Out[12]: 75.000149996640175
I'd also say that most high-school-educated people find infix easier to read than prefix, so isn't the prefix syntax a detriment to readability in this example (and any involving mid-sized arithmetic expressions)?

I love Scheme (a lot more than I do Python), but the only example that seems "impossible" to do is implementing a large subset of <language> in <language>.

You could obviously implement a Scheme-ish Lisp in Python, but there are a lot of advantages to the host and guest languages being similar (or so that Berkeley professor would have me believe) that you would miss out on.

(You would also need a scheme expression reader in python... which I'm just itching to write now ... )
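Such a reader really is only a few lines. A minimal sketch:

```python
def tokenize(src):
    # Pad parens with spaces so split() separates them from atoms.
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == '(':
        lst = []
        while tokens[0] != ')':
            lst.append(read(tokens))
        tokens.pop(0)  # consume the ')'
        return lst
    for cast in (int, float):  # numbers become Python numbers
        try:
            return cast(tok)
        except ValueError:
            pass
    return tok  # anything non-numeric is a symbol (a plain string here)

print(read(tokenize("(define (square x) (* x x))")))
# ['define', ['square', 'x'], ['*', 'x', 'x']]
```

(No strings, quoting, or error handling - just enough to turn s-expressions into nested Python lists.)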

the example is from sicp: http://tinyurl.com/cubk49

actually, you can do it pretty easily with lambda... but it's just not the same somehow. You can't (in the general case) use Python to generate Python, except at the text level.

I'm wondering if pressure on MIT to develop demonstrable practical skills in underclassmen has increased since (classic) 6.001 was introduced. I mean, if you're smart enough to get into MIT in the first place and you learn Scheme as a sophomore and CLU as a junior, then by the time you graduate and look for a job, you should be able to get up to speed pretty quickly in Java or C++ or whatever the flavor of the month is. But if you're looking for work in the summer between your sophomore and junior year, the employers don't want you to spend two weeks out of a two-month internship learning the syntax of an industry-standard language.

I've not found a college on the planet that graduates good programmers. It's something you have to learn from experience, and a college curriculum simply doesn't include enough large coding projects for people to become good. I regularly run across recommendation-system code written by Ivy League postdocs, and most of it's pretty bad, even if the ideas it implements are smart.

I think that realizing that universities can't effectively teach programming should be at the root of choosing which programming languages are used in computer science instruction. In college you get loaded up with the right concepts that will later facilitate you as a programmer, and I think languages chosen should be languages that are geared to that end.

Perhaps the most instructive thing I've seen for student programmers, and which pays to boot, is Google's Summer of Code. That throws students in with a bunch of brutally honest and usually seasoned hackers and makes them fend for themselves for a summer.

Another good argument for not mourning the passing of Scheme is that it isn't dead. You just used it when you posted your comment. HN is implemented in Arc which is implemented in Scheme.

I was a TA for 6.01 two years ago. 6.01 and 6.02 (Intro to EECS I and II) may have replaced 6.001 and 6.002, but their purposes are fundamentally different, as someone else mentioned earlier. They're now both introductions to a wide range of topics: 6.01 covers not just software engineering but also feedback and control, circuits, and probability and planning; 6.02 covers things like signal processing, digital logic, and networking. Good computer scientists nowadays need to (or at least should) know so much more than how to write good, clean, modular code. As the 6.01 FAQ says, they still teach software abstraction ideas, including functional programming, higher-order functions, and object-oriented programming. They chose Python mostly because Python is easily approachable. But most of 'modernizing' 6.001 isn't about moving from Scheme to Python; it's about learning how to modularize/abstract and model/analyze across a variety of different contexts (not just software, but linear systems, circuits, and AI).

Also, 6.01 actually assumes that you have some programming experience already. When I TAed, at least, we assumed working knowledge of basic Python from week 1 and told people to go read a Python tutorial/book if they didn't have such. If you don't know how to program at all, you're supposed to take 6.00 first, which is the real 'Introduction to Programming' course. Except that most people skip it because they already know how to program or think they can learn on their own (and because it's kind of seen as a remedial course, which is especially unfortunate for people who really don't know how to program already--6.01 is pretty harsh if you've never programmed before).

I'm a sophomore at MIT and took 6.01 last term.

I think a lot of people don't understand the goals of this course. 6.01 is not meant to be a crash-course in writing code. (6.005, known as "java deathlab", is for that.) Likewise, it's not designed to reveal the deep inner workings of computers. Instead, it's meant to be an introduction to how to /think/ with computers. It's conceptually more high-level, and often focuses on the "why" rather than the "what." We learn how to approach problems and look at the best ways to solve them using logic and code.

Python is visually clear, has great documentation, is well maintained, and has an active community of developers. I don't understand the outrage in switching. I didn't learn to drive in an obscure European sportscar; I learned in my mom's minivan. Python is like my mom's minivan, but with better documentation.

I don't think the caliber of CS grads coming out of MIT in the next few years will change because they know Python, although it'd be an interesting thing to look at.


Here's the course website, if you'd like to look closer: http://mit.edu/6.01/mercurial/spring09/www/index.html

"And why Python, then? Well, said Sussman, it probably just had a library already implemented for the robotics interface, that was all."

I can confirm this is true. The Python Robotics module (Pyro) inspired MIT's internal library, SoaR, which was developed at first by a highly motivated TA.

What does SoaR stand for? Snakes on a Robot, of course.

Pyro is great. We don't use it because I started our robotics code base in 2001, but if I were starting from scratch I'd use many pieces of it.

I believe the main reason why robotics has progressed so slowly in the last 30 years is that getting the damn hardware and firmware working together takes 90% of the available time and students barely get beyond that. So libraries that abstract away most of that and enable plug-in real-time control modules are going to give the field a huge kick in the pants.

Blog spam. It is a huge quote from this:


Pulling a meaningful, discretely discussable quote from another place isn't necessarily 'blog spam'.

Sussman's rationale for the change in 6.001's focus and language is more interesting to me than anything else in the 'international lisp conference -- day 2' post, and is hidden near the bottom of that post. So, the pull quote blog item adds finding/filtering value for me, which is the opposite of spam. Neither the blogger nor submitter deserves the implication they are a spammer.

Thanks - I was going to point out the same thing. The linked blog has no ads, so I don't think he's going to gain much from hits.

Tomato, potato.

I'm a PhD student at MIT and arrived to my office on Monday to find the International Lisp Conference had set up shop downstairs.

Let me just tell you.. I've never seen so many bearded people in one place!

I noticed this too. In one of the admissions tour groups passing by, I heard the guide comment that it was probably a Literature convention.

So I suppose beards either mean lit gurus or lisp hackers.

Are they still going to teach Scheme in any other course(s)? Why not teach both?

I had the privilege of taking an SICP-based course at the U(C). The brain-stretching is critical, isn't it? Yes, there's a lot of head-slamming that goes into modern development, but that's not really the same thing.

Python may not be a scheme, but it does have what they used to call functional programming, low-grade continuations with the generators, and some other stuff like that.

Python may not stuff its academic credentials in your face, but I think it has one of the better balances between what you can use it to teach, and what you must teach to use it. (Referencing the common and IMHO justified complaint that the simplest possible Java program requires rather a lot of stuff to be taught before you fully comprehend it.)

Teach Scheme later... or perhaps, fully distinguish between academic and practical by starting with Python to get you going then transitioning to an ML or Haskell later. (That approach has a lot of appeal to me personally, and has the benefit of the student coming out with at least one useful-in-the-industry language (Python) without compromising academic utility very much.)

Python has some issues that can really interfere with doing Scheme-style FP in it (no tail-call optimization, single-expression-only lambdas, closure/scope problems, general hostility from Guido), but IMHO it's a great general-purpose language, and well suited to teaching, particularly for introducing higher-order functions.
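A couple of the scoping issues mentioned above can be shown in a few lines (Python 3; these snippets are my own illustrations, not from the course):

    # Late binding: every closure sees the *final* value of the loop variable.
    fns = [lambda: i for i in range(3)]
    print([f() for f in fns])  # [2, 2, 2], not [0, 1, 2]

    # The usual workaround: bind the current value as a default argument.
    fns = [lambda i=i: i for i in range(3)]
    print([f() for f in fns])  # [0, 1, 2]

    # Rebinding an enclosing variable inside a closure needs Python 3's
    # `nonlocal`; without it, the assignment creates a new local variable.
    def make_counter():
        count = 0
        def tick():
            nonlocal count
            count += 1
            return count
        return tick

    c = make_counter()
    print(c(), c(), c())  # 1 2 3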

Lua has much more Scheme influence than Python does (while being otherwise pretty similar). Its design assumes proficiency with C, however, so, while very practical, it's probably not as good a language for people new to programming.

Yeah, that's why I called it "what they used to call functional programming". Functions are first-class values and closures are easy to get at. (Some people quibble with the exact implementation, but they are definitely there. Such people will be happier with Python 3.0, but still not entirely satisfied.) This is what "functional programming" used to mean.

Now it means having immutable values, tail recursion, easy lambdas, powerful type systems, and so on. There's a clear historical progression from past to present, but there have still been changes in what the term means over time, which means that different people mean different things by "functional programming" if they don't clarify.

Agreed. I originally said "lisp-style", but changed it to "scheme-style" for just that reason.

I didn't miss tail-call optimization much when I studied SICP using XLisp back in the 80s (with 640kb of memory) -- only a couple exercises blew the stack and made me add a while loop. Of course it matters more for real programming on real data, and I agree with your comment generally.

Over IAP I heard they did a bootcamp version of the Scheme course.

Hi, I am an MIT student in CS and I took 6.001 (the new course is called 6.01). When I asked the instructor why we were using a language that has almost no practical use, they told me that the reasons were:

- it teaches the concepts of the class well

- after it, one can learn any programming language very easily.

I believe both of these to be true. From my point of view the new 6.01 sacrifices depth for breadth. This is part of a much bigger change of the curriculum at MIT. That's why I hurried to take all the nice classes before they get destroyed.

P.S. I didn't know Python until 2 weeks ago, but I learned it as needed very fast - definitely, having a background in different programming languages helps a lot.

Then you can also take comfort in the fact that Python has enormous practical use.

After his explanation, I asked Sussman why, if that's the world we now live in, it was a bad choice. He responded: "I didn't say it was bad. I said it would make you sad."

Is it just me who is seeing a cowardly surrender to cultural decay here? From the renowned Sussman, no less.

The glorious MIT of the 1980s and prior appears to be dead. Erased, in fact, without a trace. Consider the following:


These are the latest releases from MIT press. Among them you will find nothing remotely like SICP, but plenty of postmodernist/related pablum.

"You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts." Is this not an atrocity, to be fought to the last bullet? Is this not an argument for declaring all existing software obsolete and starting from scratch? Why are we - even luminaries like Sussman - so willing to give up on the future without a fight?

As argued in some of PG's essays, in order for creativity to bear fruit it is not enough for a given individual to possess a creative mind. The culture has to be hospitable to creativity. And there are processes which come close to stamping out creativity without any overt or deliberate suppression involved. Specifically, a transition to a medium where one cannot get reliable, understandable "small parts" having predictable behavior.

Take note: http://xkcd.com/298/

Programmers have indeed created a universe where "science doesn't work." Learning where the permanent bugs and workarounds are inside a phonebook-length API teaches you nothing. It is anti-knowledge. Where your mind could have instead held something lastingly useful or truly beautiful, there is now garbage.

There is no longer a "what you see is all there is" Commodore 64 for kids to play with:


No one is working on, or sees the need for a replacement. This is just as well, considering as one would have to start from bare silicon - and avoid the use of design tools built on the decaying crud of the past. Only a raving lunatic would dare contemplate it...

Outside of programming: Linus Pauling developed an interest in chemistry by experimenting incessantly throughout his childhood. Today, he would have been imprisoned:


The new generation will have no Linus Pauling, unless the Third World - where one might still purchase a beaker without a license - supplies us with one. Likewise, we will see very few truly creative programmers. What creativity can there be when your medium is shit mixed with sawdust?

"'You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts.' Is this not an atrocity, to be fought to the last bullet?"

I think it's an acceptance of reality. There is a phase change when systems become too complex for a single human to ever understand them in their entirety. Dealing with systems at that level is a different beast than dealing with simpler systems where "what you see is all there is".

"What creativity can there be when your medium is shit mixed with sawdust?"

The same sort of creativity artists possess, who work with media that are idiosyncratic. It's a different mindset. I see the reddit-gen programmers talk about things on a completely meta-level. To them, MySql is the hardware. Is it a bad thing? Not necessarily. Some of them hopefully will dig down the stack and be the low-level heroes. But that should (and probably can) only be a small percentage.

Now I guess one could argue that those sorts of heroes are what MIT is supposed to produce, but as has been mentioned, this course is not just for CS students. So the real question is, can the CS heroes of tomorrow survive an introductory course in Python? Well, consider that they have probably been modding games since 10, hacking PHP at 12, realizing at 14 they need to learn a 'real' language (C#, Ruby, Python); by 15 I bet they've abstracted out the fact that languages are not all that important; they're just the tip of the iceberg of complexity.

So yes, I think they will survive.

>I think it's an acceptance of reality.

All of technological progress consists of refusing to roll over and "accept reality" - instead, attacking it and modifying it to suit our needs.

>There is a phase change when systems become too complex for a single human to ever understand them in their entirety.

This is how innovation dies. The disease is preventable:


> To them, MySql is the hardware. Is it a bad thing?

Yes! It is. A programmer's finite mental capacity for dealing with complexity is consumed by an ever-greater load of accidental complexity - inconsistencies in the medium learning about which teaches you nothing. They are holes in reality. The thing that made modern technology possible is that physics, chemistry, etc. do not have "bugs." A correctly set up experiment will yield the same results when done a week from now, in Australia, etc. And in the end, there are unifying principles governing it, which will explain everything that happened in a way that teaches you something permanent. A medium with "holes", like a buggy programming system, is a universe where "science does not work":


> Some of them hopefully will dig down the stack

Wading in sewage.

> languages are not all that important

This statement is true when the languages are all equally crippled - as they increasingly are, through their reliance on an ever-sinking foundation of 35-year-old rotting OS.

I believe I categorically disagree with every point you make. You are pissing in the wind if you think you can treat a system with 100M LOC the same way you wrote assembly on a 6502. They are different beasts, with totally different emergent properties. It's like claiming that you can understand the weather if you just stick to thinking about individual molecules.

[edit:] I believe the xkcd strip can be read two ways. Unknowable complexity equals magic (a riff on Arthur C Clarke). I strongly believe that exactly what's been wrong about software is our clinging to the idea that it is reducible and manageable. We don't think that way when we work in the physical world; we do rely on science, but it's empirical science, with statistically useful models at appropriate levels of abstraction. We don't rely on quantum mechanics to build bridges.

> you are pissing in the wind if you think you can treat a system with 100M LOC the same way you wrote assembly on a 6502

I would be, if I actually believed that it is possible given current technology. However, I believe that the problem is technological rather than fundamental, and that we can arrive at a solution. Let me give you an example of a problem that was once thought to be fundamental and inescapable yet ceased to be viewed as such. Before the advent of high-quality structural steel, no building (save a pyramid) could exceed five stories or so in height. Towards the end of the 19th century, metallurgy advanced, and the limit vanished. I believe that we are facing a similar situation with knowledge. The conventional wisdom is that the era of "encyclopedic intellects" (and now, human-comprehensible software) is gone forever due to the brain's limited capacity for handling complexity. Yet I believe that the computer could do for mental power what steam engines did for muscle power. I am not alone in this belief - here are some pointers to recommended reading, by notables such as Doug Engelbart:


Additionally: why do you assume that any system built to date necessarily must contain 100M LOC? Does this not in itself suggest that we are doing something wrong?

> clinging to the idea that it is reducible and manageable

All of technological civilization rests on the assumption that the universe is ultimately "reducible" to knowable facts and "manageable" in the sense of being amenable to manipulation (or at least, understanding). The alternative is the capricious and anthropomorphic universe imagined by primitive tribes, run according to the whim of the gods rather than physical law.

> We don't rely on quantum mechanics to build bridges

I must disagree here. QM is entirely relevant to the science of material strength, which is ultimately a branch of metallurgy - arguably a discipline of applied chemistry. In any legitimate science, there really are "turtles all the way down." And when there is a discontinuity in the hierarchy of understanding - say, the situation in physics with the need to reconcile general relativity and QM - it is viewed as a situation to be urgently dealt with, rather than a dismal fate to accept and live with forever.

A computer, unlike the planet's atmosphere, is a fully deterministic system. It is an epic surrender to stupidity if we start treating programming the way we treat weather prediction - as something that is not actually expected to reliably work in any reasonable sense of the word, or to obey any solid mathematical principles. It is a disaster, to be fought.

I think we're talking past each other, though there is still a fundamental disagreement in principle. Of course I believe that science 'works', in the sense that in theory, everything should be reducible to a lower level description. High-level source code reduces to machine code; weather reduces to quantum mechanics. However, there's a critical decision about what level you choose to work at when you attack a problem. While you may know that QM has something to do with why certain steel alloys are strong and light, when you build a bridge or a building you don't work at the QM level.

In the practical world of software development today, we do indeed have tens of millions of lines of code 'under' the application level most of the time. Even on Linux, where every one of those lines of code is theoretically accessible as source, the practical fact is nobody has the resources to read, comprehend, and manage all of that code during the lifetime of a typical project. Therefore, you are necessarily in a position where you have to deal with most of that codebase as black boxes, which have a statistical likelihood of doing what they are intended to do. You have to program defensively, hoping for the best but assuming the worst. You have to rely on your pattern recognition skills and your intuition to guide you. You have to treat the chaotic complexity you sit on top of like a surfer riding a wave. You cannot afford to take a fully reductionist approach. It's just not possible anymore, full stop.

The exception to this would be a small, embedded system with a minimal operating core and limited memory. But wait -- that system today is almost sure to be networked up to the bigger, chaotic systems that populate the metaverse. So again, at the level of communications, you have to treat the components you communicate with as black boxes.

If you disagree, give me a feasible scenario in which you could in fact take control of the situation in a reductionist fashion. I propose that any such scenario is a toy problem, that will not scale to real-world deployment.

The cat is already out of the bag.

I don't disagree with the necessity for fundamental elements ("black boxes") on top of which to build. My only complaint is with the universally shoddy state of these black boxes. Specifically:


The modern computer is the first time mankind has built a system which is actually capable of perfection. You are more likely to have a spontaneous heart attack and die at any given moment than the CPU is to perform a multiplication incorrectly. All of the "hairiness" which your computer's behavior exhibits is purely a result of programmer stupidity. It is theoretically possible for the entire stack to work exactly as expected, and for each element to seem as inevitable and obvious-in-retrospect as Maxwell's equations.

It is possible to build a system where the black boxes aren't fully black and can be forced to state their actual function on a whim. It is possible to expose the entire state of the machine to programmer manipulation and comprehension in real time:



>The cat is already out of the bag.

The Lisp Machines existed, and were capable of everything modern, "hairy" systems are capable of, plus much more. You cannot un-create them. They are proof that the entire software-engineering universe is currently "smoking crack" - and I sincerely wish that it would stop doing so.

We don't need Lisp to know the state of the machine -- in the Linux case, we have source code. We can _theoretically_ debug everyone else's code and know exactly what is going on.

"It is theoretically possible for the entire stack to work exactly as expected, and for each element to seem as inevitable and obvious-in-retrospect as Maxwell's equations."

Yes, and it is theoretically possible according to the laws of physics for broken glasses to spontaneously reassemble. But they don't.

This is basically an issue of entropy.

Have you ever actually used a Lisp Machine? There is an emulator available (ask around.) All of these things were actually possible there - not merely theoretically.

Maybe I'm not making myself clear. I realize you can in principle debug everything, know the state of the machine completely, etc. What I am trying to point out is that it is too much information to handle at that level of detail. It doesn't matter if it's written in Lisp or Swahili. And the fact remains that even if you convinced 90% of the next generation of programmers to use Lisp, you'd still have 50 years of legacy code lying around in other languages. These are practical facts that can't be washed away with a new (or old) programming paradigm.

MIT Press has about as much to do with the MIT undergraduate curriculum as the MIT Real Estate Office, so I don't think citing their new-books list proves anything.

I read SICP when I was 14; this was a couple of years after I came across some pg/Lisp stuff. It would be absurd if MIT CS students remained ignorant of 6.001's material given their institute's really hardcore hacker culture. Hence, I fail to see what this entire fuss is all about.

I wonder, if the switch had been made now, whether they would have switched to Clojure instead of Python. It's a lot closer to Scheme, and its connection to the JVM gives it a lot of practical power. Does anyone have any thoughts on this?

People often say that Java has great libraries, but is it true? AWT is crufty and ugly. Maybe there are impressive XML parsers (Xerces is 188,000 lines of Java!), but I don't want those. Numerical libraries seem bad. So what is the library that might convince me to use Java for something?

No, they aren't very good. I think the problem is cultural. Programming Java gets you comfortable with extreme verbosity and tons of structure so its libraries end up like this: http://ws.apache.org/xmlrpc/apidocs/org/apache/xmlrpc/server... (In extreme cases.)

Lucene was mentioned earlier - it, at least, is very good.

I have the same impression about the numerical computation and machine learning libs too.

There's a lot of "enterprise" crap in the Java libraries: extreme verbosity is the most notable problem. Object-orientation is also just the wrong metaphor for many problems. Finally, camelCaps are uglyAsFuckAndNotVeryReadable.

However, the JVM libraries are well-supported and generally can be trusted to work. For a lot of the wheels I'd rather not reimplement, they're quite useful. I don't think that the Clojure of 2020 will be as reliant on Java, but for now, the JVM libraries are very useful.

"The" library is which ever squirrelly database, file format, app, etc. your next project needs to talk to which doesn't have bindings for your favorite language. You compromise on Java to avoid writing that by hand and use a JVM language to avoid the pain of actually using Java.

Such as??? Can anyone point to a format I might want to use that has a Java library but no C++ or Python library?

A few years ago, I took over maintenance of a Windows program that depended upon a Java component wrapped in COM. With Sun and Microsoft at war, that seemed a bit risky to me.

I asked: "why is this Java component in the Windows client app?" The answer was that the server was also implemented in Java and the SOAP libraries were compatible, whereas Microsoft's and Java's were not (at that time).

I nearly choked: they chose a "standard" like SOAP and then re-standardized on a particular library (client and server components of it) "for compatibility."

So to bring this digression back to relevance, I'll say this: if you already have Java code around, it will be easiest to talk to it from Java. And since Java is the new COBOL, there is a bunch of Java floating around. (but if you need a Java client to talk to a Java server using a "standard" protocol then something is seriously awry somewhere)


I'm not saying they are all high quality, or relevant to your task, or that there are no C++ or Python equivalents - just that there is a high quantity of Java projects. Why not have a quick search? Maybe there is a project there that will convince you.

AWT is about the most prejudiced example imaginable - it was rushed out, disowned by its authors, and quickly replaced.

I think one of the key aspects of the Java ecosystem is the idea of free reference-implementation libraries, where you can pay ridiculous amounts of money for corporate versions with better implementations. For example, the BEA Systems implementation of Java itself and related tools seems pretty good, well-documented and so on.

I find Java itself and its OO style pretty horrible for writing prototype code that can be changed quickly and arbitrarily - so, probably, any revolutionary ideas that require that flexibility of interfaces will be hard to iterate in Java, and therefore we probably won't find many such projects written in Java.

"AWT is about the most prejudiced example imaginable - it was rushed out, disowned by its authors, and quickly replaced."

Ahh yes -- replaced with Swing. Do I need to spell it out?

Please elaborate.

swing sucks almost as hard as AWT.

I don't know why people even bother with XML when it is so much easier to write S-Expression generators and parsers.

I actually believe that s-expressions are going to come back. Clojure, Arc, R6RS - these are harbingers of the return, I think.

JSON is much more popular than s-expressions, and not just accidentally. Explicit, compact syntax for both lists and key-value pairs is The Right Thing in a language like this.

    {
        "x": [
            {
                "y": "a",
                "z": 23,
                "q": [54, 32, 45]
            }
        ],
        "r": 43
    }

JSON is a subset of s-expressions. (Yes, s-expressions provide explicit and compact representations for mappings. They also handle other kinds of objects.)

The other difference is that there are JSON parsers and generators for more languages and they're not programmable.

Don't tell me. Show me. Please translate the example into s-expressions.

I'm confused why you don't see this as obvious. (There are several possible ways to represent mappings. I picked one that was close to something that you're happy with.)

{ ("x" ( { ("y" "a") ("z" 23) ("q" ( 54 32 45 )) } )) ("r" 43) }
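That translation can be generated mechanically. Here is a minimal Python sketch (my own illustration; the function name to_sexp and the exact bracket conventions are assumptions, and it relies on Python 3.7+ dict ordering) that renders nested dicts and lists in roughly the style above:

    def to_sexp(value):
        """Render nested dicts/lists/atoms as an s-expression string:
        maps become { (key value) ... }, sequences become ( ... )."""
        if isinstance(value, dict):
            pairs = " ".join("(%s %s)" % (to_sexp(k), to_sexp(v))
                             for k, v in value.items())
            return "{ %s }" % pairs
        if isinstance(value, (list, tuple)):
            return "( %s )" % " ".join(to_sexp(v) for v in value)
        if isinstance(value, str):
            return '"%s"' % value
        return str(value)

    data = {"x": [{"y": "a", "z": 23, "q": [54, 32, 45]}], "r": 43}
    print(to_sexp(data))
    # { ("x" ( { ("y" "a") ("z" 23) ("q" ( 54 32 45 )) } )) ("r" 43) }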

Clojure uses {} for maps, #{} for sets, and [] for vectors, and vectors are used in function definitions, let statements, etc. This slight decrease in regularity makes a lot of functionality more visible. For example:


(defun f (x y) (let ((z (gethash x :a))) (+ z y)))


(defn f [x y] (let [z (get x :a)] (+ z y)))

Having syntax for maps is a huge win. Like ML's pattern matching, it's one of those things that changes your coding style entirely for the better, in a way that you wouldn't predict just by looking at the feature.

The only thing I miss from CL when I work in Clojure is keyword arguments to functions. There, the Clojure way is a bit worse:

(defun f (x &key y z) (list x y z)) (f 2 :z 3) => (2 nil 3)

(defn f [x {y :y z :z}] (list x y z)) (f 2 {:z 3}) => (2 nil 3)
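For comparison, Python (the language the course switched to) gets roughly the CL &key behavior directly with keyword arguments - a minimal sketch, my own illustration rather than from the thread:

    # Hypothetical function mirroring the examples above: y and z are
    # optional, callable by name, and default to None when omitted.
    def f(x, y=None, z=None):
        return [x, y, z]

    print(f(2, z=3))  # [2, None, 3]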

> (defun f (x y) (let ((z (gethash x :a))) (+ z y)))


> (defn f [x y] (let [z (get x :a)] (+ z y)))

The CL defn of f is a function that is called with two arguments. That function is called like "(f a b)" If the Clojure definition is comparable, that is, the call looks like "(f a b)", why are []s used in the definition and the let?

> This slight decrease in regularity makes a lot of functionality more visible.

What functionality? x,y aren't part of a vector and neither is z. (x,y may come from a vector in the caller, but I'll assume that the defn works if they don't, so that shouldn't matter.)

Why can't we keep the indents and drop the parens?

It would make data generation more difficult. When you emit any given data, you'd have to know its level of nesting to get the correct number of indents. Using begin and end tokens (whatever they are) makes that much easier.

(I actually prefer Python's style of using indentation to indicate blocks, but I recognize generating such code is harder than code with tokens.)

You don't have to know the level of nesting; you can add indents afterward just as easily as you can wrap a subexpression in parentheses afterward.

def indent(code): return code.replace('\n', '\n ')

In my current project, I've implemented a source-to-source compiler. In the places where I emit C code, I usually don't have a handle to the scopes above me. I could get one, but it would take more work.

The code I generate has no indents and no newlines. Not having to keep track of these makes life simpler. To make my generated code more legible, I just run indent on it.

I've written C code emitters too. FWIW, I've found nicely-indented output templates help to keep the source code of the emitter clear (and with no need to get at the enclosing scopes). But the question you raised was whether indentation-only was harder to generate, and I'd say no, because code.replace('\n', '\n ') is as simple as '{' + code + '}', if slightly slower.
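For concreteness, a minimal sketch of the two emitter styles being compared (hypothetical helper names, assuming a 4-space indent):

    def brace_block(code):
        # Token-delimited: nesting is explicit, no indentation needed.
        return "{" + code + "}"

    def indent_block(code):
        # Indentation-delimited: indent an already-generated fragment
        # after the fact, without knowing its nesting depth.
        return "    " + code.replace("\n", "\n    ")

    body = 'x = 1\nprint(x)'
    print(brace_block(body))
    print(indent_block(body))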

And my point was that with the transformations I generate, I don't just say '{' + code + '}'. I'm applying transformations to existing code, not generating all of my own code from scratch.

Again, I see no point in making efforts to generate clean looking code when utilities like indent exist.

{} means object, [] means array, so you would not be able to just drop them, since they mean two different things.

for associative arrays foo: bar, for non-associative foo, bar,

surely this is unambiguous?

      y: "a",
      z: "23",
      q: 54,
    r: 43


1. Hmm, well, if you try it you'll see that, at least in the area of programming languages, after spending some time you'll find that you've reinvented s-expressions.

2. It can be very interesting to see, for example, this (the sum of every vector element) in JSON/YAML:

    (defn sum-vec [vec]
      (let [vec-len (count vec)]
        (loop [idx 0 acc 0]
          (if (< idx vec-len)
            (recur (inc idx) (+ acc (vec idx)))
            acc))))

I don't want that in my json/yaml.

If I have Javascript then I already have mobile code.

If I'm using json in a situation where I don't have a Javascript interpreter, then I probably don't have a Lisp interpreter around either.

A decent JSON parser is also only a day's worth of work or less in just about any language.
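As a rough illustration of that claim, here is a toy recursive-descent parser for a simplified JSON subset (integers only, no string escapes; the names and token handling are my own, and production code should of course use the standard json module):

    import re

    # One token per match: punctuation, a quoted string, an integer,
    # or a literal; leading whitespace is skipped by the regex.
    TOKEN = re.compile(r'\s*([{}\[\],:]|"[^"]*"|-?\d+|true|false|null)')

    def parse(text):
        value, rest = parse_value(TOKEN.findall(text))
        assert not rest, "trailing tokens: %r" % rest
        return value

    def parse_value(toks):
        tok, toks = toks[0], toks[1:]
        if tok == '{':
            obj = {}
            while toks[0] != '}':
                key, toks = parse_value(toks)
                assert toks[0] == ':'
                obj[key], toks = parse_value(toks[1:])
                if toks[0] == ',':
                    toks = toks[1:]
            return obj, toks[1:]
        if tok == '[':
            arr = []
            while toks[0] != ']':
                val, toks = parse_value(toks)
                arr.append(val)
                if toks[0] == ',':
                    toks = toks[1:]
            return arr, toks[1:]
        if tok.startswith('"'):
            return tok[1:-1], toks
        if tok in ('true', 'false', 'null'):
            return {'true': True, 'false': False, 'null': None}[tok], toks
        return int(tok), toks

    print(parse('{"x": [{"y": "a", "z": 23, "q": [54, 32, 45]}], "r": 43}'))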

My experience is that it takes as little initial development and maintenance as a custom flat-file format or a really robust CSV parser - and possibly less.

Java's JIT - this part is really interesting.


Which is a clone of a Common Lisp search engine ;-)

This took me quite a bit of googling to "confirm" so I thought I'd save the next guy some trouble.

Lucene is sort of Doug Cutting's Java version of Text Database (TDB), which he and Jan Pedersen developed at Xerox PARC, and which, to complete the circle, was written in Common Lisp (see "An Object-Oriented Architecture for Text Retrieval").


Please put newlines in the above, or don't put the two spaces in.

Sorry, it seems I can't edit it now. Is there a timeout?

Yes, but I think I also triggered it by responding, which means only an admin can change it.

Clojure is still something of a small hype. Python is an 18-year-old programming language.

Clojure is a Lisp. Lisp is 50 years old.

Don't you think that's a bit disingenuous? Clojure is still changing with a fair amount of frequency. As far as I'm aware, they have not even hit a 1.0 release yet (releases are currently referred to by their release date, which tells me it's still in the pre-final stages).

It's a great language, but there is nothing to gain here by claiming it somehow is mature because Lisp is 50.

I suppose my point was that the principles of Lisp have lasted 50 years. Python (which is great) is the child of many, many Lisp-y ideas, but I don't think people will be using a variant of Python 32 years from now. But I would definitely bet there'll be many cool Lisp variants 50+ years from now, some of which may have taken some really good ideas from Clojure. Let's make it happen ;)

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact