The fact is, it's stalled (https://www.google.com/buzz/bcannon/bZDN1jNZ3uC/Is-this-fina...) - http://www.python.org/dev/peps/pep-3146/ was accepted. This means that a branch was made with the intent of merging US (Unladen Swallow) into Py3k. Unfortunately, this proved to be a greater task than originally planned. Stalled != Dead.
To quote Collin:
"Unladen Swallow is stalled because merging the code into the py3k-jit branch plain sucks: lots of new compatibility work to be done, lots of corners we had to un-cut, lots of drudge and suffering. It doesn't help that Google is full of far more interesting and influential projects."
So add in the fact that the resources Google had "assigned" to the project got moved/reassigned onto something else, and what you have is a very stalled project. Collin has stated that they hope(d) that other python-core devs would/will step up to help, but everyone is pretty tapped out as it is.
I'm as disappointed as anyone that it's stalled - given its compatibility with C extensions, its potential speedups and its future potential, it has been one of the projects I've had the most hope for.
That said, PyPy is also coming along; when they hit 2.7 compatibility and they have drop-in replacement capability, the ecosystem will be very interesting. Competition in the interpreter space is good, and I feel that Unladen's aggressiveness helped spur/move PyPy along.
That said, Unladen (in my mind) remains the CPython interpreter's best way of moving into the future. It still has the most promise for that code base.
I hoped that this thread would raise some discussion about the status of US, as, after a lot of publicity at the project's launch, there have not been many official statements from the team. I have followed the project since the beginning and watched the exponential decay of traffic on the mailing list and in commits as the promised performance improvements seemed less and less achievable (though it has to be said that the US team contributed a lot of useful code to LLVM, like gdb debugging of JITted code).
So my question is: why is US relevant since it hasn't achieved any significant performance boost? Why should we believe in the "potential speedups and future potential" if there is no evidence that supports them?
I myself had done some experiments with implementing a Python JIT with LLVM some time before US was launched and had the same results: a 10-15% improvement on microbenchmarks at the expense of huge increases in memory use and compilation time (see https://groups.google.com/d/topic/unladen-swallow/bqf9TzWHht... ).
It seems that the bottleneck is not in the interpreter loop, but in the CPython runtime API. I am afraid that, if the interpreter has to stick to CPython source compatibility, it cannot get rid of the overhead. But I would be very happy to be proven wrong.
Your indulgence in catchiness is annoying, as it will be echoed (and it already has been) all over the place with breathless drama "Look! It's Dead!" when, despite languishing, the code is still there, still functional, but starved of resources. The editorializing is frustrating, misleading and overall, I think harmful.
As for why it's still relevant: easy, it's the only currently visible way of moving the CPython interpreter forward from an evolution standpoint. US does show a series of speedups, ones that are useful for CPython. To quote:
"A JIT compiler is an extremely flexible tool, and we have by no means exhausted its full potential. Unladen Swallow maintains a list of yet-to-be-implemented performance optimizations that the team has not yet had time to fully implement. Examples:" - http://www.python.org/dev/peps/pep-3146/
I would recommend: if you want to move anything forward, or help, rather than make posts like this, jump in on the http://svn.python.org/view/python/branches/py3k-jit/
Yes; part of the reason the current interpreter is limited is due to backwards compatibility, but I'd rather have a limited JIT with some performance gains than none at all.
The fundamental issue is that Jeffrey, Collin and Reid - the founding three - are "off in the woods", and they're the only ones who understand the code well enough to move it forward right now. PyPy's success(es) are orthogonal to the evolution of the CPython interpreter code base.
I have nothing against calling it "stalled", but IMHO that is just a euphemism for "dead".
Also, I don't understand why the people at Google decided to stop investing in it: the cost of 3 full-time engineers is nothing compared to the millions they would save with a more efficient Python implementation, given their huge Python infrastructure (App Engine, for example).
As for Google's intentions, I've learned (as it has been explained to me time and time again) never to attribute smaller decisions to a larger strategy or "plan" of Google's. It's probably not that they decided to stop investing in it, more that "something more interesting came up".
I do wish that we could have funded work on core/the interpreter. That would be nice.
Left, right and center you're bombarded with "use this", "use that", but most of these projects end up not having much staying power, and if you depend on them for the running of your business then sooner or later you may end up having to support them yourself.
When I choose an open source component, these are the things I like to see in the ecosystem around it before deciding to jump in (obviously not every project selected will have all of these, but more helps):
- multiple maintainers, active response to patches being sent in
- broad support, using open standards
- preferably a drop-in replacement available
- lively community of people willing to help each other out
- two years track record with a good roadmap for future support
- people working on it because they like it, not because they get paid to (this one is probably very counter intuitive when you're looking at this problem from a business point of view, but it is my opinion that people working on something because they like it are automatically in for the long haul).
Every time I've been seduced by new, hot and sexy technology I've come to regret it sooner or later. Old, boring and solid seems to win the race every time.
Running a small shop means that the choice of which tools to invest in is one of the most important decisions you can make; the wrong decision could cost you a lot of time and/or cripple your company.
I hope that everybody using unladen swallow is able to easily transition to a replacement.
I was actually quite shocked when Unladen Swallow was somehow almost accepted for Python 3.2 at some point, given how bad the performance results were: if you add C++ plus LLVM and all the new infrastructure/complexity they bring, you had better have good numbers, not something like a 10% speed improvement, which was the order of magnitude last time I checked.
It did bring useful code to Python, though: benchmark code, but also proof that speeding up Python while keeping CPython compatibility (vs. just Python compatibility) is actually pretty hard (which was expected, but they confirmed the suspicion).
Memory is getting cheaper. I was looking forward to the GIL removal (at first), but I'll be happy with any multi-threading improvement.
OTOH, when I really needed it, the multiprocessing module did the job nicely. And it also prevents some very hard to catch bugs.
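To illustrate the multiprocessing approach mentioned above, here is a minimal sketch; the square() function and the inputs are made up for illustration. Because each worker is a separate process with its own interpreter (and its own GIL), CPU-bound work can actually run in parallel:

```python
# Minimal sketch: multiprocessing.Pool spreads CPU-bound work across
# processes, so each worker has an independent interpreter and GIL.
# square() and the input range are hypothetical examples.
from multiprocessing import Pool

def square(n):
    # CPU-bound work runs in a worker process
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The `if __name__ == "__main__"` guard matters on platforms that spawn (rather than fork) worker processes, since the module is re-imported in each worker.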
And, besides that, Unladen Swallow would most probably take 18 or 27 MB per process in your scenario; 10x more memory seems weird.
The "merge plan" section states that, before that happens, all the JIT-related patches will be confined to the py3k-jit branch.
"I hope that everybody using unladen swallow is able to easily transition to a replacement."
It seems ironic that this is often cited as a reason not to use an open source project. Isn't the selling point supposed to be that you can support it if you really need to?
That said, the pragmatic part of me would rather work on my problems instead of my tools.
Or, seeing as this is Python we're talking about: It's not dead, it's just resting.
And doing what US already did is about equivalent to eating a cow.
Yes, but the feature is in an alpha state and is available only on trunk (not in the 1.2 release). Note, however, that we'll only ever support well-behaved CPython extensions. Please ask the PyPy developers on IRC or the mailing list whether your favorite module works, and how you can help make it happen in case it does not.
We fully support ctypes-based extensions, however.
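For reference, a ctypes-based binding (the kind described above as fully supported) needs no compiled extension module at all; it loads a shared library and declares the signatures at runtime. A minimal sketch, calling the C library's strlen() (library name resolution is platform-dependent; the CDLL(None) fallback works on POSIX):

```python
# Minimal sketch of a ctypes-based extension: calling the C library's
# strlen() directly from Python, with no compiled CPython extension.
import ctypes
import ctypes.util

# find_library resolves the platform-specific name of the C runtime;
# CDLL(None) falls back to the current process's symbols on POSIX.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"unladen swallow"))  # 15
```

Declaring argtypes/restype up front keeps ctypes from guessing the calling convention, which is what makes such extensions "well-behaved" from the interpreter's point of view.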
It's worth noting that the GIL is still there in PyPy.
Yes, assuming that you hear about the project at all. Overannouncement is lying, but it does get you more publicity.
An example of this would be Mike Pall and his LuaJIT project.
"Jeffrey and I have been pulled on to other projects of higher importance to Google. Unfortunately, no-one from the Python open-source community has been interested in picking up the merger work, and since none of the original team is still full-time on the project, it's moving very slowly. Finishing up the merger into the py3k-jit branch is a high priority for me this quarter, but what happens then is an open question."
I wonder what the future holds for pypy.