
PyPy and Psyco - iamelgringo
http://www.voidspace.org.uk/python/weblog/arch_d7_2009_03_14.shtml#e1063
======
paulbaumgart
Does anyone know why JIT-ing Python isn't given more "institutional" support
(by which I suppose I mean mostly Google, since they employ Van Rossum after
all)? It seems like the logical next step to make Python more generally
useful.

As I see it, speed is the last big advantage Java has over Python (and I do
realize that a dynamically typed language makes achieving efficiency harder).

Apparently PyPy has some financial support from Google
(<http://en.wikipedia.org/wiki/Pypy#Project_status>), but it really seems like
basically any company that uses Python would gain so much from robust JIT
compilation in the interpreter that I'm surprised there's not a more concerted
effort to make it happen.

~~~
moe
Maybe because raw execution speed matters so little in most applications that
they'd rather focus on lower-hanging fruit such as improvements to the core
language or standard library. - But that's just a wild guess.

~~~
ankhmoop
Raw execution speed does matter, but, like any incremental cost, it is often
ignored.

Off-the-cuff benchmarks I ran on our Scala-based webapp on a four-core desktop
system demonstrated 5,000 req/sec @ 2ms/request after the JIT warmed up.

5,000+ requests/second -- scaling up with available CPUs -- means not having
to worry about performance, complicated caching, and the slew of other things
that developers do to eke performance out of frameworks based on poor
interpreter implementations.

~~~
moe
It's not really an incremental cost but rather a constant overhead.

The idea is that if you're worried about performance and scaling then you'll
have to find a way to distribute your load over multiple physical machines
anyways.

At that point it doesn't matter so much anymore whether one of your nodes
handles 5000 reqs/sec or 2500 reqs/sec. Hardware is cheap.

~~~
ankhmoop
"Hardware is cheap" is a false dichotomy. The implication is that hardware is
cheap, but development is not, and performant architectures require expensive
development.

In reality, neither hardware nor development is cheap, and performance does
not inherently necessitate more costly development. The mantra and the
dichotomy are false.

------
almost
Cool to hear that things are moving on. I'm expecting great things from PyPy
:)

Psyco is pretty cool right now, even without generator support (though that
would be good).

I'm doing stuff with a neural net built in Python/Numpy right now and Psyco
makes it twice as fast. Given that I'm waiting for it to run each time I try
something, this sort of speedup is quite nice. It's now at the point where
pretty much all the execution time is spent in Numpy (which is largely
implemented in C), which is cool.
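For anyone who hasn't tried it, enabling Psyco is usually a one-liner. Here's a minimal sketch (the `dot` function is just a made-up example of the kind of tight pure-Python loop Psyco helps with); the import is guarded so the same code still runs, just unaccelerated, on interpreters where Psyco isn't available (it only works on 32-bit CPython 2.x):

```python
def dot(xs, ys):
    """Pure-Python dot product -- the kind of tight numeric loop Psyco speeds up."""
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

try:
    import psyco          # only available on 32-bit CPython 2.x
    psyco.bind(dot)       # JIT-compile just this function...
    # psyco.full()        # ...or everything, at the cost of more memory
except ImportError:
    pass                  # no Psyco: fall back to the plain interpreter

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

`psyco.bind` is the cheaper option when only a few functions are hot; `psyco.full` compiles everything it can and uses more memory.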

------
danbmil99
The main problem with psyco is lack of 64-bit support. While a new psyco would
be nice, what really would make sense is a boost-like tophat on rpython so you
can drop into it and compile your performance-critical routines. Today people
do that with C++ and boost. Cython/pyrex don't cut it because they don't allow
you to create C-style strongly typed objects with Python-accessible methods.

Maybe you can do that today, but good luck finding out by reading the pypy
docs.

------
pooryorick
I recently wrote some image-manipulation routines in Python which ran 71 times
faster with Psyco. Without Psyco, this kind of computation isn't really
feasible in pure Python, so I'm also a little surprised Psyco hasn't had more
influence on C Python. But perhaps it's just a matter of time before PyPy is
bequeathed Psyco's optimizations.
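The per-pixel loops in image code are exactly Psyco's sweet spot. A hypothetical sketch of the pattern (the grayscale routine and nested-list image format here are illustrative, not the poster's actual code), again with the Psyco import guarded so it degrades gracefully where Psyco isn't supported:

```python
def to_grayscale(pixels):
    """Convert an RGB image (list of rows of (r, g, b) tuples) to luminance.

    Uses the standard ITU-R 601 luma weights. In pure CPython the nested
    per-pixel loop is slow; Psyco can specialize it to near-C speed.
    """
    out = []
    for row in pixels:
        out.append([int(0.299 * r + 0.587 * g + 0.114 * b)
                    for r, g, b in row])
    return out

try:
    import psyco          # only available on 32-bit CPython 2.x
    psyco.bind(to_grayscale)
except ImportError:
    pass                  # run unaccelerated elsewhere

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (128, 128, 128)]]
print(to_grayscale(img))
```

The point is that nothing about the routine changes: the same pure-Python loop is either interpreted or specialized, which is why a 71x difference is plausible for pixel-level work.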

