
Stackless Python for Python 3.0 and Python 3.0.1, now available. - arem
http://zope.stackless.com/Members/rmtew/News%20Archive/3.0
======
ggruschow
That's cool and all, but cores are practically growing on trees now. Is there
something out there that'll let me efficiently do tons of active objects
(coroutines with data that can make sync or async calls) but also scale out
across processors and machines (hi EC2, I love you)? I can do without shared
data.

As it stands now, unless I had a big need for bazillions of contexts, I
wouldn't touch stackless because I've already got to use other mechanisms to
get work running on a lot of different cores / machines at once anyway. Since
I'm already doing that, it's nicer to stick to the one mechanism and design
things to work within it (keep contexts <1,000).

~~~
moe
I hear you, I'd love to have the same thing.

Still, the big deal about co-routines for me is not mainly raw performance (or
core utilization) but the programming model. I've literally cut the LOC count
in half for some of our projects just by moving to co-routines; that's what I
call a productivity boost.

Also, I find myself _enjoying_ co-routines much more than the tangled mess of
callbacks and data-locality issues in the multithreaded world.
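To make the contrast concrete, here's a minimal sketch of the same two-step pipeline written both ways, using plain Python generators rather than the Stackless API (the `fetch` names and the trampoline are illustrative, not from any real library):

```python
# Callback style: control flow is inverted and state lives in nested closures.
def fetch_cb(url, on_done):
    on_done(f"data from {url}")  # stand-in for an async I/O completion

def run_callbacks(results):
    def step1(data):
        def step2(more):
            results.append((data, more))
        fetch_cb("b", step2)
    fetch_cb("a", step1)

# Coroutine style: the same logic reads top to bottom.
def fetch(url):
    yield  # stand-in for a scheduler suspension point
    return f"data from {url}"

def run_coroutine(results):
    data = yield from fetch("a")
    more = yield from fetch("b")
    results.append((data, more))

def drive(gen):
    # Minimal trampoline: resume the coroutine until it finishes.
    try:
        while True:
            next(gen)
    except StopIteration:
        pass
```

Both produce the same result, but the coroutine version keeps intermediate state in ordinary local variables instead of a pyramid of closures, which is where much of the LOC saving tends to come from.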

~~~
MaysonL
I still remember with fondness the first coroutines I wrote, almost 35 years
ago: it seemed like a really cool hack to connect a producer of bytes to a
consumer of them (both already written, just 6 lines of assembler plus a
little bit of init code).
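A rough modern analogue of that hack, assuming Python generators in place of the original assembler: the producer yields bytes, the consumer receives them, and a few lines of "init code" wire the two together.

```python
def producer(data):
    # Yield bytes one at a time, suspending between each.
    for b in data:
        yield b

def consumer(out):
    # Receive bytes until the producer is exhausted.
    while True:
        b = yield
        out.append(b)

def connect(prod, cons):
    # The "init code": prime the consumer, then pump bytes across.
    next(cons)
    for b in prod:
        cons.send(b)

out = []
connect(producer(b"hello"), consumer(out))
```

The two ends are written independently, each as straight-line code, and neither knows it is being suspended and resumed by the other.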

