
Die Threads: Python async code [video] - luu
https://www.youtube.com/watch?v=U66KuyD3T0M
======
rerx
I missed some discussion of how this is built on top of async/await.

Edit: Here is a recording of the talk where you can both see the slides and
the speaker:
[https://www.youtube.com/watch?v=xOyJiN3yGfU](https://www.youtube.com/watch?v=xOyJiN3yGfU)

~~~
erdewit
It's built on top of threading really (imported via curio).

------
dingdingdang
Looks like a super nice way of doing threads in Python, here's the repo:
[https://github.com/dabeaz/thredo](https://github.com/dabeaz/thredo)

Sad to see that it has not received any love for the last 4 months. Is there
another "preferred" thread lib for Python?

~~~
jMyles
twisted.internet.threads is always, always my go-to personally. Twisted is an
amazing project and has shone especially brightly in the past 3 years or so.

~~~
richardwhiuk
Twisted doesn't support Python 3, which is problematic for some projects. I've
also always found the code it produces difficult to test.

~~~
jMyles
> Twisted doesn't support Python 3

No no, it now supports Python 3 _wonderfully_. It took a very long time,
because a lot of what Twisted does changed more dramatically from 2 to 3 than
it did for most projects. Consider, for example, the implications of PEP-3333,
which required that headers be in the native str type in Python 2, but also
the native str type (ie, unicode) in Python 3. This was a tough problem. [0]

However, those days are long behind us. Not only does Twisted support Python
3, but it can interchange its own flow control (ie, the reactor) with the
asyncio event loop, and also convert (and ensure) that Futures and Deferreds
fire in a way that is cross-compatible. You can also use the inlineCallbacks
decorator to get all of the new coroutine syntactic sugar.

Here's a project I'm working on right now that is Python 3.6+ with Twisted:
[https://github.com/nucypher/nucypher/](https://github.com/nucypher/nucypher/)

> I've also always found the code it produces difficult to test.

Yeah, I hear that a lot, and for my part, I've just never had that trouble, so
I'm not sure how to respond. Have you ever read the "TDD with Twisted"
document[1]? Also, the new pytest-twisted tooling helps quite a bit if you're
using pytest.

0:
[https://github.com/twisted/twisted/blob/6ac66416c0238f403a8d...](https://github.com/twisted/twisted/blob/6ac66416c0238f403a8dc1d42924fb3ba2a2a686/src/twisted/web/wsgi.py#L26)
1:
[https://twistedmatrix.com/documents/current/core/howto/trial...](https://twistedmatrix.com/documents/current/core/howto/trial.html)

~~~
limaoscarjuliet
BTW, when writing in Twisted, use @inlineCallbacks with yields for async code.
The code then reads as if it were synchronous.

    
    
        from twisted.internet.defer import inlineCallbacks, returnValue

        @inlineCallbacks
        def get_index_status(self, index_id):
            # No escaping of index_id here; this is a silly test only.
            uri = "{}/index-status?id={}".format(self.SERVER, index_id)
            r = yield self.get(uri)
            returnValue(r.text)
    

Using raw Deferreds is a bit more difficult.

------
devxpy
It would be really cool to have Erlang-style processes in Python -- where each
process is a green process, so you don't pay the cost of OS-level scheduling,
but at the same time you run multiple interpreters so that you still get the
benefit of multiple cores [1].

Implementation-wise, this could be made simpler than removing the GIL if we
just fully give up on the concept of shared memory. (This is above my pay
grade -- anyone with CPython internals knowledge, please share your thoughts!)

I've been trying to build a shared-state system over message passing that
doesn't use shared memory at all [2], and I've found that OS-level processes
are a high cost to pay when you try to do things the Erlang way and launch
thousands of processes.

[1] [https://hamidreza-s.github.io/erlang/scheduling/real-time/pr...](https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html)

[2] [https://github.com/pycampers/zproc](https://github.com/pycampers/zproc)

~~~
tyingq
There's this: [http://www.gevent.org/](http://www.gevent.org/)
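
For context, gevent's cooperative greenlets look roughly like this (a minimal
sketch; all greenlets share one OS thread, hence the single-core limitation).

```python
import gevent

def task(n):
    gevent.sleep(0)  # cooperatively yield to the hub
    return n * n

# spawn() creates greenlets; joinall() runs them all on one OS thread.
jobs = [gevent.spawn(task, i) for i in range(4)]
gevent.joinall(jobs)
values = [job.value for job in jobs]
```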

~~~
devxpy
Single core only :(

~~~
btown
If you need multicore performance, you likely will soon need multi-server
performance, in which case you can either use multiprocessing or even
containerization to run one gevent “hub” on each core across a cluster.
Compared to threads, the memory overhead is minimal.

~~~
_asummers
Going from single core to multi-core doesn't introduce nearly the amount of
complexity that building a distributed system does. Recommending that to
someone who wants to use more than one core on their machine is not a
reasonable recommendation. You're not wrong that you might wind up there in
the end, but doing that as the first step is too heavy unless you have a
runtime that supports those complexities more easily, as in e.g. BEAM.

------
miduil
In case you are wondering, the editor used in this talk is called "Mu"; it
was previously discussed on HN about three months ago [0].

[0]:
[https://news.ycombinator.com/item?id=17638067](https://news.ycombinator.com/item?id=17638067)

