
I’m still bummed that Python took this direction. Maybe introducing new keywords into the language for event loop concurrency was Python’s way of satisfying “explicit is better than implicit”, but I can’t shake the feeling that callback passing and generator coroutines are a fad that is complex enough to occupy the imagination of a generation of programmers while offering little benefit compared to green threads.



Having worked with asyncio for a bit, I don’t entirely follow this (it could just be familiarity). Very little of asyncio (especially after `async`/`await` were introduced in the language) is callback based, and it reads more procedurally.

Regarding generator coroutines, it feels like a natural evolution of the language. Given that yield already suspended the current function’s state while providing value(s) to the caller, it only makes sense that yield (on the producer side) / await (on the consumer side) does the same thing in an event-loop-based context.

I can’t speak deeply enough about green threads, but from my understanding there’s much less magic (as you cite “explicit”) in an async/await world vs the magic (“implicit”) world of green threads.

    async def thing():
        print('before')
        await asyncio.sleep(0)
        print('after')
Vs

    def thing():
        print('before')
        gevent.sleep(0)
        print('after')
In the latter, nothing makes it clear when something yields or otherwise passes control flow.

Having worked in a few evented systems, I find the explicit shift to the runtime is valuable.


The cooperative part of this concurrency model is the complicated part. Consider how you would go about making an ORM like SQLAlchemy cooperate. Now you have to access properties like this:

    name = await account.user.name
since a lookup may have to occur. This is extremely unnatural and would be better if you could just avoid writing await yet still depend on it being concurrent without blocking your event loop. The fact that the caller needs to understand that the callee supports this form of concurrency is an abstraction inversion in my opinion. Python forces this concurrency to be explicit, but it would be more powerful and more natural if it were implicit:

    name = account.user.name
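For concreteness, here is a toy sketch (all names hypothetical, and one attribute level instead of `account.user.name`) of what the explicit form forces on an ORM: attribute access has to return an awaitable, because a query may be needed before the value exists.

```python
import asyncio

class LazyAttr:
    """Descriptor whose access returns a coroutine, not a value."""
    def __init__(self, loader):
        self.loader = loader

    def __get__(self, obj, objtype=None):
        return self.loader(obj)   # the caller must await this

async def load_name(account):
    await asyncio.sleep(0)        # stands in for a database round-trip
    return "alice"

class Account:
    name = LazyAttr(load_name)

async def main():
    account = Account()
    name = await account.name     # explicit await at every lazy access
    assert name == "alice"

asyncio.run(main())
```

With implicit green threads, the `__get__` could just block its own thread instead, and the call site would stay a plain attribute access.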


I've gone back and forth on this so much.

On the one hand, it's really annoying when your client library doesn't actually support asyncio compatible code (ex libraries which perform synchronous network or disk reads/writes), and you have to wrap everything in an executor.

On the other hand, making it explicit ensures I'm actually doing things async. "Leaf" functions with an async containing no await is now a red flag to me.

It's a mental tax to remember that I may actually be returning a future instead of the result of a future (similar to how you can return a function rather than the result of executing that function, or a non-materialized generator), and having to write 'await x' instead of just assigning x kind of violates 'do what I mean'. In the end, async is (relatively) difficult, so I appreciate the enforced explicitness.
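To make that concrete, a minimal sketch (names made up) of the "coroutine instead of result" trap:

```python
import asyncio

async def fetch():
    return 42

async def main():
    x = fetch()        # forgot await: x is a coroutine object, not 42
    assert asyncio.iscoroutine(x)
    y = await x        # awaiting it yields the actual result
    assert y == 42

asyncio.run(main())
```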


This is certainly one "drawback", depending on your perspective, though it's more a cost of ORMs and how they tend to work than of the runtime environment. You'd have a similar issue in Go, for example, if you wanted to lazy-load a property (though there you can't await a goroutine).

Long story short, whether this feels like a drawback depends largely on your experience with the overall system. Coming from traditional Rails/Django/etc. will make these constructs seem awkward.


I think this just means that a SQLAlchemy-style ORM doesn't fit the model. If you had an ORM where the calls that could cause database queries were distinct from the calls that just look up local properties, then this would work fine...


I think it mostly means that identity-mapped objects which may be expired aren't really compatible. Of course, one could always

    await session.commit()
    user.name  # BlahError: Object not loaded

    # correct
    await session.commit()
    await user.refresh()
    user.name
This might actually make people more actively avoid SELECT n+1, since lazy-loading would error out by default or require an extra await.

Another thing that might not be completely obvious: sessions and their objects (session × objects = transaction state) are never shared between threads, and similarly it would be unwise to share them between different asynchronous tasks.


It doesn't have to be complex, though.

It's complex because the asyncio API is terrible. It exposes the loop/task factory and life cycle way too much, and shows them off in the docs and tutorials.

Hell, we had to wait for 3.7 to get asyncio.run() !
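To illustrate the point, here is a sketch of the boilerplate that asyncio.run() replaced (the `main` coroutine is made up):

```python
import asyncio

async def main():
    return "done"

# Before 3.7: create, drive, and close the event loop yourself
loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(main())
finally:
    loop.close()
assert result == "done"

# Since 3.7: one call does all of the above
assert asyncio.run(main()) == "done"
```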

Even the bridge with threads, which is a fantastic feature, has a weird API:

    result = await asyncio.get_event_loop().run_in_executor(None, callback)
Also, tutorials and docs give terrible advice. They tell you to run_forever() instead of run_until_complete(), and forget to tell you to activate debug mode. They also completely ignore the most important function of all: asyncio.gather().
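For readers who haven't seen it, a minimal sketch of asyncio.gather() (the coroutine names are made up):

```python
import asyncio

async def fetch(n):
    await asyncio.sleep(0)   # stands in for real I/O
    return n * 2

async def main():
    # runs all three concurrently; results come back in argument order
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))

assert asyncio.run(main()) == [2, 4, 6]
```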

asyncio can become a great thing; all the foundational concepts are good. In its current form, though, it's terrible.

What we need is a better API and better doc.

A lot of people are coming to understand this and are trying to fix it.

Nathaniel J. Smith is creating trio, a much simpler, saner alternative to asyncio: https://github.com/python-trio/trio

Yury Selivanov is fixing the stdlib, experimenting with better concepts in uvloop first in order to integrate them later. E.g., Python 3.8 should have trio's nurseries integrated into the stdlib.

Personally, I don't want to wait for 3.8, and I certainly don't want the ecosystem to be fragmented between asyncio, trio or even curio. We already had the problem with twisted, tornado and gevent before.

So I'm working on syntactic sugar on top of asyncio: https://github.com/Tygs/ayo

The goal is an API that is clean, easy to use, and enforces best practices, while staying 100% compatible with asyncio (it uses asyncio everywhere) and its ecosystem, so that we don't get yet another island.

It's very much a work in progress, but I think it demonstrates the main idea: asyncio is pretty good already, it just needs a little love.


> I can’t shake the feeling that callback passing and generator coroutines are a fad [...]

Callback passing and coroutines are well-known techniques that have been around for a while. Generators are just coroutines that yield to their parent. I remember using these concepts in C and Tcl ~15 years ago[1], and they were well known then. According to Wikipedia (citing Knuth), the term coroutine was coined in the late '50s by Conway.
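The "generators are just coroutines that yield to their parent" point in miniature:

```python
def accumulator():
    # a generator-based coroutine: suspends at yield, resumes on send()
    total = 0
    while True:
        value = yield total
        total += value

acc = accumulator()
next(acc)                # advance to the first yield
assert acc.send(5) == 5  # resume with a value, get the running total back
assert acc.send(3) == 8
```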

Callback passing and coroutines suit some problems well. Sure, there are situations where they do not fit, and if people use a hammer for all of their problems they will create new ones. I wouldn't call it a fad, though the techniques may be a bit hyped up in some circles.

[1] I don't remember when Tcl got the coroutine package, that may have been later


Note that the async/await syntax, coroutines, and asyncio are three independent parts. If you don't like callbacks and Futures, have a look at trio[1], which takes quite a different approach.

http://trio.readthedocs.io/en/latest/


For me it had the opposite effect.

Working with async/await syntax was the last straw that made me finally go "there's got to be a better way" and find a language that can handle concurrency without the semantic overhead (in my case Go, but there are others).


I agree; it's always left a bad taste in my mouth. I loved generators and yield/yield from, but I was stuck on 2.7 for a long time, so I never quite understood the motivation for async/await over them.

One issue is that it "reifies" the "colored function" problem that green threads like goroutines don't have!

Side note: Java world is working on green threads/fibers for the JVM in Project Loom.

[0]: http://cr.openjdk.java.net/~rpressler/loom/Loom-Proposal.htm...


I’m not an expert on this, but it seems like they just didn’t want to find a way to give you green threads without the GIL. This had already been done in another Python implementation: https://en.m.wikipedia.org/wiki/Stackless_Python

Stackless has a proven model. Basically the same model as goroutines and channels. It’s the reason EVE Online is able to run its primary game server in Python with such a large number of users.


Perhaps asyncio is just a bit more low-level than we're used to in Python. Maybe we'll end up with something analogous to the "requests" library, but then for asyncio...


Effing thank you. I don't think most people realize just how convenient green threads are. It kills me to see devs stuck in the local maximum of callback hell.

That said, https://www.usenix.org/system/files/conference/atc12/atc12-f... raises some interesting points in defense of one-way RPC. The key is not to allow returns.


Isn't Python's async/await syntax an implementation of green threads? I mean using await is almost exactly the cooperative scheduling idea. The article may use Futures and callbacks but you can just as easily do something like:

    result = await fake_network_request('one')
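A runnable version of that snippet, with a hypothetical stand-in for the article's fake_network_request:

```python
import asyncio

async def fake_network_request(key):
    # made-up stand-in: a cooperative yield instead of a real socket
    await asyncio.sleep(0)
    return f"response for {key}"

async def main():
    result = await fake_network_request('one')
    assert result == "response for one"

asyncio.run(main())
```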


They're sort of similar, and you can probably get the same work done in either system, but I think real threading (green or otherwise), may leave you with less cognitive load. Spawning a thread may be complex, and thinking about how the threads are scheduled is often complex, but what each thread does can be very simple -- and you don't have to think about 'long running things need to be futured/awaited', you just do things in a straightforward way in the thread (caveat: slightly less straightforward if you need thread actions to be cancellable).

Green threads may be running an event loop underneath, but it's a useful abstraction in many contexts.


> ... you don't have to think about 'long running things need to be futured/awaited', you just do things in a straightforward way in the thread

https://www.youtube.com/watch?v=bzkRVzciAZg

Six years later, very little has changed in the arguments about events vs threads.


> Spawning a thread may be complex, and thinking about how the threads are scheduled is often complex, but what each thread does can be very simple

And that's how it starts, and in the end it's New Year's Eve and you're somehow, again, debugging a deadlock.

> green threads

yes please


You can deadlock with futures as well. Except that with futures you don't get a nice call stack (two, actually) pointing to the deadlocked resource.
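A minimal sketch (all names made up) of a futures deadlock: two tasks each wait on a future that only the other will ever set, so neither can make progress.

```python
import asyncio

async def wait_then_set(wait_fut, set_fut, value):
    await wait_fut             # blocks until someone sets wait_fut
    set_fut.set_result(value)

async def main():
    loop = asyncio.get_running_loop()
    f1, f2 = loop.create_future(), loop.create_future()
    t1 = asyncio.ensure_future(wait_then_set(f1, f2, 1))
    t2 = asyncio.ensure_future(wait_then_set(f2, f1, 2))
    try:
        # neither future can ever complete: a classic deadlock
        await asyncio.wait_for(asyncio.gather(t1, t2), timeout=0.1)
        return "completed"
    except asyncio.TimeoutError:
        return "deadlocked"

assert asyncio.run(main()) == "deadlocked"
```

With OS threads you would at least get two thread stacks to inspect; here the blocked tasks just sit in the loop until something times out.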


Python has proper threads, and they're anything but simple.


Python threads aren't simple because of the shared-everything model Python uses: any variable access requires the GIL. Shared-nothing threads are much simpler to work with.


This might be interesting reading: comments on Rust's approach to async, coming from green threads.

https://aturon.github.io/blog/2016/08/11/futures/


One of the key reasons Rust took this approach is that Rust's implementation is zero cost: it doesn't require a runtime. It is potentially very efficient, and thanks to Rust's other guarantees it's very safe to use. It's as close to bare metal as you can get for an async framework, and as a result it's incredibly efficient.

However, to me, the implementation is a lot more complex than green threads. The futures crate has had a lot of churn, and when I first dabbled in it a while back, it was one of the first times I struggled to understand what the compiler errors even meant, because the types were so deeply nested. Compared to Go's 'go', futures are harder to understand, plus you have to rewrite all your networking/blocking code to be compatible (Python would still likely have to do the same, but I think it could be done in such a way that if you were using the system-provided networking libraries, you'd get compatibility for "free").

Python doesn't get the benefits explained in that article: Python already has a garbage collector and a runtime. Python is single threaded.

I don't follow the Python language closely enough to have a well-informed opinion on why they went with futures, but I doubt it's close to the reasoning behind Rust's choice.


> Python is single threaded.

That's not true: there are multiple threads in Python, e.g. zlib from the standard library.

People want unsafe code to communicate concurrently through the Python thread state. I'll let someone else tackle that one.


And we're going further, with async/await http://aturon.github.io/2018/04/24/async-borrowing/


I wish they had implemented parallelism based on the actor model. It seems like the perfect high-level abstraction for managing parallel processes. All of the asyncio stuff feels too fine-grained for Python-style development.




