Python's going to have a bit of an awkward time with two completely different ecosystems for threaded vs. asyncio approaches, but it's necessary progress.
One thing I'd be really keen to see is asyncio frameworks starting to consider adopting ASGI as a common interface. Quart, Sanic, and aiohttp each currently have their own gunicorn worker classes and HTTP parsing, and none of them share a common interface for handling the request/response exchange between server and application.
It's a really high barrier for new asyncio frameworks, and it means we're not able to get shared middleware, such as WSGI's WhiteNoise or the Werkzeug debugger, or the increased robustness that shared server implementations would tend to produce.
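For concreteness, the whole application-side surface of ASGI is a single awaitable that takes a connection scope plus receive/send channels. A minimal sketch (illustrative only, not taken from any particular framework):

```python
import asyncio

async def app(scope, receive, send):
    """A minimal ASGI application: one awaitable taking the connection
    scope plus receive/send channels, rather than a framework-specific API."""
    assert scope["type"] == "http"
    # Pull the request body chunk by chunk; the application decides when
    # to ask for more data by awaiting receive().
    body = b""
    while True:
        message = await receive()
        body += message.get("body", b"")
        if not message.get("more_body", False):
            break
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello"})
```

Because every compliant application has this same shape, servers, middleware, and test utilities can be written once against the interface rather than per framework.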
I'd be interested to know what OP's position on this is.
It's threaded vs async/await (and even then projects like https://github.com/dabeaz/curio bridge that gap really well), rather than threaded vs asyncio. asyncio is just one (really really poor) implementation of async/await coroutines on Python.
> One thing I'd be really keen to see is asyncio frameworks starting to consider adopting ASGI as a common interface.
From what I've seen of the ASGI spec, it makes it incredibly easy (like most asyncio stuff) to DoS yourself with lack of backpressure. You get your callback called with data, and like all callback systems you can't exactly not get called.
Fair enough, sure.
> lack of backpressure
You’ll get new calls into the application on new requests, yes. Request bodies are pulled tho.
You can perfectly well ensure that server implementations properly handle flow control, and if necessary also have a configurable number of maximum concurrent requests.
Either way, those sorts of concerns would be far better addressed by having server implementations against a common interface, than by framework authors having to handle (and continually re-implement) the nitty gritty details of high/low watermarks and pausing/resuming transports.
Are there other implementations, or is it that the one existing implementation (asyncio) is poor?
I tried to understand asyncio a few times and failed. Threading was easy, Promises in JS were easy, but I got stuck on asyncio.
Disclaimer: I am a trio dev, though.
I was hoping (with Python, not trio in particular) that it would all end up with something like Promises which make coding really easy but I guess not.
This is of course a matter of personal taste, but having

    call_an_async_function(some, parameters)
        .then(the_result_when_it_comes => do_something(the_result_when_it_comes))

reads really naturally to me.
I wonder if you could do something nicer using the `with` syntax?
    with async_fn(some, params) as result:
        ...

    async with fetch_thing() as result:
        json = async_jsonify(result)
        post_result = async_post(json)
        json_result = async_jsonify(post_result)
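For comparison, the same chain in Python's actual async/await syntax, with hypothetical `async_jsonify`/`async_post` stubs standing in for the helpers named above:

```python
import asyncio

# Hypothetical stand-ins for the helpers in the snippet above.
async def async_jsonify(value):
    return {"payload": value}

async def async_post(json):
    return {"status": 201, "echo": json}

async def pipeline(result):
    # Each `await` plays the role of a `.then(...)` step, but the code
    # reads top-to-bottom like ordinary sequential code.
    json = await async_jsonify(result)
    post_result = await async_post(json)
    return await async_jsonify(post_result)
```

No special `with` machinery is needed; `await` already expresses "run this and continue with the result".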
The key part I wanted to highlight here though is that Quart serves HTTP/2, which I think sets it apart from most of the other Python frameworks. (I know Twisted also does this https://medium.com/python-pandemonium/how-to-serve-http-2-us...)
but I am not sure how broadly it has been accepted as a common spec.
One question, in the asyncio docs (really nice BTW, I hadn't tried that out yet and instantly grasped your example) you mention the common pitfall of `await awaitable.attribute` with missing brackets. In the Migration from Flask docs you give some examples where you need await like `await request.data` and `await request.get_json()` - do these need brackets in the same way or is `request` special? Same deal with `test_client` straight after that.
BTW, do all routes have to be async here - even your quickstart that just returns 'hello'?
One other thing - since you require Python 3.6 anyway it'd probably make sense to use `pipenv` instead of `venv` as your recommended install, it'd probably make your docs simpler.
The same applies to `await request.data`. You can easily avoid this pitfall by writing the code like this:
data = await resp.json()
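To illustrate why the brackets differ: here is a toy class (not Quart's real implementation) where `data` is a property whose getter returns an awaitable, so it takes `await` with no brackets, while `get_json` is an ordinary coroutine method that needs both brackets and `await`:

```python
import asyncio

class FakeRequest:
    """Toy illustration only, not Quart's actual code."""

    @property
    def data(self):
        # Property access has no brackets, but the getter hands back a
        # coroutine that still has to be awaited.
        async def _load():
            return b"body bytes"
        return _load()

    async def get_json(self):
        # A plain coroutine method: brackets *and* await.
        return {"ok": True}
```

So `await request.data` and `await request.get_json()` are both correct; the difference is only whether the attribute is a property or a method.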
The routes don't have to be async, as Quart will wrap them in a coroutine anyway. However, I'm not sure there is any advantage to omitting the async keyword.
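A sketch of how such wrapping can work in general (illustrative only, not Quart's actual implementation):

```python
import asyncio
from functools import wraps

def ensure_async(fn):
    """If `fn` is already a coroutine function, pass it through unchanged;
    otherwise wrap it so callers can always `await` the route."""
    if asyncio.iscoroutinefunction(fn):
        return fn

    @wraps(fn)
    async def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)

    return wrapper
```

With this, a plain `def` route and an `async def` route look identical to the dispatcher: both are awaited.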
Thanks, I need to learn how pipenv works.
To start a new project:

    pipenv install <package>
    pipenv install -d <development package>
    pipenv install -r requirements.txt

When someone makes a fresh checkout, they cd to where the Pipfile and Pipfile.lock are and run:

    pipenv install (add -d to include development packages)
    pipenv run <command>
SQLAlchemy, unfortunately, does not have an asynchronous version (it's quite complex as it is). Therefore, I think it would take quite a lot of work to get a standard Flask app working with Quart.
However, if you've built your data access layer directly on psycopg, then I think you're good to go.
They're both incredibly experienced developers, and I don't have any good steer on the discrepancy.
Would love to see some independent benchmarking on a typical use case to get a clearer picture on how valuable (or not) asyncio is for high-throughput database operations.
Maybe using eventlet isn't necessarily native "async" but... it works just fine in our use cases...
If you want to use the orm parts of sqlalchemy, the idea is to use thread executors and handle detaching of objects if you return them to event-loop code.
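A minimal sketch of that pattern, with a stub `blocking_query` standing in for the real SQLAlchemy session work:

```python
import asyncio

def blocking_query(user_id):
    # Stand-in for real SQLAlchemy work, e.g.:
    #   obj = session.query(User).get(user_id)
    #   session.expunge(obj)   # detach before handing back to the loop
    #   return obj
    return {"id": user_id, "name": "example"}

async def fetch_user(user_id):
    loop = asyncio.get_running_loop()
    # Run the blocking ORM call in the default thread pool executor so
    # the event loop stays free to serve other requests.
    return await loop.run_in_executor(None, blocking_query, user_id)
```

The detaching step matters because ORM instances are bound to a session that lives in the worker thread; expunging them first keeps the event-loop code from triggering lazy loads on a session it doesn't own.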
Incidentally, my personal opinion is that this is the reason we don't have massive adoption of asyncio despite lots of frameworks (like apistar).
Node.js, for example, has an asynchronous version of every database driver. It is probably worth the effort to build an asyncio-compatible DB layer.
It is interesting just to know the reasoning. Nobody is bashing the competition. It's just that GitHub has more mindshare, and as such contributors/forks are more likely, and there are more 3rd-party integrations.
Again, competition is good for everyone, but let's not bash someone who is just curious why a project author would come to choose GitLab.
Or likes the interface over GitHub.
Or simply likes going against mainstream.
Full disclosure: I have been using GitLab since 2015.