
Tworoutines in Python - homarp
http://threespeedlogic.com/python-tworoutines.html
======
spamizbad
That's a clever way to go about solving the sync/async situation.

I work in an ecosystem that has both Python 2 and Python 3.6 async
applications, so my code has to play nice with both. I often end up writing
APIs like this:

    
    
        # the usual "Python 3 niceties"
        from __future__ import absolute_import, division, print_function, unicode_literals
    
    
        class BaseApi(object):
            def _method_that_does_io(self, *args, **kwargs):
                raise NotImplementedError
            
            def do_a_thing(self, stuff):
                return self._method_that_does_io(stuff)
    

and then create a SyncAPI and an AsyncAPI in separate files, each subclassing
BaseApi and implementing the respective I/O-bound helper methods, as sketched
below.

So Python 2 people can do:

    
    
        from SomeAPI.sync import SomeAPI
    

and Python 3 people can run:

    
    
        from SomeAPI.py3async import SomeAPI
    

The synchronous code is written in such a manner that it's compatible with
both Python 2 and 3, so every flavor can be happy.

~~~
Xophmeister
But how can that work when the async methods have a different signature and
calling convention from their sync counterparts? Your “public” method in the
async version would have to handle all the async setup and the submission of
your chain of coroutines to the loop. That’s possible, but it would have to
live in isolation (i.e., no other async code could run simultaneously, which
kind of defeats the purpose) and it means a lot of nasty plumbing, which
doesn’t seem like it should be the class’s job (in a
separation-of-responsibilities sense).

~~~
spamizbad
Oh you can do it with identical signatures.

Let’s say we are making http requests, with the sync code using requests and
the async code using aiohttp.

Your public methods all call a private _request method with a verb, a URL, and
optionally something to be serialized to JSON.

Your async _request method does need a small bit of code around sessions: make
one first if one hasn’t been made for the class instance yet (or override your
__init__ and take an existing one as an arg), but at the end of the day it’s
just returning a coroutine that’ll ultimately give you a deserialized
response. The sync version can just be more direct and return a dict. Both
have the same signature.
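
A rough sketch of the shape I mean (class names, URLs, and method bodies are
illustrative, not the real code):

    
    
        import requests
        import aiohttp
    
        class SyncApi(object):
            def _request(self, verb, url, json=None):
                return requests.request(verb, url, json=json).json()   # plain dict
    
            def do_a_thing(self, stuff):
                return self._request("POST", "https://api.example.com/things", json=stuff)
    
        class AsyncApi(object):
            def __init__(self, session=None):
                self._session = session        # or take an existing session as an arg
    
            async def _request(self, verb, url, json=None):
                if self._session is None:
                    self._session = aiohttp.ClientSession()
                async with self._session.request(verb, url, json=json) as resp:
                    return await resp.json()   # coroutine that ultimately gives a dict
    
            def do_a_thing(self, stuff):
                # identical signature; the caller just awaits the result
                return self._request("POST", "https://api.example.com/things", json=stuff)
    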

~~~
Xophmeister
Right, so you can just return a coroutine, and then the return type of your
method is something like `Union[dict, Coroutine]`. However, downstream code
(ultimately) has to handle the dichotomy: the `dict` can be used as is, but
the coroutine needs to be `await`ed.

You can do it, but I personally think it's messy. It's what I really dislike
about Python's async model, which you almost necessarily encounter because the
stdlib is mostly not async.
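
For a concrete illustration of what downstream ends up looking like
(do_a_thing and consume are hypothetical):

    
    
        import asyncio
    
        async def consume(api):
            result = api.do_a_thing("stuff")   # Union[dict, Coroutine]
            if asyncio.iscoroutine(result):
                result = await result          # async flavor still needs awaiting
            return result                      # sync flavor was already a dict
    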

------
Walkman
I implemented something like this not long ago:

    
    
        import asyncio
        from functools import lru_cache

        import aiohttp

        class Api(AsyncApi):
            def __init__(self, token):
                try:
                    self._loop = asyncio.get_event_loop()
                except RuntimeError:
                    # when running in a thread, get_event_loop doesn't create another one
                    self._loop = asyncio.new_event_loop()
                    asyncio.set_event_loop(self._loop)
    
                self._session = aiohttp.ClientSession(loop=self._loop)
                super().__init__(token, self._loop, self._session)
    
            @lru_cache(maxsize=None)
            def __getattribute__(self, name):
                # look attributes up normally, but wrap public coroutine
                # functions so they can be called synchronously
                attr = super().__getattribute__(name)
                if name.startswith("_") or not asyncio.iscoroutinefunction(attr):
                    return attr
    
                def call_sync(*args, **kwargs):
                    coro = attr(*args, **kwargs)
                    return self._loop.run_until_complete(coro)
    
                return call_sync
    

It can run when there is no loop or even when a loop is already running.
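
Usage then looks like plain synchronous code; a hypothetical example
(get_user standing in for whatever coroutine methods AsyncApi defines):

    
    
        api = Api("my-token")
        user = api.get_user(42)   # a coroutine function on AsyncApi, called like a sync method
    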

------
pickdenis
Nim's `multisync` macro is another cool approach to this. It allows the
function to be called isomorphically regardless of synchronicity, determining
the latter using the type system. In the following example, the
`writeResponse` proc can be called from either a synchronous environment on a
`Stream` or from an asynchronous environment using `AsyncStream`.

[https://github.com/sid-code/nmoo/blob/master/src/nmoo/sidech...](https://github.com/sid-code/nmoo/blob/master/src/nmoo/sidechannel.nim#L17)

There is a bit of glue code holding this together, but I find it really neat.
There's no event loop hack, just static polymorphism.

~~~
dom96
Nice to see that others are finding this useful. :)

------
jacob019
This is why I love gevent. Synchronous code becomes asynchronous automatically
and it's rarely necessary to reason about the asynchronicity.

~~~
ekimekim
I'll add something I've observed to that:

In asyncio and similar systems, the choice between "is this code synchronous
or asynchronous" is made at declaration time: either you define a sync
function or an async one, and you can only use it in that way.

In gevent and similar systems, the same choice is made at call time: there's
no distinction between sync and async functions; instead you can call
synchronously with "foo()" or asynchronously with "future = gevent.spawn(foo)"
(I'm taking some liberties by calling the returned greenlet object a future,
but it can be used as such).
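
A tiny illustration of the difference (foo here is just a stand-in):

    
    
        import asyncio
        import gevent
    
        # asyncio: sync vs async is decided when the function is defined
        async def foo_async():
            return 42
    
        result = asyncio.run(foo_async())   # must be driven through an event loop
    
        # gevent: one definition, the decision happens at the call site
        def foo():
            return 42
    
        result = foo()                      # synchronous call
        glet = gevent.spawn(foo)            # asynchronous call; the greenlet acts like a future
        result = glet.get()                 # block (cooperatively) until it's done
    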

~~~
meowface
Exactly. gevent has been a joy to work with since day one. asyncio (even with
the new syntax additions) has been a complete pain for me since day one. I
still use gevent for everything - even in brand new Python 3.7 projects - and
I see no reason to stop. Great API, great performance, "just works".

~~~
ekimekim
As much as I love gevent, I can't claim in good conscience that it always
"just works" for me. It tends to just work until it doesn't, and then you're
stuck investigating how a dependency of a dependency switched to a network
library implemented in C, which therefore isn't monkey-patchable and causes
process-wide stalls.

I've gotten good at diagnosing these issues over many years as a heavy gevent
user, and it doesn't stop me highly recommending gevent to anyone who will
listen, but it's a caveat that should be mentioned for anyone new starting
out.

~~~
jacob019
What libraries have you run into trouble with?

~~~
ekimekim
grpc is one that we're struggling with at the moment; psycopg2 is another that
won't Just Work (you need to use a special "adapter" that teaches the
underlying C library to hook into the event loop). In general it's anything
that is a) not implemented in pure Python, and b) making calls that you expect
to be monkey-patched (mainly network calls).

This is thankfully fairly rare, but it is something you need to be aware of.
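
For anyone hitting the psycopg2 case: the "adapter" boils down to registering
a wait callback so the C library yields to the gevent hub instead of blocking.
A sketch of the idea (essentially what the psycogreen package does, from
memory; details may differ):

    
    
        import psycopg2
        import psycopg2.extensions
        from gevent.socket import wait_read, wait_write
    
        def gevent_wait_callback(conn, timeout=None):
            """Yield to the gevent hub instead of blocking the whole process."""
            while True:
                state = conn.poll()
                if state == psycopg2.extensions.POLL_OK:
                    return
                elif state == psycopg2.extensions.POLL_READ:
                    wait_read(conn.fileno(), timeout=timeout)
                elif state == psycopg2.extensions.POLL_WRITE:
                    wait_write(conn.fileno(), timeout=timeout)
                else:
                    raise psycopg2.OperationalError("Bad poll state: %r" % state)
    
        psycopg2.extensions.set_wait_callback(gevent_wait_callback)
    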

~~~
jacob019
There are pure-Python alternative implementations of these protocols (and of
MySQL); anything pure Python works well, though of course there are modest CPU
costs. There are also workarounds if you must use C libraries with blocking
IO, like using a pool of sub-processes. But using C libraries for IO is pretty
low-level stuff, and making them concurrent is beyond the scope of Python.

------
jakobegger
I'm not familiar with async in Python, but creating a blocking wrapper around
an async call is something I've had to do quite often in various
circumstances.

Theoretically, you shouldn't need to do that, but in practice there's always
some big legacy component that you can't rewrite to make it asynchronous. In
my experience it's such a common pattern that the language should support it.
But in many languages it ends up being quite complex or inelegant.

Makes me feel like there's a bit of a schism between language designers and
users: designers want an architecture that encourages clean code, users want
duct tape.

------
sifoobar
Of all the methods for dealing with async calls I've come across, I strongly
prefer the Smalltalk process flavor.

The general idea is that main runs in a fiber (or process, in Smalltalk lingo)
out of the box. There's a way of starting new fibers and yielding to the next,
and blocking calls yield automagically. And that's it.

But talk is cheap, and Smalltalk is not coming back anytime soon, which is why
I'm building a new language [0] around the same ideas.

[0]: [https://gitlab.com/sifoo/snigl](https://gitlab.com/sifoo/snigl)

~~~
nightfly
Isn't that pretty much how Erlang does things?

~~~
sifoobar
Erlang throws plenty of other ideas into the mix though, and I prefer my
concerns separated. It assumes multiple threads, for example; in plenty of
applications that's needless complexity. You end up sending messages between
processes when all you wanted to do was encapsulate some state.

~~~
dnautics
Elixir solves this with Agent.

------
yoklov
Prior to the introduction of async/await (or promises) there were a number of
places we used a similar technique inside Firefox to avoid callback spaghetti.

For us it caused no end of problems, and we were glad to see it go (well,
there are still probably some places using it...).

It's not surprising to me that it's discouraged.

~~~
veli_joza
Could you point out what the problems were, and what technique you replaced it
with?

~~~
yoklov
A long tail of subtle event and reentrancy bugs, where an event loop wasn't
getting an event it needed (leading to dropped or mishandled OS events), or a
function that didn't expect to be reentrant was suddenly reentrant.

Most of these could be worked around, but some of them are unreasonably
difficult given OS event APIs.

We replaced it with just making the calls that might need to be asynchronous
use `async`/`await`.

------
csytan
IPython has a pretty neat feature where you can await at the top level:

[https://ipython.readthedocs.io/en/stable/whatsnew/version7.h...](https://ipython.readthedocs.io/en/stable/whatsnew/version7.html#autowait-asynchronous-repl)

------
BerislavLopac
I'm not sure, but it seems like an overly complex approach. If all you want is
to execute an async function from a sync context, just run it and wait for it:

    
    
        >>> import asyncio
        >>> async def foo():
        ...     return 'foo'
        ... 
        >>> bar = asyncio.run(asyncio.wait_for(foo(), timeout=15))
        >>> print(bar)
        foo
        >>>

------
carapace
In any event, Functional Reactive Programming is thought-provoking. Here's a
link to "Elm: Concurrent FRP for Functional GUIs", Evan Czaplicki's thesis
(although I think Elm-lang recently simplified their system.)

[https://www.seas.harvard.edu/sites/default/files/files/archi...](https://www.seas.harvard.edu/sites/default/files/files/archived/Czaplicki.pdf)

------
gok
Isn't it trivial to accidentally make this deadlock?

~~~
erdewit
Not really; it starts a sub-event loop that still handles all the events
sequentially. What can be a problem is infinite recursion, if a new nested
event loop is started inside an event handler.

~~~
mehrdadn
Sounds like Visual Basic's DoEvents() all over again?

------
andrewstuart
I thought this post may have been my Twolang gaining some public attention
[https://github.com/bootrino/twolang](https://github.com/bootrino/twolang) but
it seems not.

------
snicker7
I love the interface, but I don't like the idea of monkey-patching the
standard library. Can't one instead spawn a separate thread and run the event
loop there? Something like the sketch below.
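
A rough sketch of what I mean (fetch here is just a hypothetical coroutine):

    
    
        import asyncio
        import threading
    
        loop = asyncio.new_event_loop()
        threading.Thread(target=loop.run_forever, daemon=True).start()
    
        async def fetch():
            await asyncio.sleep(0.1)
            return "result"
    
        # hand the coroutine to the background loop and block until it finishes
        future = asyncio.run_coroutine_threadsafe(fetch(), loop)
        print(future.result())
    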

------
alehander42
Nim's multisync macro is another nice approach: at compile time it transforms
the same function into a sync and an async variant.

------
jlkjfasdfwm2
I hope Python & Node will evolve from async/await, because we can do better.

Take a look at Lua coroutines.

------
infinity0
or just learn Haskell and realise that this whole concept is just monadic
bind, written simply as >>=

------
lukeplato
what kind of website is this?

