
See Python, See Python Go, Go Python Go (2016) - amzans
https://blog.heroku.com/see_python_see_python_go_go_python_go
======
lostmsu
There is a similar project for .NET:
[https://github.com/pythonnet/pythonnet](https://github.com/pythonnet/pythonnet)

It makes calling C# as easy as:

    import clr
    import System
    uri = System.Uri('http://python.org')

And also works the other way around. In both cases you have to mind GIL
though.

~~~
shbooms
A similar project to pythonnet (a CPython package that interoperates with
.NET) is IronPython (a C# implementation of Python 2.7). The latter isn't
limited by the GIL since it doesn't use CPython.

[https://github.com/IronLanguages/ironpython2](https://github.com/IronLanguages/ironpython2)

~~~
lostmsu
At the moment IronPython is nearly dead: Python 2.7's EOL has passed and
IronPython 3.x is not ready.

~~~
simonh
I don’t think the EOL of CPython 2.7 should necessarily bother anyone using
IronPython. It’s not like IronPython depends on CPython in any way, or is
exposed to any CPython bugs or vulnerabilities. I can see it from a
standardisation point of view, of course.

------
allanrbo
He touches upon it in the "Runtime Overhead" section, but I think when calling
Go from C like this you lose a lot of the goodness of Goroutines, and any
code making heavy use of Goroutines could become seriously problematic.

In native Go, Goroutines are very lightweight and cooperatively scheduled. In
a CGo environment I believe they each need an OS thread and a full stack.
Source:
[https://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/](https://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/)

~~~
sascha_sl
They do not; the Go runtime just locks the OS thread to the goroutine calling
C until the call returns. This can lead to fun deadlocks that only appear on
single-core machines (because by default GOMAXPROCS is equal to the core
count).

Similarly, Docker locks goroutines to their OS threads when using the unshare
system call to spawn containers (those threads are of course later discarded).
The thread has to be locked because any goroutine might otherwise be stopped
at any checkpoint (automatically inserted by the compiler) and resumed on a
different OS thread.

Unsharing the network interfaces from half your OS threads is a fun way to
chaos test networking in Go.

[https://golang.org/pkg/runtime/#LockOSThread](https://golang.org/pkg/runtime/#LockOSThread)

[http://man7.org/linux/man-pages/man2/unshare.2.html](http://man7.org/linux/man-pages/man2/unshare.2.html)

------
alimoeeny
I had not used Python for about 5 years, since I migrated all my work to
Go. Recently I went back to Python and was shocked to see how hard
(relatively) it is to set up a (moderately) high-performing web server in
Python. In my case I had a “data science” type application where a request
would sometimes block and take a second to finish, and this meant a handful
of users could bring the server to its knees (due to extremely high memory
usage I could not have a lot of independent worker processes running at the
same time).

I wish I could call Python code from within a Go web server with some ease
and safety.

~~~
samcodes
First off, I totally agree. It's not as easy as it should be to write an
async web server in Python. FastAPI is probably your best bet; I usually use
Sanic. It's easy to accidentally block, though.

That said, it sounds like you’re serving a large model. No amount of
async/await or goroutines can solve this problem. A non-blocking web server is
a godsend for I/O-bound tasks, but a large model is just a deep call stack:
lots of multiplies, a nonlinear function like ReLU, then adds, repeated a
billion times. This would still block, even if you had perfect async/await
code.

I made some assumptions here, but if I’m right, the answer is “shrink your
model” and/or “buy more compute”, neither of which is easy. But if you’re
trying to shrink a model, check out Distiller:
[https://github.com/NervanaSystems/distiller](https://github.com/NervanaSystems/distiller)

Edit: the restriction I talk about is for event-loop based servers using
something like uvloop or asyncio under the hood. Maybe this restriction
doesn’t hold for other concurrency modes.

~~~
speedplane
> No amount of async/await or goroutines can solve this problem. A non-
> blocking web server is a godsend for I/O-bound tasks

In the past, we were told that threads were cheap and to use them heavily,
especially to achieve parallelism. Now with the advent of async models, we're
being told that threads are expensive, and often that a single
processor/thread async model is better than a multi-threaded blocking one.

I'm not a luddite, I do agree that async is often better. But I wonder how we
got tricked into thinking more and more threads were the answer and how we
avoid such trickery again.

~~~
hopia
Wouldn't that depend on the platform entirely? On some platforms VM threads
can be very inexpensive and are simply the right model for concurrency.

------
dragonsh
Werkzeug/Flask versus gohttp is, in my view, not a fair comparison. Based on
the project's own description, Werkzeug is designed to make building WSGI
apps easier, and Flask is just a convenient library on top of it providing an
easier way to manage request/response. It is designed for developer
productivity, and performance is left to specific infrastructure software
like gunicorn, uWSGI and others. Flask/Werkzeug, and WSGI/ASGI servers in
general, were designed for interfacing with an HTTP server like nginx,
Apache, gunicorn or uWSGI. A Python HTTP server based on LWAN can beat the Go
HTTP server [1] [2].

Also take a look at comparison of various frameworks in Python including go
http server [3].

People are most productive in the language/framework they are most familiar
with, and will defend it. Every language/framework has its own strengths and
weaknesses, and I believe over time ideas flow from one language to another.

Indeed, today many people in the Python community are moving towards Rust or
Go because they think it will solve all the problems they face with dynamic
typing and performance, which might not be entirely true. It's up to each
individual to decide if they want to go down that path.

As an example, Werkzeug/Flask developer Armin moved to Rust in spite of it
being a complex language with a very large syntax surface area and a steep
learning curve. In my opinion, Rust's complexity, its difficulty to learn and
understand, and perhaps its promise of type and memory safety (which is not
100% true, given it needs to interface with C in unsafe mode and will only be
as safe as the underlying C implementation) make people adopt it to feel like
better programmers. (Personally I would have chosen Haskell if I needed to do
the same.)

Now all the Python projects he worked on are mostly maintained by volunteers
and David. But, being a responsible open source developer and contributor,
before moving in that direction Armin did create the Pallets Projects [4]. So
his decision to move is right for him given his preferences and learning
priorities.

[1] [https://www.nexedi.com/NXD-Blog.Multicore.Python.HTTP.Server](https://www.nexedi.com/NXD-Blog.Multicore.Python.HTTP.Server)

[2] [https://lwan.ws/](https://lwan.ws/)

[3] [https://www.freecodecamp.org/news/million-requests-per-second-with-python-95c137af319/](https://www.freecodecamp.org/news/million-requests-per-second-with-python-95c137af319/)

[4] [https://www.palletsprojects.com/](https://www.palletsprojects.com/)

------
raziel2p
The gohttp library seems to do a lot less than what gunicorn/werkzeug/flask
does. I would guess this is what makes the drastic performance difference, not
the fact that the HTTP handler is written in go. I'm surprised that it was so
close to go-net/http performance, though.

~~~
jchw
Actually it’s probably concurrency:

> Keep in mind that this is with 10 concurrent requests, so werkzeug-flask
> probably chokes more on the concurrency than the response time being slow.

I am not sure though. I’d imagine Go can beat Python performance enough to
make up for the (clearly not very egregious) CGo penalties.

~~~
hermitdev
I haven't dealt with Go at all, but if you can use aiohttp on Python 3 (3.5+
IIRC), it is remarkably fast for handling asynchronous requests. In my
testing, I saw something like a 66% wall-clock reduction over using
multiprocessing to achieve parallelism. Sadly, I can't use it in production,
because aiohttp doesn't currently support Negotiate/SSPI auth.

~~~
pastage
Can't you use ADFS for SSO against Windows? It is heavily used around here
for web services that do not have support for Kerberos.

~~~
hermitdev
Do you have a link to some docs? As far as I'm aware, Negotiate/SSPI isn't
supported.

------
bra-ket
There is a video of Andrey's talk on this from PyCon CA 2016:
[https://pyvideo.org/pycon-ca-2016/see-python-see-python-go-go-python-go.html](https://pyvideo.org/pycon-ca-2016/see-python-see-python-go-go-python-go.html)

------
toyg
Is there anything about doing it the other way around, i.e. embed python code
in Go...? I wish we could get the easy-deployment story of Go around the easy-
development story of Python.

~~~
hopia
I'm also curious how people who use Python for data science, combined with
some other platform for web-facing endpoints, architect their apps in the
cloud.

~~~
trufas
I haven't done this specifically, but it seems like a perfect use case for a
smaller separate service. You'd probably submit jobs through an internal API
(REST, gRPC, pick your poison) or, for slower operations, through a job queue
like Celery.

~~~
hopia
Yes, that's what I thought too: building a bunch of data-analytics
microservices that host a Python API and serve the results internally to the
actual outward-facing API server.

------
hwestiii
Shouldn’t that be “C Python Go, Go Python Go”?

------
justaguy12
or like just use go

------
tech234a
Title should mention this is from 2016.

~~~
dang
Added. Thanks!

------
The_mboga_real
2016, the year of all those hard times!

