
Sanic – Python 3.5+ web server that's written to go fast - shakna
https://github.com/channelcat/sanic
======
saynsedit
I've actually written a highly optimized asyncio-based Python web server in
the past. I meticulously optimized every component, using all the standard
CPython optimization techniques (heavy use of stdlib, minimizing method
lookups).

In the end, my implementation was nowhere near nginx. I even ran it under PyPy
and it fared no better. Then I realized the oxymoronic nature of writing an
optimized web server in Python.

~~~
e12e
What does "nowhere near" mean in this context? I find it a bit interesting
that the most obvious conclusion from the benchmarks in the story and the
linked one is that uvloop and Go offer high performance with consistently low
latency, high-performance Python is on par with Node.js - but in general the
real jump up is towards Go and C++(?).

It's great to see an order-of-magnitude leap on the Python side - but on the
face of it I'm not sure the leap is really enough to enable a different class
of services in Python. Perhaps I'm being too pessimistic - I know I'd be happy
to be able to "ignore" Node and only consider, e.g., Python for most things and
Go for some things, just to limit my tech stack.

But where does nginx with Lua fit - is it another order of magnitude above Go
for dynamic content?

~~~
dom0
Honestly you sound confused.

Performance is not a magical inherent property of a language-and-webserver
combination, and will vary wildly with the application. In my experience,
practical performance has much more to do with architecture, algorithms and
"the other bunch that makes things efficient" -- not so much with a language.

It's also only one facet. And usually not a very important one, either.

~~~
merb
> Performance is not a magical inherent property of a language

Well, it is. However, most performance characteristics of a language are well
understood: Python and Ruby are mostly slower than a lot of other languages.

> practical performance has much more to do with architecture

Well, sort of - not every use case does well with these kinds of languages.

However, in most cases it's just fine to use them.

My company changed one product from Python to Scala. Everything was slower
in Python, but in 90% of our use cases that didn't even matter. Where it hurt
was threading and calculation (i.e. shuffling/changing large lists/maps in
memory), where Python was just slow. I guess we could've written a library for
these kinds of transformations in C. Another problem was PDF generation, which
was really, really slow in Python for bigger PDFs (smaller ones were just
fine). We are happy to use Scala, but I didn't find Python bad or weak: you
are pretty fast with it and the tooling is just amazing. Also, the ORMs in
Python (Django ORM and SQLAlchemy) are superior to everything I've seen in the
JVM world - I'd guess they are even the best ORMs out there. If I were
developing more towards a cloud architecture, I would probably use Python
again; you can just do more if you have room for an "infinite" amount of
servers.

Still, Python is a really great and well-designed language. I would encourage
everybody to look into it.

------
tschellenbach
Really excited about these efforts. Python is my main programming language,
but for high volume endpoints it's sometimes necessary to switch to other
languages. In an ideal world I'd just use Python for everything. It will take
a while before that's possible though. These performance gains don't mean much
if your DB connection is still blocking.

How does your benchmark compare against Go, Elixir and Node?
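The parent's point about blocking DB connections deserves a concrete sketch: even a fast async server stalls the moment a handler makes a synchronous call. A common stopgap is `loop.run_in_executor`, shown here with a hypothetical `blocking_query` standing in for a synchronous driver call (not any real DB API):

```python
import asyncio
import time

def blocking_query():
    # Hypothetical stand-in for a synchronous DB driver call
    time.sleep(0.05)
    return [("row", 1)]

async def handler():
    loop = asyncio.get_event_loop()
    # Run the blocking call in the default thread pool so the event
    # loop stays free to serve other requests in the meantime.
    return await loop.run_in_executor(None, blocking_query)

loop = asyncio.new_event_loop()
rows = loop.run_until_complete(handler())
loop.close()
print(rows)  # [('row', 1)]
```

A truly async driver (like asyncpg, mentioned below in the thread) avoids the thread pool entirely, but the executor trick bridges the gap for sync-only libraries.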

~~~
napperjabber
MagicStack has already taken the initiative.
[https://github.com/MagicStack/asyncpg](https://github.com/MagicStack/asyncpg)

~~~
spamizbad
Still beta, but those postgres numbers are incredible. MagicStack is doing
amazing things.

------
SwellJoe
It's interesting how so many of these microframework+app server type things
spread across many languages have converged on a very, very similar set of
conventions and practices. The example code looks strikingly like every other
recent JavaScript/Perl/Python/Ruby microframework+server that I've tinkered
with lately (Express, Koa, and Hapi in JS, Mojolicious in Perl, and Flask in
Python). Aside from the obvious language differences, the concepts are all so
similar.

I wonder if this is a convergence on a local maximum for writing web apps, or
if it's just a situation where the most popular ones did it this way, so
everyone does it this way. Not that it matters too much, in the general case.
The route and app setup plumbing is a small part of most applications, so it's
rarely going to be ruinous if you spend an extra hour or two getting it
working right. But, since I've been evaluating a bunch of different ways to
build web app backends lately, I've just noticed how similar they all are, no
matter what language you choose. Most of the ones I've been looking at are
asynchronous, so this also has that similarity to most of the others.

One could argue that it's obvious they would look the same... but web
development didn't always look this way. The first 10-15 years of web
application development I did looked very different, in fact (CGI or mod_perl
or PHP, which has some quite different conventions).
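The converged convention being described mostly boils down to decorator-based route registration. A plain-Python sketch of the shared shape (names here are illustrative, not any framework's real API):

```python
# A route table mapping paths to handler functions.
routes = {}

def route(path):
    # Decorator factory: registers the handler under its path.
    def register(handler):
        routes[path] = handler
        return handler
    return register

@route("/")
def index():
    return "hello"

@route("/about")
def about():
    return "about us"

# Dispatch: look the handler up by path and call it.
print(routes["/"]())  # hello
```

Flask, Express, Sanic, Mojolicious et al. all layer request/response objects and async machinery on top, but the registration pattern is recognisably this.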

~~~
minitech
The README suggests this one is written to be Flask-like, and Koa is based on
Express/Connect.

~~~
MildlySerious
And Express, as well as many others, is loosely based on what Rails did, IIRC.

~~~
minitech
I don’t see how Express is at all based on Rails. They both have… similar URL
pattern syntax that Rails didn’t invent?

------
yakcyll
Why it is app.run() and not app.go_fast() is beyond me.

~~~
btmiller
Good point. With Sanic you gotta .go_fast()

------
spamizbad
This is a great project. I am going to be watching this!

One question: Why default to ujson? I understand that if you're always dealing
with small, simple json objects it's quite fast and safe. But with larger json
payloads, its performance and compatibility start to break down. I haven't
touched it in the last 8 months, but I had to switch to Python's built-in json
module (which is quite fast in 3.5) for compatibility when serializing and
deserializing large json objects.
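The compatibility point can be illustrated with the stdlib module: `json` round-trips structures that trip up some C serializers, e.g. integers wider than 64 bits. The payload below is purely illustrative:

```python
import json

# A largish nested payload containing integers beyond 64-bit range,
# which some C-based serializers reject or silently truncate.
payload = {"users": [{"id": i, "token": 10**30 + i} for i in range(1000)]}

blob = json.dumps(payload)
restored = json.loads(blob)

print(restored == payload)  # True: lossless round-trip
```

ujson's speed advantage is real for small, simple objects; the trade-off shows up on payloads like this one.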

~~~
agf
Do you have a good writeup of this problem? I've seen only scattered,
anecdotal info about it.

------
rcarmo
Curious to see how this fares against something like uWSGI with aiohttp
support (haven't tried either, but uWSGI brings so much more to the table than
speed that I'd like to be able to make an informed choice).

~~~
ddorian43
asyncio is still experimental on uWSGI: [http://uwsgi-docs.readthedocs.io/en/latest/asyncio.html](http://uwsgi-docs.readthedocs.io/en/latest/asyncio.html)

~~~
dom0
Running asyncio-based servers on uWSGI is very stable, though it may not
achieve 100% of the performance that native asyncio support in uWSGI could
offer. Even then, Tornado is "fast enough" for us (i.e. we run out of network
bandwidth before we run out of CPU).

But the main advantage - that you're still using uwsgi as a "unified
application server" of sorts - persists, which was the main point for us
(Django, Flask, Tornado on uWSGI). This simplified deployment and
administration quite a bit.

Re. topic: It would be really interesting to see it compared to Tornado.

~~~
rcarmo
Yeah, I use uWSGI for a lot of things (check out
[http://github.com/rcarmo/piku](http://github.com/rcarmo/piku), for instance),
so I'm more interested in figuring out whether I can get a little extra
mileage out of it.

------
asdfologist
I noticed that the past several Python-related threads on HN haven't contained
the usual cynical comments against adopting Python 3. Are we finally at a
point where Python 3 has been accepted as the gold standard?

~~~
oliwarner
Yes. I think the async syntax in 3.5 was the tipping point for the stragglers.
It finally looked like a serious upgrade, not just a bigger number with a load
of things to fix.

I know there are people who are stuck on old versions, but the community has
recognised how much damage this dispute was doing to Python's image. They know
they'll have official CPython support until 2020, so it's a non-issue until
then.

Oh and Ubuntu 16.04 defaulting to 3.5 helped too.
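The 3.5 syntax the parent refers to is PEP 492's `async def`/`await`, which replaced the older `@asyncio.coroutine`/`yield from` generator style. A minimal sketch:

```python
import asyncio

# PEP 492 (Python 3.5): native coroutines with async def / await.
async def double(n):
    await asyncio.sleep(0)  # cooperatively yield to the event loop
    return n * 2

async def main():
    # Run both coroutines concurrently and collect their results.
    return await asyncio.gather(double(1), double(2))

loop = asyncio.new_event_loop()
results = loop.run_until_complete(main())
loop.close()
print(results)  # [2, 4]
```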

------
shakna
Async handlers, Flask-style API, and fast.

Disclaimer: I have nothing to do with this project.

~~~
aitoehigie
Link to the docs?

~~~
shakna
Right there in the README, but here's the Getting Started [0] anyway.

[0]
[https://github.com/channelcat/sanic/blob/master/docs/getting...](https://github.com/channelcat/sanic/blob/master/docs/getting_started.md)

------
sixhobbits
The comparisons are nice, but it would have been good to see one against
Flask + Nginx/Apache2, for example.

------
cpeterso
TIL about "Sanic Hegehog": [http://knowyourmeme.com/memes/sanic-hegehog](http://knowyourmeme.com/memes/sanic-hegehog)

------
__s
Curious how much this benefits from optimizations coming in Python 3.6. Is the
benchmark essentially spending all the time in uvloop Cython code?

------
floatboth
I love the name!! :D

The hard dependency on uvloop looks weird. It's just a pluggable event loop
for asyncio; anyone can switch to it in two lines of code, so there's no
reason to hard-depend on something that's a native extension.

Also why not just improve aiohttp's performance…
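The "two lines" in question are installing uvloop's event loop policy. Guarded here with a try/except so the sketch still runs on the default loop when uvloop isn't installed:

```python
import asyncio

try:
    import uvloop
    # The two-line swap: every loop created from here on is a uvloop one.
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
except ImportError:
    pass  # fall back to the default asyncio event loop

async def hello():
    return "hello"

loop = asyncio.new_event_loop()
result = loop.run_until_complete(hello())
loop.close()
print(result)  # hello
```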

~~~
nitely
> Also why not just improve aiohttp's performance…

Have you read aiohttp's code? There have been some discussions about improving
its performance, but there is basically no single immediate bottleneck; the
whole thing just performs poorly. I hate to say it, but sometimes it's just
better to start from scratch.

------
leemalmac
This is awesome. Python is my number one, but Node.js already has a much
larger ecosystem for async I/O. If we need something faster for heavy
computations, Go is a good choice.

But maybe in near future Python will grow significantly, who knows.

------
adamnemecek
Best name ever.

------
okso
No support for websockets?

~~~
tschellenbach
I think it's fine to use an external library such as Faye for websockets. That
comes with the benefit of having many client libraries available. (JS, iOS,
Android)

~~~
revelation
When did the principal benefit of Websockets (send and receive _whatever_
instead of HTTP primitives) become a matter of client and server library
availability?

I want to trace this profound moment in WebDev history. Amazing.

~~~
yunruse
The principle of sending _whatever_ is important, but in reality most modern
consumption literally follows the ethos of 'go fast'. Perhaps dropping
WebSockets helps out; it's a freedom of choice to pick speed over
compatibility.

------
infocollector
Can we just use Sanic's web server with Flask on Linux?

------
lbolla
What's up with those latency numbers? They seem huge to me.

~~~
sitkack
Who measures average latency? Nearly meaningless.

~~~
shincert
I'm curious. Why is it meaningless? What would you measure?

~~~
sitkack
You almost never want to use the average as your metric when dealing with
time. For latency, the 95th or 99th percentile tells you how many requests
out of a hundred finish under your threshold. This matters especially when
the thing you're measuring is a piece of a larger system, which is the crux:
a single request may in turn fan out to k more resources, and each of those
has an equal chance of exceeding the latency percentile. Even at 10 requests
per session, only about 90% of sessions will stay under the 99th-percentile
latency (0.99^10 ≈ 0.90).
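The point is easy to demonstrate with made-up numbers: 98 fast requests and 2 slow ones.

```python
# Why averages mislead: mostly-fast latencies with a short, ugly tail.
latencies_ms = [10] * 98 + [500, 1000]

avg = sum(latencies_ms) / len(latencies_ms)
# Simple nearest-rank percentile on the sorted sample.
p99 = sorted(latencies_ms)[int(0.99 * len(latencies_ms)) - 1]

print(avg)  # 24.8 -- looks healthy, hides the tail entirely
print(p99)  # 500  -- what 1 request in 100 actually experiences

# Compounding: a session making 10 requests has only ~90% odds that
# every one of them beats the 99th percentile.
print(round(0.99 ** 10, 3))  # 0.904
```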

See also, Gil Tene
[https://www.youtube.com/watch?v=lJ8ydIuPFeU](https://www.youtube.com/watch?v=lJ8ydIuPFeU)

A nice overview: [http://bravenewgeek.com/everything-you-know-about-latency-is-wrong/](http://bravenewgeek.com/everything-you-know-about-latency-is-wrong/)

See also: [http://highscalability.com/blog/2015/10/5/your-load-generator-is-probably-lying-to-you-take-the-red-pi.html](http://highscalability.com/blog/2015/10/5/your-load-generator-is-probably-lying-to-you-take-the-red-pi.html)

------
benbristow
Gotta go fast.

~~~
mangeletti
Gotta go faster.

~~~
Fluid_Mechanics
[Air Horn]

------
yahyaheee
This looks awesome - exactly what Python needs right now.

------
magicbuzz
I honestly don't understand why you would do this when nginx with LuaJIT is so
fast. I think Python is great, but nginx is so robust and widely used, and
LuaJIT is so impressively quick.

~~~
petre
Probably because even though Lua is a super nice language, writing code in a
config file is not that much fun? App servers spitting out HTML written in the
language of your choice are so much more flexible than developing on a single
platform/stack such as Apache + mod_php|mod_perl, or OpenResty. Of course,
nothing stops you from proxying them through Nginx and serving the static
content directly with Nginx in production.

