
From Python to Go and Back Again - azth
https://docs.google.com/presentation/d/1LO_WI3N-3p2Wp9PDWyv5B6EGFZ8XTOTNJ7Hd40WOUHo/mobilepresent?pli=1#slide=id.g70b0035b2_1_168
======
hvs
I appreciate these war stories more than the "look at this great new thing
that will take over the world" posts (those have a place as well). We need
more war stories in this industry because everything has pros and cons, and
our job as software engineers is to be able to make decisions based on limited
information. Case studies are a great way to glean real-world experience from
others without having to implement every new technology yourself in order to
make high-level assessments of that technology.

This article shouldn't say to you: "See, Go is BAD, Python is GOOD!" It should
say, "That's an interesting case study. If I'm working on a project that
involves lots of sockets and concurrency, I'll want to take what they said
into account when I'm making technology decisions."

~~~
sethammons
I should reach out to our team that took python/twisted dealing with sockets
and lots of concurrency and ported to Go and see if they would put together a
similar presentation. Our case is a bit different, but we saw over 130x
improvement in throughput going to Go. While they were in there, they
increased monitoring, stability, and maintainability. More case studies to
help others make informed choices. Sending that email now :)

[edit: grammar]

~~~
windlep
I should note, we don't care about throughput for the most part. Our
constraint is purely the memory use of holding open the connections. The aim
is to hold as many connections as possible within 10-20% of the machine's RAM,
and not exceed it. As such, we need to be careful about resource usage and
spikes.

Goroutines feel cheap, but if you're holding 140k connections, and just 20k of
them do something that spins up a goroutine each... you can easily exceed the
memory constraint. As such, we had to put goroutine pools in place, careful
select statements around them from connections to ensure we didn't overwhelm
external resources, etc. It was a huge pain. It has been drastically easier to
control resource usage with these constraints under python/twisted.

YMMV, of course, this is just our experience. Part of the reason for putting
it out there is that there are _already_ many people who have talked/blogged
about going from Python -> Go. I thought maybe the world could handle just one
story about going the other direction.

~~~
Jabbles
Typically if you wish to limit the number of goroutines you would spawn N
workers and have them read from a single channel. If 20k of your incoming
connections want to do something they send on the channel, without spawning a
goroutine themselves.

Did you try something like that?

~~~
windlep
Yep, this is what I meant by 'goroutine pools'. The select statements were on
the sending side to ensure if the feed channel was full we wouldn't retain too
much additional state. It works, but at that point it's starting to look like
an async event-loop with a thread-pool....
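
The shape being described — a fixed worker pool draining a bounded channel, with a `select` on the sending side that sheds load when the channel is full — might be sketched like this (a minimal illustration; the function name, counts, and buffer size are mine, not from the talk):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runPool drains a bounded feed channel with a fixed pool of workers.
// Sends use select with a default branch, so when the pool falls
// behind, jobs are dropped instead of buffered as extra state.
func runPool(nWorkers, nJobs, buf int) (processed, dropped int) {
	jobs := make(chan int, buf)
	var done int64
	var wg sync.WaitGroup

	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				atomic.AddInt64(&done, 1) // stand-in for real work
			}
		}()
	}

	for i := 0; i < nJobs; i++ {
		select {
		case jobs <- i: // pool has capacity
		default: // feed channel full: shed load
			dropped++
		}
	}
	close(jobs)
	wg.Wait()
	return int(atomic.LoadInt64(&done)), dropped
}

func main() {
	p, d := runPool(4, 100000, 8)
	fmt.Println(p+d == 100000) // prints true: every job is processed or dropped
}
```

The `default` branch is what keeps senders from piling up state: when the pool falls behind, work is dropped (or counted and handled some other way) rather than buffered indefinitely.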

------
latch
Most people who rewrote their apps from X to Go and saw improved performance
and readability benefited much more from the rewrite than they did from Go. If
anything, the fact that Go has a relatively weak ecosystem means that they had
to write from scratch a lot of things they were getting for free in X. But
because in X it was a library, they only used 5% of the features, yet paid a
high performance cost and had a complex API to work with.

Go's a good language for some things. But it does nothing special or
significant to close the massive productivity gap between dynamic and static
languages. Yes, it's terse compared to many other static languages and it has
stuff like implicit interfaces, but those are superficial (though nice) things
when it comes to what and how you do things in dynamic land. Go might actually
be a step back due to its poor type system and poor reflection capabilities.

I think Go's great for a seemingly new breed of "infrastructure" systems which
is becoming more important due to how systems are starting to be designed
(services hosted in the cloud). It's great for building CLIs which don't
require your customers to install anything else. And it's good for services /
apps that need to share memory between threads (which, in my experience, is
where dynamic languages really start to fail).

But for a traditional web app / service? It's horrible. At least as horrible
as most static languages. It sucks for talking to databases (more than almost
any other language I've used). It sucks for dealing with user-input. Like most
static languages, the stuff you need to do to handle a request, which has
essentially 1ms of life, is cumbersome, error prone, slow, inflexible and
difficult to test.

~~~
themartorana
This is patently absurd. It's mind boggling that people still hop on here and
declare how something is "amazing" or "horrible" for a particular problem-set
as fact.

You want my anecdote? Go is brilliant for web services. We've decreased server
costs significantly while decreasing response times by orders of magnitude for
write-heavy APIs. Concurrency primitives that do bleed into parallelism have
made a mockery of interpreted dynamic languages.

But don't believe me. Ask Cloudflare, or Google, or Dropbox, or any other
number of companies how horrible Go has been.

But just for shits and grins, I'll bet it'd take me moments to find people in
situations where Go didn't meet their needs or domain requirements.

Please stop with the absolutes. They're absolutely ridiculous.

~~~
gozo
What is the point of your comment? You're doing the same thing you're telling
people not to do. You're not even disagreeing with the previous post, which
essentially states the same things you do. The only thing your comment tells
me is that the developers you work with are far more important than the
language itself.

------
mratzloff
There are people who have been working in Go for years, successfully, but who
don't post comments with the same frequency and dogged determination as the
middlebrow dismissers.

Both Python and Go are fine. They both have their strengths and weaknesses. I
personally wouldn't write a web app in Go (at least, anything beyond the most
basic admin interface). I also personally wouldn't write a very large and
complex Python system given the huge unit testing burden necessary to ensure
safe refactoring down the road.

The biggest reason I like Go is because it makes it really hard for engineers
to create huge, complex abstractions. Engineers (and especially less
experienced engineers) just love them some abstractions. In my experience,
most abstractions aren't justifiable. The net effect is that it usually makes
their software harder to learn, harder to debug, inflexible (ironically), and
late for whatever deadline they were supposed to hit. You can't write Java
enterprise software in Go, and I really appreciate that.

~~~
rattray
Do you have any increased optimism about the refactorability of python given
the recent addition of type annotations?

~~~
mratzloff
Definitely, but given the relative recency, my comment still stands.
Ultimately, all libraries in use in a given application must support this as
well for it to be truly comprehensive.

As a side note, sometimes you do need that performance gain, without wanting
to resort to C or C++. I hope to see Python make some gains there with the
addition of the type information.

------
windlep
I have to note this, because I think it deserves quite a bit more attention.

SSL is extremely expensive on RAM (perhaps the implementations have optimized
for throughput over RAM). I have yet to benchmark any SSL implementation in
any language, with any binding, that can use less than 20kb per SSL
connection. I mentioned in my talk here that SSL is very expensive; here's my
benchmark suite that others may add to:
[https://github.com/bbangert/ssl-ram-testing/](https://github.com/bbangert/ssl-ram-testing/)

I have implementations in several languages, so far both Go and Python 3.4 can
get as low as the 20kb cited. If you can get your per-connection state below
20kb, then merely adding SSL means _doubling or worse_ your RAM requirements,
which is huge.

I appreciate that everyone loves obsessing on the language wars, but the SSL
RAM overhead affects us regardless of language. I covered that in one of the
slides near the end, it'd be great to see some movement on reducing the RAM
footprint here.

~~~
ssfak
Doesn't a TLS terminator proxy solve this? E.g. I usually put my application
services behind HTTPS-enabled nginx and it works wonderfully.

~~~
windlep
Nope, so, the goal here is to reduce how many machines (each with their own
RAM limits) are used. The task is holding open bidirectional SSL wrapped long-
lived websocket connections. They're held open for hours at a time, since we
need to send notifications when we get them.

Every connection has a base cost of the TCP kernel send/recv buffer, which in
our case we dropped a bit to 4kb each. So that's still 8kb per connection
right there. If we terminate the SSL on a separate machine from where we
handle the connection, then it means we'll be using 8kb more memory per
connection. Probably even greater because nginx has its own send/recv buffers
for data.

I'm sure our use-case is a unique one; most people care about raw throughput,
so the majority of SSL optimization has focused on lowering CPU use under high
load rather than memory use under massive numbers of connections.

~~~
ash
What is your (unique) use-case about? What service do you provide to your
users?

------
ntrepid8
This echoes some of my own experiences. Python (even CPython) waits on the
network just as fast as Go (or anything else).

I've had good luck writing applications in Python, then profiling them and
implementing critical sections that are CPU bound in C as modules. Any
sections that are memory hogs can be converted to stream processors.

Lately I've started implementing the modules with Rust and the results are
promising. It seems like a nice balance of developer productivity and
application performance.

------
stock_toaster
The claim that pypy uses less memory than Go seems...rather extraordinary.

I worked with a fairly complex http api app that ran as a rather svelte wsgi
framework under gunicorn, and we saw at least a 10x increase in memory usage
over cpython when we switched to pypy, once the jit was fully warmed up
(memory usage seemed to hit steady state after about an hour). pypy has also
historically (in my experience) been a bit more "lazy" than cpython about
GCing individual objects once they fall out of scope.

My general rule of thumb for pypy has historically been that you trade memory
(requires more of it) for speed (faster) when compared against cpython.

Maybe the reimplementation itself was just far more efficient? Hopefully the
talk itself clarified that point. I would be very interested in hearing more
about that particular aspect.

~~~
dbaupp
I don't know the details, but the memory difference was almost certainly due
to the different approaches to concurrency. Python coroutines just need to
save their exact stack frame while a Go goroutine will spawn a "massive" stack
that likely has much more space than needed. The first case might only need
tens or hundreds of bytes for a dozen local variables, while the latter case
is a fixed overhead of several KB (8 previously, but 2 with Go 1.4 according
to the slides).

The JIT memory is constant at runtime (proportional(-ish) to the amount of
code, which is fixed) while it is desirable to have the number of coroutines
be as large as possible.
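
A rough way to observe the fixed per-goroutine overhead described above (a back-of-envelope sketch of mine; exact numbers vary by Go version, platform, and allocator slack):

```go
package main

import (
	"fmt"
	"runtime"
)

// perGoroutineBytes parks n goroutines and reports how much memory
// the runtime obtained from the OS per goroutine. It is approximate
// (GC timing and allocator slack add noise), but at large n the
// fixed stack cost dominates.
func perGoroutineBytes(n int) uint64 {
	block := make(chan struct{}) // never closed: goroutines park forever
	ready := make(chan struct{})

	runtime.GC()
	var before runtime.MemStats
	runtime.ReadMemStats(&before)

	for i := 0; i < n; i++ {
		go func() {
			ready <- struct{}{}
			<-block
		}()
	}
	for i := 0; i < n; i++ {
		<-ready // wait until every goroutine is actually running
	}

	var after runtime.MemStats
	runtime.ReadMemStats(&after)
	return (after.Sys - before.Sys) / uint64(n)
}

func main() {
	fmt.Printf("~%d bytes per goroutine\n", perGoroutineBytes(20000))
}
```

On recent Go versions this typically lands in the low single-digit KB per goroutine, consistent with the 2 KB initial stack plus runtime bookkeeping; a Python coroutine's saved frame can be far smaller.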

~~~
DasIch
The memory used by the JIT will at some point remain constant but before that
happens, it will grow as the JIT considers more and more traces to be worthy
of compilation.

It can take a very long time until the memory consumed by the JIT actually
remains constant. If you do continuous integration and deploy several times
per day, your application might never reach that point.

~~~
dbaupp
The fact that it may be less than the long-term result isn't so relevant:
there's an upper bound, so the memory use of the JIT is O(1) with respect to
the number of coroutines.

------
zem
i've seen a lot of these posts ending along the lines of "it's time for rust".
two languages that are always conspicuous by their absence are D and ocaml.

D in particular seems like it would be the logical upgrade path from python or
ruby. it has a comfortably familiar C lineage, supports a variety of
programming paradigms, and has good concurrency support. i wonder why people
don't at least give it a look. (personal experience - i tried to use it twice,
several years ago, and gave up because the tooling was bad, but from what i've
heard that's very much improved today)

~~~
flogic
I've tried a couple of times to play with Ocaml. It seems like something that
should be great, but it just falls short. Some of that is tooling that isn't
fully baked. Another issue is the syntax. The weird split between the
interface description file and the code file. The lack of a unified DB
interface. The lack of proper Windows support.

~~~
zem
i enjoy using ocaml a lot, but the lack of a good orm and the bad windows
support are indeed very painful. the tooling used to be bad, but at least
under linux i'm pretty happy with it these days for my small hobby projects.
it's one of those languages that i'd love to be able to use at work; i miss
the type system when i'm doing c++.

~~~
mercurial
Having worked with and without ORMs, I'm not sure what a "good ORM" is. On the
other hand, a somewhat type-safe generic SQL query builder would be very nice
to have.

~~~
zem
the other end of the process is pretty useful too - unmarshalling raw sql
results into ocaml objects or records, converting foreign keys into
references, etc.

------
0x434D53
Go is one of my working languages. Like with every other language (Python,
OCaml, C#, some Java, Swift) I have a love-hate relationship with Go.

What I agree on with the presentation: Concurrency using goroutines and
channels is f... hard beyond very primitive scenarios. Even fork-join isn't
that easy. The lack of expressiveness also hurts: it's nearly impossible to
build higher-level abstractions above channels/goroutines. You always have to
do the bookkeeping of your goroutines yourself.

I also agree on the error-handling problems: it's often hard to locate errors.
It requires a great deal of discipline from the programmers to achieve some
ability to locate errors. No, I don't want the Java/C# 'I throw exceptions
everywhere' style back, but Go is the other extreme. Something like a more
lightweight panic wouldn't be bad.

What I can't agree on:

That you cannot mock without interfaces in Go is typically not a problem:
there is no real encapsulation (_, ., no constructors), so in many cases you
can just instantiate your structs as you need them. The classic case for
mocking - time - is problematic as in every language. IO is typically behind
the various io.Reader/Writer... interfaces: no problems there.

The criticism about memory consumption I don't get: every system I saw ported
to Go from Java, Ruby or Python had a much lower memory footprint than before.
And Go typically lets you optimize allocation quite well when needed.

~~~
fauigerzigerk
I agree that error handling in Go is a headache. But since we are forced to
handle every single error where it occurs, we can at least make the best of it
and add context information before returning it.

I'm using a small utility library to wrap the original error and add function
name, line, file name and optionally a descriptive message that explains what
failed.

~~~
0x434D53
You are not forced to handle errors. Or do you check for the errors returned
by fmt.Print? I guess not.

This was my point: in many areas Go forces the developer to do the right thing
(no unnecessary imports, gofmt...) and does not rely on developer discipline.
But when it comes to error handling, it does.

What I would wish for is some extended error handling supported by the
compiler. I don't want a stack trace, but the compiler could easily record,
for example, the line number where the error was returned.

------
Animats
What's unusual is that this project uses PyPy in production. PyPy has been a
long time coming. Until recently, Python was defined by CPython, which is a
naive interpreter. As this article points out, performance is roughly an order
of magnitude better with PyPy. Now PyPy has to be taken as seriously as
CPython, if not more seriously. In a few years we may look at CPython as the
"toy" implementation.

------
falcolas
I'll be interested in coming back to Python when it isn't a headache to deploy
into production. I'm tired of an install requiring a GCC compiler on the
target node. I'm also tired of having to work around the language and
ecosystem to avoid dependency hell.

~~~
thristian
The way I deploy Python apps at $EMPLOYER:

\- CI system detects a commit and checks out the latest code

\- CI system makes a virtualenv and sets up the project and its dependencies
into it with "pip install --editable path/to/checkout"

\- CI system runs tests, computes coverage, etc.

\- CI system makes an output directory and populates it with "pip wheel
--wheel-dir path/to/output path/to/checkout"

\- Deployment system downloads wheels to a temporary location

\- Deployment system makes a virtualenv in the right location

\- Deployment system populates virtualenv with "pip install --no-deps
path/to/temp/location/*.whl"

The target node only needs a compatible build of python and the virtualenv
package installed; it doesn't need a compiler and only needs a network
connection if you want to transfer wheel files that way.

~~~
falcolas
The way I deploy Go apps at $EMPLOYER2:

\- go get

\- go test

\- go build

\- copy to target

It's possible with Python, it's easier with Go. It's a place where we could
use a lot of progress.

~~~
nulltype
It seems weird that you can't easily package python into an executable without
Docker.

~~~
falcolas
You can with various levels of success with a few "freeze" programs. They
basically bundle up the entire environment into an executable, so the
executables are stupidly large (more-or-less the size of your /usr/lib/python
directory plus the python binaries), but they _mostly_ work.

~~~
nulltype
I've done it before, but it was kind of a pain and I got the impression nobody
else used that stuff. I wonder why it's not more popular/easy.

------
hyperbovine
This page displays exactly nothing with JavaScript disabled. I realize that no
js makes me something of a Luddite, but there are solid reasons for turning it
off, particularly on mobile. Is it really too much to ask of google that the
text of a presentation in some way live inside HTML? Novel concept, I know.

~~~
gaius
Javascript is like Flash, it's great for the author of a page to show off, or
force an ad on you, but how does it actually benefit the end user? Not at all,
never has.

~~~
Cyph0n
Really now. JS can potentially improve UI and UX, providing users with a
smoother experience.

~~~
gaius
I notice you are careful to say "potentially" because it never actually
happens.

------
JulianMorrison
Huge interfaces are not how you do it. Little ones, each expressing a
conceptual unit of the functionality. Loggable. MessageConsumer. Stuff like
that, even if it only wraps one method. Make each major subsystem into an
object, connect to it through the mini-interfaces it provides, and test its
consumers by stubbing those mini-interfaces, not the whole object.
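
A sketch of that style (Logger here is illustrative, echoing the comment's own examples):

```go
package main

import "fmt"

// Small, single-purpose interfaces: consumers depend on these, not
// on the concrete subsystem behind them.
type Logger interface {
	Log(msg string)
}

// Mailer is a consumer wired up through a mini-interface.
type Mailer struct {
	log Logger
}

func (m *Mailer) Consume(msg string) error {
	m.log.Log("sending: " + msg)
	return nil
}

// stubLogger records calls; stubbing a one-method interface is far
// easier than mocking a whole subsystem object.
type stubLogger struct{ lines []string }

func (s *stubLogger) Log(msg string) { s.lines = append(s.lines, msg) }

func main() {
	stub := &stubLogger{}
	m := &Mailer{log: stub}
	m.Consume("hello")
	fmt.Println(stub.lines[0]) // prints "sending: hello"
}
```

Because Go interfaces are satisfied implicitly, the real subsystem never has to declare that it implements Logger; the stub and the production object are interchangeable at the consumer's boundary.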

------
tinco
I've been porting a project from Ruby to Go in search of making it a bit
lighter. The project is about 1000 lines now, so I'm not allowed to criticize
Go yet, but so far it's very nice. I picked the language up in just a few
hours as I went, and the whole project took just about a week and a half.

So as someone who is clearly in no position to be criticizing other projects
yet, isn't Heka exactly the sort of project you shouldn't do in Go? I say that
because I have the feeling you should use Go only for very concrete cases,
given its lack of proper abstractions.

I.e. writing a tool that receives log lines over HTTP, extracts metrics and
forwards those to StatsD? Perfect use case for Go. But writing a tool that
lets you plug in arbitrary frontends to forward to arbitrary backends?
Perhaps they got it to work nicely, but that sounds more like a case for a
more general language.

~~~
aikah
> Perfect use case for Go

Go shouldn't have "use cases". One should be able to do almost everything with
ease with a language built in 2008/9.

And unfortunately that is not the case. Go has excellent concurrency features,
but it is limited by dumb language design decisions which make it painful to
test and to write good reusable and composable libraries for.

I'd love to replace my entire stack with Go but I can't. Something I would
write in Ruby in 10 days takes 2 months in Go. And worse, I can't write the
code the exact way I want, which does piss me off. Give me some choice, not
random constraints. Aside from concurrency, I shouldn't have to ask myself
how I write something in a specific language. That is the goal of refactoring,
and it comes later.

I will personally invest in Crystal and dump Go as soon as it runs on
Windows. It has channels, and that's all I need.

~~~
Luminarys
> Something I would write in Ruby in 10 days takes 2 months in Go.

Have you looked into Elixir? It has Ruby-like syntax, but uses an actor model
for concurrency (it runs on the Erlang VM). For handling concurrent tasks it
tends to benchmark around Go's speed, but is much nicer to do things in.
While it is admittedly immature, the ecosystem still has a decent amount of
packages and the tooling isn't bad.

~~~
rubiquity
Being a Ruby to Go to Elixir convert myself, I can only second this. Elixir is
a really great language.

------
eugenekolo2
Does anybody have some comments on the closing statements on Google's 10KB
secret?

~~~
nulltype
I suppose there's the relevant blog post:
[https://www.imperialviolet.org/2010/06/25/overclocking-ssl.html](https://www.imperialviolet.org/2010/06/25/overclocking-ssl.html)

~~~
spoiler
Is it possible that it's just a typo, though?

~~~
eugenekolo2
Not sure what you mean, but the site linked above says they do it <10KB and
then a few paragraphs down says how they patched it to go down to a bit more
than 5KB. So, I don't think it's a typo.

------
CatDevURandom
I use and like both python and go. The presentation mirrors my own
experiences, though I haven't come back again yet.

~~~
kawera
I use and like both too; webapps in python (django/flask) and microservices in
go. My systems also use some large programs written in go but more like black
boxes (nsq, heka, influxdb).

------
sandGorgon
What is strange is that everyone seems to be using Python 2.7 in some form.
There is all this new work being done in Python 3 asyncio, but none of it is
being used... and this is from a guy who is pretty much a core developer in
several python projects.

I look at Ruby or Go... or even Java, and every new language feature has a
much more rapid adoption curve.

Is this a pretty solid statement that the entire Python 3 + asyncio path is a
dead end?

~~~
windlep
We used Python 2.7 for pypy compatibility. I really dig the new async/await
primitives added to Python 3.5 and think the asyncio path is the best path
forward. I hope the work to get all the latest asyncio stuff working on PyPy
picks up some steam.

~~~
sandGorgon
hey - thanks for replying.

if Pypy + asyncio was available, would you have built everything using that
stack? There have been all these benchmarks showing that asyncio is so much
slower than threads [1]. How would you compare that with Go?

[1] [http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/](http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/)

~~~
kintamanimatt
For certain things that they probably shouldn't be used for, not across the
board.

------
zeckalpha
Yes, it is time for Rust.

~~~
scott_karana
Yeah, Rust sounds like a _great_ Python replacement. /s

I do like Rust, but must we clutter _every_ Go-related thread with comments
about it? The zealousness is annoying.

~~~
cwyers
Well, the last slide said "time for Rust" and the presentation was from
Mozilla.

~~~
windlep
I meant that partly as a joke, I do happen to like Rust though. And in the
context of this technical problem, its memory control would make it an
attractive possibility (as would C/C++/etc). Lots of context missing as these
are merely slides, and I said quite a bit more than what is presented here.

------
agentgt
I have my own Go Heka story. I attempted to switch from Apache Flume to Heka,
mainly because Flume was taking vast amounts of memory. I was hoping Heka
would work, but I think there must be some problem with the Golang AMQP
drivers, as memory usage would just continue to grow. This might have been my
fault, as I had to alter the Heka AMQP drivers to do things with the AMQP
message headers.

The problem was pretty simple: pull event messages from AMQP and then shove
them into elastic search and file system. Heka and Flume were both sort of
overkill so I decided to write it in Rust. I got extremely far but alas there
were some issues with the Elastic Search Rust library that I'm still
resolving. Surprisingly the AMQP library worked pretty well.

I will vouch for the OP's point on error handling as Rust has a similar issue
to Golang but not as bad because of the awesome type system (still I hate to
admit but I really miss exceptions at times).

Anyway, to relate again to the OP, I went back to what I know best... boring
ass Java, and wrote the app in an hour or so. It took about the same memory as
Heka (surprising since it's Java) and appeared to be slightly faster than Heka
(elastic search indexing became the bottleneck for both, so take that with a
grain of salt).

Long story short.. I think the drivers and libraries really are the deal
breakers and not so much the languages themselves (with some minor exceptions
like the GIL).

------
Myrmornis
> Go is generally 50-100x faster than CPython

(slide 20)

I find that a weird sentence in an otherwise carefully written article. The
author is talking about writing software which does a lot of socket IO. So I
would expect the performance discussion to make some reference to this; I
assume what he's talking about in the quote is the behavior of pure CPU-bound
code but he doesn't discuss to what extent this is really relevant to his
project.

------
buro9
This is interesting, there are places I would still choose Python over Go. But
usually these are front-ends due to the richness of the l10n, i18n and
template options that Python has. On the backend I exclusively use Go and have
not seen route leak issues or some of the other things. The only thing that I
do feel is that some Go code is harder to mock and fully test effectively.

------
bipin_nag
My take on the points mentioned:

1\. Goroutine memory use - The post happens to be about 1.2 and 1.4, and I
started with 1.5.

2\. Debugging - yes, handling errors is a bit tedious. I have not written much
boilerplate to handle errors; I just copy error handlers a lot. Maybe that's
why similar error strings are so common.

3\. Goroutine leaks - This is scary. I have used goroutines with channels,
but properly so far. Yes, you can write code that leaks; this is something you
will have to check yourself.

4\. Testing - not done much.

Overall - I feel the author learned of some negative aspects of Go and turned
away. Some of them, like goroutine memory footprint, will improve with time
(e.g. 1.5 earned a lot of praise for GC improvements). In a lot of places the
author mentions possible improvements, e.g. godebug or the latest Go with SSL,
but did not try them much. So it may not be as relevant to new Go adopters.

------
bootload
_" Go experts still can't write leak-free code"_

Big statement, anyone confirm this?

~~~
dbaupp
The next two slides have screenshots of issue tracker searches for "goroutine
leak".

~~~
bootload
yeah I saw those, what about examples from independent people reading the
article on HN?

------
gtrubetskoy
I am surprised the GIL hasn't been mentioned once in the presentation or any
of the comments.

~~~
alextgordon
node.js has a GIL and nobody talks about that either. The GIL is not relevant
because Python is _so slow_ that running it in two threads is hardly an
improvement.

~~~
gtrubetskoy
But don't you need to run a bunch of node.js instances in production on a
multi-cpu system behind nginx or haproxy? Would it be necessary had it not
been for the GIL?

~~~
alextgordon
Most node.js code spends barely any time on the v8 thread. The node code
merely invokes routines in libuv, which has a thread pool for anything
computationally intensive.

------
unethical_ban
If I can get a nuanced presentation out of a slide deck with no audio, you've
built your slide deck incorrectly.

If I can't get a nuanced presentation out of a slide deck, then maybe an
article or corresponding speech is required!

------
mixmastamyk
Well, I've read that it takes ten years to build great software. So golang has
a while to go. Python, while not perfect, has the infrastructure already built.

------
giaosudau
btw, python with pypy is going to have a bright future.

------
hit8run
What color theme and IDE is the author using :D?

------
jbeja
Glad that some people see the light.

------
zobzu
"drank koolaid, used tool without knowing what it's good at, lost 2 years and
probably a million or two in engineering resources"

Gratz. Next time, use C. PyPy won't solve your problem. You're writing a low
latency, high performance, large throughput, super-optimized message router.
Even the libs you like in Cpython are.. freaking C. Do you know why? Do you
understand what happens underneath the language? Do you know why PyPy is
faster?

Don't get me wrong - and in fact, maybe you get me right:

Go rocks. Python rocks. It's just not the tool for that very job. Today, it's
still freaking C.

