

Gotchas, Irritants and Warts in Go Web Development - lbarrow
https://www.braintreepayments.com/braintrust/gotchas-irritants-and-warts-in-go-web-development

======
grey-area
Great post. I've done some experimentation with Go for web dev, and
encountered similar problems; it's such a delight to write code in that I'd
love to use it in production, but can't justify spending a lot of time
debugging the immature libraries and writing new ones, particularly not on the
clock for clients. I too ran into a problem with null types in another library
interfacing with PostgreSQL. The weak points I found were:

* Lack of an ORM (there are many, almost all incomplete, and lacking in some way)

* Immature or incomplete db interface libraries

* No db migrations (would love to see a simple sql-only solution here)

* Package management is simple and elegant but without explicit versioning, so forking is the end result to ensure stability

* No process pool management for running Go behind something like Apache or nginx.

This goagain tool looks interesting for the last point though:

<https://github.com/rcrowley/goagain>

and the routing I found pretty straightforward with something like
github.com/gorilla/mux, so there are solutions; they're just not always as
fully baked or as mature as you'd find in other ecosystems. One place Go
shines, though, is the great built-in web server, which really simplifies
getting up and running and testing, and which I see they've built on here.

Given how easy and pleasurable it is to work with, and the focus on
practicality rather than language features, I'm quite confident that Go will
reach its stated aim of becoming a popular server-side language quite soon.
For those new to the philosophy behind the language, I found this informative
- <http://talks.golang.org/2012/splash.article>

~~~
jcoby
I went on a framework binge over the past two weeks to see what the state of
web development was. I looked at a dozen or so frameworks including Go
frameworks. I came to the same conclusion you did.

Revel was the most promising Go web framework that I saw, though very immature.

Go ORMs aren't ORMs. They are object persistence frameworks (at best). None of
them really provide the "R" in ORM. Most of them basically do "SELECT * FROM
table WHERE id=12" and dump that into a struct. There goes the "M" as well.

The Go language itself is very nice to work with though. It's a great
language. Give it 2-3 more years and a truly usable ORM and web framework will
crop up.

~~~
NateDad
Many people coming from dynamic languages rely too heavily on ORMs. It's not
very hard to write some SQL that will do what you want it to do, and it'll
almost always be faster than an ORM. There is a little more setup, but it's
not really that hard unless you have a really huge data model. Also, many
people are moving away from relational databases for web platforms anyway, and
non-relational databases are a lot easier to write code against. Check out the
mgo package for running against MongoDB.

~~~
grey-area
I don't think people use ORMs because relations are hard, but because they are
boring, repetitive, and just complex enough to be tricky without being
interesting. So ORMs are used to take away some of that cognitive load when
you just want to say that Story has_many :commenters, through: :comments, or
something similar. It's simpler to express what you want, gets rid of
boilerplate joins, etc., and usually you can drop down to SQL if you need to
fine-tune a query, so why not?

Re relational versus non-relational: they're really suitable for different
kinds of data, and it's disingenuous to suggest that one is the future and the
other is the past.

~~~
tptacek
This; the moment I realized I was never going to choose to do CRUD web stuff
in Go was when I realized that this was the perspective Golang developers have
on ORMs.

~~~
mbreese
That's funny... knowing that ORMs aren't everywhere actually makes me _more_
likely to use Go. I still have Hibernate flashbacks.

~~~
lobster_johnson
Hibernate is not a good example of an ORM, frankly. (I still have horror
stories from both Hibernate and TopLink, which was the top Java ORM way back
when.)

Ruby's ActiveRecord is a much better choice. It has an excellent balance
between SQL and OO. It doesn't pretend that SQL doesn't exist; on the
contrary, it encourages SQL use, and merely maps tables to objects, adds a
bunch of useful features (data validation, change management, automatic joins,
declarative migrations) and gets out of your way most of the time.

For example:

    
    
        users = User.where("created_at > ?", Time.now - 1.year).
          order('name').limit(10)
    

Results are lazily loaded and composable, so you can do:

    
    
        users = User.where("created_at > ?", Time.now - 1.year)
        if (page = params[:page])
          users = users.offset((page.to_i - 1) * 10).limit(10)
        end
    

Joining is easy:

    
    
        users = User.joins(:accounts).
          where(accounts: {type: 'facebook'}).first
    

becomes something like:

    
    
        select users.*
        from users
        join accounts on accounts.id = users.account_id
        where accounts.type = 'facebook'
        limit 1

------
JulienSchmidt
The timeout issue was fixed in Go 1.0.3. The problem wasn't that the
connections timed out, but that the database/sql package wasn't able to handle
this correctly. <http://code.google.com/p/go/source/detail?r=b397807815a6>

Please keep in mind that the database/sql package is - like Go itself - still
very young. There is still a lot of work to do compared to mature libraries
like JDBC, but it will improve with every release.

~~~
pico303
Yeah, like the fact that two queries can't run simultaneously in a single
transaction; all queries in a single transaction must be run sequentially.
What worries me, though, is that this is "by design" and seen as completely
proper by the Go devs.

That's the reason I stopped using Go: they have guys who don't understand
databases writing their database code.

------
pkulak
Seems like most issues (except the nil problem, which is explained very well
by chimeracoder) are not really with the language itself, but just with its
ecosystem, which naturally is nowhere near something like Java or Ruby at this
point. I agree, and while I love Go, I would probably develop a new web app in
Sinatra. Who knows if that will be the case next year, though.

~~~
rartichoke
Yeah, and this is why I don't think it will ever become semi-mainstream. I
personally use Express with Node and it seems a good amount of people use
Sinatra/Flask as well.

It's much more than just that though. If you scanned Express feature by
feature you could implement most of them in Go. I would imagine someone
experienced in JS and Go could port the entire thing over in a day.

It's just then what? Now you have:

1. Similar or worse performance.

2. Way fewer useful libs to leverage.

3. Having to write 2 languages instead of 1 (if you happen to already use
Node).

4. Dealing with way less mature libs for crucial components that you
definitely don't want to be writing yourself.

There's basically no gain. Deploying single binaries is great, but a "write
once, use nearly forever" build script makes deployment a snap with any
non-compiled language.

Testing support is amazing in JS too, and debugging is light years ahead of
Go. Using gdb is just archaic compared to using node-inspector. I'm sure Ruby
and Python have equally amazing testing/debugging support too.

~~~
tptacek
Yes because Javascript and Golang are basically the same language with
basically similar approaches to concurrency, similar deployment
characteristics, similar toolchains, and comparable performance in most
situations. Is what you're saying, right?

~~~
rartichoke
No, they have much different ways of dealing with concurrency. Deployment is a
solved problem in most modern languages.

I think you may have misunderstood my post?

I also spent a pretty decent amount of time performing real-world benchmarks
for both languages by writing applications in both and then running various
performance metrics. Performance for general web apps with real-world data
goes back and forth depending on what you're doing.

My point was that Go doesn't offer nearly enough pros for it to be worth switching to.

~~~
tptacek
If by "solved problem" you mean "if you accept the problem of keeping an up-
to-date deployment environment on every machine with up-to-date patches",
compared with "build this binary copy it over and run it", yes, Node and
Golang deployment is comparable.

I think you're mostly just handwaving.

~~~
rartichoke
It is a solved problem. You can't just copy over a folder (or file) and call
it a day, but it doesn't take that much to get Node deployed by simply typing
1 command on 1 machine, and this is the same thing you would end up doing for
Go too.

If you've ever dealt with deploying to more than 1 machine, you'd realize
that doing it by hand is a pretty crazy idea. The first thing you would do is
create a solution that allows pretty much hands-free deployment, and those
solutions exist for every modern language. Heck, it's one of the first things
I did as a developer, even while deploying to 1 machine.

~~~
chrisbroadfoot
> this is the same thing you would end up doing for Go too.

No, you'd just need to copy over one binary. Go programs compile to a system
binary.

~~~
rartichoke
Ok, so you're going to be scp'ing your binary over manually every time? Do you
manually do your other build tasks too?

No, you would have a build script that minifies/concats assets, runs tests,
maybe generates docs, and then finally deploys using whatever method you
happen to be using if everything passes.

This might be a git deploy, or scp'ing files over to some server.

In either case you're never copying 1 file over because that is abstracted
away from you by your build script. In return you type 1 command and let your
build script do the dirty work for you.

Typing this one command is the same if you're using Go or Node or any other
modern language. It doesn't really matter that I have to add a few extra
commands to my build script because these are things I only have to do once.

------
Jabbles
I'm not an expert in databases, but a boolean that can take 3 values smells
wrong to me. Is this regarded as best practice?

~~~
jcoby
NULL is one of the most powerful features of SQL. It simply means a lack of
data. This means it is neither equal nor not equal to anything else (including
NULL). NULL is not a value; it's a state of non-existence.

So, for example, suppose you were providing a survey with an optional yes/no
question. NULL would mean "no answer", false would mean "no", and true would
mean "yes". Storing the "no answer" as a false would be incorrect, since they
did not answer the question.

It could also happen if you were adding a new column. Existing rows do not
have data and would deserve a NULL unless you had a deterministic way to fill
in a true or false value.

~~~
dragonwriter
> NULL is one of the most powerful features of SQL.

I'd argue that NULL is, from a logical perspective, the single most broken
feature of SQL.

> It simply means a lack of data.

The semantics of NULLs are less straightforward than that, and have a poor
relationship to how SQL actually treats them. Every table with one or more
nullable columns really should be a table with all the non-nullable columns,
plus an additional table for each combination of columns that would never be
missing together, each of which has a foreign key relationship back to the
first table.

That's for the simple case, where the semantics of missing data are always
consistent for any set of columns. In real-world databases there is often
more than one reason a given piece of data might be missing, and those
different reasons (because they are different classes of fact) each call, for
any given column or set of columns to which they apply, for another table
with a foreign key reference to the table containing only the mandatory
columns.

> So, for example, if you were providing a survey with an optional question
> with a yes/no answer. NULL would mean "no answer", false would mean "no",
> and true would mean "yes". Storing the "no answer" as a false would be
> incorrect since they did not answer the question.

Sure, storing it as one table with all the questions as columns and storing
the "no" answer when the answer was missing would be an error. If all the
questions aren't required for the survey to be valid, then -- from a logical
perspective -- the problem is presenting the whole thing as a single relation
in the first place. It's a set of relations that share a key (but not
necessarily all _values_ of the key).

~~~
tracker1
And how much overhead in logic, code, and frustration would that cause in
terms of development and support? Right now, I'm dealing with an
over-normalized database close to what you are describing, needing over 20
joins in a single query (not including actual sub-records) just to get a
complete record for display with a complete set of properties, where null
means "not there"...

~~~
dragonwriter
> And how much overhead in logic, code and frustration would that cause in
> terms of development and support..

Depends on the competencies of the people doing dev and support. Personally --
both as a developer and a technical user -- I've had more problems dealing
with situations where NULLs had ambiguous semantics, where the typical naive
use of nullable columns instead of normalization into logical units of data
that must all be present or absent together resulted in avoidable data
inconsistencies, etc., than I've ever had with overnormalized tables.

Joins for queries are a solve-once development problem; data inconsistencies
and ambiguities resulting from the problems with NULL are an ongoing problem.

------
chimeracoder
> Go does not allow many basic types, such as strings or booleans, to be nil.
> Instead, when a type is initialized without a value, it defaults to the
> “zero value” for that type. This is frequently useful, but complicates
> database interactions, where null values are common.

This sounds like they weren't defining their types properly.

If your value can potentially be null, it should be a pointer to the type, not
the type itself. A string can't be null, but a pointer to a string can.

(In fact, there's no magic going on here - 'nil' is simply the zero value of a
pointer. So you always get the zero value - you just need to choose the type
that has the zero value you want... which is, in this case, a pointer, not a
value).

As explained well in this thread[0], this is the most accurate representation
of the data itself. You _could_ create your own type that automatically
decodes all null values to whatever the zero value is (empty string, etc.),
but then you lose that information.

Yes, this forces you to do a check for the null value before using the data
for the first time (or to invent your own monad for abstracting this), but at
a high level, that's what you have to do in _every_ language.

[0] [https://groups.google.com/forum/?fromgroups#!topic/golang-
nu...](https://groups.google.com/forum/?fromgroups#!topic/golang-
nuts/JOFWAqrTbUs)

~~~
pcwalton
Using a pointer adds an extra heap allocation, though. The right thing would
be to use a Nullable<T> type and define a custom marshaller once and for
all... but Go doesn't have generics, so you can't do that.

~~~
chimeracoder
> Using a pointer adds an extra heap allocation, though.

If you want to dereference it immediately (as they seem to want to in the
post), this isn't really going to affect you. You probably want to pass around
a pointer, anyway, so that you're not copying values over each time.

> The right thing would be to use a Nullable<T> type and define a custom
> marshaller once and for all... but Go doesn't have generics, so you can't do
> that.

Sure you can - that's literally what the NullBool, etc. types do.

A pointer to a string IS a 'Nullable<String>' - what they need to do is
define a way to unwrap it cleanly, which is easy.

~~~
pcwalton
> If you want to dereference it immediately (as they seem to want to in the
> post), this isn't really going to affect you. You probably want to pass
> around a pointer, anyway, so that you're not copying values over each time.

Sure, I'm not saying that the extra allocation will always _matter_. In most
cases it won't. My point is just that this type system workaround does cost
some performance. For example, a heap-allocated struct that contains two
nullable ints using the pointer trick has 3 heap allocations, not one.

> Sure you can - that's literally what the NullBool, etc. types do

But that has to be done for each type. If you define a custom type Foo and
want a nullable version, you have to write the NullFoo boilerplate yourself.
This is what generics are for.

~~~
chimeracoder
> If you define a custom type Foo and want a nullable version, you have to
> write the NullFoo boilerplate yourself. This is what generics are for.

I don't want to descend down the rabbit hole of 'Go doesn't have generics',
but as I said above, in this case, pointers do exactly what is needed.

~~~
kevingadd
A pointer to T is not the same as Nullable<T>. There are many important
differences.

The expressible values for a Nullable<T> are, in theory, the following:

A valid instance of T, or Null

The expressible values for a pointer-to-T are, in theory, the following:

A null pointer (pointer-to-0)

A pointer to a valid instance of T

A pointer to a previously valid, but now invalid because it was freed,
instance of T

A pointer to an arbitrary location in memory

In systems programming (the space Go is presumably designed to be most useful
for), the distinction between valid and invalid data is pretty important, so
it's a little lazy to say 'just use pointers' when there are examples out
there of safer, more efficient alternatives.

~~~
eurleif
>The expressible values for a pointer-to-T are, in theory, the following: a
null pointer (pointer-to-0); a pointer to a valid instance of T; a pointer to
a previously valid, but now invalid because it was freed, instance of T; a
pointer to an arbitrary location in memory.

In C. Go has garbage collection, and no pointer arithmetic, so that's not the
case in Go.

~~~
kevingadd
If you think a garbage collector and the lack of pointer arithmetic protect
you from heap corruption bugs and developers deciding they're smart enough to
manually manage memory for some extra speed, I've got some bad news for you.
;)

~~~
eurleif
How do you expect them to do that when the language doesn't allow it? Can you
show an example of Go code which creates an invalid pointer?

~~~
Symmetry
There's a library called "unsafe" explicitly for this. I think it's
reasonable to expect people to generally not use it, though.

<http://golang.org/pkg/unsafe/>

~~~
eurleif
As far as I know, it can only create a pointer of type A that actually points
to a value of type B, not reference unallocated/deallocated memory.
Nullable<T> could do that too if the language allowed it.

~~~
lagom
It's definitely possible to access unallocated memory, e.g.:

    
    
        // p points to address 1000
        p := (*int)(unsafe.Pointer(uintptr(1000)))
    

Of course, this is why use of the unsafe package is heavily discouraged.

------
bfrog
I basically encountered the same annoyances, primarily with the whole
NullBool, NullString business.

Go is pretty good at most things; dealing with SQL so far has been a bit
annoying, however.

The goroutine scheduler also needs some love still. I feel like the garbage
collector and global heap are always a bad idea, but Go did it anyway.
Shouldn't they have learned from Java that global heaps and concurrency don't
mix well?

~~~
JulienSchmidt
There will be a new scheduler and an improved GC in Go 1.1, which will be
released soon.

------
trungonnews
You should try out Play2 on Scala if Go doesn't work for you.

------
TallboyOne
I'm 100% a Ruby fan, but it seems crazy that they went from Go to Ruby if
ultimate performance isn't their goal. Wouldn't a statically typed language
be worlds faster?

~~~
pkulak
Go is pretty slow right now. I wrote a prime number finder in Go and Ruby
1.9, and Ruby smoked it. Granted, that's probably mostly due to all the math
in Ruby being done in C, but still, you can't just say that Go is faster
because it's compiled ahead of time.

~~~
NateDad
Honestly, my guess is that you were doing something wrong in the Go code. Go
is definitely faster than Ruby, even on a single core machine.
[http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te...](http://benchmarksgame.alioth.debian.org/u64/benchmark.php?test=all&lang=go&lang2=yarv)

~~~
pkulak
EDIT: haha, thanks so much, guys! Stupid mistake. Go is _significantly_ faster
if written properly.

I'd love to know what I screwed up. The results surprised me:

<https://gist.github.com/pkulak/99d2d5a5968b5fc03754>

<https://gist.github.com/pkulak/909d882614e0781e4525>

    
    
      bender:Desktop phil$ time go run primes.go
      Found them! 78702
    
      real  0m21.349s
      user  0m21.304s
      sys 0m0.033s
      bender:Desktop phil$ ruby -v
      ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-darwin11.4.0]
      bender:Desktop phil$ time ruby primes.rb
      Found them! 78702
    
      real  0m7.656s
      user  0m7.651s
      sys 0m0.005s
      bender:Desktop phil$ time go build primes.go
    
      real  0m0.269s
      user  0m0.231s
      sys 0m0.033s

~~~
hndc
You're using integer arithmetic in Ruby and floating-point arithmetic in Go.
Try replacing

    
    
      math.Mod(float64(i), float64(j))
    

with

    
    
      i%j
    

My results:

    
    
      $ time go run primes.go
      Found them! 78702
    
      real	0m0.835s
      user	0m0.787s
      sys	0m0.039s
    
      $ time ruby primes.rb
      Found them! 78702
    
      real	0m21.013s
      user	0m21.005s
      sys	0m0.016s

~~~
agentS
It's worth noting that the time on the Go side includes the time to run the
compiler and linker, since you're using `go run` instead of `go build`. Not
saying that's bad, just something to keep in mind when benchmark numbers are
in the seconds range.

(I understand that you did it because the original poster did.)

