Go’s Features of Last Resort (arp242.net)
120 points by kristianp 22 days ago | 77 comments



As a very opinionated user of Go for ~6 years now...

interface{} is almost always a code smell. I can judge a book by its cover with a grep and line count for interface{} and reflect: if it's more than 20~30 for just about any small-to-medium code base, I dig deeper and look at why. Usually it's people being lazy and refusing to accept what is my twist on "idiomatic Go": a little bit of copying is better than a whole lot of abstraction[0]. Also, the devs who tell me interface{} and type switches make it polymorphic are the ones who will always complain yet never leave the Go ecosystem. Hopefully generics in Go 2 solve this for everyone.

panic is great when you really want to crash. There are times to do this. I have only used it once in a batch job runner.

init() is a sin, ESPECIALLY when you combine init() with `.` (dot) imports. Pulling in packages for their magic side effects sucks. So does the refusal of some library maintainers to acknowledge that init() can be poisonous... [1]

[0] https://go-proverbs.github.io/

[1] https://github.com/lib/pq/pull/455


Two valid use cases I've found for interface{}:

Handling large arbitrary JSON objects where I need to operate on their keys but don't need to work with their values by unmarshaling to map[string]interface{}. The alternative would likely be code generation of struct definitions with JSON bindings, which is likely preferable if you already have code generation in your project, but can be an overhead in smaller ones.

It can also be used to create your own syntactic wrappers around API's which consume interface{} such as encoding/json and database/sql:

  Unmarshal(r *http.Response, dst interface{}) ([]byte, error)

and

  BulkInsert(tableName string, columns []string, values [][]interface{}) (time.Duration, error)


> Handling large arbitrary JSON objects where I need to operate on their keys but don't need to work with their values by unmarshaling to map[string]interface{}.

If you just need the keys, you should unmarshal to `map[string]json.RawMessage` to avoid the overhead of parsing the values into objects only to throw them away without ever using them.


Handling JSON is the one place where I’ve used interface{} and not felt dirty. Sometimes you really do need to model “anything” or “whatever”, especially when passing around arbitrary data without making assumptions about its schema.


I use it for shutdown channels. It doesn't matter what you're putting in the channel if you're never putting anything in the channel.


Using chan struct{} is more efficient, and guarantees there is nothing being put into the channel. That's why you see it used on the Context type's Done method, for example.

https://golang.org/pkg/context/#Context


I agree with the assessment of interface{}. However, the proverb you cite does not exist. It is: "A little copying is better than a little dependency.", which does not apply here.


That is why GP said it was their twist.


Even https://golang.org/pkg/sync/#Map requires interface{}. Any code that confines itself to raw slices and maps is probably using the wrong data structures for the problem.


Did you mean init combined with import underscore?


It's very interesting to me how bad design decisions in Go are accepted by the users with little protest. There are other things - the Go path, error handling, surprising conventions in a language with no "magic", etc.

Yes, it was created by behemoths of computer science (but more accurately in this case, dinosaurs), and they clearly have not been writing and debugging large production systems for decades now. That's why the whole thing is clearly missing the lessons learned from modern language design.


Well, they did port dl.google.com from a very large C++ codebase to Go. Do you have any apps you've worked on that are higher traffic than dl.google.com?


I didn't say Go is unusable - I said it's full of bad design decisions. Google has a LOT of people who are Go fanboys. It's their in-house language.

Terraform is also written in Go, and even though I refer to it to see how they manage a larger Go codebase, my question is - "so"? This is my opinion from writing Go, I have no problems pointing out what I don't like, even though at Google, I am sure, it would be unforgivable blasphemy.


That's not an argument. There are many websites that are higher traffic, yet written in slow languages like Ruby and Python.


Give me a fast language and I WILL find a way to achieve horrible performance.


Sure, Instagram is a giant Django app running on top of many Postgres databases. But I was pointing out that Go can be used for very large codebases running high-traffic sites.


It compiles to native code, of course it is fast - we are specifically discussing language features and developer experience.


The only bad design I see is the missing generics.

The path, modules, and error handlings are features to me, not drawbacks.


The Go path is going away; it's being replaced by modules (a feature that I actually like a lot, especially the vendoring).

The problem is, why did it take almost a decade for a modern language, at Google nonetheless, to get a proper dependency management feature?

To me it says that the Go vetting process at Google is seriously constipated, perhaps because people there just say "this does not look right, but who am I to question the decisions of Rob Pike and Ken Thompson?". And then we end up with these language design decisions from the 1970s.

And also, amazingly, error management in Go may have actually been made worse in 1.13: https://www.reddit.com/r/golang/comments/biexq0/go_113_xerro...


Google develops everything at #head of a monorepo, so their dependency management problems consist of “please rollback ASAP, your changelist broke the world”.


Struct tags in Go always seemed like a very bad and hacky way to add metadata to fields. I really hope that's something they'll look into in Go 2.


Currently I am facing an issue where I am unmarshalling json into a protobuf struct. Unfortunately the json tag names are a bit different than the protobuf struct field names. I would love to be able to use struct tags here, but to my knowledge protobuf definitions don't allow this, and I don't think I can put them in after the fact. So I am stuck loading the data into an interface{} and manually pulling out the fields.


One technique I use is to create a struct that mirrors the layout of the target struct, then just cast between them. So if you have a

  type Person struct {
    FirstName string `json:"first_name"`
  }
And you get json like `{"name": "Alice"}`, you can decode it something like this:

  var doc struct {
    FirstName string `json:"name"`
  }
  json.Unmarshal(data, &doc)
  person := Person(doc)
I have not looked at the assembly so I can't say whether this incurs extra overhead or not.


I thought of this, but I would have to do this for every single protobuf message. Also, anytime the protobuf changes, you would have to update the code.


Your protobuf messages are changing that much? My condolences.

ETA: Also, you already need to add code to every encode/decode point for the `map[string]interface{}` handling. :P


Is this for the protoc-generated code? Otherwise, could you not define different struct tags for the different serialization types?

  type Person struct {
      Name string `protobuf:"name" json:"given_name"`
  }


Yes it is for the protoc generated code. It would be nice if the protobuf spec allowed for something like tags as well. The main difference is the json coming in uses hyphens, whereas protobuf<->json only allows for underscore or camelCase field names.


If you use gogoproto, it includes an extension to the protobuf spec, to override json tag for the emitted Go. https://github.com/gogo/protobuf/blob/master/extensions.md

Use it like:

    import "github.com/gogo/protobuf/gogoproto/gogo.proto";

    ...

        Type myField = 1 [(gogoproto.jsontag) = "my-field"];


> Yes it is for the protoc generated code. It would be nice if the protobuf spec allowed for something like tags as well.

Have you seen Protobuf options [1]? They allow for a tag-like-behaviour, but are strongly typed. It's notably used by gogoproto [2] to allow specifying custom field types or names [3] when generating Go stubs.

[1] - https://developers.google.com/protocol-buffers/docs/proto#cu...

[2] - https://github.com/gogo/protobuf

[3] - https://godoc.org/github.com/gogo/protobuf/gogoproto


Thanks, I have looked at options. I think it could work, but I think it would require writing a custom marshal/unmarshal function for each protobuf struct, even if they are essentially the same for each one. I will have to take a closer look at the links though.


gogoproto worked for me, using the moretags extension, thanks for the links!


Code-gen with help of the 'template' library might be a way out of the code repetition.


I haven’t had any issue with them. I certainly like them better than Java annotations, which encourage way too much black box magic and induce too much code coupling. The fact that they’re a little awkward to use is good, nobody should have to figure out what sort of magic happens with a pile of metadata on a field.


How do Java annotations encourage that more than struct tags? If you can write metadata, you can write code that reflects on it. I don't see any structural difference other than IBM hasn't started writing enterprise WebSphere code in Go, yet.


Are struct tags meant to be similar to Java Attributes and C# Annotations? Seems like a fine pattern but needs more tooling.


They're much weaker than annotations and attributes. You can amend the type system with annotations (e.g. nullability detection, as well as compile time validation - check out micronaut to see an example of what they're doing with annotations).


I don't know about Java or C#, but it's a key-value pair on a struct field that you can get back with reflection.


I think the fallthrough keyword also belongs in this category. It's a surprisingly ugly feature of the Go language, and I recall an interview where two of the three language designers said its their least favorite part of the language.


>>> pass a map[string]string to json.Marshal()

This is my go-to speed hack when prototyping. It always works. You don't have to worry about deep nesting. And it frees you to design the data model as you prototype. It's also less punishing than reflecting on every node in a list.

And yes, golang / json (and go / node) interactions need to evolve better developer patterns and runtime performance ;)


Say you put a single type Foo in its own package to ensure that nothing ever — intentionally or unintentionally — touches its unexported (private) fields.

If "Dot Imports" are frowned upon, it means that your other code using that type is always going to have to refer to it as the stuttering "foo.Foo" rather than "Foo". That's not great.


Do that, then do a type alias in the package you would "like" for it to be imported from:

    mypkg/internal/foo/foo.go:
        type Foo struct { … }

    mypkg/mypkg.go:
        import "mypkg/internal/foo"
        
        type Foo = foo.Foo
Then, it can be imported from `mypkg` and used as `mypkg.Foo`.


Neat, you win at puzzle solving! I forget that the user-level type alias feature exists... maybe since it was introduced relatively late in the Go 1.x timeline.


> Say you put a single type Foo in its own package to ensure that nothing ever — intentionally or unintentionally — touches its unexported (private) fields.

This is not a good reason to move a type to its own package. Packages are for grouping related functionality.


Packages have to be used for access control as long as there’s no other way to do it. You’re committing to support uses of anything you make public.


What? Exporting a type or field is an opt-in decision, you don't need a special package per type to do it.

If the fear is that other types or functions in the same package can access the unexported fields, that's unavoidable, making a new package per type doesn't reduce the risk.


This is a symptom of another underlying issue with golang. It doesn't let you import structs or functions from other packages, you can only import the top level package name, and then have to refer to entries in the package by the package name. Compare to Java, Scala, Kotlin, etc. where you are able to import individual classes (and methods and functions in the case of the latter two).


Putting a type into a type-specific package like that is considered (IMHO at least) unidiomatic in the first place, so this isn't a situation which can happen in practice.


What's the counterargument to the expressed benefit?


It breaks the ability to Ctrl-F to find uses of the package foo. It’s surprising to other Go programmers.

If you’re working by yourself, go wild and do what you want. On teams, I’m much more likely to need to search through somebody else’s file to understand it or find something.

It also means that additions to foo might conflict with definitions in your package.

The reason this feature exists is to break circular dependencies in unit tests. So in foo_test, you might

    import (
        . "foo"
        "fooutil"
    )


This makes sense but it's more like general advice for why typical packages should not be import-dotted. I was talking about a very specific kind of low-pollution package that will export essentially nothing bar a single type (maybe a NewFoo constructor too). The package isn't a bucket, it's a barrier.

(I hope people move beyond ctrl-f/grepping, to tooling that has semantic understanding of definitions/references. Go go gopls & GoLand!)


If you already have the answer, why are you asking the question?

(You can’t find Foo just by looking at syntax at all, you would need to parse both the package where it is used and the package where it is defined. Using normal imports, you can get by with only parsing the call site.)


____

Exceptions are generally considered to not be worth the cost

____

Why is exception handling being added in Go 2 then? Sure, they're calling it check/handle instead of try/except(catch) because Go ought to be hip and different, but if it moves like a duck and quacks like a duck...


check() and handle() have rather different semantics; it's still just error returns, just with a bit of syntax sugar.

It's also just a proposal which received a lot of feedback/criticism, and eventually they decided to go with another try() proposal which ended up being abandoned after criticism. I'm not quite sure what the current status is here.

So nothing is "being added" in Go 2 thus far.


I am not very familiar with Go but panic() and recover() seem like neat features.


It's just exception throw/catch. The only difference to other languages is that you're not supposed to use it, as a matter of principle.


They're not meant to be used like that (and terminate the function, not the block), so the implementation can have more overhead than try/catch. What bothers me about it is that there's no syntax for it: recover as it is looks like a dirty hack.

I've been writing my first production software in go this year, and I must say I'm quite favorable towards its error handling now, ugly as it may look. Even during testing there haven't been any crashes except in the places where you can expect them (like writing x := y.(string) when y isn't a string).

For me, the only real downside in go: nil pointers in combination with the absence of non-nullable types.


This is my experience after 7 years of Go as well. Error handling is actually fine. People make too big a deal about the error check (and Rust people make too big a deal about the compiler not forcing you to handle every error). The substantial opportunities to improve error handling involve establishing a standard, typesafe interface for getting at the relevant, structured information in an error so callers can handle things properly; however this probably depends on generics and sum types and will probably look a lot like Rust’s result types (which seems great to me). In practice these are not where your bugs are. On the other hand, sum types would actually go a long way in reducing bugs (including the nil pointer problems you mention).


You're not supposed to use struct tags, type aliases, "interface {}" or reflection either... that's a lot of features Go developers "aren't supposed to use"... Why are they here then?


Frequent use of these patterns is a good smell test that one should reconsider design choices. Are there times when usage is justified? Yes. Should it be the first / default mechanism? No. Learn the rules first, and then it will be more clear when you should break them.

I have no problem with the general rule nor a built-in mechanism for breaking them.

Simple example: if I ever have a row lock I wrap the corresponding block in recover to ensure that I never have a deadlock. But I certainly don’t wrap every function in recover...


These features do not scale so one should take care to use these language features only when necessary. Also the OP is not Gospel of Go, merely an expressed opinion.


There's no problem with using any of those features; they just come with some drawbacks and in many cases they're not the best tool. But in some cases they are, and then by all means go ahead and use them. They exist for a reason.


It's not really, because there is no straightforward way to express something like:

  def f():
    bar()

    try:
      foo()
    except KeyError as exc:
      print(exc)
Calling recover() in a defer applies to the entire function, so you'll need to wrap foo() in an anonymous function. You also need more work to avoid "Pokemon exceptions" (gotta catch 'em all) and recover() from only the KeyError equivalent.


The only time I have used recover in production code is in the rollback path of a highly concurrent transaction manager.


try/throw/catch work at block level; panic/recover only operate at function level.


Rather, it should (not must) be used for exceptional cases, like if something happens and you have no idea how to deal with it.


I would add the sync package to this list. Maybe not the same extreme, but if you force yourself to use channels you’ll end up writing more maintainable code.


I would have said the opposite. Channels are great on occasion, but half the time I see them used in places where a simple mutex would be much cleaner and easier to read or where a mutex could be hidden from the user entirely but a channel is used instead and has to be exposed to the user. They each have their place. There are specific things in the sync package I see abused frequently, but I wouldn't call use of the sync package itself code smell.


empty interface{} and struct tags really shine when used as a library author, or authoring a bit of extremely reusable code that’s pervasive throughout a codebase.

Think things like JSON encoding (entirely reflection based), or writing a set of functions that can operate on something more generic. It's not scary and it's not really that slow; you just need to be very particular and choosy in your application of it.


I'm always wondering how .NET developers feel towards Go. Can anyone relate?


From my perspective it's missing a lot of features, and the culture of 'Go is perfect, features are bad!' is a real turn-off. C# is one of those multi-paradigm languages, so you can usually write things however you want, and I like that.

A lot of the features that do exist seem to have some strange quirks. Package/dependency management comes to mind, but maybe it's just unfamiliar to me.

Fundamentally, though, channels and goroutines are really nice.


It is a good replacement for C like programs, that is all.


And also Python like programs. And probably Node.js like programs. And really anything where you have lots of I/O to do.


I would rather use PyPy in Python's case.


I would not because I still have to deal with the language and tooling, even if the implementation is much faster.


Dealing with the language is exactly why I would rather advise to use PyPy, instead of completely change ecosystem.

Go would not even be on my list for Python developers looking for performance.

There are much better AOT compiled languages with a feature set similar to Python.


Such as? Go is what Python should have been in my mind, although I would love a more expressive type system. But overall it generally replaces Python for me.


Switched from being a full stack C# developer to Go backend about 3 months ago. I never want to go back.



