Go 1.12 Release Candidate 1 is released (groups.google.com)
138 points by theBashShell 8 days ago | 56 comments





Here's the list of changes: https://tip.golang.org/doc/go1.12

Modules in 1.11 have some rough edges. Internally we have written a merge driver, which is imperfect for handling version conflicts, and a couple of my coworkers keep having to delete $GOPATH/pkg. I was hoping these would be addressed in this release, but it doesn't look like they are.

The module cache in Go 1.11 can sometimes cause various errors, primarily if there were previously network issues or multiple go commands executing in parallel; the workaround is then to delete $GOPATH/pkg (or run 'go clean -modcache'). I would guess your coworkers were seeing some variation of that.

If so, that is addressed for Go 1.12:

https://github.com/golang/go/issues/26794


We definitely weren't deliberately executing commands in parallel and I think it's pretty rare that we have network issues.

I've had 2 coworkers who use different IDEs report that apparently their IDE is running `go` commands in parallel, because they'll get the corrupted module caches.

We had some odd issues when sharing a mod file across 2 main methods that reached slightly different parts of the API. Each would rewrite the mod file automatically in its own way.

I understand that your mod file is always up to date when this happens, but IMO you should be able to use them like Ivy files, for example. (Using the readonly flag unfortunately does not work either.)


I thought 1.12 was the release that was going to include a go team maintained language server. Is 1.13 a reasonable time frame to expect it to land?

When 1.11 came out I started using modules on a project, which caused many of the Go tools that VS Code pulls in to completely break. One of those tools was the Sourcegraph language server:

https://github.com/sourcegraph/go-langserver#go-language-ser...

Bingo (referenced in the above link) at least works, but seems a bit slow sometimes.

I am assuming at this point that this change set is just laying the foundation:

https://go-review.googlesource.com/c/tools/+/136676#message-...


Something like this likely won't be a part of a release or on the Go release schedule. If it's made, it will be a separate tool with its own release schedule (I suspect).

There is a language server merged into the "tools" repo [1], so something is definitely in the works!

1: https://github.com/golang/tools/tree/master/internal/lsp


I see that about a month ago it was renamed gopls, at least for the user-facing command.

https://github.com/golang/tools/tree/master/cmd/gopls

There's also a vscode directory, so maybe it's worth checking out. I'm guessing it's too soon, but at least I know where to look more closely now.


I've held off on using modules for this reason. I can't take that productivity hit.

Very happy to be getting TLS 1.3 support :)

I wish the implementation was more extensible/customizable though. I understand why it's not. But I have extensions I need to implement, early data I need to append to ClientHello, some parts I want to reuse for DTLS, etc. Even if it were in x/crypto/tls and all opened up, that would be great. As it is now everyone has to copy it out and hack it up.

The concurrent modules IO fix is the one I'm really looking forward to. This will massively speed my build times up, since I can start caching dependencies on my CI/CD server.

> crypto/rc4
>
> This release removes the optimized assembly implementations. RC4 is insecure and should only be used for compatibility with legacy systems.

So why did they get rid of the optimized implementations and keep the slow one? What was the actual reasoning behind this decision?


> why did they [...] keep the slow one?

For compatibility with legacy systems

> why did they get rid of the optimized implementations

Probably to give people fewer reasons to choose RC4, in case performance was enough of a reason for them to choose it. Also because it's less code to maintain.


> For compatibility with legacy systems

I understand, so why didn't they keep the optimized version?

> Probably to give people fewer reasons to choose RC4, in case performance was enough of a reason for them to choose it. Also because it's less code to maintain.

But they just said that they kept it for legacy reasons. It has already been chosen, and most of the time you simply can't choose to move to another cipher. The only difference now is that legacy systems will be stuck with a slower implementation. For what reason exactly?

Is "less code to maintain" really a valid concern? Do they actually have to maintain it after it's been written? Do you have to touch the already-existing optimized code? I assume there are no bugs in there, so I don't know what there is to maintain about it.


> Is "less code to maintain" really a valid concern? Do they actually have to maintain it after it's been written?

Yes, it really is. And yes, you still have to maintain code that's been written.

See discussion at https://github.com/golang/go/issues/25417

The pure Go code was already faster than the assembly on some CPUs. For the other CPUs where the assembly was faster, we'd rather just fix the compiler to optimize better.


> The pure Go code was already faster than the assembly on some CPUs. For the other CPUs where the assembly was faster, we'd rather just fix the compiler to optimize better.

Yeah, it makes more sense to me than the reason (perhaps it was not intended to be the reason but I read it as such) provided on that page. Thanks for clearing it up.


The optimized versions (plural!) were only available for a few platforms, so a portable version needs to be kept around anyway.

https://github.com/golang/go/commit/30eda6715c6578de2086f03d... is the removal commit, which also points out that the optimization didn’t seem to have mattered much (depending on the CPU).


> which also points out that the optimization didn’t seem to have mattered much (depending on the CPU).

OK, this reason makes more sense to me. Thanks.


Lockless channels when?

[flagged]


I like the Hacker News tradition of using every dot release as a platform for restating your case against Go or whatever happens to be getting a dot release that day. Atom updates are the best for this.

Then allow me to be the first to tell you about the magical land of r/programming...

It should be taken as a sign of health when there's active hate on a language, and it keeps on going. Something is being done right. (Doesn't mean everything is being done right, however.)

I've never used X and I've never missed it.

[flagged]


I guess I live in a world where enums aren’t the end-all be-all of language constructs. I can imagine a few nice to have use cases but I wouldn’t call them “necessary”. But since you clearly feel strongly about it, have you considered writing up a treatise on the necessity of enums for modern programming instead of making sarcastic, content-free snarks on barely related HN posts?

When I stated my argument I was accused of ad hominem and of making sarcastic, content-free snarks on barely related HN posts. Arguing against enums is as stupid as arguing for null pointers (which ironically exist in Golang too!). It's not minimalism if you don't have enums; it's intentionally lacking an elementary feature that exists in almost every other modern (and even old) language. You can argue "deal with it", just like it was the case with package management, error handling and many other features; that wouldn't be as bad as derailing the argument to "enums are not necessary".

Also, I feel strongly about getting rid of cancer. Should I write a treatise about the importance of getting rid of cancer, since you may consider that not very important too?


Understand before you criticize. There are plenty of scenarios where enums can be undesirable.

One off the top of my head that I run into all the time is in client stubs of service APIs, where a client-side enum representing a string value will break at runtime if the service adds a new string value.

In other words, enums inhibit API evolution.
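To make that concrete, here's a minimal sketch (every name here is made up, not from any real API) of the failure mode: a client that parses the string into a typed "enum" rejects values the service adds later, while a client that passes the raw string through keeps working.

    package main

    import "fmt"

    // Hypothetical client-side "enum" for a service's status field.
    type Status string

    const (
        StatusPending  Status = "PENDING"
        StatusShipped  Status = "SHIPPED"
        StatusCanceled Status = "CANCELED"
    )

    // parseStatus enforces the enum contract: unknown values are an error.
    func parseStatus(s string) (Status, error) {
        switch v := Status(s); v {
        case StatusPending, StatusShipped, StatusCanceled:
            return v, nil
        default:
            return "", fmt.Errorf("unknown status %q", s)
        }
    }

    func main() {
        // The service starts returning a value this client was never built for.
        if _, err := parseStatus("RETURNED"); err != nil {
            fmt.Println("enum-style client breaks:", err)
        }
        // A client that forwards the raw string is unaffected.
        fmt.Println("pass-through client forwards:", "RETURNED")
    }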


API version in use should be negotiated. If a 34.x response needs a value that didn't exist in 33.x, I have to map it to something that 33.x clients understand. If I can't do that, they won't be able to either, and we aren't communicating.
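A rough sketch of what that mapping can look like, assuming a negotiated integer version and made-up status values (nothing here is from a real API): the server downgrades a value that only exists in the newer version before replying to an older client.

    package main

    import "fmt"

    type Status string

    const (
        StatusShipped  Status = "SHIPPED"
        StatusReturned Status = "RETURNED" // only exists in the hypothetical 34.x API
    )

    // downgradeStatus maps 34.x-only values onto something a 33.x client understands.
    func downgradeStatus(s Status, clientVersion int) Status {
        if clientVersion >= 34 {
            return s
        }
        switch s {
        case StatusReturned:
            return StatusShipped // the closest value 33.x knows about
        default:
            return s
        }
    }

    func main() {
        fmt.Println(downgradeStatus(StatusReturned, 33)) // SHIPPED
        fmt.Println(downgradeStatus(StatusReturned, 34)) // RETURNED
    }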

API versioning has its own tradeoffs as well. Now there's a negotiation protocol to follow, which introduces a bunch of other issues. E.g. if I'm looking at a field that hasn't changed between versions, then a version mismatch doesn't matter. Clients have to opt-in to new versions, which introduces another set of problems.

I've done a fair amount of Go programming in large teams, mostly doing distributed systems stuff, and I've never run into a situation where I was like "Damn I wish I had enums here." Strings have worked just fine, and we've never had any bugs related to that. I have, however, run into situations where I was like "Wow, good thing I wasn't using an enum here." And in some of our Java code, we've had multiple production issues caused by over-eager devs using enums everywhere. They implicitly enforce a contract on the value of the data, which can be quite undesirable if they don't actually care about the contract (e.g. they're just passing values through to something else).


[dead]


It doesn't seem hopeful that you'll stop breaking the guidelines, so we've banned the account. If you'd like to start commenting civilly and substantively you're welcome to email hn@ycombinator.com.

https://news.ycombinator.com/newsguidelines.html


Golang's iota enums seem to me to be at least equivalent to C-tier enums. The iota syntax helper is even an improvement.

https://golang.org/ref/spec#Iota


They're not. Better C compilers can warn about things like "you've used all but one of this enum's values in a switch block, are you sure you don't want to handle all of them?"
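For contrast, here's a minimal Go sketch (the color type is made up) of what doesn't get flagged: the switch silently ignores one of the values and the compiler stays quiet.

    package main

    import "fmt"

    type color int

    const (
        red color = iota
        green
        blue
    )

    // describe forgets to handle blue; the Go compiler accepts this
    // without any warning about the missing case.
    func describe(c color) string {
        switch c {
        case red:
            return "red"
        case green:
            return "green"
        }
        return "unknown"
    }

    func main() {
        fmt.Println(describe(blue)) // "unknown" at runtime, no compile-time hint
    }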

That's not a language feature though, that's a compiler feature.

The Go compiler should be technically capable of doing that without breaking iota or introducing a new feature. Have you considered raising a request for it if one doesn't already exist?


It might even be best as a vet or metachecking feature.

Go’s compiler toolchain does not “warn” users by design: only hard errors.


Yeah you're right, I should've been more specific than "compiler" here.

In fact, the varcheck checker in gometalinter probably even covers this use case: https://github.com/alecthomas/gometalinter/blob/master/READM...


I've been very happy having switched from gometalinter to golangci-lint: https://github.com/golangci/golangci-lint

It does not.

People tend to put all kinds of random stuff in const blocks, because they look like import blocks: a file-level thing rather than a specific set of constants.

Provided C-tier enums are just constants associated with an integer value, Go's `const` already provides you with the same thing, and is even more strongly typed.

    enum foo {Bar, Baz, Foobar};
    void f() {
        enum foo f = 5; // Will happily compile
        int f2 = Baz;   // Will happily compile
    }
vs

    type foo int
    const (
        bar foo = iota
        baz
        foobar
    )
    
    func f() {
        var f foo  = 5    // Will happily compile, i.e as bad as C-tier enums
        var f2 int = baz  // error: cannot use baz (type foo) as type int in assignment
    }

It's hard to argue with someone who sees these 2 examples as the SAME thing. If you really see the 2nd example as equivalent in _SAFETY_ and readability to the first one, you probably haven't written much code or used many languages.

In every other language, people keep arguing about advanced features and moving forward; only in Go do people defend the indefensible and reject even the simplest obvious improvements. I say so, and I am sad, because I use Golang very heavily. It's sad to see a modern language so popular while its core designers are still stubborn about doing anything in the right direction, even something as simple as adding enums.


I don't say it's equivalent in safety; I say (and defend with my example) that it is even safer. Feel free to provide me with an example where C-style enums are more type-safe than the equivalent Go constructs.

Readability-wise, this is disputable; the Go version is a bit more verbose, but I can't imagine a developer not understanding what happens there at first sight.


Instead of an ad hominem, please clearly explain the main way that the example Go code is less safe than the C example code.

I didn't mean the C example to be compiled by a C compiler; I meant having the first example in Golang itself. Enums are not just constants, they are type-safe constants: you can't just use any other constant in a switch-case and get away with it. That's the raison d'être of enums in the first place.

I didn't know that enums were so controversial, unnecessary, and equivalent to plain constants, but maybe a look at how it's done in other languages can make the difference between enums and constants clear:

https://doc.rust-lang.org/1.30.0/book/2018-edition/ch06-01-d...

https://doc.rust-lang.org/rust-by-example/custom_types/enum....


You first asked for C-style enums, which are not type-safe constants (they are nothing more than integer constants), then asked for type-safe constants, and now you say those are not enough either and you want full-blown Rust-style enums, which are not enumeration types but true sum types.

Thing is, as soon as you provide users with one more feature, some of these users want even more features: "yeah, enums are great, but I would like to have type safety with them; oh, type safety is important, but why not have sum types, after all? Oh, now that we have sum types, why not add pattern-matching?"

All of these features are great, but they make the language harder to master and tooling harder to write. This is a tradeoff; there are many languages that implement all the features their users want (I can think of C++, Rust, probably C# too), so why not let other language designers try another way?

I have to admit I wouldn't dislike an enum construct in Go, just syntactic sugar that would be equivalent to the type foo int + const block, but I certainly won't push for it.


Relevant discussion is at https://github.com/golang/go/issues/19814, showing the current status quo (iotas), suggestions for converting them to/from strings and iterating over them, and arguments for and against.

Yes, I'm sure it's just not being done yet because there's no funding or not enough smart people working on it.

More likely Rob Pike doesn't believe programmers can comprehend the immense power of enumerated types.

I disagree with the view that programming languages need to keep adding features to be successful. C has made only minor changes in the last few decades and seems to be going strong. I like that Go has taken a similar approach, prioritizing minimalism. This approach has led to a very different language with different set of trade-offs, even if they haven't pleased everyone. The alternative, taking the same approach as every other language, could lead to a monoculture.

But I'm sure someone on the Go team will see your witty comment and rethink their approach entirely.


I don't think anyone is arguing that programming languages need to _keep adding_ features. I don't even think anyone is arguing that Go isn't successful already.

The argument is that the programming language should have the good features. Maybe that's an unwise stance, but it doesn't seem to be the one you're arguing against.


""" The Go runtime's timer and deadline code is faster and scales better with higher numbers of CPUs. In particular, this improves the performance of manipulating network connection deadlines. """

I hope so. I was testing running a service as a Lambda function in AWS.

I am using time.NewTicker()/time.After() and co. to control goroutine behaviour (via channels).

To my surprise, this does not behave well in Lambda: many times a Ticker configured for 1 second will tick after 12 seconds, sometimes even 200 seconds. It's not consistent at all.

When I increase the memory allocated to the Lambda function (which also increases the CPU allocation), it behaves better.

It's a large service with millions of calls per day. I don't know if this is related to the GC stop-the-world pause, or if something is wrong with the Lambda runtime.
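For context, a stripped-down sketch of the kind of pattern I mean (not the real service, and without the Lambda wiring): a goroutine driven by a time.Ticker that reports how late each tick arrives. In a frozen container the measured gap can be far larger than the configured 1 second.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        done := make(chan struct{})

        // Background goroutine driven by a 1-second ticker.
        go func() {
            t := time.NewTicker(1 * time.Second)
            defer t.Stop()
            last := time.Now()
            for {
                select {
                case now := <-t.C:
                    // Normally prints ~1s; in a frozen Lambda container the
                    // tick can arrive many seconds (or minutes) late.
                    fmt.Println("tick after", now.Sub(last))
                    last = now
                case <-done:
                    return
                }
            }
        }()

        time.Sleep(5 * time.Second)
        close(done)
    }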


AIUI, Lambda operates on requests. (Either "calls" to a function, i.e., just invoking a lambda function w/ args and getting the result over the network somewhere else, or as an HTTP handler or a CloudWatch handler, but those are really specialized cases of the first.) Using time.NewTicker/time.After strikes me as something you would do to maintain some sort of background process, which Lambda is not intended for: IIRC, Lambda can arbitrarily freeze your process b/c no requests are being served presently, and that "capacity" isn't needed as far as AWS is concerned. I think under the hood they're doing a cgroup freezer or something, but the result would be exactly what you see: timers firing way too late. They wouldn't fire until the underlying container is unfrozen. This wouldn't be a bug in Go or Lambda; Lambda is specifically designed to do this, so as to free you from needing to care about how much capacity you require¹. (At the cost of making stuff like background timers not work; that's not its model.)

See: https://aws.amazon.com/blogs/compute/container-reuse-in-lamb...

¹ish. take that statement w/ a grain of salt.


Well, this is the issue: when my function exceeds the timeout configured in the function's yml, Lambda responds with an internal server error and freezes the VM. If it decides to reuse the VM for another request, the goroutines of the last call will resume (because I didn't call os.Exit).

This sounds like an amazing pain in the ass. I wonder if this impacts my own lambdas.

Please do, and let us know.


