
Go 1.12 Release Candidate 1 is released - theBashShell
https://groups.google.com/forum/#!msg/golang-announce/r0R2jijOjBo/Egi-Q4uWGQAJ
======
kamac
Here's the list of changes:
[https://tip.golang.org/doc/go1.12](https://tip.golang.org/doc/go1.12)

------
canadev
Modules in 1.11 have some rough edges. Internally we have written a merge
driver which is imperfect for handling version conflicts, and a couple of my
coworkers keep having to delete $GOPATH/pkg . I was hoping these would be
addressed in this release but it doesn't look like they are.

~~~
puddums
The module cache in Go 1.11 can sometimes cause various errors, primarily if
there were previously network issues or multiple go commands executing in
parallel, where the workaround would then be to delete GOPATH/pkg (or run 'go
clean -modcache'). I would guess your coworkers were seeing some variation of
that.

If so, that is addressed for Go 1.12:

[https://github.com/golang/go/issues/26794](https://github.com/golang/go/issues/26794)
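
For anyone hitting this on 1.11, the workaround mentioned above looks like this (a sketch; the error text is only an illustrative example of the kind of corruption people see):

```shell
# Example symptom of a corrupted module cache (illustrative error text):
#   go: verifying example.com/some/mod@v1.2.3: ... checksum mismatch

# Workaround on Go 1.11: wipe the cache and re-download.
go clean -modcache    # removes the module cache under $GOPATH/pkg/mod
go mod download       # re-fetches the dependencies listed in go.mod
```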

~~~
canadev
We definitely weren't deliberately executing commands in parallel and I think
it's pretty rare that we have network issues.

~~~
LukeShu
I've had 2 coworkers who use different IDEs report that apparently their IDE
is running `go` commands in parallel, because they'll get the corrupted module
caches.

------
jpeeler
I thought 1.12 was the release that was going to include a go team maintained
language server. Is 1.13 a reasonable time frame to expect it to land?

When 1.11 came out I started using a project with modules, which caused many of
the Go tools that vscode pulls in to completely break. One of those tools was
the Sourcegraph language server:

[https://github.com/sourcegraph/go-langserver#go-language-server-](https://github.com/sourcegraph/go-langserver#go-language-server-)

Bingo (referenced in above link) seems to at least work, but seems to be a bit
slow sometimes.

I am assuming at this point that this change set is just laying the
foundation:

[https://go-review.googlesource.com/c/tools/+/136676#message-11c783bc9a9f6adf6119bbb85c89510fda25abe9](https://go-review.googlesource.com/c/tools/+/136676#message-11c783bc9a9f6adf6119bbb85c89510fda25abe9)

~~~
zegl
There is a language server merged into the "tools" repo [1], so something is
definitely in the works!

1:
[https://github.com/golang/tools/tree/master/internal/lsp](https://github.com/golang/tools/tree/master/internal/lsp)

~~~
jpeeler
I see now about a month ago it was renamed gopls, at least for the user facing
command.

[https://github.com/golang/tools/tree/master/cmd/gopls](https://github.com/golang/tools/tree/master/cmd/gopls)

There's also a vscode directory, so maybe it's worth checking out. I'm
guessing it's too soon, but at least I know where to look more closely now.

------
lclarkmichalek
Very happy to be getting tls1.3 support :)

~~~
kodablah
I wish the implementation were more extensible/customizable, though. I
understand why it's not. But I have extensions I need to implement, early data
I need to append to the ClientHello, some parts I want to reuse for DTLS, etc.
Even if it were in x/crypto/tls and all opened up, that would be great. As it
is now, everyone has to copy it out and hack it up.

------
jniedrauer
The concurrent modules IO fix is the one I'm really looking forward to. This
will massively speed up my build times, since I can start caching dependencies
on my CI/CD server.
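
(As a sketch, with Go 1.12's concurrency-safe module cache a CI job can persist and reuse $GOPATH/pkg/mod between builds; the save/restore steps below are CI-specific placeholders, not real commands:)

```shell
# Restore $GOPATH/pkg/mod from the CI cache here (CI-specific step).

go mod download    # fetches only what's missing; cached modules are reused
go build ./...
go test ./...

# Save $GOPATH/pkg/mod back to the CI cache here (CI-specific step).
```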

------
johnisgood
crypto/rc4

This release removes the optimized assembly implementations. RC4 is insecure
and should only be used for compatibility with legacy systems.

---

So why did they get rid of the optimized implementations and keep the slow
one? What was the actual reasoning behind this decision?

~~~
fooyc
> why did they [...] keep the slow one?

For compatibility with legacy systems.

> why did they get rid of the optimized implementations

Probably to give people fewer reasons to choose RC4, in case performance was
enough of a reason for some people to pick it. Also, it's less code to
maintain.

~~~
johnisgood
> For compatibility with legacy systems

I understand, so why didn't they keep the optimized version?

> Probably to give people fewer reasons to choose RC4, in case performance was
> enough of a reason for some people to pick it. Also, it's less code to
> maintain.

But they just said that they kept it for legacy reasons. It has already been
chosen, and most of the time you simply can't move to another cipher. The only
difference now is that legacy systems are stuck with a slower implementation.
For what reason, exactly?

Is "less code to maintain" really a valid concern? Do they actually have to
maintain it after it's been written? Do you have to touch the already-existing
optimized code? I assume there are no bugs in there, so I don't know what
there is to maintain about it.

~~~
bradfitz
> Is "less code to maintain" really a valid concern? Do they actually have to
> maintain it after it's been written?

Yes, it really is. And yes, you still have to maintain code that's been
written.

See discussion at
[https://github.com/golang/go/issues/25417](https://github.com/golang/go/issues/25417)

The pure Go code was already faster than the assembly on some CPUs. For the
other CPUs where the assembly was faster, we'd rather just fix the compiler to
optimize better.

~~~
johnisgood
> The pure Go code was already faster than the assembly on some CPUs. For the
> other CPUs where the assembly was faster, we'd rather just fix the compiler
> to optimize better.

Yeah, it makes more sense to me than the reason (perhaps it was not intended
to be the reason but I read it as such) provided on that page. Thanks for
clearing it up.

------
kaixi
Lockless channels when?

------
freecodyx
""" The Go runtime's timer and deadline code is faster and scales better with
higher numbers of CPUs. In particular, this improves the performance of
manipulating network connection deadlines. """

I hope so. I was testing running a service as a Lambda function in AWS.

I am using time.NewTicker()/time.After() and co. to control goroutine
behaviour (via channels).

To my surprise, this does not behave well in Lambda: many times, a Ticker
configured for 1 second, for example, will tick after 12 seconds, sometimes
even 200 seconds. It's not consistent at all.

When I increase the memory allocated to the Lambda function (which also
increases the number of CPUs), it behaves better.

It's a large service with millions of calls per day. I don't know if this was
related to the GC stop-the-world pause, or if something is wrong with the
Lambda runtime.

~~~
deathanatos
AIUI, Lambda operates on requests. (Either "calls" to a function, i.e., just
invoking a lambda function w/ args and getting the result over the network
somewhere else, or as an HTTP handler or a CloudWatch handler, but those are
really specialized cases of the first.) Using time.NewTicker/time.After
strikes me as something you would do to maintain some sort of background
process, which Lambda is _not_ intended for: IIRC, Lambda can arbitrarily
freeze your process b/c no requests are being served presently, and that
"capacity" isn't needed as far as AWS is concerned. I think under the hood
they're doing a cgroup freezer or something, but the result would be exactly
what you see: timers firing way too late. They wouldn't fire until the
underlying container is unfrozen. This wouldn't be a bug in Go or Lambda;
Lambda is specifically designed to do this, so as to free you from needing to
care about how much capacity you require¹. (At the cost of making stuff like
background timers not work; that's not its model.)

See: [https://aws.amazon.com/blogs/compute/container-reuse-in-lambda/](https://aws.amazon.com/blogs/compute/container-reuse-in-lambda/)

¹ish. take that statement w/ a grain of salt.

~~~
freecodyx
Well, this is the issue: when my function exceeds the timeout configured in
the function's yml, Lambda will respond with an internal server error and
freeze the VM. If it decides to reuse the VM for another request, the
goroutines from the last call will resume (because I didn't call os.Exit).

~~~
sten
This sounds like an amazing pain in the ass. I wonder if this impacts my own
lambdas.

~~~
freecodyx
please do, and let us know

