Brad, is this presentation available anywhere else (say, exported to PDF)? Our company blocks all sharing sites, so docs.google.com is a no-go for me (and I cannot reshare it with my team). Would it be possible for you to drop it on golang.org somewhere?
"Go programs currently compile to one WebAssembly module that includes the Go runtime for goroutine scheduling, garbage collection, maps, etc. As a result, the resulting size is at minimum around 2 MB, or 500 KB compressed."
Considering it includes the runtime, this isn't a large file compared to the images and videos you find on the web today.
And this is temporary. In the future we should be able both to do better size-wise and to generate multiple WebAssembly output modules, perhaps one per Go package, to enable better (more fine-grained) caching.
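If you want to try it in the meantime: targeting WebAssembly is an ordinary cross-compile, e.g. `GOOS=js GOARCH=wasm go build -o main.wasm`, and the Go distribution ships the `wasm_exec.js` glue (under misc/wasm) for loading the resulting module in a browser.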
I really enjoyed following the WASM CLs. A huge effort from both neelance and the reviewers. Thanks to everyone that donated their time to make this happen!
If the compiler output is deterministic, perhaps the Go runtime and Go standard libraries could be bundled as modules, allowing for aggressive cache policies. Maybe even served from a central CDN. Just a thought...
> generate multiple WebAssembly output modules, perhaps one per Go package
With Go disallowing circular package refs and WASM support for func imports, this should be doable. However, they'll all essentially have to share a single imported memory instance, so there'll need to be some central runtime heap coordination, which is plenty reasonable.
I still think it's too big and look forward to size reductions. Another thing I found a bit heavy is init: tens of thousands of instructions to initialize the unicode package and its dozens of structs.
Nice. Looking at what's exposed in the unicode package (I did a bit of research [0]), there may be limited opportunity to reduce that, since the structs are exposed. Maybe fewer goroutine suspend points between CPU-only tasks inside init; I dunno, not very familiar. Maybe a bunch of structs whose fields are initialized with only const exprs could be evaluated at compile time and become a data section in the binary (until mutated, maybe, or just copied from the data section, since there are no public immutable vars).
Now that the binary size issue has finally come into the spotlight thanks to WebAssembly, hopefully embedded devs can benefit from future size-reduction efforts too.
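For the curious, here is a minimal sketch of the kind of table under discussion. asciiDigits is made up, but it uses the real unicode.RangeTable type, and every field is a constant expression, so in principle a table like this could be emitted as read-only static data instead of being built by init-time stores:

```go
package main

import "unicode"

// asciiDigits is a made-up table in the style of the unicode package's
// exported vars. Today tables like this are built by generated init code.
var asciiDigits = &unicode.RangeTable{
	R16: []unicode.Range16{
		{Lo: 0x0030, Hi: 0x0039, Stride: 1}, // '0' through '9'
	},
	LatinOffset: 1, // the single R16 entry lies below MaxLatin1
}

func main() {
	println(unicode.Is(asciiDigits, '7')) // true
}
```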
That sounds to me like the size of Hello World. Start including more and more of the runtime and it’ll get quite a bit bigger, much the same as native Go binaries.
IIRC, a hello world that uses println is actually smaller. It's once you import fmt or unicode, directly or indirectly (as most libs do), that those reported sizes start.
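For reference, this is the kind of minimal program being compared; the built-in println needs no imports at all, whereas fmt transitively drags in reflect, strconv, and the unicode tables:

```go
package main

func main() {
	// Built-in println: no imports, so the wasm output stays near the
	// minimum size quoted above. Swapping in fmt.Println is where the
	// reported sizes start.
	println("hello, world")
}
```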
Go does a lot of things right without having to carry legacy mistakes like other languages do; it's such a breath of fresh air in a landscape of constant change and competing implementations.
I'm very optimistic that the modules system is another step in the right direction, however long it took to get here.
Thanks everyone working on Go.
> Go 1.11 supports the upcoming OpenBSD 6.4 release. Due to changes in the OpenBSD kernel, older versions of Go will not work on OpenBSD 6.4.
I wonder when Go is planning to stop using the kernel ABI on BSDs and macOS directly - in direct contradiction to stability guarantees (or lack thereof) by those platforms - and start using the appropriate APIs, such as libc. Or is it going to be stuff like this or https://github.com/golang/go/issues/16606 forever? Right now, I stay away from Go partly because of this - it feels like a bad idea to use a software stack that is guaranteed to be broken on future OS releases by design. Especially when that stack is advertised specifically for system programming...
"On macOS and iOS, the runtime now uses libSystem.so instead of calling the kernel directly. This should make Go binaries more compatible with future versions of macOS and iOS."
It's easy to complain about, but I've been using Go professionally for 3 years and having a GOPATH hasn't affected me since I set it up. Obviously it's better not to have the GOPATH requirement, but it was like 2 minutes of setup.
Eh, it wasn’t the best, but it was still better than any other language’s equivalent except for Rust’s. In any case, modules have worked wonderfully for me so far.
> but it was still better than any other language’s equivalent except for Rust’s
I'd say almost any language with a project-specific deps folder (e.g. Node, Elm) is better than GOPATH and its module system.
For example, you can't just write `import "./util"`. Every import needs to be fully qualified, depending on the full project fs hierarchy, even including the project's user/name on GitHub. This is hilarious when you just want to fork a project from GitHub and get it running.
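To make the complaint concrete, here is a sketch with a hypothetical repository checked out at $GOPATH/src/github.com/alice/project (all names made up):

```go
// main.go
package main

// Every import spells out the full path from $GOPATH/src, hosting site
// and account name included. A fork to github.com/bob/project breaks
// this line in every file until the paths are rewritten.
import "github.com/alice/project/util"

// import "./util" // rejected inside GOPATH with an error along the lines
//                 // of: local import "./util" in non-local package

func main() {
	// util/util.go next to this file:
	//   package util
	//   func Greet() { println("hello from util") }
	util.Greet()
}
```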
Also, it's really not that bad considering that Go works with git repos (`go get` grabs the git repo). So you can go into your GOPATH (or vendor dir) and check out your fork in place of the origin.
In the end, it's a little pain to get started and get used to it, then it's second nature... but I suppose if you primarily work in other languages it is likely tiresome.
All three replies to me suggest that you can do relative imports in Go.
Can you link me to documentation that demonstrates them?
I don't understand. Either relative imports are not supported or they only work in some context that nobody uses. BTW I'm not a Go beginner, and my issues with Go aren't just an issue with "getting started" as you suggested.
I just tried them in an existing Go project and I get the usual "can't load package" error.
> So you can go into your GOPATH (or vendor dir) and checkout your fork in place of the origin
Imagine the real-world case where you're working on more than one project at a time. Project X and Project Y both depend on a Util project on GOPATH, but they require different versions of Util.
Each time you switch projects, you cd into GOPATH and check out the right version of Util?
I'm sure this works for a megarepo like the one Google has. But for the rest of us, this problem was solved by package managers; a lesson Go ended up having to learn in the end.
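For what it's worth, this per-project pinning is exactly what the new modules support addresses: each project carries its own go.mod and can require a different version of Util. A sketch with made-up module paths and versions:

```
// Project X's go.mod
module example.com/projectx

require example.com/util v1.2.0

// Project Y's go.mod
module example.com/projecty

require example.com/util v1.5.3
```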
By the way, I wasn't trying to suggest that you are new to Go, just that I understand there is some tribal knowledge involved that makes it difficult for newcomers and people who don't spend much time working with the language.
I vaguely recall wanting relative imports back when I first started dabbling with Go. Then I moved on and really have completely forgotten it was an issue.
If you want to fork a project, I'm pretty sure there's a program (gorename maybe?) that rewrites these import paths for you, but I've always just used `sed`.
I haven't used Elm, but Node's `node_modules` folder has always been a hassle. Grepping for anything is a pain, and regularly the directory seemed to just get borked so you'd have to `rm -rf $(find . -name node_modules) && npm install`. I don't want to overstate this problem; Node wasn't what I had in mind when I was thinking of difficult project conventions.
Yeah, I'm sure this is fine if you're a full-time Node developer; I just pop in. Also, I use a lot of tools besides grep and ag (for example, vim and VSCode), and having to configure every one of them is tedious. And IIRC, if you want to make a Docker dev image with your dependencies installed (into which you mount your source code), it gets challenging to interweave your dependencies with your source files (this probably has a nicer solution; I just recall it being a pain point when we set up our Docker infrastructure). Again, I don't want to overstate the problem; I just happen to prefer dependencies outside the source tree.
> Eh, it wasn’t the best, but it was still better than any other language’s equivalent except for Rust’s.
I disagree. GOPATH was the most brain-dead idea implemented in a current programming language. It handled the problem far worse than any other programming language in current use. The problem was then made worse by the way the language developers insisted on forcing it upon Go users in spite of all the repeated complaints.
Really, there are relatively few complaints from Go users. Most people just read the paragraph explaining how GOPATH works and get to work. Most of the complaints come from outside the community: people who are frustrated that Go doesn't copy their favorite language's project files, build tools, project structure, etc. Anyway, since you don't have any actual criticism of GOPATH (just that it's "brain-dead" and "far worse"), I can't really do much except disagree, because GOPATH was already quite a lot nicer than the equivalent in most other languages, and modules make it even better!
The modules release is definitely a "we told you so" moment. Depending on outside packages has always bothered me and kept me from doing more with the language.
https://golang.org/doc/go1.11#performance-compiler
> The compiler now performs significantly more aggressive bounds-check and branch elimination. Notably, it now recognizes transitive relations, so if i<j and j<len(s), it can use these facts to eliminate the bounds check for s[i]. It also understands simple arithmetic such as s[i-10] and can recognize more inductive cases in loops. Furthermore, the compiler now uses bounds information to more aggressively optimize shift operations.
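To make that concrete, here is a minimal sketch of the two patterns named above (my examples, not the release notes'). Building with `go build -gcflags="-d=ssa/check_bce/debug=1"` reports which bounds checks the compiler keeps; under 1.11 the indexing below should need none:

```go
package main

func pick(s []int, i, j int) int {
	if i >= 0 && i < j && j < len(s) {
		return s[i] // transitive: i < j and j < len(s) imply i < len(s)
	}
	return 0
}

func tail(s []byte) byte {
	i := len(s)
	if i >= 10 {
		return s[i-10] // simple arithmetic: 10 <= i <= len(s), so i-10 is in range
	}
	return 0
}

func main() {
	println(pick([]int{1, 2, 3}, 0, 2))
	println(tail([]byte("hello, bounds checking")))
}
```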