We had some weird corruption of memory issues late in the 1.5RC cycle.
It's possible that https://github.com/golang/go/commit/3ae17043f7f29f0d745aa35b... fixed it, but each test for bisection cost us 4 hours, and the bug didn't always show. (When it did show, it definitely showed without -race mode on, contrary to the comments in the patch)
Anyway, if you have a large-memory, highly concurrent Go app, I would still tread lightly and carefully with this release; we spent a goodly amount of time blaming our own code before realizing that 1.4.2 did not exhibit the problem.
Totally. We've been mostly running tip for some time, and sending in bug reports when we can be useful, overall a big win for us, but not without its risks. :)
Honestly, I could have seen rebranding 1.5 as 2.0; there are major under-the-hood changes you guys have made.
I can see that, and I've been idly wondering over the past few days what the largest N will be for Go 1.N. The contrast with most version numbering, however, is that we've defined a meaning for Go 2: it breaks all your Go 1 programs. Being a Go 1.X release is actually a stronger statement about how your code will behave.
"Go1 2.0" is meaningless; "Go X" refers to Go X.Y.Z.
Many FOSS programs treat X, Y, and Z as numbers rather than digits, and increment X when there is a breaking change in the API or some serious restructuring. (This is starting to change: recently we see programs incrementing the major version periodically, most notably Linux, Firefox, and GCC have jumped on this bandwagon, and these days only FOSS libraries seriously stick to the old convention.) So yeah, it's probably going to be 1.10 and so on.
>"The compiler tool chain was translated from C to Go, removing the last vestiges of C code from the Go code base."
This is really cool. Writing a compiler in the language that you're compiling seems kind of meta. The language has evolved to a point to where it can consume and build itself. Far out mannnnn.
How has this affected compile speeds? Go proponents often tout the speed at which it compiles code as an important feature, so I wonder if there's been a noticeable regression.
The small increase in build times doesn't bother me at all when I'm generating a binary, but unfortunately it's very noticeable and irritating when using tools like vim-go or syntastic which type-check/lint/gofmt your code. Using these now causes vim to be unresponsive for up to a second every time I save, whereas the delay used to be insignificant.
>This is really cool. Writing a compiler in the language that you're compiling seems kind of meta.
It's been done before, for more than one language, IIRC.
The technique by which they make an executable of a compiler for language L on platform B, using an existing executable of the same compiler for L on platform A, together with the source code of the compiler for L on A, is also cool. I forget the full details right now. Maybe someone else will pitch in and explain. It's called bootstrapping. It's also related to cross-compiling.
Perhaps this is a dumb question, but what language was used to compile the compiler? Was it compiled in C? Surely it must have been compiled in a different language somewhere down the chain.
You can start from the C version and move forward and check that you get the same result as starting with released Go binaries.
Unless you think Ken somehow backdoored the original C compilers to backdoor every C compiler in the world to backdoor the Go compilers that came about decades later. :-)
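To make the bootstrapping concrete: Go 1.5 can no longer be built by a C compiler alone, so its build expects an existing Go 1.4 toolchain, pointed to by the GOROOT_BOOTSTRAP environment variable. A rough sketch of the steps (the paths here are illustrative):

```shell
# Building Go 1.5 from source requires an existing Go 1.4 install;
# GOROOT_BOOTSTRAP defaults to $HOME/go1.4 if unset.
export GOROOT_BOOTSTRAP=$HOME/go1.4

cd go/src        # a checkout of the Go 1.5 source tree
./make.bash      # compiles the Go 1.5 toolchain using the 1.4 binaries
```

From then on, the freshly built 1.5 toolchain can compile itself, which is also the trick used for cross-compiling to new platforms.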
This might be a dumb question:
What happens if you find a bug in the compiler? Let's say something that you had written in C was being miscompiled, and now the Go compiler still carries that miscompiled code as it builds new versions of itself.
I think Russ showed that the translated code produces the same binaries as the Go 1.4 C variant. So this isn't very likely, given that he can run roughly any open-source Go project through the compiler.
If there is a source directory d/vendor, then, when compiling a source file within the subtree rooted at d, import "p" is interpreted as import "d/vendor/p" if that exists.
When there are multiple possible resolutions, the most specific (longest) path wins.
The short form must always be used: no import path can contain “/vendor/” explicitly.
Is there any scoping on what vendoring means for forking projects? i.e. if I pull something full of packages like domain.com/user/pkg, would I now be better served just dropping the whole subdirectory tree in a /vendor/ directory?
Congratulations on the 1.5 release. I already updated my dev machine and everything looks to be still running without errors. Soon my servers will get updates, too!
PS: If you want to benefit from 1.5 improvements don't forget to recompile your go programs ;)
And you can recompile in shared object mode if you want tiny programs and shared linking too, though you'll lose the benefit of the single static binary.
go install -buildmode=shared std
go build -linkshared
So, who's gonna be the first to blog about compiling a Go lib as a c-shared dynamic lib and calling it from Rust for serious HN karma? Seriously though, there are several well-written Go libraries for accessing systems, and I wonder whether the runtime is too much baggage for the shared libs to be used outside of Go.
At the last GopherCon, Delve was presented as the go-to tool. I remember the presentation was fast, but it was convincing enough.
You're also free to build a vim (or any IDE) integration on top of it.
> This attitude is so toxic, it's like saying isn't assembler good enough for you?
Real programmers don't need an assembler; ed(1) is sufficient and if you can't write a program without anything else, you shouldn't be allowed near computers.
Good tools take time. Complaining about not having them is not very sporting behaviour. The fact is, if you don't have a tool, use printf-style debugging. It has worked since before the very first debugger was ever made, and as a skill it should not be discounted in favour of more, shall we say, consumerist attitudes about what 'should' and 'should not' be normative in a programming toolkit. No debugger? No problem. Debuggers are a luxury and a time-saver, but knowing your code base well, because you printf-debug too, is also a time-saver.
If you read carefully, you'll see that I was saying the "good enough" attitude is toxic. I was not complaining about not having the tools (I don't even write Go), and a contributor mentioned above that it just wasn't a priority (which I have some beef with, though it is a fair statement).
There are some problems that can't be solved by printf debugging. They include:
- GPU performance profiling: all your commands are asynchronous yet appear immediate, so teasing out performance characteristics takes a massive suite of tests and may not even reproduce your issue.
- Thread concurrency issues: adding a printf can actually make these bugs disappear, since printf usually has some sort of memory fence or flush semantics that change the timing and behaviour.
- Platforms that can't handle large amounts of trace logging: the Sony PSP was one of these; each printf() required a TCP ACK before the buffer would clear, so even a trivial amount of logging stalled.
One of the core tenets of comp sci, and technology in general, is building on what came before. That's where our productivity gains come from. Not taking that approach is a dangerous mentality that can have serious implications for your velocity and agility, and is not a choice to be made lightly.
The funny thing is, in several years of using Go in production in what are certainly non-trivial applications, I was able to find every bug either through log-debugging, often with the help of isolating code in unit tests, or through pprof, in particular the net/http/pprof integration. Just looking at my own experience, I never _needed_ an interactive debugger, or deemed it essential, but then, that's just one data point.
Leaning on debuggers is something newbies and gurus alike do: when they need it, if they need it, if they have it. If they don't, it doesn't stop the debugging.
I use printf-debugging the vast majority of the time. But that's only fine if you know you have an alternative and are simply too lazy to use it.
Recently I was working on an embedded platform which had no debugger support at all. There was an on-chip debugger option, but it required some soldering and hacking together a JTAG cable; still too lazy.
Go at least has stack traces with no extra tooling, which is a big deal.
Anyway, the more tools the better; you're gonna miss them when you need them most.
Why do we only strive for "good enough" in tooling?
Contrast your response to Russ Cox's answer: "We are well aware of the hole, though, and we'd like to plug it, but other work took priority this release cycle".
Logging can be of great practical utility, but being able to stop a program, freeze it in its state, and figure things out is also pretty nice, right? You can walk up and down the stack, inspect the scope and the state of things, and step through line after line of code to find things in a way that would take a really long time with logging.
So no, printf is not good enough. That's why people have created debuggers.
While I can see the arguments for and against printf, with the CPU and memory profiling tools you can basically figure out where most of the issues in your code are. I'd say for most applications, logging, printf, etc. get you 80% of the way there. Profiling gets you another 10%. For those last-mile apps, sorry, but you have to use GDB.
If you don't care how long it takes, sure, tracing with printf works. But a debugger accomplishes the same task in minutes rather than hours. Even if you know only the very basics of how to use a debugger, it's an extremely effective tool.
Kudos to the Golang team and its many contributors. The reduction in latency with concurrent GC is a massive step forward. I wouldn't be surprised to see Golang gain significant traction in the research and HPC communities in the near future.
Also glad to see wide adoption in Chinese consumer internet running at gargantuan scale: Tencent, WeChat, Didi Kuaidi and many more. Are you guys at all surprised to see it take off on the mainland?
I'm still a beginner with code. I've been learning HTML (not yet CSS, as far as I know) and Markdown, and I'm also in the middle of learning Python using free courses on Udacity. This is all in my free time. No courses taken. Would this be a good language to learn right now, or would I be better off waiting till I'm more experienced?
It is supported: you can both compile your Go packages as a shared object file and link against them, and compile Go code into a shared library and load it from C.
- Go code linked into, and called from, a non-Go program
In the Go 1.5 release this mode is implemented, using a .a file, for most Unix systems.
- Go code as a shared library plugin with a C style API
In the Go 1.5 release this mode is implemented for linux-amd64, linux-arm, darwin-amd64, and darwin-arm. When using gccgo it is implemented for any supported target.
- Go code as a shared library plugin with a Go style API
This is not implemented in the Go 1.5 release.
- Go code that uses a shared library plugin
This is not implemented in the Go 1.5 release.
- Building Go packages as a shared library
In the Go 1.5 release this is implemented for the linux-amd64 target only. When using gccgo it is implemented for any supported target.
I'm very excited to see Go supported as a library for server and client development on iOS and Android. Are there any good case studies of app developers using Go as a full-stack language?
lol... Just built 1.4.2 (or whatever the latest 1.4 was) last night.
I was wondering whether or not to go against master. I wonder if there were even any commits since then or just a tag.