I'm always happy to see improvements to Go's profiling - the standard library's profiling is good, but it hasn't improved much since Go 1.0 (it took years just to get flame graphs integrated, and it's still near-impossible to get a usable dump of the in-memory object graph).
That said, I'm _very_ wary of tools that require a fork of the compiler and/or runtime. Uber's programming language research group is small (though mighty) and has a lot of demands on their time. Even over the medium term, it'll be hard to allocate time to keep this project current.
In the Go issue tracker, there's a long discussion of the formal proposal for these improvements. My understanding of the feedback is that the changes introduce too many Linux- and x86-specific APIs. I'd love to see the author narrow the proposal to just improve timer fidelity when the OS exposes the PMU, without any new APIs. The more controversial proposal to collect richer platform- and architecture-specific data can be discussed separately.
Discussion: https://github.com/golang/go/issues/36821
Proposal: https://go.googlesource.com/proposal/+/refs/changes/08/21950...
The Android ART profiler today is still kinda limited (too much overhead, or too imprecise), so we tend to switch over to simpleperf. However, I think there are things that only in-language profilers can do.
This, lately, is my #1 gripe. I just cannot get viewcore to provide valuable insights. Anybody know of any projects in this space?
This is the reason I don't like Go: anything Google deems unimportant (like generics or packaging) either takes many years or never happens. The whole language reeks of such zealotry. In fact, there are many Google projects where I've seen popular GitHub issues linger for years because the core devs just don't care about usage outside big G.
Google generally does a bad job of open source stewardship and Golang is no different.
It's a shame but not a surprise that outside companies who have hitched their wagons to Google and Go find themselves fighting hard just to get decent tooling that virtually every other language has.
My experience with Go is very limited, but in my tests it was always slower than C. Sometimes just a bit, sometimes 2-5 times. So my question is: looking back, don't you guys regret choosing this language?
Please don't misunderstand me, I don't intend to start any flamewars, but it seems like you're very much focused on CPU-bound performance, and the choice of language is not neutral in this case.
Inside an enterprise, there's more to a language than just performance (though that is a large factor). You also have to take into account existing tooling (both internal and external), developer experience, whether you can hire enough developers who know it, and source-code maintainability, along with many other common concerns. Most languages do well in a few categories, but none is best in class in all of them (or everyone would use it). Go does well enough on performance while also doing moderately to extremely well elsewhere. In CPU-bound tasks that don't rely on cgo, Go does extremely well in my experience. In general, though, for most enterprises, Go strikes a happy medium and makes trade-offs most developers are willing to accept.
If you have some fairly simple function/task, then yeah, a C version will probably blow the Go version away almost all of the time. But that's not necessarily indicative of real-world performance of a full application.
And of course, there are other interests than "maximize performance" too, such as developer efficiency.
Overall I agree. I'd take a speed hit for ease of development most of the time, but there are degrees of speed hit that are acceptable depending on the context.
In nearly all cases, there was plenty of room to make the Go service faster. A more careful choice of data structure and algorithm, finer-grained locking, fan-out across a goroutine pool, or just avoiding a zillion heap allocations solved most problems. I don't recall any cases where Go was simply incapable of meeting our performance needs.
As a side benefit, services with more stringent performance requirements often exposed inefficiencies in widely-used libraries. Optimizing those libraries made every Go service faster and cut our overall compute spend. Avoiding rewrites in C++ or Rust let those wins add up over time.
That said, I’m very thankful that tools like this are being shared with the community, even if it’s less than perfect. It’s great that we have access to so many tools and so much research.