Hacker News
Go's Tooling Is an Undervalued Technology (nullprogram.com)
459 points by dilap 32 days ago | 358 comments



Coming from the C# and JVM world to Go, tooling looked like one of the worst parts of Go.

The debugging story is very limited:

- No function execution

- No edit-and-continue

- No conditional breakpoints

- No watchpoints

Package management works OK on the consumption side, though it is much more clunky than with classic package managers (e.g. you set an environment variable to get a custom proxy, but write something in the go.mod file to get a per-package proxy). However, the publishing side is all over the place:

- Relies on source-control integration

- Horrible import paths for packages published from large repos

- Versioning based on source-control tagging, exact format undocumented

- Major version changes require source-code level changes in all files touching the package

- The multiple-Go-modules-in-one-repo use case is not documented (can you have separate versions from the same commit? how do they reference each other?)

The source-code level tools/plugins available are extremely basic (except for GoLand):

- No refactoring support other than "rename variable"

- Reference-finding across modules rarely seems to work

- No way to automatically find all implementations of an interface - especially problematic in a language with implicit interface implementation

- The tools are pretty slow, constantly re-parsing large chunks of code; gopls has fixed some of that


I agree 100%, and I pretty strongly dislike Go's tooling because of it, especially the horrible debugging options (though something is far better than nothing, of course). I've bashed my head against every single item in your list multiple times, and they've seen extremely little progress over the years, so I'm quite pessimistic about them changing.

Though for completeness, here's what it does have that I find very useful, and has had from the early days:

- pprof for cpu (tunable sampling rate) and memory allocations (sampling and I think tracing)

- a standard build system (as annoying as it can be: GOPATH, no file-as-input-list, etc.), so builds are indeed generally very simple and consistent: `go build .` (plus `glide install` or `dep ensure` in many cases, but that's easy too)

- race detector (trivially catches an incredible amount of issues in less-than-highly-skilled code, which is the vast majority that I interact with)

- `go fmt` spawned a whole ecosystem of "autoformat ALL the things" tools

- (I personally don't give them credit for vendoring, as it was something the community hacked together for years before they finally half-supported it, though now it's great. GOPATH should have died that day, but alas...)

.... though that might be my complete list, tbh. And yeah, GoLand's tooling is miles ahead of everything else; its refactoring and find-usages are dramatically more accurate than all the other tools I've tried.

---

Many of those have been built (far more powerfully) for other languages, but part of the "go has good tooling" claim is that it has those out of the box. It's nowhere near enough to be "great" IMO, though modules get it much closer, but it gets a "good job" sticker on at least a few things even if the whole isn't particularly impressive.


This nailed it for me. I came back to write some Go last night after not writing it for a long time. The modules, dep, etc. stuff really put me off.

Whereas beforehand (I'd been writing stuff for a few years, not as a hobbyist, just nothing super complex) it was intriguing. Now it seems like they've just moved on in one direction for packages. However, that direction is not really well documented, is intimidating, and sure as hell doesn't work as well as NuGet or npm.


> No way to automatically find all implementations of an interface - especially problematic in a language with implicit interface implementation

This is fundamentally a problem with duck typing, and it creates a massive amount of cognitive effort when trying to understand an unfamiliar code base.

I don't have the qualms you have about the rest of the Go tooling, which is pretty good IMHO. The fact that it comes out of the box is a significant bonus too.


I understand duck typing of interfaces to types imposes some limitations, but it is by no means an absolute limitation. Given a set of Go source files, you can absolutely automatically find all types in those source files which match an interface. Sure, there could be implementations in other places as well, but the same is true for Java or C# or C++.

Also, note that the compiler already does this check whenever you try to use some type as a specific interface type; this is not dynamic binding at runtime.


I understand that, but it means I need to rely on tooling (which has limitations) to do something that is zero-effort in other languages, and that impedes easily understanding unfamiliar source code. It's not so much "Which types implement this interface", but rather "Which interfaces does this type implement".

For this reason, I'm not at all convinced that duck typing contributes to the fundamental need to write easily understandable and maintainable code. What does it add, or what problem is it trying to solve?

One commenter recently suggested putting, e.g.:

    var _ MyInterface = MyType{}
at the top of the file to assert the interface implementation explicitly. This works, but it's a bit of a hack.

Duck typing is my main gripe about Go, but generally I like it very much and find it very practical, especially (funnily enough, given the article) its tooling.


Oddly enough, I've never needed to look up "what implements this interface" in Go. I've been writing Go professionally in a senior/team-lead role for over 4 years.

Usually I know what implements a given interface, as in the package I'm writing, I know what I've imported. Packages are usually small/narrow enough that there isn't ambiguity (i.e. I'm not reaching for some arbitrary implementation).

The only pain point that comes to mind is the bytes, io, and strings packages where there are many common interfaces and many implementors, but it's never really headache inducing.

The interface assertion you mention is useful for implementors to catch at compile time if they didn't implement a given interface (rather than find it at call sites), and catch where implementations break when an interface definition changes. You could put it in a test file, if you like, for the same effect.


> What does it add, or what problem is it trying to solve?

In Java, if you get a type from a library, and it doesn't implement a particular interface, you're stuck. You have to wrap it up in some intermediate type. It's useful to be able to use types that don't know about a particular interface even if they happen to conform to that interface.

However, I would agree that it would be useful to be able to specify which interfaces you intend a particular type to implement. (Or maybe Go allows this? I'm just talking about duck typing here, as I don't know Go very well.)


The worst? What other language exactly has:

- built in race detector ( go build -race . )

- easy multi-platform / multi-OS compilation ( GOOS=windows GOARCH=amd64 go build . ) <- you can even do that from your Raspberry Pi

- very easy and powerful profiling with pprof

- go fmt / go vet

Yes, it's not perfect, but it's better than most languages, so I wouldn't be that harsh on the tooling, especially since most of your issues are not related directly to Go's tools (the package publishing side), and for the last part, GoLand works fine and, along with VSCode, is the most popular IDE.

Go overall has good tooling for a relatively young language compared to more mature ones like Java and C#.


The race detector is nice, and can be a massive boon to productivity.

Multi-platform/multi-arch compilation is simply not relevant to most common languages in use today. Still, it's nice that it exists, though it is only that simple because of Go's (flawed) static-linking-only model.

Go fmt is a nice tool, but is very low impact in my experience. Go vet is OK, but there are better static analyzers. The fact that it comes out of the box is a plus, but it's not the most important characteristic in a static analyzer.

Profiling with pprof is nice for very simple problems, but it can't match a visual profiler for complex issues, and its memory profiling is extremely bare bones. Java heapprof and VisualVM are equivalent or superior, and they also come out of the box with the JDK. Similarly, the kinds of memory analysis you can do by collecting heap dumps and analyzing them in GDB is much more powerful (and has many more uses) than what pprof offers on that front.

Go overall has decent tooling, especially for its age. But its tooling compares unfavorably to what is available for free (not to mention if you are willing to pay) for any other popular language except maybe python.

As a final note, package publishing is absolutely related directly to Go's tools - it is go mod download/go get that define what it means to publish a Go package, how you specify its version, and everything else I raised.


Let's not forget that most of the good JVM tooling, especially early on in Java's history, was and is 3rd party, and much of it costs money. Even today, it's still a sea of confusion, especially if you are new to all of this. For example, if I search for "best JVM profiler", I get a list of 10 results, all claiming in some form or fashion to be "the best" or "most popular".

As someone who spent the first half of their career on the JVM and is now mostly writing Go, I don't understand how someone who has been on the JVM most of their life can say Go's tooling is "the worst". Most of the things you have to configure, tweak, and profile for on the JVM aren't even considerations in Go, because they are either simply non-issues in Go or don't exist (because Go does not have a virtual machine to begin with).


Sure, but whether a tool is 3rd party or 1st party matters less if there is significant difference in the advantages that tool brings. Being free and/or open source is more important, but at least currently, there are many excellent free and open source Java tools in all the categories I mentioned.

Also, I didn't say that Go's tooling is the worst. I said it's one of its worst aspects. I still miss Maven, jmap, VisualVM, and the Java debugger almost every day, not to mention IntelliJ.

> Most of the things you have to configure and tweak on the JVM and profile for aren't even considerations in Go, because they are either simply non-issues in Go, or don't exist (because Go does not have a virtual machine to begin with)

I'm not sure what you are thinking of specifically here, but in general the same considerations apply to Go and the JVM; it's just that Go doesn't allow you to tweak things like GC algorithms and parameters. JIT is the only other area where I remember spending time thinking about Java performance that is truly a non-consideration in Go, but Go has other performance considerations to worry about (pass by copy vs. pass by pointer, and the number of system threads, off the top of my head).


GoLand solves many problems though, for example they have conditional breakpoints and function execution in debugger (though not that powerful). And code refactoring/navigation is also pretty good.


Imagine how much worse other popular programming languages' tooling is, given Golang is constantly praised as having great tooling.

When integrating into a new dev team, half the work is grokking their local tooling choices. In an ecosystem with strong "one best way to do it" tooling, that effort is almost non-existent.


I remember using a 4-year-old programming language on the JVM. When you installed its Eclipse plugin, you could do all the code navigation, refactoring, etc. This was a plugin written specifically for that language, so it didn't just copy the Java plugin and extend it. It also had a fully fledged package manager, build system, code formatter, and all the fancy bells and whistles. The tooling was amazing, and it shows that whether the tooling for a given programming language is good or not depends merely on how much the community values good tooling. The Python community, especially, doesn't seem to give a crap about good tooling. There is a lot of tooling with which you can cobble something together, but it never leaves you feeling that it was worth the time investment or that the solution was particularly elegant.

Deploying applications is a problem that most people just give up on and ship a virtualenv to production... or they waste a week finding a better solution. One might argue that this one week will save time in the long run, but then you start thinking about all the Python programmers who don't give a crap and clearly don't want to invest that week's worth of time, and so the cycle of bad tooling continues.


Actually, I do 100% automated deployments with Ansible roles. I don't use the ansible commands directly, though; I wrapped them in my own command, which fixes all my problems with Ansible: https://yourlabs.io/oss/bigsudo

My roles: https://galaxy.ansible.com/yourlabs/

I'm extremely happy with it. Currently I'm writing my own Docker/Compose on top of podman and buildah in order to remove all `Dockerfile.*` and `docker-compose.*.yml` files from my projects in favor of a single pod configuration file, which can vary depending on a "profile" (default, dev, prod, review...). Once that is solved, I will continue my life 100% satisfied with the tools I use for CI and deployments.

But yeah, prior to going crazy like that and rewriting the world, I spent years actually doing my best not to invent new tools and to get the best out of every tool that was out there. I can easily imagine that's how most people practice their craft: by just focusing on what they are supposed to ship, considering that "if I could make a better tool for that, then somebody else already would have, so I'll just use anything and not really bother, perhaps try to contribute or wrap it in bash scripts".


Does bigsudo have a license? I admit I only glanced quickly, but I didn't see one in the repository.


But then, that is the question: which language is that? C# and Java are not it. C and C++ have excellent dev tools and debuggers, though it's true their package management is essentially non-existent. Python, JavaScript and TypeScript have better tools than Go, even with the limitations of a dynamic language (the package ecosystem of npm is horrible, but npm itself is a pretty decent tool, definitely much simpler than go mod).


> C# and Java are not it.

This needs elaboration. Those languages pretty much are the canonical example of "not a great language, but stellar and unmatched tooling".


Unfortunately, I was being pretty confusing. The great-grandparent had said "imagine how much worse other languages are", and I was pointing out that I don't know which those other languages with worse tooling are. That phrase was meant to say "Java and C# are obviously not languages with worse tooling". That is, I agree with you, and considered it obvious that Java and C# have stellar tooling.


> not a great language

It really depends on how you define a great language. Btw, C# can do everything that Go can, but has a different inheritance/interface model. C# also has some things that Go does not, like an easier way of working together with C++.

Btw, Java is nearly in the same boat. It has a different inheritance/interface model and can do nearly all the things Go can (except maybe CSP/channels, and it cannot be as memory-efficient as C# and Go). But the point still stands: Go can basically do less than both.


Rust's tooling is pretty stellar.


Depends. Package Management, compiler errors? Absolutely. IDE support? Not so much. I don't think it's fair to compare it to C# / JVM languages, but even compared to other languages, I wouldn't say the tooling is all that great.

Macros make IDE support a lot harder than in other languages, the atrocious compile times sure do as well. While IntelliJ Rust and rust-analyzer both try to do their best (kudos to the teams working on them!), it's still nowhere near anything I'd call "stellar".

There are mitigations being worked on right now, like compiling macros to WASM[0] and Cranelift[1], but these endeavours, while very promising, are not what I would call production-ready yet.

[0] https://github.com/dtolnay/watt [1] https://github.com/bytecodealliance/cranelift

EDIT: Also, debugging. I'm not sure of the current state of it, but last time I checked, it was pretty much just GDB integration that was working. CLion (which I think uses GDB?) is able to give you a nice UI for basic debugging, don't know about VSCode integration.

Still, even if it works, it's not at all comparable to debugging in C#/JVM, you're limited to very basic stuff compared to these languages / the well-known IDEs for them.

So, again, it may be "fine" or even "good", but imho it's not exactly stellar :)


VSCode with gdb at the backend gives you IDE like debugging.

VSCode with RLS gives you decent IDE support - not Visual Studio for C# standard, but much better than a large number of languages.


With the caveat that RLS is a bit fragile - I have a project with a moderately large proc macro that generates multiple items, and RLS falls over on that project now. Rust Analyzer is more robust but doesn't have a debugger or as comprehensive a feature set yet.


Re compile times: in nearly all cases I've seen, un-optimized builds (opt-level=0 or 1) are still quite fast. Do you know of any counter-examples? I'd love to see what the cause is.

Optimized builds though, yeah, you routinely hear about multi-hour builds. As long as there's a dev-friendly fast mode though, slow optimized builds don't bother me all that much.


What counts as "quite fast" for you? If you're talking about multi-hour builds, then you might not be talking on the same scale as others. I would consider 10 seconds to be pretty slow for a small/medium sized project.


Yea, fair. But for compiled languages, a few seconds is still fast for the field - Go takes that long on anything but tiny projects. My job's main Go binary takes about 30 seconds for a rebuild, for instance, and I know of other projects here that take well over a minute.

Many of the hours-long optimized builds for Rust projects that I've seen have turned to <10 seconds without optimizations on a clean build (possibly libraries are still cached tho), and even fairly large ones are still less than a minute. Tiny projects are around a second or less. I'm usually looking at pathological cases though, so I'm not sure how well those hold up in general.

So I'm not looking for interactively fast - very few languages achieve that on even medium-sized codebases, even if they're interpreted. But under a minute for a couple million lines of code in the project and libraries fits in my "reasonable" range.


In my experience, the problem is that debug builds (which, I assume, are not optimized) are still annoying. I recently built a new PC and got a Ryzen 3900x, so I'll probably have to check again with that :)

Now, "atrocious" may have been bad wording without context: I think it's just taking very long to finish the normal edit-start-debug cycle, compared to other languages (Go, Java, C# for example). I don't have any numbers right now, but I'll try to update my post once I get home, I've done a few basic emulator projects in Rust :)

At least, it was long enough to be annoying to me. Maybe I'm just too used to C#, which I use at work.


I've seen some pretty long compile times in C#; at my previous job, building a local copy of the whole solution took around 40 minutes to an hour (400+ csproj).


I guarantee that a 400+ module Go program or 400+ Rust crates will build much more slowly (if for no other reason than that the Go and Rust compilers do a lot of work that is deferred to runtime in C#).


Rust copied its tooling from Go and Python. Also, its tooling cannot work without internet access and needs to download MBs of data for simple development, while on the other hand it proclaims itself to be a systems programming language meant to replace C and C++, which don't require an active internet connection to work.

Also, C++17 is a much better version that takes away the deficiencies of earlier versions and, if not better, at least provides similar run-time guarantees and safety as Rust (which also needs to rely on unsafe code to do anything meaningful in systems programming).

So if you use either a modern incarnation of C++ or Rust, you will get the same safety guarantees. Just, with C++, you will not have to climb a mountain of learning just to do simple things.

I think Rust gets a lot of hype because its syntax is complex and its learning curve is much higher than that of C or C++, and programmers see a new shiny language that got some traction as a panacea for all programming problems.


> which also needs to rely on unsafe code to do anything meaningful with systems programming

Rust allows you to isolate such instances of unsafe-ness to specific parts of your code base. When you're auditing for safety, you only need to audit those specific parts, because everything else is safe by default. For example, the implementation of std::vec::Vec uses unsafe, but it exposes a safe interface. So once you've audited the few hundred lines of std::vec as safe, you're sure that the millions of lines of code relying on it are transitively safe.

> which doesn't require active internet to work

cargo build, cargo clippy, rustfmt, rust-analyzer, rustc all work without internet. If you want to fetch dependencies from the internet, you need an active connection. If you're in a situation where internet isn't available, you can work around this by writing all the libraries you need. Same with rustup - it needs the internet to download new versions of rustc and cargo.

> you will not have to climb the mountain of learning just to do simple things.

I'm sorry you felt this way. Learning materials are improving all the time and the language is improving too. For example, future combinators were a pain but with the advent of async-await it's easier to write and read code that deals with Futures. Give it another try :)


> you only need to audit those specific parts because everything else is safe by default

This deserves clarification. You only need to audit those specific parts because everything else is safe IFF those parts are safe.


> If you're in a situation where internet isn't available, you can work around this by writing all the libraries you need.

Why can't cargo be simple enough, like PyPI, where all the dependencies are on an intranet and private, or, alternatively, all the dependencies are in my own simple folder and I can update them separately whenever I need to, rather than relying on an internet connection?

Rust cannot match C, and it will always need to interface with C code, so in the end it will only be as safe as the C code it interfaces with.

I do not understand why one would create a language with complex syntax and semantics that is difficult to learn, in spite of so much development in the compiler world. Take Swift, which has a Python-style syntax; Swift is also performant and secure enough to provide a replacement for Objective-C.

Hopefully FP (functional programming) becomes more mainstream and takes away all this burden of imperative languages like Rust. But, being pragmatic, I understand that for hardware, C will be there at least for the foreseeable future, and Rust or any FP language can only do so much.

On a personal level, Python made code beautiful with whitespace and PEP 8, and cultivated a habit of trying to write beautiful code. It did not succeed completely in making every library beautiful, because in the end it needs to interface with C libraries, where practicality beats purity.

Rust copied gofmt and introduced rustfmt. Still, Rust has miles to go before it is as clean as Python for beautiful code and cultivates that as a very important habit, as Python does. It may take Rust another 30 years or more to come close, given its syntax and reliance on the C interface.

The only other language I see at present with safety guarantees that is as clean as Python is Swift.

I like Lisp, Haskell and other FP languages, which can achieve a lot with very little code and are easier to refactor and maintain in the long run. Rust is just the opposite: as difficult to learn as Haskell, with complex syntax and semantics, but less powerful than C.


> Why can't cargo be simple enough, like PyPI, where all the dependencies are on an intranet and private, or, alternatively, all the dependencies are in my own simple folder and I can update them separately whenever I need to, rather than relying on an internet connection?

It does have registries[1] and vendoring[2].

[1]: https://doc.rust-lang.org/cargo/reference/registries.html

[2]: https://doc.rust-lang.org/cargo/commands/cargo-vendor.html


Swift is not a competitor for Rust. It may be a competitor for Scala/Kotlin native or Go, but definitely not in the same performance league as Rust or C++.

> Hopefully FP (functional programming) becomes more mainstream and take away all this burden of imperative languages like Rust.

Pure FP is overrated. It is a nice academic model of computation, but it doesn't reflect the way hardware works. Hardware is mutable, and memory is limited.

> It’s as difficult to learn as Haskell with complex syntax and semantics, but less powerful than C.

This is a subjective opinion, not a fact.

> as clean as Python

:D :D :D


Swift is replacing Objective-C (a systems programming language and a dialect of C). So indeed it is a direct competitor to Rust, as it also provides hardware abstraction in a safe manner to replace a C dialect.

I will wait and watch for the day Rust is used to control Apple or Android hardware as performantly, and with similar safety guarantees, as Swift or now Kotlin.

It's the same as the C++ used by Microsoft to provide the underlying abstraction of hardware on Windows.


Kotlin and Swift fall short on memory management, and both need a runtime and GC. Currently, Kotlin native is an order of magnitude slower than Rust, and Swift also loses in benchmarks, sometimes by 10x (although not always - purely numeric Swift code is the same speed, because it goes through the same optimiser).

As for hardware abstraction - I'm not sure what your point is - Rust can use any C API.


It doesn't matter if improvements are still required; Apple is doing Swift, and Google is doing Kotlin on their mobile OSes, not Rust.


Swift and Kotlin are used for application programming there, not systems programming. Also, Google does Rust as well, and Microsoft is slowly switching some pieces to Rust too. These are big companies with many teams that use many different tools.

I don't know about iOS, but Android does offer native APIs, so Kotlin is not the only officially supported choice there. Actually, for anything that needs performance, e.g. games, Kotlin is a no-go.


Swift is used for systems programming on Apple platforms, as already replied on another thread.

Google has already mentioned a couple of times at Android Fireside Q&A sessions that it is weighing Kotlin/Native adoption on Android.

And even if not, Rust isn't taking C++ and Java's place on Android, especially after the Project Treble changes, where drivers can now even be written in Java.

Google and Microsoft are indeed adopting Rust, with some products already in production, although they are also among the major ISO C++ contributors, so it remains to be seen how much Rust love from their security teams will spread into the OS development teams.

For example, I still look forward to the day that Azure Sphere actually offers something other than C, in spite of its security sales pitch.


I would put it another way: Rust is not a competitor to Swift in Apple's ecosystem.

Apple is not adopting Rust to replace their C and Objective-C code, but rather Swift, as they quite clearly mention in the Swift documentation.

> Swift is a successor to both the C and Objective-C languages.

https://developer.apple.com/swift/


Why would we need to limit our programming language abstractions to how the machines work underneath? Are there some stateful objects running around in the hardware? Just because the abstractions are detached from the hardware doesn't make these design patterns useless.


It is nice to have a connection between abstractions and hardware, because it makes it easier to reason about performance, which in turn makes it easier to achieve great performance.

Now, not everybody actually needs great performance, so some distance between hardware and abstraction is OK, but how much depends on the use case.


Well, I agree, that's why we have both low and high level languages. High level languages, setting you free from the actual constraints of the underlying machine, allow you to define abstractions that can increase productivity greatly - that's their whole point of existence.

I mean I don't see people complaining how you don't need to manually manage memory in SQL expressions.


> I mean I don't see people complaining how you don't need to manually manage memory in SQL expressions.

Well, they don't complain about not having to manually control the low-level bits of query execution, but later they frequently complain about performance problems. And then they add hints, set up buffer sizes, indexes, etc.

Also, Rust (and to some degree C++) shows that you can have both very high-level, productive abstractions and low-level control. They call that zero-cost abstractions. The price to pay is a steeper learning curve, but not actual productivity or final performance.

This is the most exciting part about Rust, actually. I know how to write Kotlin, Scala, Java or C# code that's close in performance to C or C++, but it will be ugly, unsafe, hard-to-maintain, non-idiomatic code. Rust gives you the ability to write code that has almost Python-like expressivity but is still as fast as hand-optimised loops and pointers.


I'm not sure what your point is. I feel we may have a different idea of what very high level abstraction means in programming languages.

Just about everything in software engineering is a compromise of sorts. Not knowing Rust or C++, I doubt they're as high level as something like Haskell or domain-specific languages like SQL. Are monads first-class citizens in C++? Type classes? Generalized algebraic data types? Parametric polymorphism and pattern matching? And so on.

Would you really achieve the same amount of type-level guarantees and equally concise control flow and concurrency in C++ with the same number of lines of code as in Haskell? If not, then there obviously is a productivity penalty, just as in Haskell there is a performance penalty for not being able to drop down close to the metal.

From what I can tell, these languages are not aiming to be very high abstraction level languages but instead solid systems level languages with some convenient abstractions and design patterns baked in from higher level languages.

Sticking only to zero-cost abstractions limits how high-level the abstractions brought to the language can be. This in turn limits productivity.


Rust is very strong in its ability to build extremely powerful abstractions. With built-in compile-time procedural metaprogramming, I would risk the statement that in some domains it may actually be even higher level than Haskell. Maybe its type system can't prove all the same guarantees as Haskell's, but on the other hand, Haskell can't prove some guarantees that Rust can. They offer different sets of abstractions with some overlap (e.g. first-class type classes and ADTs). It is really hard to say which one is higher level. They are just different.

Anyway, type safety is not the same as productivity. If it was, then nobody would use Python or Ruby and everybody would use Idris. I became much more productive quickly in Rust than Haskell.


Interesting, what kind of guarantees does Rust give that Haskell's abstractions cannot do?

Haskell also has metaprogramming in terms of Generics and Template Haskell, allowing you to create custom DSLs and such. Sadly, it kind of leads to pretty ridiculous compile times and mixed editor support so I'm trying to avoid that.

We should remember productivity is really subjective and not the same thing as high level abstractions. We're typically the most productive on the language we have the most exposure to, whatever it may be.


> We should remember productivity is really subjective and not the same thing as high level abstractions. We're typically the most productive on the language we have the most exposure to, whatever it may be.

I agree with you on this: we are most productive in the language we have the most exposure to.

Indeed, many in the Rust community do not realize that the LLVM infrastructure Rust uses to make its code executable on real hardware is itself written in C++.

Rust is still miles away from compiling itself, and that may never happen. So the survival of Rust depends on the progress of C and C++, and I doubt it is even viable to call it their replacement.


> Indeed many in Rust community do not realize that the LLVM infrastructure they use for Rust to make its code executable by real hardware is itself written in C++.

And the operating system is written in C, and the CPU in VHDL (or something similar). Compiling itself is a property that only academics care about. LLVM and C are not going anywhere anytime soon. There are more important things to do now. That's why Rust is already a much more loved and popular language than Haskell - because it focused on important stuff and getting the job done, not theory that looks nice only on paper but doesn't match how hardware (and the world in general) operates. Real stuff is mutable. Not being able to mutate things directly in Haskell is a productivity killer. Many of the "abstractions" you can build in Haskell exist solely to work around this limitation.

Exposure has nothing to do with this. Rust is new and Haskell has been around forever, yet Rust has already far surpassed Haskell in terms of adoption.


Not all OSes are written in C; thankfully a couple have moved to C++.


Template Haskell is not standard Haskell; it is an extension. But even with it, Haskell's type system can't reason about the lifetimes of objects. It doesn't give any guarantees about object destruction, nor even object construction, due to laziness. Hence, you can't prove anything about the resource usage of a Haskell program, which makes it usable only for cases where you don't care about resources and where the program doesn't interact with external systems.


Well, Haskell's Prelude is purposefully small and extended by a variety of GHC extensions. The Prelude + libraries + extensions effectively make up what we call Haskell today.

Just like with Elixir and many other high level languages I don't think it's meant for building low level systems where you're heavily constrained by resources.

But why wouldn't it be able to interact with external systems? I'm not having any problems building an API that interacts with a database, caching layer etc.


Bartosz Milewski's blog is a source of knowledge about Haskell and how to apply that knowledge in C++.

https://bartoszmilewski.com/2014/10/28/category-theory-for-p...


Sure, I didn't say "useless". The post I was responding to was worded in such a way as if FP was the only suitable paradigm everybody should be using. FP is a nice and useful abstraction, but in some areas it is just too inefficient and despite all the great compiler progress of the last decade, it is still not there yet.

FP languages are not in the same problem domain as system programming languages, so claiming FP should replace Rust is just ridiculous (and as a side note: Rust borrows many things from Haskell/Scala as well, so some FP is there).


> So if you use either modern incarnation of C++ or Rust, you will get same safety guarantees.

As a C++ dev who's done some Rust on the side, this is utter horseshit. Regardless of which C++ version you use, it is utterly trivial to write code with undefined behaviour. Even if you're extremely diligent, you will eventually be hunting down a bug caused by undefined behaviour.


Kind of, if one belongs to the developer category that doesn't use static analyzers as part of the compile/build cycle.


> So if you use either modern incarnation of C++ or Rust, you will get same safety guarantees. Just with C++ you will not have to climb the mountain of learning just to do simple things.

This is absolutely, completely wrong. If you want to write safe C++, you must follow a very strict discipline, essentially equivalent to appeasing the Rust borrow-checker. The main difference is that straying from this discipline in Rust leads to a compile-time error, while straying from this discipline in C++ leads to unsafe, but probably working, code, that you will only discover is unsafe after much work.

And in both languages, you sometimes need to drop from the safe subset to do some things. In Rust, that is signaled explicitly with `unsafe`, in C++ it just means using more C-like constructs.

It would also be very interesting to see what subset of modern C++ vs Rust could be used for safety-critical code. Given that several C++ idioms like RAII depend to some extent on exceptions, and those are not permitted in safety-critical code (per Bjarne's own standard), I would not be surprised that Rust will become very attractive in this sector once it matures.


You can get the compile-time error when using tooling like Visual C++, or Eclipse CDT with static analyzers turned on.


Not for all the things that Rust catches. And not for the most important things.

Static analyzers typically catch trivial stuff like returning a pointer to stack memory from a function, which is easy to spot during code reviews. But they don't catch more sophisticated UB that can result from bad interaction of code in different translation units. And this is the kind of UB we're most interested in being protected from.


If you mean lifetimes, yes you can, with the latest Visual Studio 2019, although it is WIP.

And compiling in debug mode does enable bounds checking for arrays and vectors, and checks for iterator invalidation.

Rust still needs to define what actually constitutes UB in unsafe blocks, how multiple implementations might affect the language, and a memory model.

Not saying that it is perfect, rather that it can be made safer than how many make use of it, which is important, because there are plenty of codebases out there that will never get rewritten into something else.


In C++ lifetimes are not encoded in types. Therefore I seriously doubt it can do the lifetime analysis with such accuracy as Rust can, particularly if the whole source code (e.g. libraries) is not available.


> I think Rust gets a lot of hype because it's syntax is complex

Rust syntax is simpler than C++. Rust is a much smaller language. To me it appears much more like a mixture of C and Haskell, rather than C++.


Did you forget how much space apt-get install build-essential is going to take?


What on earth did Rust copy from Python, toolingwise?


Just based on what I see, rustfmt is copied from gofmt, which borrowed the idea from Python. Cargo is just another PyPI with certain improvements.

Rust's RFC process is also from there.

Hopefully Rust can have something like Python's PEP 8 to inculcate in programmers the habit of writing beautiful, readable code.


Eh, what part of gofmt is borrowed from Python? Python hasn't even bothered to format all parts of its own standard library to follow PEP 8...

As for PyPI, it is Perl that (as far as I know) invented language-specific packaging. Then Bundler, a Ruby project, took that and improved the situation, letting people easily create repeatable and predictable installs. Some of the people behind Bundler then went on to build Cargo. Packaging for Python is and has always been a mess, which people shy away from rather than copy. There are some signs of improvement in later years, but it is not coming from PyPI.

As for RFCs, the RFC process should reasonably be attributed to the IETF. They were a couple of decades ahead of Python there...


In my experience, the debugging story isn’t as bad as most think, because compilation times are so fast. If I need to run some code at a random location to see what’s going on, I just add it and rerun the unit test, usually it takes less time than configuring a debugger to do the same thing, and I’ve also now got a place to put a UT to cover whatever the problem was.


Edit-compile-rerun is as fast in Go as, or slower than, in Java, C#, Python, Ruby and many others, and it's still an option in those languages. Compared to C, C++ or Rust (or, surprisingly, JS) instead, you're probably right.

However, unit testing is not by any means a substitute for all uses of a debugger. You can add unit tests to reproduce an error in any language, but once your problems start crossing module boundaries, a debugger (or printf debugging at least) often becomes indispensable. And having an edit-and-continue debugger is extraordinarily helpful when dealing with bugs that only reproduce in complex application states (when your cycle becomes edit-compile-rerun-send50CommandsToPutAppInCorrectState-reproduce).


Refactoring is a bit more advanced than just renaming, though. gofmt can transform one pattern into another: https://spf13.com/post/go-fmt/


I wonder what the future of package management will be. The author of the Node-revamp language Deno (https://deno.land) chose to basically follow a strategy not unlike Go and forego package.json and similar dependency manifests, while also adding things like explicit network access flags to prevent, I believe, NPM-type package hacking vectors.


I just don't see what Deno is going to build with access flags that couldn't be solved by walling off local code and wrapping everything npm does into containers.

There's been nothing special to install on Linux to use containers for a long time now, and once WSL2 is done (whether it's made available to all Windows users that way, or via native containers), managing containers can just be the way npm works.


Agree with a lot of that especially this part:

> Major version changes require source-code level changes in all files touching the package

I've always thought the tooling was half-baked but would get there eventually. The introduction of modules has caused a split and put the tooling even further behind, IMO.


Go also doesn't compile as quickly as people would have you think. A regular-sized project will compile more slowly than a similarly sized C# project.


It's part of Go's typical "we assume/declare that we do better/simpler without checking what/how other languages do" schtick.

Take error handling and package management, for example - different approaches, hacks piled on, etc., instead of just going with the program, seeing how others do it successfully (e.g. Maybe/Optional errors, exceptions, etc.), and getting on with it...


Do you think Go's developers and users are simply unaware of exceptions, optionals, etc.? Or is it that the trade-offs they made aren't net positive for most use cases? Even if the second were true, that isn't necessarily a bad decision; since C# continues to exist, there may be more use for a language targeted at some niche than for just a copy of C#.

I've used C# and Go extensively and like both languages. In some big ways they are similar: garbage collected, (fairly) fast compile times, large ecosystems, good tooling (if you include the JetBrains tooling for Go), built in lightweight concurrency, etc. It's not surprising people compare them.

For the areas where they differ, Go in general is more explicit and more low level. Handling errors as return values instead of exceptions is more typing, but I find it easier to check that error handling has been done correctly in my own code and in third party libraries. Instead of the class/struct dichotomy pointers are explicit and any type can be used as a value type (there's also more flexibility in using Go types with unmanaged memory, and the ability to take interior pointers). The way interfaces are implemented they can be used in more places without boxing.

C# has generics, but I've run into frustrating limitations with them - for example, not being able to write numeric code that works with either 32-bit or 64-bit floats. Go's generics are currently limited to some built-in types, but the proposals being iterated on seem like they'll be more useful to me than C#'s implementation if they make it into the language.

Other things: Go has a single concurrency story so the ecosystem isn't divided into sync/async worlds, is a less complex language overall, and is a bit less verbose aside from error handling (no public static void etc). Overall I probably prefer Go, but I like C# also and could easily imagine projects where it might be a better choice especially in areas where the lower level aspects of Go aren't useful.

Reading HN, it seems like a lot of people believe that people who use Go (and even the language designers) are simply ignorant. I'm open to the possibility, but can you flesh it out for me a little bit more? What am I missing here?


Proof? Have you tried to compile Kubernetes and compared that to a 1M-line C# project?


Do you compare it with C# AOT compilation?


Why would you?


Because compiling to executable machine code and compiling to abstract high-level bytecode are completely different things.


Pretty sure `go guru` satisfies reference finding across modules/packages and finding all implementations of an interface. It is slow because guru (and gorename) needs to search all packages in the module and GOPATH, but it is also correct.


> No way to automatically find all implementations of an interface - especially problematic in a language with implicit interface implementation

C-c C-o-i (go-guru-implements)


Go guru is extremely slow; gopls is faster but doesn't implement this (yet, hopefully).

Having multiple incomplete tools that do the same thing in the official tooling is another problem with Go tooling.


Yes, it's always amazing how these new environments skimp on debugging support. After all, that's the activity most of us doing real software spend the most time on, and no, no amount of "safe programming" will avoid debugging. I guess debugging is just not cool.


Agree!


> Coming from the C# and JVM

Not that I disagree with you, but comparing C#/JVM to Go is like comparing a commercial airliner to a single-seat monoplane. These projects are diametrically opposed from the get-go in every way I can think of.

There are only two languages I ever liked working with: C and Go. Because they're simple - full of compromises, but simple. I can't say that of C# or Java; you probably need a PhD just to be able to use 1/3 of the C#/Java tooling and IDE features.


> I can't say that of C# or java, you probably need a PhD just to be able to use 1/3 of c#/java tooling and IDE features.

That is absolutely incorrect. The whole point of having tooling at your disposal is to make the process of writing and reading code dead simple.

You don't need a PhD to pause the debugger, edit some code, hit continue, and have the new code executing. You don't need a PhD to select some text, click Extract Method (or press the shortcut), type in the method name, and have the new method/function on the screen.

By the way, C has excellent tooling on all of these sides as well - C IDEs and debuggers are extremely powerful, unlike Go.


Yes, _commercial_ C (and Java) IDEs are extremely powerful, but it's a bit unfair to compare them to _open source_ Go IDEs/editors. From what I hear, the only commercial Go IDE available so far (GoLand) is extremely powerful too...


Not that I buy this argument in the slightest but:

https://code.visualstudio.com/docs/languages/csharp

(Are you going to say it's unfair to compare something with the commercial backing of Microsoft, because Google backs Go?)


>because Google backs Go

The degree of backing is completely different though. They appear to only do the minimum effort that they need for themselves. Microsoft provides good C# tooling because they want you to stay in the Microsoft ecosystem.


"They want you to stay in the Microsoft ecosystem" is why they made an open-source IDE and great tooling for it that competes directly with an IDE they sell for up to $1000/seat/year, the one that actually tries to get you to stay in their ecosystem with things like free Azure credits, preferential Azure pricing, and free Windows Server and MSSQL licenses?

Though I think all of this is entirely tangential to the original point...


Eclipse is a far more powerful C and Java IDE than any non-commercial Go (I)DE. NetBeans is also a great Java IDE. They're not as powerful as IntelliJ or VS, but they are still miles ahead of any Go plugin I've used.

Also note that for C you have gdb, you have ctags, you have all sorts of great plugins for Emacs or Vim or whatever. You also have stellar plugins for JavaScript and TypeScript - for VSCode, for Emacs, for Vim and many others.

I'm not just comparing VS with vim+go plugins.


Can't agree here. Go is going for simplicity of the language but definitely not simplicity of tooling. If anything, I would say Go's tooling is more complicated than C#'s, with its myriad of small tools. The debugging story is just objectively better in C#, and it's pretty easy to use.


I definitely appreciate many parts of Go's tooling ecosystem: pprof, gofmt, go get (in GOPATH mode), and go bug are all pretty good ideas. pprof in particular, and the various profiling tools built out around it, are great.

That being said, I haven't found all aspects to be as nice - most notable among these is the lack of a good debugging environment. There is no debugger in the go tool, and users rely on delve[0] for their debugging needs. The dlv command-line tool has actually gotten a lot better over the last couple of years (props to the maintainers!), but integration into visual front-ends is still lacking. The VS Code and GoLand integrations, at least, are lacking to the point of being nearly unusable except in the simplest cases. For a few examples: stepping is painfully slow (on the several machines/OSes I use), conditional breakpoints don't seem to work, keeping the tools up to date doesn't work consistently (which means updating the underlying go/delve tooling or the IDE will frequently break the experience), and defining your debugging processes (e.g. via VS Code's launch.json) is tedious. None of these are Go's fault per se, but they diminish the overall experience.
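On the launch.json point, for reference this is roughly the minimal configuration the VS Code Go extension expects (values here are illustrative; `program` should point at the package you want to debug):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch package",
            "type": "go",
            "request": "launch",
            "mode": "debug",
            "program": "${workspaceFolder}"
        }
    ]
}
```

Even this small amount of per-project boilerplate is part of what makes the setup feel tedious compared to clicking "debug" in an IDE.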

In general, it would be nice to have a better debugging experience, esp. since I prefer a GUI tool to the command line for debugging. In the meantime, I might switch back to just using command line dlv.

But regardless, definitely agreed that Go's tooling on the whole is great. pprof has saved my skin on countless occasions!

[0] https://github.com/go-delve/delve


I think the biggest barrier is still the compiler itself. The thing is built for compilation speed, not for usefulness in terms of compile-time semantics or readability of errors, like the bad old days of GCC and GHC.

In this day and age, I shouldn't have to understand how a parser/lexer works to understand what an error really is, especially for super simple stuff like a missing semicolon, bracket, paren or comma, or a misspelled symbol.

Small stuff like this is just a nuisance for veterans, but I've seen it prove fatal on many an occasion for beginners, who quickly get intimidated and start feeling stupid.


Delphi and Turbo Pascal are also built for speed, with more expressive languages, and are still able to provide sensible error messages, all the way back to MS-DOS days.


The compilation times were extremely short, and that’s on a 4.77 MHz CPU. (Too bad we are unlike the Ancient Egyptians who would’ve stuck with it for a couple of millennia.)


I never really felt that way about Go error messages, but I also came from Python, Java, C, C++ (via GCC and Clang) where the error messages are far, far worse. Maybe there are languages/toolchains where the error messages are better?


This is confusing to me, the pre-compilation tools catch these simple things instantly and display them right in your editor, which is far superior to having to wait until you compile to catch them.


It's crazy that Haskell has a better debugger (in ghci) than Go, despite Haskell being a far less debugger-friendly language.


"print/log" based debugging in Haskell is harder and thus there is more pressure to solve the debugging needs with a proper debugger.

I rarely feel the need for a debugger in Go and other procedural languages because I often find it easier to just patch the code with some ad-hoc debugging logic that lets me understand what's going on. I find that usually easier to do rather than figure out how to attach a debugger in the right environment where the issue can be reproduced.


Print-based debugging isn't necessarily harder in Haskell:

https://www.stackage.org/haddock/lts-14.21/base-4.12.0.0/Deb...


The laziness can make it a bit of a challenge to follow, though. I know trace itself is strict, but that thunk will still be lazy from the environment around it.

Debugging has the same problem too, actually. "Step next" is much more challenging to know what is actually going to be "next" than in other languages. On the plus side (no sarcasm), it's a great way to get a feel for what the laziness really does. I recommend stepping through some non-trivial code of your own with the debugger for any intermediate Haskell programmer, even if it's working perfectly, because it'll actually show you the lazy evaluation order as it happens.


Oh the debugger in Haskell is not used that often. Usually people debug their logic in a repl session.

Print debugging is a little weird in pure code, but it's useful and the same as any other language in any code that does IO.


I think in highly concurrent apps a debugger might be even more difficult to use, since at the end of the day it only allows one to walk linearly through the application.

But in any case, you can use gccgo and gdb.


I feel your pain with the debugging and I have a feeling this will be an aspect of go which matures greatly in time. Especially with respect to goroutines, it can be tough tracing the program flow, which makes good debuggers even more valuable. Also a very hard challenge.

I'd love to be able to see a diagram of goroutines and channels, like a chemical plant. That's probably asking for too much :p


Do people really use a debugger ?

This is a serious question, because in 5+ years of programming I have never used a debugger. The only time I used one was when I was learning to program, to trace out for loops.

Now I just use print statements and read code to figure out what’s going on.


Yes, absolutely, it's an indispensable tool - and I've been doing this software gig since 1990. Some languages, like JS and Python, which are quickly interpreted, are amenable to printf debugging, while others, like C++, C, and Go, really benefit from a debugger. A debugger is your best bet at debugging multiple threads, for example, especially since you can pause them at strategic locations to recreate a race condition. This is really difficult with printfs.

The world of JS development has tremendously powerful live runtimes/debuggers in the browser developer tools, those are a fine example of what a debugger can be.

I write go using Goland by JetBrains, which uses Delve under the covers as a debugger. Compared to dbx (which was amazing), gdb, etc, it's deficient, but better than nothing.


How do you know how you got to that point in the code without a debugger? What's the call stack like?

How could you debug a call in a DLL that is being loaded by your program if you do not use a debugger?

It might be alright for tiny programs but it's essential for "big" applications.

printf'ing isn't going to help with 2 million+ lines of code or where there's a leak somewhere in that code (eg. someone else wrote some bad code or bad reference counting and didn't release when they should, eg. in COM wrappers in Windows), or someone is double-freeing some memory or trampling on memory and you need a hardware breakpoint to know what is modifying that memory at any given point.

This doesn't affect just compiled languages either - how would you know what is constructing this object, for example? https://github.com/magento/magento2/blob/420a8b6209a4e62ede9...

EDIT: I say all this in a generic "use of debugger" sense, not a Go-specific debugger usage.


I step through almost every line of code I write in a debugger. I'm pretty visual based, and that combined with the action of clicking through forces me to slow down and really think about what my code is doing. Whenever I open the debugger, I find bugs that I can fix! :P

I've heard it said that there are two types of programmers: those that stare at their text editor all day, and those that stare at their debugger all day.


When I write a new unit test I pretty much always step through it in a debugger to make sure it is doing what I think it is doing.


Yes! Try it!

I often find myself working on a legacy codebase. The application will enter a state that I don't think should be possible. The debugger can usually help me figure out how that happened.

It's worth mentioning that debuggers are less essential in languages like Go where the control flow is purposefully very straightforward. I often debug Python and end up being totally surprised to see the execution jump somewhere else because of a decorator.


I believe Rob Pike and Ken T are firmly in the 'add prints' and think camp rather than use a debugger so you're in good company.


Rob Pike has definitely used a debugger: http://www.informit.com/articles/article.aspx?p=1941206

Preferring logging statements is one thing, never having used a debugger is another.


It's as though you didn't even read the very article you linked


It's as if you didn't read the comment I wrote and started foaming at the mouth at the chance to say RTFA as is HN tradition. (Or didn't read the article closely enough yourself)

He said explicitly, he used a debugger.

He came to understand the value of having a proper mental model of your code, but the comment I replied to said they had never used a debugger.

My comment was about the difference between knowing how to problem solve without a debugger, and straight up never having used it

There's nothing wrong with having more tools in your toolbox


> but the comment I replied to said they had never used a debugger.

Except it doesn't say that.


> This is a serious question because in 5+ years of programming I have never used a debuggers. The only time I used a debugger was when I was learning to program and to trace out for loops.

They're saying it themselves, "yeah technically once I used it when I was figuring out how for loops work".

I wouldn't say I've ever written a Brainfuck program even though once I messed around with a hello world on an online interpreter

Leave it to people here to say a comment doesn't say literally the exact words it says

-

The spirit of multiple replies, not just mine is debuggers are an option you should at least be familiar with before tossing aside, even if you do end up not needing them


Except that is not the comment you replied to.

> I believe Rob Pike and Ken T are firmly in the 'add prints' and think camp rather than use a debugger so you're in good company.

And that comment doesn't make any mention of Rob Pike never having used a debugger.


Except it is because this is a comment chain...

None of our comments exist in a vacuum.

-

Person A says: I never used a debugger

Person B replies: You're in company with Rob Pike on that

I reply to Person A and B: Yeah but even Rob Pike uses debuggers

-

The fact I have to spell this out is appalling.


Hint: the word rather

It's a preference not an absolutism.


It's the preference of the people referenced by Person B.

Person A used the word _never_, the definition of absolutism.

You're literally making my point: a preference towards not using a debugger is not a limitation; an absolutist approach of never having used one professionally could be.

It's not like I said "OMG go use one right now or your career will fail" either before you latch onto that too. I said even those guys with a preference away from them, have used them professionally

Didn't think I'd be spelling out 1st grade reading comprehension on HN but here we are.

Why this sudden increase in people who think turning off their critical thinking skills to go "gotcha", because you didn't write out your comment like a thesis, is something positive?


Really depends on your compile->deploy->launch iteration time.

Printf'ing your way to victory works but I'm easily 5x faster with a debugger.


To give an opposite metric: on one of the C++ projects I develop, the "add printf, compile, and run" cycle takes on average half a second, while it sometimes takes up to a minute to load the program in gdb or lldb. Which sucks, because a lot of things are incredibly easier with a debugger.


That's very unusual, is it an embedded environment? I'm used to C++ taking much longer than other languages to compile, which in turn discourages printf debugging.


Not at all - standard Qt desktop software. When split correctly into object files and shared libraries, the incremental build time in C++ can be very fast. (Also, being on Linux and using lld as a linker helps tremendously; building on Windows is on average 3-5 times slower for me, and lld links in the blink of an eye compared to bfd (GNU ld) and gold.)


You're right. But you need to have been exposed to a good debugger before you understand this. Preferably you also have had a colleague that was proficient in it to get you started.


My impression has been that developers way too often rely on the subsequent use of the debugger when writing code. It’s like, “I don’t really know what I’m doing, but the debugger will help me understand my code... Oh, what a piece of crap I wrote! Let me start over...” Looks almost like people are debugging themselves.


They're much more useful in maintenance environments where people have already written a million lines of code before you got there. Of course you have no idea what they've done. After a few years they probably have no idea either. You can go grepping in the codebase, but the sheer speed of asking the debugger "how did I get here?" is hard to beat.

They're also useful in layered environments. Sometimes it's a bug in other people's code and you need to get in there to find out what's happening. When all you have is disassembly, and can't readily insert print statements, the debugger is absolutely invaluable. One of my own "debugger greatest hits" was finding a bug in Windows CE stack unwinding this way.


For me, it's almost always been for understanding other people's code, especially poorly documented third party code internals.


Exactly, the more sophisticated tools we have to manage incomprehensible programs, the worse they get in terms of maintainability.


> Now I just use print statements and read code to figure out what’s going on.

Print is worse and more work. You should set one (or several) breakpoints and then debug, which replaces possibly many run/debug loops at once. You also have no chance of accidentally committing a print statement.

Hearing that people don't use debuggers feels like hearing that someone uses a screwdriver instead of a drill that someone gave them for free and is just sitting in the garage, untouched.

There are literally no drawbacks to switching to a debugger.


> You also have no chance of accidentally committing a print statement.

Debuggers are ephemeral---unless they are "reversible", which is a subject of ongoing research. You can't replace logging with debuggers, and some print statements used for debugging are in fact better described as post-hoc logging. On the other hand, print statements do change your program and can mask some other problems, especially in multi-threaded programs.

> There are literally no drawbacks to switching to a debugger.

Only for properly prepared environments. In some environments enabling a debugger itself can be annoying or even close to impossible (for example, some non-native socket-based debuggers tended to be fragile). I use debuggers when I can, but being able to not rely on debuggers is useful from time to time.


You seem to have inferred two straw-man arguments from my comment: 1) that people should stop logging, and 2) that debuggers can be used in 100% of cases.

> You can't replace logging with debuggers

I never suggested this. I only discussed whether to use printing or debuggers for debugging.

> Only for properly prepared environments

Of course, but these are the vast majority. Most devs are working with Java, Python, C#, TS/JS, Visual Basic, Go, or some combo. All have debuggers.


While I meant not to fully refute but rather to build upon your statements, some clarification seems necessary:

> I never suggested this. I only discussed whether to use printing or debuggers for debugging.

Yes, but I specifically mentioned one usefulness of printing for debugging ("post-hoc logging"). Printing can be used both for debugging and for logging simultaneously, and many debugging prints can be readily converted to actual logging. As far as I know there are no widespread equivalents in debuggers---but please let me know if there are, I'm genuinely curious.

> Of course, but these are the vast majority. Most devs are working with Java, Python, C#, TS/JS, Visual Basic, Go, or some combo. All have debuggers.

Their quality varies wildly though, and they are especially fragile when multiple languages or environments are in play. In my recent case, Visual Studio 2017 froze when debugging a mixed C++ & C# codebase. I tried a lot of things, but eventually I didn't want to investigate further and decided to use print debugging for the moment. It was finally fixed when my team and I upgraded to VS 2019; I still don't know why it didn't work or why the upgrade fixed it.


Sure you can, thanks to OS trace points like ETW.


I believe you do not always keep ETW or similar instrumentation on, while logging is typically always on.


Logging is also not always on, unless one has CPU cycles to burn.


Then probably I was talking about something else. I meant persistent logging for later inspection, not ephemeral logging for one-off debugging (yes, it is confusing...).


This is a hard question to answer easily, IMO. I find that Java folks tend to overuse the debugger, spending 20 mins stepping through something that a simple value print would have solved.

That said, some code -really- needs a debugger. You can theoretically spend hours adding prints everywhere, but even that pales in comparison to a few minutes in one.

In short, I find that the need for a debugger is rare, but when you do need it, it's irreplaceable.


If you know where something is going to go wrong and strategically put a print statement, how is that better than just putting a breakpoint there?


There's literally no difference between a print and a breakpoint, except that with a print you actually have to write code, while with a breakpoint not only do you write no code... you can see the entire program state.

There is literally no argument here. Debugging with breakpoints is categorically less work.

When you see people stepping through code, it's because they don't know where the error occurred. It's the equivalent of putting print statements everywhere. You tell me which one is better: littering your code with print statements, or breakpoints?


It's simpler to click a few lines of code, get the entire program state or a selected subset, and then get a perfectly line-by-line granular understanding if desired, than to write a bunch of print statements. The only reason printlns would seem easier is that it takes a bit more time investment to learn debugging tools.


How do you deal with multithreaded debugging?

In my IDE I can see all threads, I can freeze and unfreeze threads as I want, and I can run threads to the same point. All while inspecting the variables in use.

Or pausing code and seeing the whole call stack, having it navigable by just clicking.

This would seem a nightmare with print debugging!


Lots of reasons to love a visual debugger:

- Easily set breakpoints and localize faults quickly.

- Enter the debug shell and evaluate statements based on the current program context.

- Tinker with variables at runtime and see how the code responds.

- Conditionally catch things that are going wrong and evaluate program state at the time.

- Simulate difficult-to-reproduce things like race conditions by messing with program state.

- Use it like a CLI to do stuff from the program, like send requests using the actual program code.


I do use a debugger.

A debugger is not just useful to debug. I've found it a joy to actually be able to step through every step and see e.g. a request being transformed through the different libraries and middlewares. In Goland, you can even step into the stdlib!

I will admit there a few "blind spots" but 99% of the time, a debugger is a super fun way to understand your code rather than inserting print statements everywhere.


Yes. Especially if the debuggers are easy to use and aren't just gdb-style shells.

I can definitely live without a debugger; I've done it before and you get used to it after a while. But given the choice between a programming experience with a good graphical debugger and one without, the former will almost always win out.


"Do people really use a debugger?"

As with many other perennial programming debates, I think we underestimate the diversity of our experiences when having this debate. There are codebases that debuggers don't really help that much on, perhaps because they're technically very difficult to get running, or perhaps because they were written in such a way as to not really need them very often.

There are other code bases where debuggers are a necessity.

Some of those considerations are not even strictly speaking in the code themselves, but related to the code + developer; for instance, I really love debuggers for codebases I don't understand because I just came to them. You can read the code all you like, but until you watch it run, you don't realize that this one if clause that you kind of skimmed over actually invokes an entire chunk of code you didn't even know existed, etc. Whereas someone intimately familiar with that code may not need the debugger so much. Any sort of recursive data (parsing, HTML nodes, etc.) is a pain to deal with in printf debugging too because you either fail to dump out enough context to understand where you really are, or because you dump out so much context that you drown in it.

My call is that A: they're absolutely indispensable tools that you should have in your toolbelt so that you don't hesitate to use it when you need it, but at the same time B: you should try to avoid depending on them, because a codebase that you can't handle without a debugger is a code base that you can't understand, and that's bad.

I very deliberately say "should try", because it's not always practical; maybe you're just visiting this code base and have no reason to understand it, or maybe it's just intrinsically too darned large to understand, etc. But you should still strive to not need debuggers to follow subtle, complicated code, but make your code do what it says it does and no more. (e.g., stop using globals, don't write code that tries to figure out what lies to tell to other bits of code to make them coincidentally do what you want, etc.) I also say avoid "depending" on them, because if you don't depend on them, you can use it freely. I've only used a Go debugger a handful of times, but each time it saved me a ton of time. But the codebase in question remains comprehensible and straightforward, because I don't make skilled use of a debugger a prerequisite for following it. (I've inherited other code bases which do.)


> As with many other perennial programming debates, I think we underestimate the diversity of our experiences when having this debate. There are codebases that debuggers don't really help that much on, perhaps because they're technically very difficult to get running, or perhaps because they were written in such a way as to not really need them very often. There are other code bases where debuggers are a necessity.

This is absolutely true and very wise; there are a lot of strange kinds of programming environment out there, and we need to be aware of the full spectrum of tooling possibilities. And failure possibilities.

The worst environment I ever had to deal with was a strange embedded one where my code ran as a subprocess of a 3G module. There was a single-step debugger but not a proper JTAG one - so if it crashed, the device reset rather than dropping you to a debug prompt. Printf was sort of possible, but over buffered USB - and if the device crashed, it lost the last few lines in the buffer which would have told you what the crash was. I ended up leaving a breadcrumb trail in bits of memory that were known not to be zeroed on boot.


Using a debugger means that, when my inspection shows nothing wrong with one value, I can immediately check out if other values are wrong.

Whereas with a print statement, I need to change code and then shutdown and rerun the program. The feedback loop of a debugger is just a lot shorter.


>Do people really use a debugger ?

Absolutely; everything from libraries, language development, even in REPL languages like J and R, I use debuggers all the damn time, native J/R/whatever, GDB, the whole lot. If you deal with hairy issues, there's no way of solving the problem with print statements, and if you're used to them, they're a lot more productive. You just have to pay the toll of learning the debugger.

I haven't uncorked any golang in a long time (~5 years), but I do remember thinking WTF regarding the debugger. I think the idea is to write enough unit tests you don't need the debugger so much. Seems questionable, but what do I know; Google would never hire me. Otherwise I liked golang a lot as a very pragmatic language and ecosystem. I had wished the GC was optional somehow to replace C/C++ tier "systems" stuff, but whatever; it's a fine successor to Java type things.


>Now I just use print statements and read code to figure out what’s going on.

So, instead of using a tool created with debugging in mind, you prefer to edit code in order to get a "tool that helps you debug"?

What do you do if you want to check the result of, e.g., "array[9]"? You write printf(array[9]) instead of using evaluate-expression at runtime?

I see no point in that


I did not for the first 10 years :)

These days I feel completely disadvantaged without an interactive debugger (such as gdb, msvc or a good repl)


I don't usually use debuggers in functional programming languages. Since it's mostly expressions I usually use REPLs.

But for imperative languages like Go, a fast and good debugger can boost up productivity a lot if properly used, it's basically a REPL for imperative languages - since imperative languages mess with the ordering and the environments much more than functional languages.

Instead of inferring how the code would be executed, you debug at reasonable places, see how it actually executes, and play with it directly with some expressions. It's more about the feedback speed.

Using only print isn't sufficient because a lot of data structures are too large to serialize, and some values, like functions, aren't serializable at all; you have to inspect them in the debugger or it doesn't really help.


Maybe you don't have the code.


Seems most people here assume debugger is synonymous with source debugger.


Some debuggers will disassemble (to machine or byte code) code without source.


It depends on the language. I've used debuggers in java, much less in c++, and hardly in go. I feel there is not much of a need in go for debugging, writing good tests and the occasional log/print statement do fine.

Java, often having more complicated structures and layers (more due to the culture; one could write simpler, more Go-like Java as well), is more reliant on debugging. More dynamic (less predictable) languages such as Python are even more so.


My debugger usage pattern varies between languages. For C# I rely heavily on the debugger because it is just so easy and pleasant to do so. For Go using GoLand I use the debugger regularly, but I expect that it won't be quite as versatile as debugging C#. For Python or JS I find it easiest and most intuitive to just print-debug for everything.


You can't use print statements in external code like libraries.


Depends on the language. Python? No. C++ yeah, occasionally.


A debugger session is the equivalent of N runs with different prints and value changes. I do not consider I can work seriously without a debugger.


I used to use one a lot, in python. My go code compiles and runs so fast that I don't really need one, though.


How do you do that with a production system?

That's where debuggers are essential.


?

I'd hesitate to stop the world in prod.

In prod, logging is usually the answer. (Or OS-level tools.)

On the JVM (and I expect all other major platforms) you can tune logging at runtime, so just leave the statements in as long as they don't compute anything expensive.


Especially on the JVM, conditional breakpoints are really nice for this. (And at least most classic Java webservers are quite compatible with this approach.)


Core dumps are quite useful in production, and they pretty much require a debugger.


I see. That makes sense. To be honest I (almost) never work at that level so I only thought about live debugging.

My bad.


Agreed, debuggers can be nice to have, but they are mostly a crutch... Write modular code with unit tests and you'll find your need for a debugger is really a rarity.


Not every codebase you are working with (or would be working with) has unit tests.

Sometimes you’ll inherit such a project from a dev who left and you would be begging for a debugger.

Failing that, the rewrite begins... Debuggers save us that effort in such a codebase.


This. Only time I use debugger working on C# is on crash dumps.


Conditional breakpoints definitely work for me?

I have no experience with terminal debugging mind you, work pays for GoLand because it definitely speeds me up by more than the 0.15% or whatever of my salary it costs.

I have noticed slower run time with debugging on, but it's about half the speed usually, not too drastic. I'm usually debugging DB and network traffic and data flow anyway, so processing time is rarely an issue.

VS Code integration is mostly hopeless from what I've read, with the go language server taking up massive amounts of RAM, so have you tried GoLand properly?


It's interesting, as the Go->Java guys I talk to echo your sentiment and talk up GoLand.

They also write "non-Go" go.

Coming from deeply embedded and low level programming I've been perfectly fine without any more than printf style debugging... Tho we leverage logs/stats heavily.

I aim to write simple and easy to grok code with few surprises and abstractions tho so maybe it's a mindset difference?


Delve also does not seem to be able to evaluate code fragments. This is something I miss from Java and Python.


I've seen a number of comments like this here today. I stopped liking source level debuggers when optimizers got decent in the second half of the '90s. Go has an unusual usage of the stack but mdb has support for it https://www.joyent.com/blog/mdb-support-for-go


When I first got into go, a few of the Go Opinions kinda rubbed me the wrong way.

- I have to source GOPATH in my rc files? Annoying.

- File paths are network URI's? Ugly.

- I have to deal with every err != nil? Verbose.

But over time I've grown to love go more than almost any language (python is still bae, especially with type hints, and even then, it's close).

- dependency pathing is a pain in most other languages. Singular source of truth is great (I've had virtualenvs leak and cmake do bizarre things when multiple libs are on the file tree)

- host/paths are file/paths. I love this pattern now. It's just so obvious and natural.

- ok, the err != nil still drives me nuts. But it drives me to write things in a way where I don't need to deal with errors as much. It reduces fractal complexity and paths through the code. It also forces a "the buck stops here" sort of pattern, where you have some atrium which is resilient and is where most errors bubble up to.
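That "atrium" shape can be sketched in a few lines of Go (the function names are invented for illustration): lower layers only annotate and return errors, and a single resilient top level decides policy:

```go
package main

import (
	"errors"
	"fmt"
)

// loadConfig stands in for any fallible lower layer: it returns
// errors rather than handling them.
func loadConfig(path string) (string, error) {
	if path == "" {
		return "", errors.New("empty config path")
	}
	return "config loaded from " + path, nil
}

// run annotates errors and bubbles them upward instead of deciding
// policy itself.
func run(path string) error {
	cfg, err := loadConfig(path)
	if err != nil {
		return fmt.Errorf("run: %w", err)
	}
	fmt.Println(cfg)
	return nil
}

func main() {
	// The buck stops here: one place sees every bubbled-up error.
	if err := run(""); err != nil {
		fmt.Println("recovering at the top:", err)
	}
}
```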


> dependency pathing is a pain in most other languages

This has been a solved problem in the Ruby community for nearly a decade thanks to bundler. When I first started writing Ruby it had all the same problems as other languages, there were various attempts to solve it (similar to virtualenvs) but nothing really worked properly. Then bundler came along, and even from early releases it had pretty much figured it out.

The way it works is by having a human-generated manifest file with a list of libraries you think you need (Gemfile), and the tool will then fetch those libraries (they can be installed in a vendor directory or globally for the specific Ruby version). You don't need to specify versions or deps (but can if required), as when the tool runs it will generate another manifest file (Gemfile.lock) which lists the versions of all libraries and their dependencies. Both files should be committed, so when your coworker checks out the project they know they have the exact same versions as you. There's another command to list outdated libraries, and to upgrade all or a specific library later.

The tooling checks the versions of libraries on every execution (unlike npm or yarn) and spits out an error if your local versions don't match the manifest. It automatically creates wrappers around binaries so you don't need to prefix commands from other tooling. There's built in support for installation in a deployment environment, so you don't install testing libraries there.

Together with tools like rvm/rbenv, it makes managing multiple Ruby versions and multiple projects, just work.

https://www.cloudcity.io/blog/2015/07/10/how-bundler-works-a...


Do you know if python or rust have something similar?


Python doesn't really. It's quite funny that Python has much better libraries available, but Ruby has a much better system for managing library dependencies.


I don't know about Python but the standard Rust dependency management solution (Cargo) is extremely similar.


Rust was inspired by Ruby's Gemfile.

Python is starting to make strides with Poetry but generally packaging Python is an exercise in pain.


> Rust was inspired by Ruby's Gemfile.

cargo and bundler share some core contributors, for example wycats (Yehuda Katz), who gave a talk about the common concepts and differences at RubyConf Portugal: https://www.youtube.com/watch?v=Bwk8mdU6-ZY


have you tried (ana)conda?


I am an average Go programmer, so I may not be as deep into Go as you might be. However, the mention of GOPATH catches my eye. All my issues with GOPATH went away when I started using Go modules: https://blog.golang.org/using-go-modules. Also, Go's approach to err is one of the reasons I started gravitating to Go -- my old Go code is more readable because of the verbose error handling (errors as strings in my code) and hence easier to maintain. Go is a surprisingly practical language for distributed, system-oriented undertakings, I'd say.


I'm not going to lie, I never understood why the gopath was so hard for people until I saw a friend using it.

The go developers were all unix heads, and as unix heads setting a path env variable was so natural it almost doesn't bear mentioning.

A unix dev is going to follow this flow: cd into the project I'm working on in the terminal (I use fasd for this), export my GOPATH from history (ctrl+r GOPATH=), launch emacs on the files I need, develop.

However, many "younger" devs grew up with IDEs. They interact with a project by launching GoLand, which means mucking with this stuff isn't first class.

Not that it adds much, just some food for thought on why GOPATH existed and why you found it clunky.


> which is resilient and is where most errors bubble up to

a.k.a Pokémon exception handling? (gotta catch'em all)


>- ok the err != nil still drives me nuts. But it drives me to write things in a way where I don't need to deal with errors as much. Reduces fractal complexity and paths through the code. It also forces "the buck stops here" sort of pattern where you have some atrium which is resiliant and is where most errors bubble up to

This was actually a huge flaw in the design of Go. The way forward for this type of thing was sum types, but Go failed to implement this modern programming concept, hence why you're stuck with it. See how Rust handles errors; that is the proper way.


Installing or updating Go is still a massive pain; it's mind-boggling why you'd have a modern language that behaves this way.


I have not experienced problems installing or updating go in 5 years of use. What problems have you experienced?


Nor have I had problems; I'm simply saying it is too much hassle for little gain, so I rarely bother, whereas with other languages it is a breeze so I update all the time.

Please document your workflow to update and compare it to other modern langs, it's all relative.


This workflow works fine for me: https://github.com/golang/go/wiki/Ubuntu


Ok so for ubuntu (which has what desktop share again?)

> _Add a 3rd party repository to apt_

> sudo add-apt-repository ppa:longsleep/golang-backports

> sudo apt-get update

> sudo apt-get install golang-go

> Note that golang-go installs latest Go as default Go. If you do not want that, install golang-1.13 instead and use the binaries from /usr/lib/go-1.13/bin.

> If that's too new for you, try:

> $ sudo add-apt-repository ppa:gophers/archive

> $ sudo apt-get update

> $ sudo apt-get install golang-1.11-go

Am I being trolled here?

How would you compare this for both memorability and user experience compared to say "rustup update"?

Please take a step back and look at that question without personal bias. Please!


This comparison is entirely unfair, because you're not comparing the same thing. This is what one has to do to install Rust:

    # Install rustup
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    # It automatically installs the latest version
This is what one has to do to install Go:

    # Add the PPA
    sudo add-apt-repository ppa:longsleep/golang-backports
    # Fetch information from the PPA
    sudo apt-get update
    # Install Go
    sudo apt-get install golang-go
1 command vs 3, but in both cases only one "odd" command you have to remember specifically. More important, however, is the fact that from now on Go will update along with the rest of your system. You upgrade your system like this:

    # Upgrade the system
    apt-get update && apt-get upgrade
rustup turns that into the following:

    # Upgrade everything
    apt-get update && apt-get upgrade
    # Upgrade Rust
    rustup update
Now I'm not trying to say rustup is necessarily worse than Go's approach, because it does offer advantages, but for the simple use-case of always running stable it adds an extra step rather often, instead of once.


>This comparison is entirely unfair

Updating rust: "rustup update"

Updating Go: Search a bunch of stuff and then do the stuff some random website tells you.

I entirely stand by my earlier statements. Apparently the thousands of paid computer scientists at Google are simply incapable of automating the basic process of updating their language. It's utterly absurd and should be actively mocked by anyone with half a clue. We should mock this stuff, we really should; it's hilariously dumb.

How are all the people getting paid SF wages literally incapable of creating "go update"?


> Updating Go: Search a bunch of stuff and then do the stuff some random website tells you

No, updating Go just happens when you update your system. You don't need a separate command to do it. All of the other stuff you mentioned in your previous post is about downgrading Go.


Just upload your 3 lines in a gist and curl pipe bash it if that's the issue ...


That is good advice. I really should document it, but I only use Go now and then for various reasons and never seem to enjoy updating; I don't really have a horse in this race. It just feels so clunky, which was my original point.


That's not really a fair comparison. This code is installing Go, whereas you're assuming Rust is already installed in your example.

Plus I’ve had way more issues with Rust versions than I have with Go.

Ultimately though this all just a very small part of writing code in either language.


You're right, it's assuming Rust is installed since it's talking about updating the compiler. But let's install Rust instead then:

`curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`

Now you can `rustup install <version>` easily with everything taken care of. Of course, the latest stable release is automatically installed already. And Rust provides automatic install scripts for Windows, Mac, and Linux. Whereas Go is more involved than is expected for any modern language.


You can just download the go tarball and go from there. Technically you don't need to install Go, you don't need environment variables set, etc.

But I do get your point. You are absolutely right that go doesn’t hand hold you through that process and thus a thousand different people have a thousand different methods for doing it.


Well, using apt is one of the standard ways of installing/updating software on Linux, so most advanced Linux users should be familiar with that. But I'm sure you can also use a graphical package manager to do the same thing. "Rustup" on the other hand is a customized rust-specific tool, so of course it's easier to use (once Rust is installed)



    # Replace the values as needed
    export VERSION=1.12 OS=linux ARCH=amd64 && \
    # Download the required Go package
    wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
    # Extract the archive
    sudo tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
    # Delete the tar file
    rm go$VERSION.$OS-$ARCH.tar.gz

    # Set the PATH environment variable to point to Go
    echo 'export PATH=/usr/local/go/bin:$PATH' >> ~/.bashrc && \
    source ~/.bashrc

Please tell me you are kidding. This is literally what you posted on how to update Go.

How would you compare this to say "rustup update"?

How would one do this this same workflow you posted on other platforms?

How memorable is it would you say?

Can you do it freehand without looking it up?

Is this considered good UX for developers?

Looking forward to your answers here, thanks.


All the instructions basically boil down to:

1. download version X of go

2. Unpack to location Y

3. Tell your OS to look at Y

Sure it's not a single command, but it's hardly complicated.

If by UX you mean some kind of install script that handles those things you can always write it yourself.


> those things you can always write it yourself

The idea of everyone repeating themselves over and over again seems to be almost desirable amongst the community. The concept of DRY is essentially dead as far as Go is concerned.

Why exactly can the core team not write this tooling? Why should thousands of independent people do it all themselves to varying degrees? It's a laughable situation.


Apologies, I was being snarky. Around the time I was told to run this command was when I stopped using Go.


As opposed to piping curl to sh?


As far as I know, most new features are opt-in, so I'd be curious what issues you've seen. One recent one that bit me with dependencies on private repos is the new mirror/sum database, where we need to `export GOPRIVATE=github.com/<us>`. In a way I'm thankful the fix was so simple, but while I figured out what happened I wasn't very happy.


Go modules are opt-in if you are trying to submit PRs? Either you use >1.13 or you don't submit the PR.

I hate to be the Rust shill, but updating it is "rustup update". I don't have to look anything up or put in any effort, so why exactly should a common process, which will happen millions of times, be so hard for Go to replicate? It's absurd and should be called out.

Updating out of the box is a massive pain and I stand by that statement, it's hilarious how people get so defensive of what is absurdly poor UX for a modern language.

Docs for anyone interested. You need to uninstall the previous version before going through this process. Again, to reiterate, all it takes in Rust is "rustup update".

https://golang.org/doc/install


brew install go?


Have you tried using gvm? https://github.com/moovweb/gvm


Go would be about right, had it been released around 1992, when we had Oberon, Oberon-2, Component Pascal, Modula-2+, Limbo making the university rounds.

As it is, I see it mostly as a C replacement.

Yes, as a C replacement, as projects like TamaGo, Android's GPU debugger, the gVisor hypervisor, Go's bootstrapped compiler, Fuchsia's TCP/IP stack, and TinyGo for microcontrollers show.

However for anyone that enjoys being able to take advantage of what happened since 1992 in mainstream computing, Go isn't it.


IDK. On paper I agree with you, but I'm just so darn productive with Go. C is great until I need to open a socket. Rust is awesome but slows me down. Dynamic languages need types for serious projects, and I can't be bothered to add a build step just for that. I use them all, but I only picked up Go relatively recently, and I've been very impressed.


I was also very productive with plenty of 80's and 90's languages, most of them more expressive than Go currently is, e.g. Turbo Pascal.

From the research papers I usually read, plenty of others were quite productive with 60's and 70's languages, many of them more expressive than C.

So while Go is a good contribution to reduce the amount of written C per year, it is hardly a novelty, including its compile speed.


Your opinions of Go are quite clear, given that every time there is a thread on HN about Go you relish telling everyone how backwards a language it is. Yet weirdly you're not the only competent engineer out there.

I’ve been programming for 30+ years and used many of the languages you’ve described. In fact I’ve written code professionally in well over a dozen different languages (I stopped counting somewhere in the last decade). Yet I still find myself productive in Go. It’s not just the compiler speed either. You talk as if people are idiots or are delusional when actually Go is quite a nice language for a great many people with real world problems to solve.

I’m sure I’ll get downvoted for saying this but there is so much snobbery and elitism in these kinds of threads and it’s absurd. Live and let live guys.


When one learns about Inferno and Limbo, it is quite visible that Go was already here back in 1995, but Bell Labs isn't Google.

People aren't idiots, yet magpie programming is a reality.


No mention of Alef? I'm pretty sure it existed pre-1995, and in a way it was more Go-like than even Limbo.


Rob Pike et al gave up on Alef when implementing Plan 9, using a C based library for communication instead.


Google has fuck all to do with my decision to use Go and nor has it ever been a consideration. In fact I’m about as decoupled from Google as one can reasonably expect to be.

I think you need to do some research into what “personal preference” actually means because you seem to think your own preference is gospel and everyone who disagrees is a magpie (which is just an indirect way of saying “idiot”). And frankly it’s getting tiresome watching you derail every software development thread.


That understanding lies on the beholder, not on me.

If you find it tiresome, just don't bother replying and go read something else.


... says the guy who seems to find it necessary to reply to every mention of Go.



Yeah. But your reply to pushpop just lost validity.

And you might at least consider whether the person who's wrong on the Internet might in fact be you.

And you might consider whether, even if you're right, you're also being counterproductive in how you're going about trying to persuade others. When someone raises a valid point against something you say, you don't acknowledge the point. Instead, you find something - anything - that you can quibble with in what they said. This makes you look like someone who wants to win the argument at any cost, rather than someone who wants to have a reasonable conversation. This makes some of us wonder how biased your interpretation of events is. That is, your hyper-argumentativeness makes you less persuasive, rather than more.


...or you could just show a little respect for other people’s experience and opinions ;)


You too noticed pjmlp keeps regurgitating the same stuff about Go on every chance he gets?

Pretty combative and low-effort content. Snarky replies with little respect for others' opinions. Always posting jabs at Go about how 90's languages are better in every way, yada yada. Do mind the tendency to veer off-topic too.

Given the effort, I sometimes wonder how painful it must be for this person to see others enjoying and being productive in the language.


The "live and let live" point would be stronger if there was not a stream of people arguing that "go is so productive", as if other languages were adding wikipedia links in compiler logs to distract us.


Those two points aren’t mutually exclusive.

Go is productive in specific ways. Other languages are productive in other specific ways. That’s why we have different programming languages solving different problems.

So if someone says they like Go because they’re productive in it that doesn’t mean all other languages are shit. That only means Go solves the specific kind of problems they need solving.

This is why I hate language flame wars and why commenters like the GP exasperate me. They have different problems they need solving and talk about programming languages like there is a one-size-fits-all. That’s not the case.


Are you advocating something, or just sad that Go is more popular than you think it deserves?

What languages do you think people should be using instead? My use case is mostly backend network services. My next two choices would be Rust (absolutely love it but slows me down and async isn't ready) or Clojure (would rather avoid a VM). Would you suggest others?


More disappointed with its design decisions, and how some believe that Go is special in some way, just because they don't bother to learn computing history.

For example, Go's compilation speed is nothing to wonder about when one enjoyed languages like Modula-2 and Object Pascal compiling on 4 MHz CPUs.


Fair enough. I agree it's sad how much gets lost from generation to generation. Unfortunately progress is not monotonically increasing, even in the internet age. But that doesn't mean Go isn't one of the best languages we have today, which has been my experience.


Seems you've ignored the main behemoths: Java, C# and C++ ?


I've done lots of Java and C++. C# looks like a better Java to me, with a few really cool features like LINQ. But I used to be a MS hater so never tried it, and now with Go/Rust I see few reasons to use a VM language. Performance is fine for lots of my use cases, but deployment is strictly worse than compiled languages IMO.

I've never been willing to make the religious commitment to truly get good at C++. It's insanely complicated.


That's fine if you just prefer the language. I don't really agree that performance or deployments are any better in Go though, compared to those. That's where I was curious: I've only toyed with Go, but haven't really seen anything better on those angles.


They're adding AOT compilation to .NET.


I agree with you, but Go's lack of language features appeals to me. First off, I'm not a language geek. I code to solve problems, and not explore programming language advancements. That's not to say that more advanced languages can't be used to write simple solutions, I know they can. But, I'm just a blue-collar software engineer, and use modern Java (8+), Go, and SQL daily. They are fine and get the job done without much fuss.

I also use JavaScript+Babel daily, and it feels like a kitchen sink of language features with no holistic thought about language direction. Many times I think, that's cool, but did it really make the solution a) easier to write b) easier to read and/or c) easier to test/debug?

Back to the original article about tooling. Having used VB back in the day, C#, Java, etc., Go's tooling is passable. I think it's the bare minimum to get by today. One thing that I love that Go pushed is gofmt. Having it be the default out the gate is great for teams.


So strange that nothing important is implemented in Oberon, Oberon-2, Component Pascal, Modula-2+, and Limbo. Implementers must be really dumb. Maybe they should read more papers.


Their designers weren't Google employees at the time.

Should we talk about the Plan 9 and Inferno commercial failures from the Go authors?

Interestingly enough, in your blind rush to attack me, you just called Rob Pike dumb.

Oberon is a success, even if a minor one: it keeps being used as an OS systems language at ETHZ and in many Russian universities, and brings food to the table for the Astrobe owners.


Well, by that standard Go is putting food on many developers' tables (including mine). And your constant berating of Go about some lack of "modern" features is like telling its users to be ashamed of themselves for not knowing any better.


Limbo essentially became Go, gaining a competent corporate sponsor along the way.


This is backwards in my opinion and omits a lot of nuance.

There is no other mainstream language which combines seamless async-everything-to-the-ground with good performance.

Whenever I write in another language I feel like I'm in the stone age only because of this one feature already.


Rust async isn't quite "seamless" (no async in traits) but getting there. Though if "seamless" is what you care about, you can always use threads - the overhead is negligible for typical workloads. That's what Go does under the hood; it just uses threads.


I think there's been a miscommunication.

What I mean is that what you do in rust using async/await is done in Go by default for every function and function call. There is no distinction. _there are only async methods in go_ and that is what I want.

In other languages, like rust, there is a big ecosystem divide between libraries that are async and those which encourage you to use blocking calls and threading.


Have you looked at elixir? It's pretty close to async everywhere, though dramatically different than Go I think.


I've used it, and it's great! However, as I've written in a sibling comment, it's nowhere near as performant without using C libraries. Which makes the scheduler's job nastier (same in Go if you use cgo).


C#, VB.NET, F#, Erlang and Elixir come to mind.


I've used F#, it uses computation expressions for async functions. And functions are either async or not. Same for C#. Same for Scala, Haskell and others. Future[T] is so unergonomic and it creates a huge divide in the ecosystem. Been there, hate it.

I understand why Rust made the choices it made, they're going with zero cost abstractions, but for other kinds of languages async everything is fundamental to me after having used Go.

Erlang and Elixir are not fast, they use C implementations of functions for computationally heavy code, which of course blocks the scheduler for the duration of the call as far as I know and also can bring the Erlang VM down with exceptions. But yes, ergonomically wise they are what I want in this context.

Java Loom seems to be trying to add transparent asynchronicity, but it's not here yet.

In Go I write straightforward seemingly blocking code, and it just works.


Can you specify what you mean by seamless async everything and why it's significant? Are you talking about the execution model here? You mean having a message passing system like in Elixir that allows you to simply construct most of your program by writing callback functions?

Even in a language with no such system like Haskell when you have a web request it runs in its own green thread, allowing you to write mostly synchronous code. I'm not sure what the benefit of some kind of natively async execution would be here.

Edit: formatting


Ok, let's first look at Rust here. Rust has synchronous functions and asynchronous functions. Asynchronous functions return a Future which you can await in an asynchronous function or run in a future executor. An asynchronous function really gets transformed into a state machine with states in the various yield points (mainly awaits, or lower level primitives). Now if you make a blocking - synchronous - call somewhere deep inside then... Great, you've blocked an executor thread for the duration of the call.

Now let's look at F#, Scala, Java, C# (I didn't use Haskell asynchronicity, but I suppose it too has a Task/Future monad). Here you have the same problem as in Rust when calling a blocking method deep inside. Other than that, in the Scala that I've seen you end up having Future[T] instead of T everywhere. Which causes you to use monadic functions everywhere instead of using T directly. This is cumbersome and unnecessary in my opinion, because in most applications I'm working on you'll just make 90% of the code async + synchronous helper functions.

Now coming back to Go. There's no distinction between synchronous and asynchronous functions, because there are only asynchronous ones. That's also why Go code will look "blocking" for somebody used to the monadic approach. You don't have to await a function, there's a yield point implicitly inserted at every function call. There's no real blocking in Go. And I'm working on T's all the time, no Future[T]'s.

Sure, it gets hairy if you use cgo, since the scheduler can't reason about C code execution: you'll block a worker thread for the duration of the call. And there's the proverb "cgo is not Go" too.

I'm not meaning to say the other approaches are invalid and the Go one is clearly better. But in practice, in the kind of software I tend to write (microservices, storage systems, service meshes) the Future[T] is just unnecessary clutter which I don't need.

Also, regarding the Elixir parallel, Go doesn't handle everything as callback functions, you just write code comprised of imperative function calls like you would in C (you know what parts of "like you would in C" I mean I hope) and there's no blocking. I agree that Elixir is another example of a language I like by the execution model, but it's much slower than Go if you don't use C-backed libraries.

You also have the point of being able to spawn tons of goroutines with micro stacks because of how stack growing and moving has been implemented. The main point is that in Go everything is a first-class green thread.

Project Loom, as far as I've talked to my Scala-writing colleagues, seems to be aiming for the same in the JVM ecosystem: making synchronously written code asynchronous by default. But it's not here yet, so no comparisons to be made.

EDIT: I may seem to be trying to show off with my knowledge of other languages, but people often make a point about developers being too incompetent to use something other than Go as the reason for Go's popularity. For them I'm trying to make a point, that I've been there, tried the approaches, and this really is the one that stuck with me in practice and which I like the most. To each their own.


Alright thanks for taking the time to detail it for me!

Most languages/runtimes indeed don't have this kind of execution model. It wouldn't often make sense for a general purpose programming language to function like this, but then again both Go and Elixir are specifically designed for the web.

You're right in that when there's no built-in design pattern for async stuff the language community finds various, perhaps conflicting, ways of doing it. In Haskell you could use threads or some Async library (that uses threads) to achieve concurrency, there are many different abstractions. But in a typical web API it's not common to be needing lots of async functionality, exactly because each request is already served in its own thread. It's ok to write synchronous code there as it does not block the runtime or any other thread.


Could you please elaborate on the last part?

In a typical web API you don't want each request to spawn an OS thread, at most a green thread (at least if you have traffic that requires more than one machine). As I understand the term "green threads", they are scheduled on standard "worker" threads. Synchronous functions will block green threads and thereby block the underlying worker thread. If you have green threads, then you need asynchronous functions. (which in Go are just the default, so you spawn a goroutine - or more - per request, without thinking much about it)

And I do actually disagree about the general purpose language part. I think that only really performance oriented languages (like Rust) should go the way of sync/async distinction. Because there's hardly any loss in async-everything otherwise.


By threads I meant the corresponding runtime thread (not OS thread) on each platform. In Elixir they're called processes, in Haskell (green) threads.

Both Haskell and Elixir web frameworks spawn a new thread for each web request they receive. Those threads on both platforms are very lightweight. You end up writing mostly synchronous code on both platforms for handlers that perform the work for those web requests. In some typical smaller API I may not have even a single piece of async code (async statement/expression) on either of those platforms, it's all synchronous in terms of code. It's all thanks to the execution running in its own thread.

I don't understand what possible gain there would be to make everything implicitly asynchronous in this kind of scenario. Haskell already evaluates lazily, and on Elixir you can always just pass messages to other processes. In a typical web request you still need to fetch something from the DB, manipulate the data a bit and then return it. Synchronous code serving a single web request makes perfect sense to me, having everything asynchronous sounds like it'd just make everything more complicated for no reason at all.

Addition: The point of lightweight runtime threads is exactly to allow concurrency without having to write asynchronous code.


> Java Loom seems to be trying to add transparent asynchronicity, but it's not here yet.

True, but you can download an EA build and try it today: http://jdk.java.net/loom/


"It just works" is not what I would call writing into possibly closed channels or eventual deadlocks when accessing shared data.


Writing to closed channels never happened to me in practice. Sure, you have to write idiomatic Go code, but then it's a non-problem. Deadlocks are indeed a problem, but at least there's the deadlock detector, which crashes your app with a stack trace, which makes it a minor inconvenience in practice.

Being able to launch a goroutine for background housekeeping, like sending keepalives, in a structure constructor without thinking much about it is extremely liberating to me.

My point is that Go is not extraordinary in the language constructs it presents. It's extraordinary in its implementation, from-ground-up async and the family of problems it completely abstracts away.

I'm not saying you have to agree with me that Go is the best language for you to use. But suggesting it's a language whose features were left behind in the 80's is plain dismissive and makes you seem uninformed to anybody who's written any nontrivial amount of it.

I've written code in most of the languages you've compared it against. Even explored heavy functional approaches. And even though they were interesting, often innovative, Go still leaves them behind in the dust in practice for me. Because it hides the stuff I don't care about and works well for creating real-life software which I later have to operate, extend and extinguish when it's burning in prod.

Edit: Just to add, implicit interface implementation is great for composing software.


I hardly see the difference between

    go func() { /* ... */ }()

and

    Task.Run(() => { /* ... */ });


You can't? Perhaps this is why you encounter writing to closed channels and deadlocks accessing shared data.


Please enlighten me how it is less error prone than TPL and Dataflow.


I haven't encountered those problems in the Go I've written (though they are possible with any concurrent programming model powerful enough to allow for optimization). Perhaps it's just an experience / practice difference.


But all the languages you mentionned can deadlock too so?


Or even the simpler panic on concurrent read/write on maps.


In general, if one is trying to do a concurrent read/write on maps, one is "holding it wrong." Access to map data should usually be channel-gated, with a goroutine serving as an accessor to the map data. Mutex-locked if you need it faster for some reason (though chans are already pretty fast).


Although arguably not a mainstream language, concurrency isn't hard with Haskell either.


Naive question: what is the implementation difference between async and synchronous functions that makes many languages differentiate them?


I'm not sure if you're talking about the literal `async` keyword in JavaScript or the general concept of asynchronous functions. I'll answer both questions.

Declaring a function `async` in JavaScript causes the function to return a Promise immediately instead of executing all the code within the function (and blocking until that execution is completed). A Promise can be thought of as a "placeholder" for data not available yet; something can later put data into it (which will trigger all code waiting on the promise to have data to be executed). If you call another async function within an async function, you're allowed to use the keyword "await" to block execution of the calling function until the called function either resolves or rejects its returned Promise, at which point the Promise's result is used as the value of the awaited expression and the waiting function resumes execution.

More generally (speaking loosely), synchronous functions are evaluated to completion before the next line of code is run, and async functions return immediately but either queue up some work to be done later or kick off some work in a separate system (additional thread, additional processor, maybe the network card or graphics card) that happens at the same time your main program body is running. So, for example, "Math.pow(3,3)" is synchronous and your code won't continue until three-cubed is calculated, but "XMLHttpRequest.open" with its async parameter set (https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequ...) will return immediately and let more of your JavaScript code run while the browser itself communicates with the network software / hardware to make an HTTP request, fetch some data, and give it to your program; you can register a function to be run once that data is available.

ADDENDUM: WHY WE CARE SO MUCH

It's generally impossible (I'm lying, but it's a useful lie) for completely synchronous code to "deadlock," which is where there is no way for the next line of code to be run. With asynchronous code, deadlock happens when an asynchronous function is waiting on some result that will never come; the program cannot continue, and (if it blocks your user interface) your computer is frozen. Without some discipline of ordering of events or ownership of task responsibility, deadlock is very easy to cause with asynchronous code.

Trivial deadlock example:

thread A: start async thread B, wait for a '3' from B, then send '5' to B

thread B: wait for a '5' from A, then send '3' to A

These threads get stuck forever; A is waiting for B to give it a '3', and B is waiting for A to give it a '5'.


"However for anyone that enjoys being able to take advantage of what happened since 1992 in mainstream computing, Go isn't it."

Which radically new programming language concepts have been invented since 1992? To my mind most new languages today seem like buffet plates of features picked from existing languages.


When it comes to mainstream computing, non-nilable types did not exist in the mainstream languages of '92. That is a huge improvement.


Not many, but it would help if Go at least took some ideas from CLU (1975) and ML (1973), which, while considered academic experiments in 1992, have by now had their ideas adopted by most mainstream languages, with the exception of Go.


You can't really cite Oberon as 1992 mainstream computing; already back then Wirth was rejecting a lot of the things that were happening -- and continues to do so. Heck, Oberon-07 eliminated multiple returns…

Both Wirth and Pike had the various PARC systems as inspiration. Sure, things like the BLIT or the Ceres tried to do such GUI things with way less hardware to throw at it, but their rejection of some concepts goes beyond mere RAM constraints. Or else they would've rectified that once powerful consumer hardware went mainstream. Wirth's opinions on "proper" design haven't changed that much.

I'm also not quite sure that the arc of the programming universe errs towards productivity improvement.


One thing that happened since 1992 in mainstream computing is multicore processors, and Go is one of the few broadly popular languages that attempts to handle concurrency in a sane way.

