Coming from the C# and JVM world to Go, tooling looked like one of the worst parts of Go.
The debugging story is very limited:
- No function execution
- No edit-and-continue
- No conditional breakpoints
- No watchpoints
Package management works OK on the consumption side, though it is much clunkier than with classic package managers (e.g. set an ENV variable to get a custom proxy, but write something in the go.mod file to get a per-package proxy). However, the publishing side is all over the place:
- Relies on source-control integration
- Horrible import paths for packages published from large repos
- Versioning based on source-control tagging, exact format undocumented
- Major version changes require source-code level changes in all files touching the package (see the sketch just after this list)
- Multiple Go modules in same repo use-case is not documented (can you have separate versions from same commit? how do they reference each other?)
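To make the major-version point concrete, a minimal sketch (hypothetical library path) of what semantic import versioning forces on every consumer:

    // before, while the library is at v1.x:
    import "github.com/someorg/somelib/widget"

    // after the library tags v2.0.0, the module path gains a /v2 suffix,
    // so this line must be edited in every file that imports the package:
    import "github.com/someorg/somelib/v2/widget"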
The source-code level tools/plugins available are extremely basic (except for GoLand):
- No refactoring support other than "rename variable"
- Reference-finding across modules rarely seems to work
- No way to automatically find all implementations of an interface - especially problematic in a language with implicit interface implementation
- The tools are pretty slow, constantly re-parsing large chunks of code; gopls has fixed some of that
I agree 100%, and I pretty strongly dislike Go's tooling because of it. Especially due to the horrible debugging options (though something is far better than nothing, of course). I've bashed my head against every single item in your list multiple times, and they've made extremely little progress over the years so I'm quite pessimistic about them changing.
Though for completeness, what it does have that I find very useful, and has had from early days:
- pprof for cpu (tunable sampling rate) and memory allocations (sampling and I think tracing); there's a minimal sketch after this list
- a standard build system (as annoying as it can be (GOPATH, no file-as-input-list, etc)) so builds are indeed generally very simple and consistent: `go build .` (plus `glide install` or `dep ensure` in many cases, but that's easy too)
- race detector (trivially catches an incredible amount of issues in less-than-highly-skilled code, which is the vast majority that I interact with)
- `go fmt` spawned a whole ecosystem of "autoformat ALL the things" tools
- (I personally don't give them credit for vendor, as vendoring was a thing the community hacked together for years before they finally half-supported it, though now it's great. GOPATH should have died that day, but alas...)
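For anyone who hasn't touched pprof: a minimal sketch of wiring it into a long-running service via the standard net/http/pprof package (the address and port here are just placeholders):

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
    )

    func main() {
        // Once this is running you can pull profiles with, e.g.:
        //   go tool pprof http://localhost:6060/debug/pprof/profile   (30s CPU profile)
        //   go tool pprof http://localhost:6060/debug/pprof/heap      (heap allocations)
        log.Fatal(http.ListenAndServe("localhost:6060", nil))
    }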
.... though that might be my complete list tbh. And yeah, Goland's tooling is miles ahead of everything else, refactoring and find-usages is dramatically more accurate than all other tools I've tried.
---
Many of those have been built (far more powerfully) for other languages, but part of the "go has good tooling" claim is that it has those out of the box. It's nowhere near enough to be "great" IMO, though modules get it much closer, but it gets a "good job" sticker on at least a few things even if the whole isn't particularly impressive.
This nailed it for me. I came back to write some go last night after not writing it for a long time. The modules, go dep, etc stuff really put me off.
Whereas beforehand (been writing stuff for a few years, not hobbyist, just not super complex) it was intriguing... now it seems like they've just moved on in one direction for packages. However, that direction is not really super well documented, intimidating, and sure as hell doesn't work as well as NuGet or npm.
> No way to automatically find all implementations of an interface - especially problematic in a language with implicit interface implementation
This is fundamentally a problem with duck typing, and it creates a massive amount of cognitive effort when trying to understand an unfamiliar code base.
I don't have the qualms you have about the rest of the Go tooling, which is pretty good IMHO. The fact that it comes out of the box is a significant bonus too.
I understand duck typing of interfaces to types imposes some limitations, but it is by no means an absolute limitation. Given a set of Go source files, you can absolutely automatically find all types in those source files which match an interface. Sure, there could be implementations in other places as well, but the same is true for Java or C# or C++.
Also, note that the compiler already does this check whenever you try to use some type as a specific interface type, this is not dynamic binding at runtime.
I understand that, but it means I need to rely on tooling (which has limitations) to do something that is zero-effort in other languages, and that impedes easily understanding unfamiliar source code. It's not so much "Which types implement this interface", but rather "Which interfaces does this type implement".
For this reason, I'm not at all convinced that duck typing contributes to the fundamental need to write easily understandable and maintainable code. What does it add, or what problem is it trying to solve?
One commenter recently suggested putting, eg:
var _ MyInterface = MyType{}
at the top of the file to assert explicitly the interface implementation. This works but is a bit of a hack.
Duck typing is my main gripe about Go, but generally I like it very much, and find it very practical, especially (funnily enough given the article) its tooling.
Oddly enough I've never needed to look up "what implements this interface" in Go. I've been writing Go professionally in a senior/team lead role for over 4 years.
Usually I know what implements a given interface, as in the package I'm writing, I know what I've imported. Packages are usually small/narrow enough that there isn't ambiguity (i.e. I'm not reaching for some arbitrary implementation).
The only pain point that comes to mind is the bytes, io, and strings packages where there are many common interfaces and many implementors, but it's never really headache inducing.
The interface assertion you mention is useful for implementors to catch at compile time if they didn't implement a given interface (rather than find it at call sites), and catch where implementations break when an interface definition changes. You could put it in a test file, if you like, for the same effect.
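A minimal sketch of that pattern, with hypothetical names, that works equally well in a regular file or a _test.go file:

    package store

    // Store is a hypothetical interface used for illustration.
    type Store interface {
        Put(key string, value []byte) error
    }

    // FileStore is a hypothetical implementation.
    type FileStore struct{}

    func (f *FileStore) Put(key string, value []byte) error { return nil }

    // Compile-time assertion: the build (or `go test`, if this line lives in a
    // _test.go file) breaks the moment *FileStore stops satisfying Store.
    var _ Store = (*FileStore)(nil)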
> What does it add, or what problem is it trying to solve?
In Java, if you get a type from a library, and it doesn't implement a particular interface, you're stuck. You have to wrap it up in some intermediate type. It's useful to be able to use types that don't know about a particular interface even if they happen to conform to that interface.
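A minimal sketch of what that buys you in Go (the interface name is made up; bytes.Buffer is from the standard library):

    package main

    import (
        "bytes"
        "fmt"
    )

    // Stringish is our own interface; bytes.Buffer has never heard of it.
    type Stringish interface {
        String() string
    }

    func describe(s Stringish) {
        fmt.Println("value:", s.String())
    }

    func main() {
        b := bytes.NewBufferString("hello")
        describe(b) // works: the method set matches, no wrapper type needed
    }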
However, I would agree that it would be useful to be able to specify which interfaces you intend a particular type to implement. (Or maybe Go allows this? I'm just talking about duck typing here, as I don't know Go very well.)
- multi-platform / multi-OS easy compilation ( GOOS=windows GOARCH=amd64 go build . ) <- you can even do that from your Raspberry Pi
- very easy and powerful profiling with pprof
- go fmt / go vet
Yes, it's not perfect, but it's better than most languages, so I wouldn't be that harsh on tooling, especially since most of your issues are not related directly to Go tools (the package publishing side), and for the last part Goland works fine and is, along with VSCode, the most popular IDE.
Go overall has good tooling for a relatively young language compared to more mature ones like Java and C#.
The race detector is nice, and can be a massive boon to productivity.
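A contrived sketch of the kind of bug it flags on the very first `go run -race` (or `go test -race`):

    package main

    import "sync"

    func main() {
        counter := 0
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++ // unsynchronized concurrent writes: reported as a data race
            }()
        }
        wg.Wait()
        println(counter)
    }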
Multi-platform/multi-arch compilation is simply not relevant to most common languages in use today. Still, it's nice that it exists, though it is only so simple because of Go's (flawed) static-linking-only model.
Go fmt is a nice tool, but is very low impact in my experience. Go vet is OK, but there are better static analyzers. The fact that it comes out of the box is a plus, but it's not the most important characteristic in a static analyzer.
Profiling with pprof is nice for very simple problems, but it can't match a visual profiler for complex issues, and its memory profiling is extremely bare bones. Java heapprof and VisualVM are equivalent or superior, and they also come out of the box with the JDK. Similarly, the kinds of memory analysis you can do by collecting heap dumps and analyzing them in GDB is much more powerful (and has many more uses) than what pprof offers on that front.
Go overall has decent tooling, especially for its age. But its tooling compares unfavorably to what is available for free (not to mention if you are willing to pay) for any other popular language except maybe python.
As a final note, package publishing is absolutely related directly to Go tools - it is go mod download/go get that define what it means to publish a go package, how you specify its version and everything else I raised.
Let's not forget most of the good JVM tooling, especially early on in Java's history, was and is 3rd party, and most of it cost money. Even today, it's still a sea of confusion, especially if you are new to all of this. For example, if I search for "best JVM profiler", I get a list of 10 results, all claiming some form or fashion of being "the best" or "most popular".
As someone who spent the first half of their career on the JVM and is now mostly Go, I don't understand how someone who has been on the JVM most of their life can say Go's tooling is "the worst". Most of the things you have to configure and tweak on the JVM and profile for aren't even considerations in Go, because they are either simply non-issues in Go, or don't exist (because Go does not have a virtual machine to begin with).
Sure, but whether a tool is 3rd party or 1st party matters less if there is significant difference in the advantages that tool brings. Being free and/or open source is more important, but at least currently, there are many excellent free and open source Java tools in all the categories I mentioned.
Also, I didn't say that Go's tooling is the worst. I said it's one of its worst aspects. I still miss Maven, jmap, VisualVM, and the Java debugger almost everyday, not to mention IntelliJ.
> Most of the things you have to configure and tweak on the JVM and profile for aren't even considerations in Go, because they are either simply non-issues in Go, or don't exist (because Go does not have a virtual machine to begin with)
I'm not sure what you are thinking of specifically here, but in general the same considerations apply between Go and JVM, it's just that Go doesn't allow you to tweak things like GC algorithms and parameters. JIT is the only other area where I remember spending time thinking about Java performance that is truly a non-consideration in Go, but Go has other performance considerations to worry about (pass by copy vs pass by pointer, number of system threads off the top of my head).
Imagine how much worse other popular programming languages' tooling is, given Golang is constantly praised as having great tooling.
When integrating into a new dev team, half the work is grokking their local tooling choices. In an ecosystem with strong "one best way to do it" tooling, that effort is almost non-existent.
I remember using a 4-year-old programming language on the JVM. When you installed the Eclipse plugin you could do all the code navigation, refactoring, etc. This was a custom plugin written specifically for this programming language, so it didn't really take advantage of copying the Java plugin and extending it. It also had a fully fledged package manager, build system, code formatter and all the fancy bells and whistles. The tooling was amazing, and it shows that whether tooling for a given programming language is good or not merely depends on how much the community values good tooling. The Python community especially doesn't seem to give a crap about good tooling. There is a lot of tooling with which you can cobble something together, but it never leaves you feeling that it was worth the time investment or that the solution was particularly elegant.
Deploying applications is a problem on which most people just give up and ship a virtualenv to production... or they waste a week on finding a better solution. One might argue that this one week will save me time in the long run, but then you start thinking about all the python programmers who don't give a crap, and they clearly don't want to invest that week's worth of time, and so the cycle of bad tooling continues.
Actually I do 100% automated deployments with Ansible roles. I don't use the ansible commands though, I wrapped it in my own command which fixes all my problems with ansible: https://yourlabs.io/oss/bigsudo
I'm extremely happy with it. Currently I'm writing my own Docker/Compose on top of podman and buildah in order to remove all `Dockerfile.` and `docker-compose..yml` from my projects in favor of a single pod configuration file, which can vary depending on a "profile" (default, dev, prod, review...). Once that is solved I will continue my life with 100% satisfaction with the tools I use for my CI and deployments.
But yeah, prior to going crazy like that and rewriting the world, for years I was actually doing my best not to invent new tools and to get the best out of every tool that was out there. I can easily imagine that's how most people practice their craft: just focusing on what they are supposed to ship, figuring that "if I could make a better tool for that then somebody else would already have, so I'll just use anything and not really bother, perhaps try to contribute or wrap it in bash scripts".
But then, that is the question: which language is that? C# and Java are not it. C and C++ have excellent dev tools and debuggers, though it's true package management is essentially non-existent. Python, JavaScript and TypeScript have better tools than Go, even with the limitations of a dynamic language (the package ecosystem of npm is horrible, but npm itself is a pretty decent tool, definitely much simpler than go mod).
Unfortunately I was pretty confusing. The G-GP had said 'imagine how much worse other languages are', and I was pointing out that I don't know which those other languages with worse tooling are. That phrase was meant to say 'Java and C# are obviously not languages with worse tooling'. That is, I agree with you, and considered it obvious that Java and C# have stellar tooling.
it really depends on how you define a great language.
btw. C# can do everything that golang can, but has a different inheritance/interface model. Also C# has some stuff that golang does not have, like an easier way of working together with C++.
Btw. Java is nearly in the same boat. It has a different inheritance/interface model and can do nearly all the things that golang can (except maybe CSP/channels, and it cannot be as memory efficient as C# and golang). But the point still stands: golang basically can do less than both.
Depends. Package Management, compiler errors? Absolutely.
IDE support? Not so much. I don't think it's fair to compare it to C# / JVM languages, but even compared to other languages, I wouldn't say the tooling is all that great.
Macros make IDE support a lot harder than in other languages, the atrocious compile times sure do as well. While IntelliJ Rust and rust-analyzer both try to do their best (kudos to the teams working on them!), it's still nowhere near anything I'd call "stellar".
There are mitigations being worked on right now, like compiling macros to WASM[0] and Cranelift[0], but while these endeavours are very promising, I wouldn't call them production-ready yet.
EDIT: Also, debugging. I'm not sure of the current state of it, but last time I checked, it was pretty much just GDB integration that was working. CLion (which I think uses GDB?) is able to give you a nice UI for basic debugging, don't know about VSCode integration.
Still, even if it works, it's not at all comparable to debugging in C#/JVM, you're limited to very basic stuff compared to these languages / the well-known IDEs for them.
So, again, it may be "fine" or even "good", but imho it's not exactly stellar :)
With the caveat that RLS is a bit fragile - I have a project with a moderately large proc macro that generates multiple items, and RLS falls over on that project now. Rust Analyzer is more robust but doesn't have a debugger or as comprehensive a feature set yet.
Re compile times: in nearly all cases I've seen, un-optimized builds (opt-level=0 or 1) are still quite fast. Do you know of any counter-examples? I'd love to see what the cause is.
Optimized builds though, yeah, you routinely hear about multi-hour builds. As long as there's a dev-friendly fast mode though, slow optimized builds don't bother me all that much.
What counts as "quite fast" for you? If you're talking about multi-hour builds, then you might not be talking on the same scale as others. I would consider 10 seconds to be pretty slow for a small/medium sized project.
Yea, fair. But for compiled languages, a few seconds is still fast for the field - Go takes that long on anything but tiny projects. My job's main Go binary takes about 30 seconds for a rebuild, for instance, and I know of other projects here that take well over a minute.
Many of the hours-long optimized builds for Rust projects that I've seen have turned to <10 seconds without optimizations on a clean build (possibly libraries are still cached tho), and even fairly large ones are still less than a minute. Tiny projects are around a second or less. I'm usually looking at pathological cases though, so I'm not sure how well those hold up in general.
So I'm not looking for interactively fast - very few languages achieve that on even medium-sized codebases, even if they're interpreted. But under a minute for a couple million lines of code in the project and libraries fits in my "reasonable" range.
In my experience, the problem is that debug builds (which, I assume, are not optimized) are still annoying. I recently built a new PC and got a Ryzen 3900x, so I'll probably have to check again with that :)
Now, "atrocious" may have been bad wording without context: I think it's just taking very long to finish the normal edit-start-debug cycle, compared to other languages (Go, Java, C# for example). I don't have any numbers right now, but I'll try to update my post once I get home, I've done a few basic emulator projects in Rust :)
At least, it was long enough to be annoying to me. Maybe I'm just too used to C#, which I use at work.
I've seen some pretty long compile times in C#, at my previous job building a local copy of the whole solution was around 40mins to an hour (400+ csproj).
I guarantee that a 400+ module Go program or 400+ Rust crates will build much slower (if for no other reason, simply because the Go or Rust compilers do a lot of work that is deferred to runtime in C#).
Rust copied tooling from Go and Python. Also its tooling cannot work without internet and needs to download MBs of data for simple development, while on the other hand it proclaims itself to be a systems programming language to replace C and C++, which doesn't require active internet to work.
Also, C++17 is a much better version, taking away the deficiencies of earlier versions, and if not better it at least provides similar run-time guarantees and safety as Rust (which also needs to rely on unsafe code to do anything meaningful with systems programming).
So if you use either modern incarnation of C++ or Rust, you will get same safety guarantees. Just with C++ you will not have to climb the mountain of learning just to do simple things.
I think Rust gets a lot of hype because its syntax is complex and its learning curve is much higher than C or C++, and programmers see a new shiny language which got some traction as a panacea for all the programming problems.
> which also needs to rely on unsafe code to do anything meaningful with systems programming
Rust allows you isolate such instances of unsafe-ness to specific parts of your code base. When you're auditing for safety, you only need to audit those specific parts because everything else is safe by default. For example, the implementation of std::vec::Vec uses unsafe, but it exposes a safe interface. So once you've audited the few hundred lines of std::vec as being safe, you're sure that the millions of lines of code relying on it are transitively safe.
> which doesn't require active internet to work
cargo build, cargo clippy, rustfmt, rust-analyzer, rustc all work without internet. If you want to fetch dependencies from the internet, you need an active connection. If you're in a situation where internet isn't available, you can work around this by writing all the libraries you need. Same with rustup - it needs the internet to download new versions of rustc and cargo.
> you will not have to climb the mountain of learning just to do simple things.
I'm sorry you felt this way. Learning materials are improving all the time and the language is improving too. For example, future combinators were a pain but with the advent of async-await it's easier to write and read code that deals with Futures. Give it another try :)
> If you're in a situation where internet isn't available, you can work around this by writing all the libraries you need.
Why can't cargo be simple like PyPI, where all the dependencies can sit on a private intranet, or alternatively live in my own plain folder that I can update separately whenever I need, rather than relying on an internet connection?
Rust cannot match C and it will always need to interface with C code, so in the end it will only be as safe as the C code it is interfacing with.
I do not understand why one would create a language with complex syntax and semantics that is difficult to learn, in spite of so much development in the compiler world. Take Swift, which has a Python-style syntax; Swift is also performant and secure enough to provide a replacement for Objective-C.
Hopefully FP (functional programming) becomes more mainstream and takes away all this burden of imperative languages like Rust. But being pragmatic, I understand that for hardware C will be there at least for the foreseeable future, and Rust and any FP language can only do so much.
On a personal level, Python made code beautiful with whitespace and PEP-8, and cultivated a habit of trying to write beautiful code. It did not succeed completely in making every library beautiful, because in the end it needs to interface with C libraries, where practicality beats purity.
Rust copied gofmt and introduced rustfmt. Still, Rust has miles to go until it is as clean as Python for beautiful code and cultivates that as a very important habit like Python did. It may take Rust another 30 years or more to come close, given its syntax and reliance on the C interface.
The only other language I see at present with safety guarantees that is as clean as Python is Swift.
I like Lisp, Haskell and other FP languages, which can achieve a lot with very little code and are easier to refactor and maintain in the long run. Rust is just the opposite. It’s as difficult to learn as Haskell with complex syntax and semantics, but less powerful than C.
> Why can't cargo be simple like PyPI, where all the dependencies can sit on a private intranet, or alternatively live in my own plain folder that I can update separately whenever I need, rather than relying on an internet connection?
Swift is not a competitor for Rust. It may be a competitor for Scala/Kotlin native or Go, but definitely not in the same performance league as Rust or C++.
> Hopefully FP (functional programming) becomes more mainstream and takes away all this burden of imperative languages like Rust.
Pure FP is overrated. It is a nice academic model of computation, but it doesn't reflect the way hardware works. Hardware is mutable and memory is limited.
> It’s as difficult to learn as Haskell with complex syntax and semantics, but less powerful than C.
Swift is replacing Objective-C (a systems programming language, a dialect of C). So indeed it is a direct competitor to Rust, as it also provides hardware abstraction in a safe manner to replace a C dialect.
I will wait and watch for when Rust is used to control Apple or Android hardware as performantly, and with similar safety guarantees, as Swift or now Kotlin.
It's the same as C++ being used by Microsoft to provide the underlying abstraction of hardware on Windows.
Kotlin and Swift fall short on memory management and both need runtime and GC. Currently Kotlin native is an order of magnitude slower than Rust, and Swift is also losing in benchmarks sometimes by 10x (although, not always - purely numeric Swift code is the same speed, because it is going through the same optimiser).
As for hardware abstraction - not sure what your point is - Rust can use any C API.
Swift and Kotlin are used for application programming there, not systems programming. Also, Google does Rust as well. And Microsoft is slowly switching some pieces to Rust as well. These are big companies with many teams and use many different tools.
Don't know about iOS, but Android does offer native APIs, so Kotlin is not the only officially supported choice there. Actually, for anything that needs performance, e.g. games, Kotlin is a no-go.
Swift is used for systems programming on Apple platforms, as already replied on another thread.
Google has already mentioned a couple of times at Android Fireside Q&A sessions that it is weighing Kotlin/Native adoption on Android.
And even if not, Rust isn't taking C++'s and Java's place on Android, especially after the Project Treble changes, where drivers can now even be written in Java.
Google and Microsoft are indeed adopting Rust, with some products already in production, although they are also among the major ISO C++ contributors, so it remains to be seen how much Rust love from their security teams will spread into the OS development teams.
For example, I still look forward to the day that Azure Sphere actually offers something else other than C, in spite of its security sales speech.
Why would we need to limit our programming language abstractions to how the machines work underneath? Are there some stateful objects running around in the hardware? Just because the abstractions are detached from the hardware doesn't make these design patterns useless.
It is nice to have a connection between abstractions and hardware, because it makes it easier to reason about performance, which in turn makes it easier to achieve great performance.
Now, not everybody actually needs great performance, so some distance between hardware and abstraction is OK, but the amount depends on the use case.
Well, I agree, that's why we have both low and high level languages. High level languages, setting you free from the actual constraints of the underlying machine, allow you to define abstractions that can increase productivity greatly - that's their whole point of existence.
I mean I don't see people complaining how you don't need to manually manage memory in SQL expressions.
> I mean I don't see people complaining how you don't need to manually manage memory in SQL expressions.
Well, they don't complain about not having to manually control the low-level bits of query execution, but later they frequently complain about performance problems. And then they add hints, setup buffer sizes, indexes, etc.
Also, Rust (and to some degree C++) shows that you can have both very high-level, productive abstractions and low-level control. They call that zero-cost abstractions. The price to pay is a steeper learning curve, but not actual productivity or final performance.
This is the most exciting part about Rust actually. I know how to write Kotlin, Scala, Java or C# code that's close in performance to C or C++. But this will be ugly, unsafe and hard to maintain, non-idiomatic code. Rust gives ability to write code that has almost Python-like expressivity but is still fast as if it were hand-optimised loops and pointers.
I'm not sure what your point is. I feel we may have a different idea of what very high level abstraction means in programming languages.
Just about everything in software engineering is a compromise of sorts. Not knowing Rust or C++, I doubt they're as high level as something like Haskell or domain specific languages like SQL. Are monads first class citizens on C++? Type classes? Generalized Algebraic Data types? Parametric polymorphism & pattern matching? And so on.
Would you really achieve the same amount of type level guarantees, equally concise control flow and concurrency on C++ with an equal amount of lines of code as would be possible on Haskell? If not, then there obviously is a productivity penalty. Just like on Haskell there is that performance penalty for not being able to drop down close to the metal.
From what I can tell, these languages are not aiming to be very high abstraction level languages but instead solid systems level languages with some convenient abstractions and design patterns baked in from higher level languages.
Sticking only to some zero-cost abstractions limits how high level abstractions it is possible to bring to the language. This in turn limits productivity.
Rust is very strong in its ability to build extremely powerful abstractions. With built-in compile-time procedural metaprogramming, I could risk a statement that in some domains it may actually be even higher level than Haskell. Maybe its type system can't prove all the same guarantees as Haskell, but on the other hand, Haskell can't prove some guarantees that Rust can. They offer different sets of abstractions with some overlap (e.g. first-class type classes and ADTs). It is really hard to say which one is higher level. They are just different.
Anyway, type safety is not the same as productivity. If it was, then nobody would use Python or Ruby and everybody would use Idris. I became much more productive quickly in Rust than Haskell.
Interesting, what kind of guarantees does Rust give that Haskell's abstractions cannot do?
Haskell also has metaprogramming in terms of Generics and Template Haskell, allowing you to create custom DSLs and such. Sadly, it kind of leads to pretty ridiculous compile times and mixed editor support so I'm trying to avoid that.
We should remember productivity is really subjective and not the same thing as high level abstractions. We're typically the most productive on the language we have the most exposure to, whatever it may be.
> We should remember productivity is really subjective and not the same thing as high level abstractions. We're typically the most productive on the language we have the most exposure to, whatever it may be.
I agree with you on this: we are most productive in the language we have the most exposure to.
Indeed, many in the Rust community do not realize that the LLVM infrastructure they use for Rust to make its code executable by real hardware is itself written in C++.
Rust is still miles away from compiling itself, and it may never happen. So the survival of Rust depends on the progress of C and C++, and I doubt it is even viable to call it their replacement.
> Indeed, many in the Rust community do not realize that the LLVM infrastructure they use for Rust to make its code executable by real hardware is itself written in C++.
And the operating system is written in C, and the CPU in VHDL (or something similar). Compiling itself is a property that only academics care about. LLVM and C are not going anywhere anytime soon. There are more important things to do now. That's why Rust is already a much more loved and popular language than Haskell - because it focused on important stuff and getting the job done, not theory that only looks nice on paper but doesn't match how hardware (and generally the world) operates. Real stuff is mutable. Not being able to mutate stuff in Haskell directly is a productivity killer. Many of the "abstractions" you can build in Haskell exist solely to work around this limitation.
Exposure has nothing to do with this. Rust is new and Haskell has been around forever, yet Rust has already far surpassed Haskell in terms of adoption.
Template Haskell is not standard Haskell, it is an extension. But even with it, the Haskell type system can't reason about lifetimes of objects. It doesn't give any guarantees about object destruction nor even object construction, due to laziness. Hence, you can't prove anything about resource usage of a Haskell program. Which makes it usable only for cases where you don't care about resources and where the program doesn't interact with external systems.
Well Haskell's prelude is purposefully small and extended by a variety of extensions in the GHC. Prelude + libs + extensions effectively make what we call Haskell today.
Just like with Elixir and many other high level languages I don't think it's meant for building low level systems where you're heavily constrained by resources.
But why wouldn't it be able to interact with external systems? I'm not having any problems building an API that interacts with a database, caching layer etc.
Sure, I didn't say "useless". The post I was responding to was worded in such a way as if FP was the only suitable paradigm everybody should be using. FP is a nice and useful abstraction, but in some areas it is just too inefficient and despite all the great compiler progress of the last decade, it is still not there yet.
FP languages are not in the same problem domain as system programming languages, so claiming FP should replace Rust is just ridiculous (and as a side note: Rust borrows many things from Haskell/Scala as well, so some FP is there).
> So if you use either modern incarnation of C++ or Rust, you will get same safety guarantees.
As a C++ dev who's done some Rust on the side, this is utter horseshit. Regardless of which C++ version you use, it is utterly trivial to write code with undefined behaviour. Even if you're extremely diligent you will eventually be hunting down a bug caused by undefined behaviour.
> So if you use either modern incarnation of C++ or Rust, you will get same safety guarantees. Just with C++ you will not have to climb the mountain of learning just to do simple things.
This is absolutely, completely wrong. If you want to write safe C++, you must follow a very strict discipline, essentially equivalent to appeasing the Rust borrow-checker. The main difference is that straying from this discipline in Rust leads to a compile-time error, while straying from this discipline in C++ leads to unsafe, but probably working, code, that you will only discover is unsafe after much work.
And in both languages, you sometimes need to drop from the safe subset to do some things. In Rust, that is signaled explicitly with `unsafe`, in C++ it just means using more C-like constructs.
It would also be very interesting to see what subset of modern C++ vs Rust could be used for safety-critical code. Given that several C++ idioms like RAII depend to some extent on exceptions, and those are not permitted in safety-critical code (per Bjarne's own standard), I would not be surprised that Rust will become very attractive in this sector once it matures.
Not for all the things that Rust catches. And not for the most important things.
Static analyzers typically catch trivial stuff like returning a pointer to a stack variable from a function. Which is easy to spot during code reviews. But they don't catch more sophisticated UB that can result from bad interaction of code in different units. And this is the kind of UB we're the most interested in being protected from.
If you mean lifetimes, yes you can, with the latest Visual Studio 2019, although it is WIP.
And compiling in debug mode does enable bounds checking for arrays and vectors, and iterator-invalidation checks.
Rust still needs to define what actually is UB in unsafe blocks, how multiple implementations might affect the language and a memory model.
Not saying that it is perfect, rather that it can be made safer than how many make use of it, which is important, because there are plenty of codebases out there that will never get rewritten into something else.
In C++ lifetimes are not encoded in types. Therefore I seriously doubt it can do the lifetime analysis with such accuracy as Rust can, particularly if the whole source code (e.g. libraries) is not available.
Eh, what part of gofmt is borrowed from python? Python hasn't even bothered to have all parts of its standard library formatted to follow PEP-8..
As for pypi, it is Perl that (as far as I know) invented language-specific packaging. Then Bundler, a Ruby project, took that and improved the situation and let people easily create repeatable and predictable installs. Some of the people behind Bundler then went on and built Cargo. Packaging for Python is and has always been a mess, which people shy away from rather than copy. There are some signs of improvement in later years, but it is not from pypi.
As for RFCs, I would assume the RFC process should reasonably be attributed to IETF. They were a bunch of decades ahead of Python there..
GoLand solves many problems though, for example they have conditional breakpoints and function execution in debugger (though not that powerful). And code refactoring/navigation is also pretty good.
In my experience, the debugging story isn’t as bad as most think, because compilation times are so fast. If I need to run some code at a random location to see what’s going on, I just add it and rerun the unit test, usually it takes less time than configuring a debugger to do the same thing, and I’ve also now got a place to put a UT to cover whatever the problem was.
Edit-compile-rerun in Go is at best as fast as in Java, C#, Python, Ruby and many others, and that approach is still an option in those languages. Compared to C, C++ or Rust, or, surprisingly, JS, you're probably right.
However, unit testing is not by any means a substitute for all uses of a debugger. You can add unit tests to reproduce an error in any language, but once your problems start crossing module boundaries, a debugger (or printf debugging at least) often becomes indispensable. And having an edit-and-continue debugger is extraordinarily helpful when dealing with bugs that only reproduce in complex application states (when your cycle becomes edit-compile-rerun-send50CommandsToPutAppInCorrectState-reproduce).
I wonder what the future of package management will be. The author of the Node-revamp language Deno (https://deno.land) chose to basically follow a strategy not unlike Go and forego package.json and similar dependency manifests, while also adding things like explicit network access flags to prevent, I believe, NPM-type package hacking vectors.
I just don't see what deno is going to build with access flags that couldn't be solved by writing off local code and wrapping everything npm does into containers.
For a long time now there has been nothing special to install on Linux to use containers, and once WSL2 is done (if it's made available to all of Windows), either that or native containers could just be the way npm works: by managing containers.
> Major version changes require source-code level changes in all files touching the package
I've always thought the tooling was half baked but would get there eventually. The introduction of modules has caused a split and put the tooling even further behind IMO.
Pretty sure `go guru` satisfies reference finding across modules/packages and finding all implementations of an interface. It is slow because go guru (and gorename) needs to search all packages in modules and GOPATH, but it is also correct.
It's part of Go's typical "we assume/declare that we do better/simpler without checking what/how other languages do" schtick.
Take error handling and package management for example - different approaches, hacks piled, etc, instead of just going with the program, seeing how others do it successfully (e.g. Maybe/Optional error, exceptions, etc), and get on with it...
Do you think Go's developers and users are simply unaware of exceptions, optionals etc? Or is it that the trade offs that they made aren't net positive for most use cases? Even if the second was true, that isn't necessarily a bad decision; since C# continues to exist, there may be more use for a language targeted to some niche than just a copy of C#.
I've used C# and Go extensively and like both languages. In some big ways they are similar: garbage collected, (fairly) fast compile times, large ecosystems, good tooling (if you include the JetBrains tooling for Go), built in lightweight concurrency, etc. It's not surprising people compare them.
For the areas where they differ, Go in general is more explicit and more low level. Handling errors as return values instead of exceptions is more typing, but I find it easier to check that error handling has been done correctly in my own code and in third party libraries. Instead of the class/struct dichotomy, pointers are explicit and any type can be used as a value type (there's also more flexibility in using Go types with unmanaged memory, and the ability to take interior pointers). The way interfaces are implemented, they can be used in more places without boxing.
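On the error-handling point, a hypothetical sketch of what that explicitness looks like in practice (names made up):

    package config

    import (
        "fmt"
        "os"
    )

    // loadConfig illustrates the error-as-value style: every failure path is
    // visible at the call site rather than unwinding through an invisible exception.
    func loadConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("loading config %q: %w", path, err)
        }
        return data, nil
    }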
C# has generics but I've run into frustrating limitations with them, for example not being able to write numeric code that works with either 32 bit or 64 bit floats. Go's generics are currently limited to some built in types but the proposals being iterated on seem like they'll be more useful to me than C#'s implementation if they make it into the language.
Other things: Go has a single concurrency story so the ecosystem isn't divided into sync/async worlds, is a less complex language overall, and is a bit less verbose aside from error handling (no public static void etc). Overall I probably prefer Go, but I like C# also and could easily imagine projects where it might be a better choice especially in areas where the lower level aspects of Go aren't useful.
Reading HN, it seems like a lot of people believe that people who use Go (and even the language designers) are simply ignorant. I'm open to the possibility, but can you flesh it out for me a little bit more? What am I missing here?
Yes, it's always amazing how these new environments skimp on debugging support. After all, that's the activity most of us doing real software spend the most time on, and no, no amount of "safe programming" will avoid debugging. I guess debugging is just not cool.
Not that I disagree with you but comparing C#/JVM to go is like comparing a commercial airliner to a single seat monoplane. These projects are diametrically opposed from the get go in every way I can think of.
There are only two languages I ever liked working with: C and Go. Because they're simple, full of compromises, but simple. I can't say that of C# or java, you probably need a PhD just to be able to use 1/3 of c#/java tooling and IDE features.
> I can't say that of C# or java, you probably need a PhD just to be able to use 1/3 of c#/java tooling and IDE features.
That is absolutely incorrect. The whole point of having tooling at your disposal is to make the process of writing and reading code dead simple.
You don't need a PhD to pause the debugger, edit some code, hit continue, and have the new code executing. You don't need a PhD to select some text, click Extract Method (or press the shortcut), type in the method name, and have the new method/function on the screen.
By the way, C has excellent tooling on all of these sides as well - C IDEs and debuggers are extremely powerful, unlike Go.
Yes, _commercial_ C (and Java) IDEs are extremely powerful, but it's a bit unfair to compare them to _open source_ Go IDEs/editors. From what I heard, the only commercial Go IDE available so far (GoLand) is extremely powerful too...
The degree of backing is completely different though. They appear to only do the minimum effort that they need for themselves. Microsoft provides good C# tooling because they want you to stay in the Microsoft ecosystem.
"They want you to stay in the Microsoft ecosystem" is why they made an open source IDE and great tooling for it that competes directly with an IDE they sell for up to $1000/seat/year, one that actually tries to get you to stay in their ecosystem with things like free Azure credits, preferential Azure pricing, and free Windows Server and MSSQL licenses?
Though all of this is entirely tangential to the original point anyway...
Eclipse is far more powerful of a C and Java IDE than any non-commercial Go (I)DE. NetBeans is also a great Java IDE. They're not as powerful as IntelliJ or VS, but they are still miles ahead of any Go plugin I've used.
Also note, for C you have gdb, you have ctags, you have all sorts of great plugins for Emacs or Vim or whatever. You also have stellar plugins for Javascript and TypeScript, for VSCode, for Emacs, for vim and many others.
Can't agree here. Go is going for simplicity of the language but definitely not for simplicity of tooling. If anything, I would say golang tooling is more complicated than C#'s, with all that myriad of small tools. The debugging story is just objectively better in C#, and it's pretty easy to use.
I definitely appreciate many parts of go's tooling ecosystem: pprof, gofmt, go get (in GOPATH mode), and go bug are all pretty good ideas. pprof in particular and the various profiling tools build out around it are great.
That being said, I haven't found all aspects to be as nice - most notably among these is the lack of a good debugging environment. There is no debugger in the go tool, and users rely on delve[0] for their debugging needs. The dlv command line tool has actually gotten a lot better over the last couple years (props to the maintainers!), but integration into visual front-ends is still lacking. VS Code and GoLand integrations at least are lacking to the point of being nearly unusable except in the simplest cases. For a few examples: stepping is painfully slow (on the several machines/OSes I use), conditional breakpoints don't seem to work, keeping the tools up to date doesn't work consistently, which means updating the underlying go/delve tooling or the IDE will frequently break the experience, and defining your debugging processes (e.g. via VS Code's launch.json) is tedious. None of these are go's fault per se, but it diminishes the overall experience.
In general, it would be nice to have a better debugging experience, esp. since I prefer a GUI tool to the command line for debugging. In the meantime, I might switch back to just using command line dlv.
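For reference, the command-line flow I fall back to looks roughly like this (the function and variable names are made up; the dlv commands themselves are real):

    $ dlv debug ./cmd/server          # build without optimizations and attach
    (dlv) break main.handleRequest    # breakpoint by function name
    (dlv) condition 1 req.ID == 42    # make breakpoint 1 conditional
    (dlv) continue
    (dlv) print req
    (dlv) goroutines                  # list all goroutines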
But regardless, definitely agreed that go's tooling on the whole is great. pprof has saved my skin on countless occasions!
I think the biggest barrier is still the compiler itself. The thing is built for compilation speed and not for usefulness in terms of compile-time semantics or readability of errors, like the bad old days of GCC and GHC.
In this day and age, I shouldn't have to understand how a parser/lexer works to understand what an error really is, especially for super simple stuff like a missing semicolon, bracket, parenthesis or comma, or a misspelled symbol.
Small stuff like this is just a nuisance for veterans, but I've seen it prove fatal on many an occasion for beginners, who quickly get intimidated and start feeling stupid.
Delphi and Turbo Pascal are also built for speed, with more expressive languages, and are still able to provide sensible error messages, all the way back to MS-DOS days.
The compilation times were extremely short, and that’s on a 4.77 MHz CPU. (Too bad we are unlike the Ancient Egyptians who would’ve stuck with it for a couple of millennia.)
I never really felt that way about Go error messages, but I also came from Python, Java, C, C++ (via GCC and Clang) where the error messages are far, far worse. Maybe there are languages/toolchains where the error messages are better?
This is confusing to me, the pre-compilation tools catch these simple things instantly and display them right in your editor, which is far superior to having to wait until you compile to catch them.
"print/log" based debugging in Haskell is harder and thus there is more pressure to solve the debugging needs with a proper debugger.
I rarely feel the need for a debugger in Go and other procedural languages because I often find it easier to just patch the code with some ad-hoc debugging logic that lets me understand what's going on. I find that usually easier to do rather than figure out how to attach a debugger in the right environment where the issue can be reproduced.
The laziness can make it a bit of a challenge to follow, though. I know trace itself is strict, but that thunk will still be lazy from the environment around it.
Debugging has the same problem too, actually. "Step next" is much more challenging to know what is actually going to be "next" than in other languages. On the plus side (no sarcasm), it's a great way to get a feel for what the laziness really does. I recommend stepping through some non-trivial code of your own with the debugger for any intermediate Haskell programmer, even if it's working perfectly, because it'll actually show you the lazy evaluation order as it happens.
I think in highly concurrent apps a debugger might become even more difficult to use since at the end of the day it allows one only to walk linearly through the application.
I feel your pain with the debugging and I have a feeling this will be an aspect of go which matures greatly in time. Especially with respect to goroutines, it can be tough tracing the program flow, which makes good debuggers even more valuable. Also a very hard challenge.
I'd love to be able to see a diagram of goroutines and channels, like a chemical plant. That's probably asking for too much :p
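The closest thing I know of is the execution tracer: a rough sketch of capturing something `go tool trace` can visualize (per-goroutine timelines, blocking, GC events), though it's nowhere near a live channel diagram:

    package main

    import (
        "log"
        "os"
        "runtime/trace"
    )

    func main() {
        f, err := os.Create("trace.out")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // After the program exits, open the result with: go tool trace trace.out
        if err := trace.Start(f); err != nil {
            log.Fatal(err)
        }
        defer trace.Stop()

        // ... run the workload you want to visualize ...
    }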
This is a serious question because in 5+ years of programming I have never used a debugger. The only time I used a debugger was when I was learning to program and to trace out for loops.
Now I just use print statements and read code to figure out what’s going on.
Yes, absolutely, it's an indispensable tool - and I've been doing this software gig since 1990. Some languages, like JS and Python which are quickly interpreted, are amenable to printf debugging, while others, like C++, C, Go, really benefit from debugging. A debugger is your best bet at debugging multiple threads, for example, especially since you can pause them at strategic locations to recreate a race condition. This is really difficult with printfs.
The world of JS development has tremendously powerful live runtimes/debuggers in the browser developer tools, those are a fine example of what a debugger can be.
I write go using Goland by JetBrains, which uses Delve under the covers as a debugger. Compared to dbx (which was amazing), gdb, etc, it's deficient, but better than nothing.
How do you know how you got to that point in the code without a debugger? What's the call stack like?
How could you debug a call in a DLL that is being loaded by your program if you do not use a debugger?
It might be alright for tiny programs but it's essential for "big" applications.
printf'ing isn't going to help with 2 million+ lines of code or where there's a leak somewhere in that code (eg. someone else wrote some bad code or bad reference counting and didn't release when they should, eg. in COM wrappers in Windows), or someone is double-freeing some memory or trampling on memory and you need a hardware breakpoint to know what is modifying that memory at any given point.
I step through almost every line of code I write in a debugger. I'm pretty visual based, and that combined with the action of clicking through forces me to slow down and really think about what my code is doing. Whenever I open the debugger, I find bugs that I can fix! :P
I've heard it said that there are two types of programmers: those that stare at their text editor all day, and those that stare at their debugger all day.
I often find myself working on a legacy codebase. The application will enter a state that I don't think should be possible. The debugger can usually help me figure out how that happened.
It's worth mentioning that debuggers are less essential in languages like Go where the control flow is purposefully very straightforward. I often debug Python and end up being totally surprised to see the execution jump somewhere else because of a decorator.
It's as if you didn't read the comment I wrote and started foaming at the mouth at the chance to say RTFA as is HN tradition. (Or didn't read the article closely enough yourself)
He said explicitly, he used a debugger.
He came to understand the value of having a proper mental model of your code, but the comment I replied to said they had never used a debugger.
My comment was about the difference between knowing how to problem solve without a debugger, and straight up never having used it
There's nothing wrong with having more tools in your toolbox
> This is a serious question because in 5+ years of programming I have never used a debugger. The only time I used a debugger was when I was learning to program and to trace out for loops.
They're saying it themselves, "yeah technically once I used it when I was figuring out how for loops work".
I wouldn't say I've ever written a Brainfuck program even though once I messed around with a hello world on an online interpreter
Leave it to people here to say a comment doesn't say literally the exact words it says
-
The spirit of multiple replies, not just mine is debuggers are an option you should at least be familiar with before tossing aside, even if you do end up not needing them
It's the preference of the people referenced by Person B.
Person A used the word _never_, the definition of absolutism.
You're literally making my point: a preference towards not using a debugger is not a limitation; an absolutist approach of never having used one professionally could be.
It's not like I said "OMG go use one right now or your career will fail" either before you latch onto that too. I said even those guys with a preference away from them, have used them professionally
Didn't think I'd be spelling out 1st grade reading comprehension on HN but here we are.
Why this sudden increase in people who think turning off their critical thinking skills to go "gotcha", because you didn't write out your comment like a thesis, is something positive?
To give an opposite metric: on one of the C++ programs I develop, the "add printf, compile, and run" cycle takes on average 1/2 s, while it sometimes takes up to a minute to load it in gdb or lldb. Which sucks, because a lot of things are incredibly easier with a debugger.
That's very unusual, is it an embedded environment? I'm used to C++ taking much longer than other languages to compile, which in turn discourages printf debugging.
not at all, standard Qt desktop software - when split correctly into object files and shared libraries the incremental build time in C++ can be very fast (also, being on Linux and using lld as a linker helps tremendously; building on Windows is on average 3-5 times slower for me, and lld links in the blink of an eye compared to bfd (GNU ld) and gold)
You're right. But you need to have been exposed to a good debugger before you understand this. Preferably you also have had a colleague that was proficient in it to get you started.
> Now I just use print statements and read code to figure out what’s going on.
Print is worse and more work. You should set one (or several) breakpoints and then debug, which replaces possibly many run/debug loops at once. You also have no chance of accidentally committing a print statement.
Hearing that people don't use debuggers feels like hearing that someone uses a screwdriver instead of a drill that someone gave them for free and is just sitting in the garage, untouched.
There are literally no drawbacks to switching to a debugger.
> You also have no chance of accidentally committing a print statement.
Debuggers are ephemeral---unless they are "reversible", which is a subject of ongoing research. You can't replace logging with debuggers, and some print statements used for debugging are in fact better described as post-hoc logging. On the other hand, print statements do change your program and can mask some other problems, especially in multi-threaded programs.
> There are literally no drawbacks to switching to a debugger.
Only for properly prepared environments. In some environments enabling a debugger itself can be annoying or even close to impossible (for example, some non-native socket-based debuggers tended to be fragile). I use debuggers when I can, but being able to not rely on debuggers is useful from time to time.
You seem to have inferred two straw-man arguments from my comment: 1) that people should stop logging, and 2) that debuggers can be used in 100% of cases.
> You can't replace logging with debuggers
I never suggested this. I only discussed whether to use printing or debuggers for debugging.
> Only for properly prepared environments
Of course, but these are the vast majority. Most devs are working with Java, Python, C#, TS/JS, Visual Basic, Go, or some combo. All have debuggers.
While I meant not to fully refute but rather to build upon your statements, some clarification seems necessary:
> I never suggested this. I only discussed whether to use printing or debuggers for debugging.
Yes, but I specifically mentioned some usefulness of printing for debugging ("post-hoc logging"). Printing can be used both for debugging and for logging simultaneously, and many debugging prints can be readily converted to actual logging. As far as I know there are no widespread equivalents in debuggers---but please let me know if there are any, I'm genuinely curious.
> Of course, but these are the vast majority. Most devs are working with Java, Python, C#, TS/JS, Visual Basic, Go, or some combo. All have debuggers.
Their quality varies wildly though, and they are especially fragile when multiple languages or environments are in play. In my recent case, Visual Studio 2017 froze when debugging a mixed C++ & C# codebase. I tried a lot, but eventually I didn't want to investigate further and decided to use print debugging for the moment. It was finally fixed when I and my team upgraded to VS 2019; I still don't know why it didn't work or why it fixed itself.
Then probably I was talking about something else. I meant persistent logging for later inspection, not ephemeral logging for one-off debugging (yes, it is confusing...).
My impression has been that developers way too often rely on the subsequent use of the debugger when writing code. It’s like, “I don’t really know what I’m doing, but the debugger will help me understand my code... Oh, what a piece of crap I wrote! Let me start over...” Looks almost like people are debugging themselves.
They're much more useful in maintenance environments where people have already written a million lines of code before you got there. Of course you have no idea what they've done. After a few years they probably have no idea either. You can go grepping in the codebase, but the sheer speed of asking the debugger "how did I get here?" is hard to beat.
They're also useful in layered environments. Sometimes it's a bug in other people's code and you need to get in there to find out what's happening. When all you have is disassembly, and can't readily insert print statements, the debugger is absolutely invaluable. One of my own "debugger greatest hits" was finding a bug in Windows CE stack unwinding this way.
It's simpler to click a few lines of code, get the entire program state or a selected subset, and then get a perfectly line-by-line granular understanding if desired, than to write a bunch of print statements. The only reason printlns would seem easier is that it takes a bit more time investment to learn debugging tools.
In my IDE I can see all threads, I can freeze and unfreeze threads as I want, I can run all threads to the same point. All while inspecting the variables in use.
Or pausing code and seeing the whole call stack, having it navigable by just clicking.
This is a hard question to answer easily, IMO. I find that Java folks tend to overuse the debugger, spending 20 mins stepping through something that a simple value print would have solved.
That said, some code -really- needs a debugger. You can theoretically spend hours adding prints everywhere, but even that pales in comparison.
In short, I find that the need for a debugger is rare, but when you do need it, it's irreplaceable.
There's literally no difference between a print and a breakpoint, except that with a print you actually have to write code, while with a breakpoint not only do you not have to write code... you can see the entire program state.
There is literally no argument here. Debugging with breakpoints is categorically less work.
When you see people stepping through code it's because they don't know where the error occurred. It's the equivalent of putting print statements everywhere. You tell me which one is better.... littering your code with print statements? or breakpoints?
A debugger is not just useful to debug. I've found it a joy to actually be able to step through every step and see e.g. a request being transformed through the different libraries and middlewares. In Goland, you can even step into the stdlib!
I will admit there are a few "blind spots" but 99% of the time, a debugger is a super fun way to understand your code rather than inserting print statements everywhere.
Yes. Especially if the debuggers are easy to use and aren't just gdb-style shells.
I can definitely live without a debugger; I've done it before and you get used to it after a while. But given the choice between a programming experience with a good graphical debugger and one without, the former will almost always win out.
As with many other perennial programming debates, I think we underestimate the diversity of our experiences when having this debate. There are codebases that debuggers don't really help that much on, perhaps because they're technically very difficult to get running, or perhaps because they were written in such a way as to not really need them very often.
There are other code bases where debuggers are a necessity.
Some of those considerations are not even strictly speaking in the code themselves, but related to the code + developer; for instance, I really love debuggers for codebases I don't understand because I just came to them. You can read the code all you like, but until you watch it run, you don't realize that this one if clause that you kind of skimmed over actually invokes an entire chunk of code you didn't even know existed, etc. Whereas someone intimately familiar with that code may not need the debugger so much. Any sort of recursive data (parsing, HTML nodes, etc.) is a pain to deal with in printf debugging too because you either fail to dump out enough context to understand where you really are, or because you dump out so much context that you drown in it.
My call is that A: they're absolutely indispensable tools that you should have in your toolbelt so that you don't hesitate to use it when you need it, but at the same time B: you should try to avoid depending on them, because a codebase that you can't handle without a debugger is a code base that you can't understand, and that's bad.
I very deliberately say "should try", because it's not always practical; maybe you're just visiting this code base and have no reason to understand it, or maybe it's just intrinsically too darned large to understand, etc. But you should still strive to not need debuggers to follow subtle, complicated code, but make your code do what it says it does and no more. (e.g., stop using globals, don't write code that tries to figure out what lies to tell to other bits of code to make them coincidentally do what you want, etc.) I also say avoid "depending" on them, because if you don't depend on them, you can use it freely. I've only used a Go debugger a handful of times, but each time it saved me a ton of time. But the codebase in question remains comprehensible and straightforward, because I don't make skilled use of a debugger a prerequisite for following it. (I've inherited other code bases which do.)
> As with many other perennial programming debates, I think we underestimate the diversity of our experiences when having this debate. There are codebases that debuggers don't really help that much on, perhaps because they're technically very difficult to get running, or perhaps because they were written in such a way as to not really need them very often. There are other code bases where debuggers are a necessity.
This is absolutely true and very wise; there are a lot of strange kinds of programming environment out there, and we need to be aware of the full spectrum of tooling possibilities. And failure possibilities.
The worst environment I ever had to deal with was a strange embedded one where my code ran as a subprocess of a 3G module. There was a single-step debugger but not a proper JTAG one - so if it crashed, the device reset rather than dropping you to a debug prompt. Printf was sort of possible, but over buffered USB - and if the device crashed, it lost the last few lines in the buffer which would have told you what the crash was. I ended up leaving a breadcrumb trail in bits of memory that were known not to be zeroed on boot.
I don't usually use debuggers in functional programming languages. Since it's mostly expressions I usually use REPLs.
But for imperative languages like Go, a fast and good debugger can boost up productivity a lot if properly used, it's basically a REPL for imperative languages - since imperative languages mess with the ordering and the environments much more than functional languages.
Instead of inferring how the code would be executed, you would debug at reasonable places, know how it executes and play with it directly with some expressions. It's more about the feedback speed.
Using only print isn't sufficient because a lot of data structures are too large to serialize, or, like functions, aren't serializable at all; you have to reference them in the debugger or it doesn't really help.
It depends on the language. I've used debuggers in java, much less in c++, and hardly in go. I feel there is not much of a need in go for debugging, writing good tests and the occasional log/print statement do fine.
Java, often having more complicated structures and layers (more due to the culture, one could write more go-like simpler java as well) is more reliant on debugging. More dynamic (less predictable) languages such as python even more.
My debugger usage pattern varies between languages. For C# I rely heavily on the debugger because it is just so easy and pleasant to do so. For Go using GoLand I use the debugger regularly, but I expect that it won't be quite as versatile as debugging C#. For Python or JS I find it easiest and most intuitive to just print-debug for everything.
Especially on JVM, conditional breakpoints are really nice for this. (And at least most classic Java-Webservers are really compatible with this approach)
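Go's Delve debugger supports the same idea, for what it's worth. A minimal session might look roughly like this (the file, line number, and variable names are invented for illustration):

    $ dlv debug ./cmd/server          # build and launch under the debugger
    (dlv) break handlers.go:42        # ordinary breakpoint at a line
    (dlv) condition 1 userID == 42    # breakpoint 1 only fires when this is true
    (dlv) continue                    # run until the condition is hit
    (dlv) print req                   # inspect locals once stopped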
Agreed, debuggers can be nice to have but they are mostly a crutch.... Write modular code with unit tests and you find your need for debuggers is really a rarity
Absolutely; everything from libraries, language development, even in REPL languages like J and R, I use debuggers all the damn time, native J/R/whatever, GDB, the whole lot. If you deal with hairy issues, there's no way of solving the problem with print statements, and if you're used to them, they're a lot more productive. You just have to pay the toll of learning the debugger.
I haven't uncorked any golang in a long time (~5 years), but I do remember thinking WTF regarding the debugger. I think the idea is to write enough unit tests you don't need the debugger so much. Seems questionable, but what do I know; Google would never hire me. Otherwise I liked golang a lot as a very pragmatic language and ecosystem. I had wished the GC was optional somehow to replace C/C++ tier "systems" stuff, but whatever; it's a fine successor to Java type things.
I have no experience with terminal debugging mind you, work pays for GoLand because it definitely speeds me up by more than the 0.15% or whatever of my salary it costs.
I have noticed slower run time with debugging on, but it's about half the speed usually, not too drastic. I'm usually debugging DB and network traffic and data flow anyway, so processing time is rarely an issue.
VS Code integration is mostly hopeless from what I've read, with the go language server taking up massive amounts of RAM, so have you tried GoLand properly?
It's interesting as talking to Go->Java guys they echo your sentiment and talk up goland.
They also write "non-Go" go.
Coming from deeply embedded and low level programming I've been perfectly fine without any more than printf style debugging... Tho we leverage logs/stats heavily.
I aim to write simple and easy to grok code with few surprises and abstractions tho so maybe it's a mindset difference?
I've seen a number of comments like this here today. I stopped liking source level debuggers when optimizers got decent in the second half of the '90s. Go has an unusual usage of the stack but mdb has support for it https://www.joyent.com/blog/mdb-support-for-go
When I first got into go, a few of the Go Opinions kinda rubbed me the wrong way.
- I have to source GOPATH in my rc files? Annoying.
- File paths are network URI's? Ugly.
- I have to deal with every err != nil? Verbose.
But over time I've grown to love go more than almost any language (python is still bae, especially with type hints, and even then, it's close).
- dependency pathing is a pain in most other languages. Singular source of truth is great (I've had virtualenvs leak and cmake do bizarre things when multiple libs are on the file tree)
- host/paths are file/paths. I love this pattern now. It's just so obvious and natural.
- ok the err != nil still drives me nuts. But it drives me to write things in a way where I don't need to deal with errors as much. Reduces fractal complexity and paths through the code. It also forces a "the buck stops here" sort of pattern where you have some atrium which is resilient and is where most errors bubble up to (rough sketch below)
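Roughly what I mean, with invented names (Config, parseConfig, etc. are placeholders): the inner functions only wrap and return errors, and a single top-level spot decides what to do with whatever bubbles up.

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    // Config and parseConfig are stand-ins for whatever the real app uses.
    type Config struct{ Addr string }

    func parseConfig(raw []byte) (*Config, error) { return &Config{Addr: ":8080"}, nil }

    func loadConfig(path string) (*Config, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("read config %q: %w", path, err)
        }
        cfg, err := parseConfig(raw)
        if err != nil {
            return nil, fmt.Errorf("parse config %q: %w", path, err)
        }
        return cfg, nil
    }

    func main() {
        cfg, err := loadConfig("app.toml")
        if err != nil {
            log.Fatalf("startup failed: %v", err) // the one place errors are acted on
        }
        fmt.Println("listening on", cfg.Addr)
    }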
> dependency pathing is a pain in most other languages
This has been a solved problem in the Ruby community for nearly a decade thanks to bundler. When I first started writing Ruby it had all the same problems as other languages, there were various attempts to solve it (similar to virtualenvs) but nothing really worked properly. Then bundler came along, and even from early releases it had pretty much figured it out.
The way it works is by having a human generated manifest file with a list of libraries you think you need (Gemfile) and the tool will then fetch those libraries (they can be installed in a vendor directory or globally for the specific Ruby version). You don't need to specify versions or deps (but can if required), as when the tool runs it will generate another manifest file (Gemfile.lock) which lists the versions of all libraries and their dependencies. Both files should be committed, so when your coworker checks out the project they know they have the exact same versions as you. There's another command to list outdated libraries, and to upgrade all or a specific library later.
The tooling checks the versions of libraries on every execution (unlike npm or yarn) and spits out an error if your local versions don't match the manifest. It automatically creates wrappers around binaries so you don't need to prefix commands from other tooling. There's built in support for installation in a deployment environment, so you don't install testing libraries there.
Together with tools like rvm/rbenv, it makes managing multiple Ruby versions and multiple projects, just work.
Python doesn't really. It's quite funny that Python has much better libraries available, but Ruby has a much better system for managing library dependencies.
cargo and bundler share some core contributors, for example wycats (Yehuda Katz), who gave a talk about the common concepts and differences at RubyConf Portugal: https://www.youtube.com/watch?v=Bwk8mdU6-ZY
I am an average Go programmer, so I may not be as deep into Go as you might be. However the mention of GOPATH catches my eye. All my issues with GOPATH went away for me when I started using Go Modules: https://blog.golang.org/using-go-modules. Also Go's approach to err is one of the reasons I started gravitating to Go -- my old Go code is more readable because of the verbose error handling (errors as strings in my code) and hence easier to maintain. Go is a surprisingly practical language for distributed, system oriented, undertakings I'd say
I'm not going to lie, I never understood why the gopath was so hard for people until I saw a friend using it.
The go developers were all unix heads, and as unix heads setting a path env variable was so natural it almost doesn't bear mentioning.
A unix dev is going to follow the following flow:
cd into the project i'm working on in the terminal (I use fasd for this)
export my gopath from history (ctrl+r GOPATH=)
launch emacs on the files I need
develop
However, many "younger" devs grew up with IDEs. They interact with a project by launching GoLand, which means mucking with this stuff isn't first class.
Not that it adds much, just some food for thought on why GOPATH existed and why you found it clunky.
>- ok the err != nil still drives me nuts. But it drives me to write things in a way where I don't need to deal with errors as much. Reduces fractal complexity and paths through the code. It also forces "the buck stops here" sort of pattern where you have some atrium which is resiliant and is where most errors bubble up to
This was actually a huge flaw in the design of go. The way forward for this type of thing was sum types, but go failed to implement this modern programming concept, which is why you're stuck with it. See how rust handles errors... that is the proper way.
Nor have I had problems, simply saying it is too much hassle for little gain so I rarely bother, whereas with other languages it is a breeze so I update all the time.
Please document your workflow to update and compare it to other modern langs, it's all relative.
> Note that golang-go installs latest Go as default Go. If you do not want that, install golang-1.13 instead and use the binaries from /usr/lib/go-1.13/bin.
> If that's too new for you, try:
> $ sudo add-apt-repository ppa:gophers/archive
> $ sudo apt-get update
> $ sudo apt-get install golang-1.11-go
Am I being trolled here?
How would you compare this for both memorability and user experience compared to say "rustup update"?
Please take a step back and look at that question without personal bias. Please!
This comparison is entirely unfair, because you're not comparing the same thing. This is what one has to do to install Rust:
# Install rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# It automatically installs the latest version
This is what one has to do to install Go:
# Add the PPA
sudo add-apt-repository ppa:longsleep/golang-backports
# Fetch information from the PPA
sudo apt-get update
# Install Go
sudo apt-get install golang-go
1 command vs 3, but in both cases only one "odd one" you have to remember specifically. More important, however, is the fact that from now on Go will update along with the rest of your system. You upgrade your system like this:
# Upgrade the system
apt-get update && apt-get upgrade
Now I'm not trying to say rustup is necessarily worse than Go, because it does offer advantages, but for the simple use-case of always running stable it adds an extra step rather often, instead of once.
Updating Go: Search a bunch of stuff and then do the stuff some random website tells you.
Entirely stand by my earlier statements. Apparently the thousands of paid computer scientists at google are simply incapable of automating the basic process to update their lang. It's utterly absurd and should be actively mocked by anyone with half a clue. We should mock this stuff, we really should, it's hilariously dumb.
How are all the people getting paid SF wages literally incapable of creating "go update"?
> Updating Go: Search a bunch of stuff and then do the stuff some random website tells you
No, updating Go just happens when you update your system. You don't need a separate command to do it. All of the other stuff you mentioned in your previous post is about downgrading Go.
That is good advice. I really should document it but only use go now and then for various reasons and never seem to enjoy updating, don't really have a horse in this race. It just feels so clunky, which was my original point.
Now you can `rustup install <version>` easily with everything taken care of. Of course, the latest stable release is automatically installed already. And Rust provides automatic install scripts for Windows, Mac, and Linux. Whereas Go is more involved than is expected for any modern language.
You can just download the go tarball and go from there. Technically you don’t need to install Go, you don’t need environmental variables set etc.
But I do get your point. You are absolutely right that go doesn’t hand hold you through that process and thus a thousand different people have a thousand different methods for doing it.
Well, using apt is one of the standard ways of installing/updating software on Linux, so most advanced Linux users should be familiar with that. But I'm sure you can also use a graphical package manager to do the same thing. "Rustup" on the other hand is a customized rust-specific tool, so of course it's easier to use (once Rust is installed)
The idea of everyone repeating themselves over and over again seems to be almost desirable amongst the community. The concept of DRY is essentially dead as far as Go is concerned.
Why exactly can the core team not write this tooling? Why should thousands of independent people do it all themselves to varying degrees? It's a laughable situation.
As far as I know, most new features are opt in - I’d be curious what issues you’ve seen? I recall one recent one that bit me in dependencies on private repos is the new mirror/sum where we need to `export GOPRIVATE=github.com/<us>`. In a way I’m thankful it was so simple, but while I figured out what happened I wasn’t very happy.
Go Modules are opt-in if you are trying to submit pr's? Either you use >1.13 or you don't submit the pr.
I hate to be the rust shill but updating it is "rustup update". I don't have to look anything up nor put in any effort, so why exactly should a common process which will happen millions of times be so hard for Go to replicate? It's absurd and should be called out.
Updating out of the box is a massive pain and I stand by that statement, it's hilarious how people get so defensive of what is absurdly poor UX for a modern language.
Docs for anyone interested. You need to uninstall the previous version before going through this process. Again to reiterate, all it takes in rust is "rustup update".
Go would be about right, had it been released around 1992, when we had Oberon, Oberon-2, Component Pascal, Modula-2+, Limbo making the university rounds.
As it is, I see it mostly as C replacement.
Yes, as C replacement, as projects like TamaGo, Android's GPU debugger, gVisor hypervisor, Go's bootstraped compiler, Fuchsia's TCP/IP stack, TinyGo for microcontrollers, show.
However for anyone that enjoys being able to take advantage of what happened since 1992 in mainstream computing, Go isn't it.
IDK. On paper I agree with you, but I'm just so darn productive with Go. C is great until I need to open a socket. Rust is awesome but slows me down. Dynamic languages need types for serious projects, and I can't be bothered to add a build step just for that. I use them all, but I only picked up Go relatively recently, and I've been very impressed.
Your opinions of Go are quite clear, given that every time there is a thread on HN about Go you relish telling everyone how backwards a language it is. Yet weirdly you’re not the only competent engineer out there.
I’ve been programming for 30+ years and used many of the languages you’ve described. In fact I’ve written code professionally in well over a dozen different languages (I stopped counting somewhere in the last decade). Yet I still find myself productive in Go. It’s not just the compiler speed either. You talk as if people are idiots or are delusional when actually Go is quite a nice language for a great many people with real world problems to solve.
I’m sure I’ll get downvoted for saying this but there is so much snobbery and elitism in these kinds of threads and it’s absurd. Live and let live guys.
Google has fuck all to do with my decision to use Go and nor has it ever been a consideration. In fact I’m about as decoupled from Google as one can reasonably expect to be.
I think you need to do some research into what “personal preference” actually means because you seem to think your own preference is gospel and everyone who disagrees is a magpie (which is just an indirect way of saying “idiot”). And frankly it’s getting tiresome watching you derail every software development thread.
Yeah. But your reply to pushpop just lost validity.
And you might at least consider whether the person who's wrong on the Internet might in fact be you.
And you might consider whether, even if you're right, you're also being counterproductive in how you're going about trying to persuade others. When someone raises a valid point against something you say, you don't acknowledge the point. Instead, you find something - anything - that you can quibble with in what they said. This makes you look like someone who wants to win the argument at any cost, rather than someone who wants to have a reasonable conversation. This makes some of us wonder how biased your interpretation of events is. That is, your hyper-argumentativeness makes you less persuasive, rather than more.
You too noticed pjmlp keeps regurgitating the same stuff about Go on every chance he gets?
Pretty combative and low effort content. Snarky replies with little respect to others opinions. Always posting jabs at Go about how 90's languages are better in every way yada yada. Do mind the tendency to veer into offtopic too.
Given the effort, I sometimes wonder how painful it must be for this person to see others enjoying and being productive in the language.
The "live and let live" point would be stronger if there was not a stream of people arguing that "go is so productive", as if other languages were adding wikipedia links in compiler logs to distract us.
Go is productive in specific ways. Other languages are productive in other specific ways. That’s why we have different programming languages solving different problems.
So if someone says they like Go because they’re productive in it that doesn’t mean all other languages are shit. That only means Go solves the specific kind of problems they need solving.
This is why I hate languages flame wars and why commenters like the GP exasperate me. They have different problems they need solving and talk about programming languages like there is a one size fits all. That’s not the case.
Are you advocating something, or just sad that Go is more popular than you think it deserves?
What languages do you think people should be using instead? My use case is mostly backend network services. My next two choices would be Rust (absolutely love it but slows me down and async isn't ready) or Clojure (would rather avoid a VM). Would you suggest others?
More disappointed with its design decisions, and how some believe that Go is special in some way, just because they don't bother to learn computing history.
Example, Go's compilation speed is nothing to wonder about, when one enjoyed languages like Modula-2 and Object Pascal compiling on 4 Mhz CPUs.
Fair enough. I agree it's sad how much gets lost from generation to generation. Unfortunately progress is not monotonically increasing, even in the internet age. But that doesn't mean Go isn't one of the best languages we have today, which has been my experience.
I've done lots of Java and C++. C# looks like a better Java to me, with a few really cool features like LINQ. But I used to be a MS hater so never tried it, and now with Go/Rust I see few reasons to use a VM language. Performance is fine for lots of my use cases, but deployment is strictly worse than compiled languages IMO.
I've never been willing to make the religious commitment to truly get good at C++. It's insanely complicated.
That's fine if you just prefer the language. I don't really agree that performance or deployments are any better in Go though, compared to those. That's where I was curious, I've only toyed with Go, but havn't really seen anything better on those angle.
I agree with you, but Go's lack of language features appeals to me. First off, I'm not a language geek. I code to solve problems, and not explore programming language advancements. That's not to say that more advanced languages can't be used to write simple solutions, I know they can. But, I'm just a blue-collar software engineer, and use modern Java (8+), Go, and SQL daily. They are fine and get the job done without much fuss.
I also use JavaScript+Babel daily, and it feels like a kitchen sink of language features with no holistic thought about language direction. Many times I think, that's cool, but did it really make the solution a) easier to write b) easier to read and/or c) easier to test/debug?
Back to the original article about tooling. Having used VB back in the day, c#, Java, etc..., Go's tooling is passable. I think it's the bare minimum to get by today. One thing that I love that Go pushed is gofmt. Having it be the default out the gate is great for teams.
So strange that nothing important is implemented in Oberon, Oberon-2, Component Pascal, Modula-2+, and Limbo. Implementers must be really dumb. Maybe they should read more papers.
Their designers weren't Google employees at the time.
Should we talk about the Plan 9 and Inferno commercial failures from the Go authors?
Interestingly enough, in your blindness to attack me, you just called Rob Pike dumb.
Oberon is a success, even if a minor one; it keeps being used as an OS systems language at ETHZ and many Russian universities, and brings food to the table for the Astrobe owners.
Well by that standard Go is putting food on a lot of developers' tables (including mine). And your constant berating of Go about some lack of "modern" features is like telling its users to be ashamed of themselves for not knowing any better.
Rust async isn't quite "seamless" (no async in traits) but getting there. Albeit if "seamless" is what you care about, you can always use threads - the overhead is negligible for typical workloads. That's what Go does under the hood; it just uses threads.
What I mean is that what you do in rust using async/await is done in Go by default for every function and function call. There is no distinction. _there are only async methods in go_ and that is what I want.
In other languages, like rust, there is a big ecosystem divide between libraries that are async and those which encourage you to use blocking calls and threading.
I've used it, and it's great! However, as I've written in a sibling comment, it's nowhere near as performant without using C libraries, which in turn makes the scheduler's job nastier (same in go if you use cgo).
I've used F#, it uses computational expressions for async functions. And functions are either async or not. Same for C#. Same for Scala, Haskell and others. Future[T] is so unergonomic and it creates a huge divide in the ecosystem. Been there, hate it.
I understand why Rust made the choices it made, they're going with zero cost abstractions, but for other kinds of languages async everything is fundamental to me after having used Go.
Erlang and Elixir are not fast, they use C implementations of functions for computationally heavy code, which of course blocks the scheduler for the duration of the call as far as I know and also can bring the Erlang VM down with exceptions. But yes, ergonomically wise they are what I want in this context.
Java Loom seems to be trying to add transparent asynchronicity, but it's not here yet.
In Go I write straightforward seemingly blocking code, and it just works.
Can you specify what you mean by seamless async everything and why it's significant? Are you talking about the execution model here? You mean having a message passing system like in Elixir that allows you to simply construct most of your program by writing callback functions?
Even in a language with no such system like Haskell when you have a web request it runs in its own green thread, allowing you to write mostly synchronous code. I'm not sure what the benefit of some kind of natively async execution would be here.
Ok, let's first look at Rust here. Rust has synchronous functions and asynchronous functions. Asynchronous functions return a Future which you can await in an asynchronous function or run in a future executor. An asynchronous function really gets transformed into a state machine with states in the various yield points (mainly awaits, or lower level primitives). Now if you make a blocking - synchronous - call somewhere deep inside then... Great, you've blocked an executor thread for the duration of the call.
Now let's look at F#, Scala, Java, C# (I didn't use Haskell asynchronicity, but I suppose it too has a Task/Future monad). Here you have the same problem as in Rust with calling a blocking method deep inside. Other than that, in the Scala that I've seen you end up having Future[T] instead of T everywhere. Which causes you to use monadic functions everywhere instead of using T directly. This is cumbersome and unnecessary in my opinion, because in most applications I'm working on you'll just make 90% of the code async + synchronous helper functions.
Now coming back to Go. There's no distinction between synchronous and asynchronous functions, because there are only asynchronous ones. That's also why Go code will look "blocking" for somebody used to the monadic approach. You don't have to await a function, there's a yield point implicitly inserted at every function call. There's no real blocking in Go. And I'm working on T's all the time, no Future[T]'s.
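Concretely, here's the kind of thing I mean (the URLs and helper function are invented for illustration): fetch reads like ordinary blocking code and returns a plain string rather than a Future[string], and fanning it out concurrently is just a `go` statement at the call site.

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // fetch looks like blocking code and returns a plain value; the runtime
    // parks the goroutine while the HTTP call waits, so others keep running.
    func fetch(url string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return string(body), err
    }

    func main() {
        urls := []string{"https://example.com/a", "https://example.com/b"}
        results := make(chan string, len(urls))
        for _, u := range urls {
            go func(u string) { // concurrency is a call-site decision, not part of the type
                body, err := fetch(u)
                if err != nil {
                    results <- u + ": " + err.Error()
                    return
                }
                results <- fmt.Sprintf("%s: %d bytes", u, len(body))
            }(u)
        }
        for range urls {
            fmt.Println(<-results)
        }
    }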
Sure, it gets hairy if you use cgo, but the scheduler can't reason about C code execution, so you'll block a worker thread for the duration of the call. And there's the proverb "cgo is not go" too.
I'm not meaning to say the other approaches are invalid and the Go one is clearly better. But in practice, in the kind of software I tend to write (microservices, storage systems, service meshes) the Future[T] is just unnecessary clutter which I don't need.
Also, regarding the Elixir parallel, Go doesn't handle everything as callback functions, you just write code comprised of imperative function calls like you would in C (you know what parts of "like you would in C" I mean I hope) and there's no blocking. I agree that Elixir is another example of a language I like by the execution model, but it's much slower than Go if you don't use C-backed libraries.
You also have the point of being able to spawn tons of goroutines with micro stacks because of how stack growing and moving has been implemented. The main point is that in Go everything is a first-class green thread.
Project Loom, as far as I've talked to my Scala-writing colleagues, seems to be aiming for the same in the JVM ecosystem. Making synchronously written code asynchronous by default. But it's not here yet, so no comparisons to be made.
EDIT: I may seem to be trying to show off with my knowledge of other languages, but people often make a point about developers being too incompetent to use something other than Go as the reason for Go's popularity. For them I'm trying to make a point, that I've been there, tried the approaches, and this really is the one that stuck with me in practice and which I like the most. To each their own.
Alright thanks for taking the time to detail it for me!
Most languages/runtimes indeed don't have this kind of execution model. It wouldn't often make sense for a general purpose programming language to function like this, but then again both Go and Elixir are specifically designed for the web.
You're right in that when there's no built-in design pattern for async stuff the language community finds various, perhaps conflicting, ways of doing it. In Haskell you could use threads or some Async library (that uses threads) to achieve concurrency, there are many different abstractions. But in a typical web API it's not common to be needing lots of async functionality, exactly because each request is already served in its own thread. It's ok to write synchronous code there as it does not block the runtime or any other thread.
In a typical web API you don't want each request to spawn a thread, at most a green thread. (at least if you have traffic that requires you to have more than one machine) As I understand the term "green threads", they are scheduled on standard "worker" threads. Synchronous functions will block green threads and this way block the underlying worker thread. If you have green threads, then you need asynchronous functions. (which in Go are just the default, so you spawn a goroutine - or more - per request, without thinking much about it)
And I do actually disagree about the general purpose language part. I think that only really performance oriented languages (like Rust) should go the way of sync/async distinction. Because there's hardly any loss in async-everything otherwise.
By threads I meant the corresponding runtime thread (not OS thread) on each platform. In Elixir they're called processes, in Haskell (green) threads.
Both Haskell and Elixir web frameworks spawn a new thread for each web request they receive. Those threads on both platforms are very lightweight. You end up writing mostly synchronous code on both platforms for handlers that perform the work for those web requests. In some typical smaller API I may not have even a single piece of async code (async statement/expression) on either of those platforms, it's all synchronous in terms of code. It's all thanks to the execution running in its own thread.
I don't understand what possible gain there would be to make everything implicitly asynchronous in this kind of scenario. Haskell already evaluates lazily, and on Elixir you can always just pass messages to other processes. In a typical web request you still need to fetch something from the DB, manipulate the data a bit and then return it. Synchronous code serving a single web request makes perfect sense to me, having everything asynchronous sounds like it'd just make everything more complicated for no reason at all.
Addition:
The point of lightweight runtime threads is exactly to allow concurrency without having to write asynchronous code.
Writing to closed channels never happened to me in practice. Sure, you have to write idiomatic go code but then it's a Non-Problem. Deadlocks are indeed a problem but at least there's the deadlock detector which crashes your app with a stacktrace which makes it a minor inconvenience in practice.
Being able to launch a goroutine for background housekeeping, like sending keepalives, in a structure constructor without thinking much about it is extremely liberating to me.
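Something like this hypothetical constructor is what I have in mind; the background keepalive is one `go` statement plus a channel to stop it (the Conn type and keepalive stub are made up for illustration).

    package main

    import "time"

    // Conn is a hypothetical connection type; the constructor starts its own
    // housekeeping goroutine and Close stops it.
    type Conn struct {
        done chan struct{}
    }

    func NewConn() *Conn {
        c := &Conn{done: make(chan struct{})}
        go func() { // background keepalive, started as part of construction
            ticker := time.NewTicker(30 * time.Second)
            defer ticker.Stop()
            for {
                select {
                case <-ticker.C:
                    c.sendKeepalive()
                case <-c.done:
                    return
                }
            }
        }()
        return c
    }

    func (c *Conn) sendKeepalive() { /* write a ping frame; elided */ }

    func (c *Conn) Close() { close(c.done) }

    func main() {
        c := NewConn()
        defer c.Close()
        time.Sleep(time.Second) // pretend to do real work
    }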
My point is that Go is not extraordinary in the language constructs it presents. It's extraordinary in its implementation, from-ground-up async and the family of problems it completely abstracts away.
I'm not saying you have to agree with me that Go is the best language for you to use. But suggesting it's a language which feature-wise is left behind in the 80's is plain dismissive and makes you seem uninformed to anybody who's written any nontrivial amount of it.
I've written code in most of the languages you've compared it against. Even explored heavy functional approaches. And even though they were interesting, often innovative, Go still leaves them behind in the dust in practice for me. Because it hides the stuff I don't care about and works well for creating real-life software which I later have to operate, extend and extinguish when it's burning in prod.
Edit: Just to add, implicit interface implementation is great for composing software.
I haven't encountered those problems in the Go I've written (though they are possible with any concurrent programming model powerful enough to allow for optimization). Perhaps it's just an experience / practice difference.
In general, if one is trying to do a concurrent read/write on maps, one is "holding it wrong." Access to map data should usually be channel-gated, with a goroutine serving as an accessor to the map data. Mutex-locked if you need it faster for some reason (though chans are already pretty fast).
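A minimal sketch of the channel-gated approach (the message types and key are invented for illustration): one goroutine owns the map, everyone else talks to it over channels.

    package main

    import "fmt"

    // get and set are the messages callers send to the owning goroutine.
    type get struct {
        key   string
        reply chan int
    }

    type set struct {
        key string
        val int
    }

    // owner is the only goroutine that ever touches the map, so there is
    // no locking and no concurrent map read/write panic.
    func owner(gets <-chan get, sets <-chan set) {
        m := make(map[string]int)
        for {
            select {
            case g := <-gets:
                g.reply <- m[g.key]
            case s := <-sets:
                m[s.key] = s.val
            }
        }
    }

    func main() {
        gets, sets := make(chan get), make(chan set)
        go owner(gets, sets)

        sets <- set{key: "hits", val: 1}

        reply := make(chan int)
        gets <- get{key: "hits", reply: reply}
        fmt.Println(<-reply) // 1
    }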
I'm not sure if you're talking about the literal `async` keyword in JavaScript or the general concept of asynchronous functions. I'll answer both questions.
Declaring a function `async` in JavaScript causes the function to return a Promise immediately instead of executing all the code within the function (and blocking until that execution is completed). A Promise can be thought of as a "placeholder" for data not available yet; something can later put data into it (which will trigger all code waiting on the promise to have data to be executed). If you call another async function within an async function, you're allowed to use the keyword "await" to block execution of the calling function until the called function either satisfies or fails the returned Promise, at which point the returned Promise result is used as the return value of the awaited function and the waiting function finishes execution.
More generally (speaking loosely), synchronous functions are evaluated to completion before the next line of code is run, and async functions return immediately but either queue up some work to be done later or kick off some work in a separate system (additional thread, additional processor, maybe the network card or graphics card) that happens at the same time your main program body is running. So, for example, "Math.pow(3,3)" is synchronous and your code won't continue until three-cubed is calculated, but "XMLHttpRequest.open" with its async parameter set (https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequ...) will return immediately and let more of your JavaScript code run while the browser itself communicates with the network software / hardware to make an HTTP request, fetch some data, and give it to your program; you can register a function to be run once that data is available.
ADDENDUM: WHY WE CARE SO MUCH
It's generally impossible (I'm lying, but it's a useful lie) for completely synchronous code to "deadlock," which is where there is no way for the next line of code to be run. With asynchronous code, deadlock happens when an asynchronous function is waiting on some result that will never come; the program cannot continue, and (if it blocks your user interface) your computer is frozen. Without some discipline of ordering of events or ownership of task responsibility, deadlock is very easy to cause with asynchronous code.
Trivial deadlock example:
thread A: start async thread B, wait for a '3' from B, then send '5' to B
thread B: wait for a '5' from A, then send '3' to A
These threads get stuck forever; A is waiting for B to give it a '3', and B is waiting for A to give it a '5'.
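In Go terms, that scenario is roughly the following sketch; the runtime even notices that every goroutine is blocked and aborts the program.

    package main

    func main() {
        aToB := make(chan int)
        bToA := make(chan int)

        go func() { // "thread B": wait for a 5 from A, then send a 3 back
            <-aToB
            bToA <- 3
        }()

        // "thread A" (main): wait for a 3 from B, then send a 5 to B
        <-bToA
        aToB <- 5
        // Neither side can make progress; the Go runtime notices that every
        // goroutine is blocked and aborts with
        // "fatal error: all goroutines are asleep - deadlock!"
    }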
"However for anyone that enjoys being able to take advantage of what happened since 1992 in mainstream computing, Go isn't it."
Which radically new programming language concepts have been invented since 1992? To my mind most new languages today seem like buffet plates of features picked from existing languages.
Not many, but it would help if Go at least took some ideas from CLU (1975) and ML (1973), which, while considered academic experiments in 1992, have by now had their ideas adopted by most mainstream languages, with the exception of Go.
You can't really cite Oberon as 1992 mainstream computing; already back then Wirth was rejecting a lot of the things that were happening -- and continues to do so. Heck, Oberon-07 eliminated multiple returns…
Both Wirth and Pike had the various PARC systems as inspiration. Sure, things like the BLIT or the Ceres tried to do such GUI things with way less hardware to throw at it, but their rejection of some concepts goes beyond mere RAM constraints. Or else they would've rectified that once powerful consumer hardware went mainstream. Wirth's opinions on "proper" design haven't changed that much.
I'm also not quite sure that the arc of the programming universe errs towards productivity improvement.
One thing that happened since 1992 in mainstream computing is multicore processors, and Go is one of the few broadly popular languages that attempts to handle concurrency in a sane way.
Go's tooling is very nice compared to most other dev technologies, and the source is quite hackable as well. I made a 1-commit modification to the compiler that adds a flag -warnunused to make it stop failing on unused imports or variables [1], and it was remarkably simple to do in the go src tree.
Plus, it's super easy for other people to use. Just check out the modified source, make.bash, and you're done in less than a minute of compilation!
I may not agree with some of their philosophies, but the tooling is nice!
Truly unfortunate. I don't write Go, but so often I comment out large swaths of code for debugging purposes and get warnings for unused variables and imports. If I couldn't simply ignore the warnings, I would not be a happy camper. It's funny to me that people call it a "hackers language", when it seems so restricting from afar.
Doesn't help when you remove chunks for testing purposes, but want to keep the imports around as you'll be adding them back immediately after and don't want to search for the right import again.
I think you missed the point of the comment - no matter if you're removing or adding imports, the tool (goimports) should do it for you automatically. It's easy to setup with any modern text editor (VSCode, emacs, etc)
What of when there are duplicate symbols with the same name? And this doesn’t help with unused locals. Also, another commenter pointed out that unused local functions don’t get flagged. Go authors probably realized that making people delete and restore entire function bodies during debugging was too much, but they could still convince people to be okay with deleting and restoring import statements and local variable definitions.
My experience with GoLand is that it manages to add back the right imports (notably with things like errors vs github.com/pkg/errors vs github.com/cockroachdb/errors, where they all share a package name and ~interface) but that might just be me getting lucky or not noticing changes.
You got lucky. And it still doesn't solve the problem of unused variables.
At the end of the day, it's a hair shirt. And no matter how many coping mechanisms people come up with, it's still a hair shirt.
Even the go authors had to put a limit on their madness. You'll notice that unused unexported functions don't cause compilation errors despite the fact that they, too, fall afoul of the original justification for this policy: https://golang.org/doc/faq#unused_variables_and_imports
"The presence of an unused variable may indicate a bug, while unused imports just slow down compilation, an effect that can become substantial as a program accumulates code and programmers over time. For these reasons, Go refuses to compile programs with unused variables or imports, trading short-term convenience for long-term build speed and program clarity."
But disallowing unused functions would have been a bridge too far, so we're left with this half-measure and half-reasoning that doesn't even make sense.
> But disallowing unused functions would have been a bridge too far, so we're left with this half-measure and half-reasoning that doesn't even make sense.
While it's a pain, for unused variables, if it comes up, just add the line `_ = myVar` right under the declared variable. Bingo, now it is used. Gross, but it works.
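For completeness, the usual escape hatches look roughly like this (toy example): the blank identifier quiets the unused-variable error, and a blank import keeps a package around while its uses are commented out.

    package main

    import (
        "fmt"
        _ "net/url" // blank import: keeps the package while its uses are commented out
    )

    func main() {
        myVar := 42
        _ = myVar // silences "declared but not used" while the real use is disabled
        // fmt.Println(url.Parse(fmt.Sprint(myVar))) // temporarily commented out for debugging
        fmt.Println("still compiles")
    }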
When I'm debugging something, I don't want compiling to fail just because I've commented out the only line that uses the net/url package. If I temporarily modify a function call to use a literal instead of a variable so I can test an assumption, I don't want it to refuse to compile just because that variable is now "unused". The cascade effect of this can get quite extensive and annoying.
Same goes for exploratory programming, where I'm testing out ideas rather than writing production code. I expect my tools to get out of my way rather than play nanny.
Goimports breaks when two packages have the same local name, can't handle import name overrides, and doesn't solve the variable problem. My solution fixes everything.
Alternatively just add usages at top of package that have no real effects. This is experimental code after all! Never had problems with this, but yes, it's a bit different.
So many silly workarounds being suggested here. You should be able to pass a flag to accomplish this, anything else is simply annoying and drives away new users.
You could have a flag, but we all know this is going to be abused. It seems devs have opted away from that, though it is understandably opinionated and requires some thought.
In what way would it be abused? "To use this library, you must enable warnings in your build"? Yeah, that's gonna fly. If your code won't compile with default options, nobody will want to use it.
Aren't these types of shenanigans SOP in many communities?
Not against options per se, but can see the arguments against implicitly hiding alternative behaviour. Many compilers fail to be simple, performant and provide the right incentives.
Again, never had problems adding experimental code (// TODO: Remove), bootstrap code, extra debug info, etc. I.e., why you should prefer refactoring.
But then if you forget to take those bits out, you now have unused things in your checked in code, which is exactly what the compiler was trying to prevent!
The build flags approach is more comprehensive and safer, because it simply won't compile when you build without the -warnunused flag (for example in your ci trigger off your git repo).
> If you already have a Go compiler on your system, there’s no reason to bother with binary releases. Just grab the source and build it. All can manage their own toolchain with ease!
No thanks, give me binaries any day. I want to spend time developing my own software, not fiddling around bootstrapping a dev environment.
I agree with you and I found this part of the article strange as well, since one of Go's strengths is that it routinely spits out self-contained binaries thanks to doing exclusively static linking.
Eh, Java and C# and I'm sure plenty of others make this pretty simple as well.
Go does seem to currently benefit from simplicity through lack of choice, as it seems like there's only one way to do things, which is nice. I assume that'll eventually change, as it's not something you can really protect against and more the burden of success.
The reason why header-only libraries exist is because C and C++ don't have an official package manager. That's it. Any other language that does can vendor just as easily. Similarly, pretty much every other language has package management so that's not a point in Go's favour.
From the article, we're left with compilation speed. That's actually important, but Go isn't the only language that has a reference compiler that runs fast!
Also error messages from the tooling are still from a previous era. Nowadays tools say not just what's wrong, but go an extra mile to explain why and how to fix it. Go tools still print "go: your command was bad and you should feel bad. exit".
As an additional plus, you can apple-click "builtin" functions from inside of VSCode and drill into core libs for debugging purposes (adding breakpoints, log statements, etc). I've done this on a number of occasions to quickly get the result from an external HTTP request, etc. Go truly feels like a hacker's language that lets you get under the hood really easily when necessary.
b(reak) [ ([filename:]lineno | function) [, condition] ]
Without argument, list all breaks.
With a line number argument, set a break at this line in the
current file. With a function name, set a break at the first
executable line of that function. If a second argument is
present, it is a string specifying an expression which must
evaluate to true before the breakpoint is honored.
The line number may be prefixed with a filename and a colon,
to specify a breakpoint in another file (probably one that
hasn't been loaded yet). The file is searched for on
sys.path; the .py suffix may be omitted.
Have you ever used smalltalk? The smalltalk development environment is so much better than being able to look at stdlib functions (which afaik everything allows? I don't think I've run into a language yet that didn't allow such introspection).
It can be difficult to drill into “native” stdlib routines from interpreted languages, e.g. it definitely doesn’t work in Python even using PyCharm; you get the skeleton stubs instead. I’d expect mruby to have similar issues.
There are definitely good things about googles "decentralized package management". But using a url for the import also causes problems. What if the url the code is hosted on needs to change? And then I also ran into a case where I wanted to fork a dependency of a project, so I could get a bug-fix that hadn't been merged into upstream yet. But the only way I could figure out how to get it working was not only to have to change all of the imports in the main project, but a bunch of imports in the dependency as well to point to the fork. This seems needlessly painful.
> I wanted to fork a dependency of a project, so I could get a bug-fix that hadn't been merged into upstream yet. [...] change all of the imports in the main project, but a bunch of imports in the dependency as well to point to the fork.
Isn't this solved with Go modules and the `replace` keyword in the go.mod file? It's meant to be used for exactly this purpose, and the wiki uses the fork example as well.
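Roughly, with invented module paths, the go.mod would look like:

    module example.com/myapp

    go 1.13

    require github.com/upstream/widget v1.2.3

    // point the build at a fork without touching any import paths
    replace github.com/upstream/widget => github.com/myfork/widget v1.2.4

    // or, for a local checkout while developing the fix:
    // replace github.com/upstream/widget => ../widget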
"decentralized package management" is a security dumpster fire.
I want well-known Linux distributions to review and vet libraries - as they do - so that I don't have to trust some random libraries from random author hosted on random VCS services.
The difference is that java’s “uris for import” is nothing more than a namespacing convention to avoid conflicts. A url in go is the actual place where `go get` will go retrieve the dependency, and the directory it’ll live at.
> [anything related to Go] is an undervalued technology
What exactly has he been reading? I've been working in Go for a while now, after years of reading almost nothing but praise for it - except for the error handling, which is far from the only thing wrong with the language/ecosystem, and even comments on that never fail to gather dozens of apologists.
There's nothing undervalued about it. This article itself is an example and it features little I haven't read dozens of times.
> Decentralized modules are great idea and avoid most of the issues of a centralized package repositories.
But centralized package repositories are essentially an attempt to solve the problems of the wild west approach from before (which Go's way is more or less a throwback to, that only works better now thanks to platforms like github). Maybe they're a good thing? Is everyone else wrong? Questions for later.
The part about building go is neat in comparison to other languages. I don't see the breath of fresh ideas in package management. People used to host code and directly import it before go was even released publicly. Go get was a thin wrapper over the traditional way of downloading code and putting it in your folders, if anything (wget). Ignoring how long it took for the go package management situation to resolve itself, I currently don't see any advantage go has over other languages in that area. Node package managers have allowed you to install packages from "decentralised" sources such as github or gitlab for a long time. node_modules is similar to vendor and, with nexe, you can package them all into a single executable. Cargo provides all of that with additional features that neither of them do.
Go developers butchered the community experiment in the form of dep. While it had many problems, those didn't require abandoning it out of nowhere, and this isn't the first time that has happened. The community has a knee-jerk reaction to anyone providing suggestions to change something or better it. Failure of go's type system (other than generics) is often met with suggestions of code generation and doing some whack-a-mole interface magic.
To any problems in the standard tooling, the answer is to fork it and go away. I agree but other communities don't behave that way.
People are separated into distinct camps when it comes to promises or expectations. One noticeable idea is simplicity: half the crowd believes in the notion of the syntax, keywords and type system having fewer "features" while simultaneously fighting for less ambiguity. The other half bases simplicity on getting graduates up and working quickly in a code base, at the cost of long-term complexity in abstractions. All of go's ideas make sense if you think from a large company's perspective.
You can notice how go would work well in a company with monorepos.
That said, I hate Go, but that's also why I like it. If I want to get something done, I choose Go. If I want to hack on something and appreciate the beauty of developing rather than efficiency or reaching an end product, I avoid Go like the plague.
I played around with Go for the first time. The package manager and the process of creating a new project were refreshing! It's so tidy and neat. Those are the first words that I thought of: Go is a very tidy and neat language. From syntax to publishing packages, the language has extreme levels of cleanliness.
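For anyone who hasn't tried it, the whole setup is a couple of commands (module path and names here are just examples):

    mkdir hello && cd hello
    go mod init example.com/hello   # writes a minimal go.mod
    # create main.go, then:
    go build ./...                  # resolves and downloads imports, compiles
    go test ./...                   # runs any _test.go files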
I don't really see how the package management in Go is better than in Java? In fact, it seems a bit inferior in some respects, though mostly similar-sounding overall.
Can someone more familiar with both explain?
I have to admit, the Go compiler is what I'm jealous of. No dependencies, not even glibc, and cross-compilation seems a breeze. SubstrateVM has a lot to catch up on there, too.
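For what it's worth, cross-compilation really is just a pair of environment variables; something like this (target and output name picked arbitrarily) works from any host OS:

    # produce a linux/arm64 binary on, say, a macOS or Windows machine
    GOOS=linux GOARCH=arm64 go build -o myapp .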
Python has the unittest module, which works well for basic unit testing; it's inspired by others (JUnit and, through it, the Smalltalk testing framework, as I recall). I'm not sure about benchmarks, though.
That is because unittest should be removed from the core libs. It is a bad copy of a Java library; pytest is far superior in every way. There is no reason to use unittest.
I find go doc, go fmt, go vet, gopls, etc. to be much more undervalued than the compiler itself. I use these supporting tools much more frequently than I invoke the compiler (yeah, they're related...), but TBH the huge selling point when I point people to Go is the developer support and ecosystem.
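A typical loop with those tools looks something like this (all standard invocations, nothing project-specific):

    gofmt -w .               # rewrite files in place with the canonical formatting
    go vet ./...             # flag suspicious constructs (printf mismatches, unreachable code, ...)
    go doc strings.Builder   # print the documentation for a symbol straight from source
    # gopls is the language server behind most editor integrations rather than
    # something you usually invoke by hand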
The article has three sections: building Go, package management, and vendoring. The first is irrelevant to almost everyone. The other two are terrible. If that's supposed to be singing Go's praises then I think it says a lot that wasn't intended.
Go's compiler toolchain is an overvalued monstrosity, a relic from the '70s that only Rob Pike could get away with resurrecting. Imagine walking into your day job and saying that, to solve today's problem, you're going to blow the dust off your old floppy disks and see what they've got in store.
It's like Tesla deciding that before they built a car, they'd have to pave a new highway from SF to LA. Someone's already done the paving, focus on what you bring to the table.
LLVM is clearly the future. Yes, it could be improved, but of course all the effort that went into resurrecting the geriatric Go compiler didn't go into improving LLVM. It went into re-implementing assemblers for various architectures as though that wasn't a solved problem 100 times over.
As for package management and vendoring, both are great, and should be table stakes, so credit where due on that one.
Besides, one of Go's explicit goals was to not be bound to any one implementation. The language has been carefully documented to ensure that anyone could reimplement it without having to understand nuances of a specific toolchain.
If you want to compile Go using LLVM, everything you need is there to make it happen. In fact, someone has already done a lot of the leg work for you[1]. And if GCC is more your thing[2]...
I think this is too facile. Go was conceived as suitable for systems programming tasks, like web browsers, databases, and operating systems. But Google's own web browsers, databases, and OSes do not use Go.
It was also conceived as suitable for web servers, crawlers, and indexers; there it has been successful inside and outside Google.
Why didn't it catch on for browsers, databases, OSes - while Rust did? I think it must be the GC primarily, but the lack of an advanced optimizing compiler is part of the story.
Rust was invented specifically to write a web browser, so that is a bit of an unfair comparison, but I would suggest that Go's lack of seamless inter-op with C is its biggest limiting factor in integrating into existing browser codebases as well as greenfield GUI applications. Additionally, Go's runtime is not well suited to OS development.
I doubt the lack of an advanced optimizing compiler ever crosses anyone's mind, as by the time that might become an issue they have already ruled out Go for its many other limitations.
They would have shipped Go years sooner if they hadn't wasted their time rebuilding all the basic plumbing of a compiler suite. Leveraging LLVM would have meant implementing a basic front-end, just as LLGo did. They'd have received instant access to an optimizing mid-end, optimizing back-ends, and all the architectures that LLVM supports today and will ever support in the future. Any optimizations they made to LLVM would have made all LLVM clients better off. Instead, they chose to buff up 9c.
Is LLVM bigger? Yep. Slower? Probably. Is that because it's some wildly inefficient monstrosity? Nope. It does a lot more, a lot more flexibly, and it does it a lot better than the Go toolchain. One day, when Pike & Co. get around to implementing all the optimizations that LLVM has, it'll be just as big and just as slow, mark my words.
Why? Your users are the ones with the weak computers; developers usually have the beefiest machines. Why optimize for the developer, who'll build it a few hundred times, rather than the end-user, who will run it continuously for days/weeks/months/years on end? Just disable the expensive passes in debug builds, the way Rust tiers things with "cargo check", "cargo build", and "cargo build --release".
I'm not sure that refutes my statement. All the post indicates is they didn't know LLVM. They also didn't try to learn LLVM, or hire someone who did know LLVM. I think that's what I'm suggesting.
Indeed but that doesn't make it a good decision, and it certainly doesn't make it "undervalued technology" -- and that's what we're discussing here, isn't it?
Well, I concur with the points from the article: while I don't like Golang as a language, I really like its tooling (BTW, I am really craving a more powerful language that compiles to Golang). And I especially like the focus in Golang on producing truly static binaries (they're not even linked with libc!).
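On the static-binary point: for a pure-Go program, disabling cgo is enough to get an executable with no libc dependency (binary name here is just an example):

    CGO_ENABLED=0 go build -o server .
    # on Linux, `ldd server` should then report "not a dynamic executable"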
And while Golang may not be the fastest language out there, it is fast enough for most use cases. I mean, Python is fast enough for most use cases, and Python is one or two orders of magnitude slower than Golang.
So yeah, I think their compiler decisions are fine. They may or may not be the best decisions; however, the article is not talking about the compiler part of Golang, so yeah, it can still talk about "undervalued technology".
Not rewriting all the assemblers, linkers, and optimization passes that decades' worth of PhDs poured into LLVM. It was a total and utter waste of time redoing all that. Any work that went into buffing up 9c is never going to make it back to the community, whereas anything that goes into LLVM makes the whole world a little bit faster, immediately. And they could have started there.
Ditto for gccgo: it is still a thing, but they had to have their own everything and make gccgo a secondary implementation that lags behind the mainstream one. NIH, maybe.
It's faster because it does less, and it doesn't do it as well. You can always make LLVM faster by reducing the kinds and number of passes performed for debug builds, as "cargo check" does, for instance.
Further, developers have much better machines than customers, and taking more time to release something smaller is almost always better.
> ...not depending on C++
I hate C++, but I couldn't care less that LLVM was written in it. I don't refuse to use MySQL because it happens to be written in C++. The Go compiler could still be written in Go and targeted at LLVM IR, just as the Rust compiler is written in Rust.
> ...and we don't need a compiler monoculture
And we don't have one either way. There's GCC and LLVM. A boutique one-off compiler is in no danger of toppling the GCC/LLVM status quo.