The new `runtime/debug.SetMemoryLimit` is actually a pretty big deal. It solves a lot of long-standing issues with the garbage collector not being very configurable, I believe including the one described by the viral "Go memory ballast" article [0].
Well, you are in luck. Java provides more than five hundred flags to configure runtime behavior. So there is no limit on things you can achieve with that.
That hasn’t been true at all for modern GCs. G1GC basically has only one knob you might change: the target pause time. That trades better throughput against lower latency (the two are mostly opposite ends of the same axis).
And your sarcasm aside, Java has the state-of-the-art GCs by a long shot.
2 knobs after a decade vs how many in Java? Yes, that is exactly what simple looks like.
After years of real world experience, I can say that the Go garbage collector worked fantastically across a range of applications that I've been involved in. No tool is perfect for every job, of course, and having hundreds of knobs does not make Java perfect for every job either.
I'd say it makes Java's GC worse at nearly every job as that means the defaults probably aren't what you want but are probably what you are going to use.
The Java problem is that you wind up in a blame game where one team blames the code and the other blames the GC settings and you wind up doing a lot of fruitless flailing. No knobs really cuts down on that nonsense.
In situations where you actually are allocating on the heap, Java’s default GC significantly outperforms Go’s. Just look at the binary-trees benchmarks.
Even that simplification is too much. Java uses a generational GC, so the fewer objects that survive the nursery, the less work it has to do.
That benchmark does not mean Java's GC is faster when you're "actually allocating on the heap". It means Java's GC is faster when you're spewing garbage onto the heap as fast as possible.
Since Go makes heavy use of stack allocation where possible, short-lived garbage usually doesn't end up on the heap, but this benchmark is designed to force that outcome. Go's garbage collector is optimized to minimize latency and STW, but this does come at the cost of decreased throughput. The opposite can be said of the majority of Java GC implementations, which trade latency and STW time for increased throughput.
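For context, the benchmark in question forces heap allocation by building pointer-linked trees; a rough sketch of the pattern (not the actual benchmark code):

```go
package main

import "fmt"

type node struct{ left, right *node }

// bottomUp allocates 2^(depth+1)-1 nodes. Because every node is
// reachable through pointers that outlive the call, escape analysis
// cannot keep them on the stack: each node is heap-allocated, which
// is exactly the workload the binary-trees benchmark stresses.
func bottomUp(depth int) *node {
	if depth == 0 {
		return &node{}
	}
	return &node{bottomUp(depth - 1), bottomUp(depth - 1)}
}

func count(n *node) int {
	if n == nil {
		return 0
	}
	return 1 + count(n.left) + count(n.right)
}

func main() {
	fmt.Println(count(bottomUp(10))) // 2^11 - 1 = 2047
}
```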
This depends on implementation. ZGC is not generational yet, but I don’t see how generational or not would invalidate a GC benchmark for the default implementation.
> That benchmark does not mean Java's GC is faster when you're "actually allocating on the heap".
It actually does at least in the discussion of _default gc implementations_. Just because the default (and only) implementation for Go sacrifices throughput (and relies on the compiler to stack allocate) doesn’t invalidate the benchmark.
And honestly if I was in Vegas I’d still bet that ZGC (Java’s non-generational, latency sensitive GC) would beat Go’s implementation here.
> This depends on implementation. ZGC is not generational yet, but I don’t see how generational or not would invalidate a GC benchmark for the default implementation.
I never said the benchmark was invalidated, I said that you interpreted the meaning of the benchmark wrong. I like how your reply completely ignored what I said the benchmark meant. That benchmark has a very narrow focus, and generational makes a huge difference for that one benchmark.
> And honestly if I was in Vegas I’d still bet that ZGC (Java’s non-generational, latency sensitive GC) would beat Go’s implementation here.
I would love to see a holistic set of benchmarks comparing them. Even just that one very narrow benchmark you linked would be fun to see — if ZGC is so good, surely one of the implementations of the benchmark on the website is using ZGC? But I haven’t had time to dig through them.
But, part of Go’s charm is that idiomatic code rarely puts tons of pressure on the GC anyways, and a GC can never be faster than stack allocation for a multitude of reasons… which is also why C# has value types.
But, if I understood correctly, that design requires you both to explicitly mark a type as a value type when you define it and to give up inheritance for it, so I’m skeptical that it will see any real adoption for a long time.
Java hasn’t had decades to develop a culture around this feature the way that C# has, and neither are like Go where everything is value typed unless it is one of the built in pointer types, or you explicitly put the value behind a pointer. (The pointers themselves are values, of course, but I think few people are interested in that distinction here.)
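To illustrate the Go side of that, a toy sketch: every struct is a value unless you explicitly take its address.

```go
package main

import "fmt"

type point struct{ x, y int }

func main() {
	p := point{1, 2}
	q := p // q is an independent copy: value semantics by default
	q.x = 10
	fmt.Println(p.x, q.x) // 1 10

	r := &p // explicit pointer; r aliases p (and p may move to the
	r.x = 5 // heap if the compiler can't prove it doesn't escape)
	fmt.Println(p.x) // 5
}
```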
Go isn’t perfect by a long shot, I just don’t think adding the millionth keyword is going to be a silver bullet for Java’s predisposition towards GC pressure.
Java’s standard lib has been planned with value types in mind for some time now, plenty of classes will be able to take advantage of them. Also, inheritance is not rampant in Java, at least not in classes that are so numerous that value types would help them.
Also, Java is an exceedingly small language, so the millionth keyword comment is unwarranted.
> Also, Java is an exceedingly small language, so the millionth keyword comment is unwarranted.
Small compared to what?
Java and C++ are extremely competitive in feature bloat, and I can hardly think of anything that is comparable to them. Even C# has a smaller surface area, in my opinion, even though it manages to implement a variety of useful features that Java currently lacks (like async/await and value types), but C# is still a large language — no doubt about it. C# just manages the interactions of its features better than Java, in my opinion, which makes it feel simpler.
I have honestly never heard anyone call Java a small language. It is far from that!
> Java’s standard lib has been planned with value types in mind for some time now, plenty of classes will be able to take advantage of them.
Aren’t the majority of the standard container types using some form of inheritance? That’s what I recall, and that rules them out.
What?? C++ is perhaps the biggest language I can think of, followed by Swift. C# is well on its way to joining them.
But Java? It only has classes, inheritance (no multiple inheritance, unlike C++), interfaces, objects which are instances of said classes, and 8 primitive types, and I am basically at the end of Java’s feature list. Lambdas are often hated because they were at first implemented as classes with a single method (they no longer compile to that), so in a way they are only syntactic sugar. Classes can have static methods as well, and there are 3 visibility modifiers (which are also a language feature of Go, just implicit in the naming convention).
> and I am basically at the end of java’s feature list.
Not even close. There are quite a few keywords that modify behavior, including abstract, final, transient, synchronized, etc. You also have method overloading, including constructors which use a separate syntax from normal method declaration. Java also relies heavily on non-linear control flow thanks to exception handling. Go has panics, but it isn’t normal to see them in my experience, and indicates a serious bug with your code, whereas Java exceptions are extremely normal to see… but that doesn’t mean non-linear control flow is simple.
All Java classes have implicit methods like toString and equals, which you need to be aware of, and then this also lets you override the behavior of the equals method… yet you can’t override any operator, for some reason. And the actual equals operator is not at all what the average person expects when they’re getting started in Java, since it is referential equality, not value equality, except when it isn’t. I guess it is “simple” that the language lets you override the equals method, since that is consistent with other methods, but why is it a method at all? Why doesn’t the equals operator just do the expected thing, which would be equivalent to calling an invisible, non-overridable equals method? If no operator should be overridable, then no operator should be overridable, and the equals method is effectively an operator since the regular equals operator is just a footgun most of the time, except for the cases where it is performing value equality checks. Java has other footguns that stem from the language, not the standard library, like hashCode. I could go on, but really, why bother?
If you only want to talk about the basic features of the language and ignore the many keywords and edge cases you can run into in the syntax, then even C++ is simple. It’s the interaction of all these features that makes the language complex, including the implicit nullability of almost everything.
> there are 3 visibility modifiers (which are also language feature of Go, just implicit in naming convention).
Java actually has four access modifiers, not three, one of them is just implicit.[0] Go only has the notion of public and private, and private in Go is probably closer to the "default" access modifier in Java.
Compare to actually smaller languages like Go, or really small languages like Lua, Lisp, Tcl, or Smalltalk.
Java isn’t the worst language by a long shot, but it is one I’m unlikely to intentionally pick for anything.
You’ve left a bunch of comments all around this thread defending Java against even the smallest slights. I’m not interested in debating pointless aspects of Java forever this morning. I’ve used Java. I’ve used C++. I’ve used many languages in many contexts. You may disagree with my conclusions, and that’s fine. In my experience, Go takes less code and is less error prone than Java, in addition to being nicer to deploy and run. That's enough for me to choose it over Java, even if Go isn't perfect either.
I do comment about Java when I feel it is needlessly bashed - like in this case, comparing its complexity to C++’s, which couldn’t be further from the truth. Sure, Brainfuck is also a smaller language than Java; it doesn’t mean the latter is at the other end of the complexity spectrum.
I have zero problem with anyone choosing their favorite language for some job, but do not make a language look better by spreading bullshit about another (GC knobs, the Java language being complex).
> That benchmark does not mean Java's GC is faster when you're "actually allocating on the heap"
The benchmark doesn’t show it, but it is nonetheless true. Java uses thread-local allocation buffers, which are possible due to the moving GC, and it is literally as fast as it gets (a single, non-atomic pointer bump).
Again, GCs just aren’t that simple. You seem to be taking a very literal interpretation of what was said, but that person was implying that Go is only fast when you’re avoiding the heap. They weren’t talking about the actual, literal speed of allocation. My interpretation of their comment is confirmed in their later reply to me, as far as I can tell.
Java’s GC design requires more barriers which hinder performance throughout the lifetime of a heap allocation, not just at the time the value was allocated.
Either way, Go also uses per-thread caches in its allocator to avoid contention between threads. I don’t believe it is a bump allocator, but this goes back to other tradeoffs being discussed. The number of barriers, the duration of STW, etc.
I don’t understand why so many people in here feel the need to declare the superiority of Java GC. Java’s garbage collectors are better for Java, due to the way that Java allocates practically everything on the heap. That doesn’t mean they have zero performance tradeoffs and that they’re perfect. They do have tradeoffs, and not just in complexity of tuning.
You clearly seem to be here to troll / just crap on Go while trying to appear fake-earnest, and that goes against HN guidelines. I believe this falls under "sneering".
I've actually also used Rust plenty in professional contexts over the last five years. Once again: no tool is perfect for every job.
As I recall, C#'s GC famously has very few knobs as well. I don't think you can swap the GC implementation, either, which is a source of even more complexity in the world of Java.
I had a look through some of your comments here and elsewhere and just wanted to give you some friendly feedback: This style of commenting is quite toxic. I think you'll feel happier if you let go of the tribalism and engage with others in good faith.
> The compiler now uses a jump table to implement large integer and string switch statements. Performance improvements for the switch statement vary but can be on the order of 20% faster. (GOARCH=amd64 and GOARCH=arm64 only)
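For a sense of what this applies to, a hypothetical sketch of a dense integer switch of the kind that can now be lowered to a jump table on amd64/arm64 (previously it was compiled as a binary search over the case values):

```go
package main

import "fmt"

// day is a dense switch over small contiguous integers, the shape
// the new jump-table lowering targets: the case value indexes
// directly into a table of branch targets instead of being found
// via repeated comparisons.
func day(n int) string {
	switch n {
	case 0:
		return "Sun"
	case 1:
		return "Mon"
	case 2:
		return "Tue"
	case 3:
		return "Wed"
	case 4:
		return "Thu"
	case 5:
		return "Fri"
	case 6:
		return "Sat"
	default:
		return "?"
	}
}

func main() {
	fmt.Println(day(3)) // Wed
}
```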
> optimization of switches would be different for different architectures. every x86, amd, intel, and then all the non x86s. they all have very different execution times and cache times and pipeline times. there is no single answer. there is usually a huge pipeline cost for a computed jump.
> when you are considering native-client restrictions, jump tables become impossible.
> also note that go switches are different than c switches. non-constant cases have to be tested individually no matter what.
> having said that, a large (where large depends on architecture) dense switch would be faster than what is implemented now. BUT there are two parameters needed to implement it. how large and how dense. no matter what is used for large and dense, there will be a micro-benchmark on some machine, under some alignment, with some branch cache that will look wrong.
> what is implemented is as follows.
> 1. in order, all non-constant cases are compiled and tested as if-elses.
> 2. groups of larger than 3 constant cases are binary divided and conquered.
> 3. 3 or fewer cases are compared linearly.
> i honestly dont think there is a much better algorithm across all machines.
Kind of annoying that the blog post links to pkg.go.dev but pkg.go.dev doesn't yet show 1.19 as the latest Go version so the links don't actually take you to the new documentation.
Conversely, as someone who has written a lot of code using LookPath I was shocked (like, actually gasped) to learn in these release notes that it ever did the old behavior, and now need to go check several projects for security issues.
Edit: checking the actual os/exec docs instead of the release notes, it’s actually not as bad as it sounds. What the change actually seems to be is that relative paths are now ignored even if they are in $PATH.
On Unix, it only did the old behavior if you explicitly had "." in your PATH (same as all other programs doing path lookup). The original (1979) default PATH included ".", but it's been considered a bad practice since the mid 1980s, and no modern Unixes include it in the default PATH. So for most Unix users, this is a no-op change.
But this is a change on Windows, where bad practices stick around because backward compatibility. (Lots of things would break if cmd.exe changed; but fortunately they fixed this in PowerShell.)
I am quite sure that until the early 2000s it was configured by default in many UNIX deployments, and I used many flavours of them, starting with Xenix in 1992.
For sure; by the late 2000s I was still seeing it, but it was fairly uncommon by then. When I said "considered a bad practice since the mid 1980s" I was specifically meaning since Grampp & Morris 1984[1]. It definitely continued to be common for decades after that. I remember lots of docs that told you to run "configure" where now they'd tell you to run "./configure", but the docs tended to lag behind the actual change to the default PATH.
We're especially excited for this release because we discovered a decade old inconsistency with Go's notation of ASCII/UTF-8 that's now fixed in Go 1.19.
1. Go uses UTF-8 encoding. UTF-8 is a superset of 7-bit, 128-code-point ASCII. (Two of UTF-8's creators, Rob Pike and Ken Thompson, are also two of Go's creators.)
2. All UTF-8 points over byte 128 must be encoded as two bytes. All code points 128 and below are encoded as a single byte.
3. Single byte code points between 129 and 256 are valid in Go, but not as UTF-8.
4. Go uses different notations to make the distinction between 129+ code points that are two bytes and 1-256 code points that are a single byte, notably the code points in the 129-256 range. Go represents single byte code points with the \x notation, "\xXX", and multi-byte code points with the \u notation, \uXXXX. All ASCII and single byte 129-256 codes should always be represented with "\x" notation.
The core of the issue is that Go was printing code point 128, \x7F, as \u007f. As previously said, the "\u" notation is reserved for valid UTF-8 points past the ASCII range, which are always encoded as a minimum of two bytes. \x7F is a valid single byte code point in the ASCII range and should not be printed in the "\u" notation.
>Quote and related functions now quote the rune U+007F as \x7f, not \u007f, for consistency with other ASCII values.
How did we find this? We're fans of arbitrary base conversion (https://convert.zamicol.com) and we discovered this issue while checking our work in our Go libraries. If encoded as two bytes, base conversion would be less useful.
That's code point 127, which has always been quite a special case as the only non-printable ASCII codepoint larger than the smallest printable codepoint.
> All ASCII and single byte [128]-256 codes should always be represented with "\x" notation... the "\u" notation is reserved for valid UTF-8 points past the ASCII range... all valid UTF-8 "\u" notation characters must be a minimum of two bytes.
Says who?
I mean I guess some consistency (though, with what?) is nice, but you should be plenty prepared for the other format too. `strconv.Quote` is only meant to be equivalent to Go's source string literal representation and that allows both, and JSON only allows \u, and Python's source literals also allow both, and and and....
> UTF-8 points... single byte code points... multi-byte code points
This is... at best, unnecessarily and fundamentally confusing language.
The playground example (https://go.dev/play/p/R9dm6uYZSas?v=goprev) is a great example. All single bytes, 0-255, print as the printable character, or as \x if non-printable, except for 127. There's nothing special about 127 to deserve this.
Inversely, \u denotes multibyte for all code points except 127. Once again, why is 127 special?
There are two possible fixes: note that 127 is special (even without a reason, but at least document it), or change the behavior to align with everything else. UTF-8 itself was a response in part to perceived arbitrary decisions made in other encodings; I'm not surprised that the second fix was preferred.
Our chief concern was how many bytes were used in encoding, and that's when we ran into this issue. If not fixed, our tests in our library had to notate why 127 is special (because Go says so), or hope for a change. Now that it's fixed, there's no need for downstream documentation.
It's a minor change, but now no one else ever has to spend the time we took to look into this issue because now there are no surprises. That makes it worth it.
>That's code point 127
How does that joke go? There's only two hard things in computer science...
Range selection is not a special case, and it's equivalent to writing it as `case r == 0x00, r == 0x01, r == 0x02 ... r == 0x7f`. The shorthand, `r < 0x20`, is far more readable. I would reiterate: the lack of the need for another literal `case` shows that logically this is not a special case.
In the sense of the abstract idea of ASCII, 0x7f is unique in its position, but so are all characters. There is no meaning in its positional placement in ASCII. It's totally arbitrary and was thought to be a useful convention. If position denoted other relevant, and unique, meaning to printing, then yes it could be a special case in certain circumstances. But its position has no additional information. And that's the key, no information means no special case.
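That argument is easy to demonstrate: once 0x7f is folded into the same case arm as the other control characters, no dedicated case remains (an illustrative sketch, not the actual strconv code):

```go
package main

import "fmt"

func classify(r rune) string {
	switch {
	case r < 0x20 || r == 0x7f:
		return "control" // DEL joins the other controls; no extra case
	case r < 0x80:
		return "printable ASCII"
	default:
		return "non-ASCII"
	}
}

func main() {
	fmt.Println(classify(0x7f)) // control
	fmt.Println(classify('A'))  // printable ASCII
	fmt.Println(classify('é'))  // non-ASCII
}
```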
> On Windows only, the mime package now ignores a registry entry recording that the extension .js should have MIME type text/plain.
As someone whose primary dev machine runs Windows, this is a very convenient improvement to a pitfall I ran into. I probably won't stop manually adding the JS mimetype for a while though.
I got no dog in this hunt but I've come to really despise tools that grab random stuff from the registry or the environment. There are a myriad of ways that bites developers that just want stuff to work.
I mean mapping of file extension to filetype is fine as an OS feature. On Unix that's /etc/mime.types (or /usr/share/misc/mime.types, depending on the system); on Windows that's the registry. What I despise are tools that vomit random stuff into your config, whether that's the registry or ~/.config/.
At first glance I don't see anything done about migrating the standard library to use generics. Is there any roadmap? I remember a cascade of related improvements when Java introduced generics.
The new generic atomic.Pointer[T] has been added[1]. More general improvements for slices and maps are being prototyped in the x/exp repos[2][3] to get it right and not get stuck with a sucky implementation in the stdlib.
I think they've mentioned that they expect that their generics spec will evolve, plus they're waiting to see what idioms emerge before refactoring the standard library.
For now there are some experimental things in the exp module. See the maps and slices packages there. But yes, my understanding is also that they kind of waiting to see how generics work out. They've promised to make no breaking changes to Go v1, so adding any functions to the standard library is not to be taken lightly.
I think almost everyone is waiting to see those idioms too (myself included). If I'm going to rewrite stuff, I want to be able to justify it (less LOC may be enough) and "do it right". Hopefully I get to see them in the stdlib soon!
Generics are a language-level feature, so it must be done at the language/compiler level, otherwise people are forced to implement it using "go generate" or other metaprogramming. The standard library doesn't ever have to change in that regard. If people want a generic version of some function, they can just write it themselves. It's possible/likely that more generic functions will be added to the standard library, but it's not a foregone conclusion that it needs to happen, or even should happen.
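For example, a generic helper is only a few lines of user code (the names are hypothetical, not stdlib API):

```go
package main

import "fmt"

// Map applies f to every element of xs. Nothing in the standard
// library is needed to write this since Go 1.18 introduced generics.
func Map[S, T any](xs []S, f func(S) T) []T {
	out := make([]T, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	fmt.Println(Map([]int{1, 2, 3}, func(n int) int { return n * n })) // [1 4 9]
}
```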
> Doc comments now support links, lists, and clearer heading syntax. This change helps users write clearer, more navigable doc comments, especially in packages with large APIs. As part of this change gofmt now reformats doc comments to apply a standard formatting to uses of these features.
A lot of exciting new features, but really happy to see this one.
I see what you mean, but I was looking at (and responding to) the 15 comments which that account posted to this thread, which were clearly provoking and perpetuating a flamewar.
Other than my comment on "simple" I don't see what you mean. I said in another comment I think that being able to watch newer languages evolve is interesting, and this stands out as exactly one of the more interesting tradeoffs of a new toolchain.
The "simple" bit is just me poking fun at Go though, so yeah sure detach away.
I think it has to do with posting a high volume of low-information comments on provocative points.
For example, "I was told that garbage collection knobs were an antifeature" (https://news.ycombinator.com/item?id=32323019) could easily be mistaken for trolling even if you didn't mean it that way, and indeed it derailed the thread.
As I said earlier, I was responding to the total set of 15 or so comments you posted. That's a lot of comments, especially low-quality ones. We're trying for quality rather than quantity here, so if you could post in the intended spirit in the future, we'd be grateful.
Time will tell. We have 11 years of Go to look at so I guess your argument is that we'll start seeing rapid adoption of optimizations? So 11 years to hit the upward curve of the S, if we see that.
It was too bad they had to do it. Being able to use, say, LLVM's compiler tech would have had its advantages. But they wanted a good developer UX, which LLVM just doesn't provide.
That's a kind of gaslighting, isn't it? You seem to be saying no one should ever dare to write a new compiler, because v1 of a new compiler will very likely be worse than v10 of existing compiler technology.
If they had simply used LLVM, they would have been stuck with the slow compilation that Rust folks are very fond of complaining about. And they would not have been able to use segmented stacks, which were a differentiating factor for the language.
Go doesn’t use segmented stacks anymore, but it does still have resizable stacks that start small and are technically able to grow to any size.
IIRC, they were artificially capped at 1GB a few years ago since it was decided that stacks larger than that usually indicate an obvious bug, and if infinite recursion causes a goroutine stack to consume all memory on a machine, the OOM killer would make this harder to diagnose than a reproducible panic with a backtrace.
I'm hoping atomic.Pointer's performance gain offsets the dependency of ours that rushed to switch to some generic slice function library and basically doubled memory allocations on a near-critical path.
We didn't pay for it, so I'm not going to drag anyone personally for something that's a cultural issue. If you're worried because you might use it, you'll see it in your benchmarks just like we did. If you don't have benchmarks, it's the least of your performance concerns.
Why do you need to be so combative? I like to learn from others' mistakes if I can. If you don't want anyone to learn anything from your issue, and force everyone to figure out everything for themselves, that's your choice.
They've made it clear - they don't want to denigrate a library they're using, likely for free. Imagine the author of that library seeing such a comment. I'm sure you'd find it educational, but it's a terrible experience for the author.
If O(3) people want to design a new language, how should they incorporate the "optimizations that other languages have had for 30 years" into that language that will satisfy the internet's peanut gallery?
(alternatively)
Go is the most successful language released in the last 20 to 30 years, so if this is meant as a critique I can't see how it's valid.
> Leverage an already existing compiler backend, typically GCC or LLVM – just like Julia, Rust, or Swift.
>
> Or, similarly, target an existing, battle-tested VM – like e.g. Elixir, Kotlin, Clojure, or F#.
Each of these options comes with enormous, unavoidable, and ultimately unnecessary baggage derived from their substantial histories. They simply don't, can't, represent the full spectrum of acceptable foundations for new language development.
> GCC and LLVM both have terrible UX with their ridiculous compilation times.
And so will Go if it implements the optimizations they implement. GCC and LLVM are not putting sleep(10) everywhere just for the sake of longer compilation times.
Then it won't do those optimizations. Fast compile times are more important than being the fastest at runtime. It has to be "fast enough", and given the adoption I'd say the consensus is that it already is. More optimizations are all gravy at this point.
I don't. I avoid Go like the plague. My point was that you don't get to unilaterally decide for everybody whether fast compile times or runtime performance is more important.
> If O(3) people want to design a new language, how should they incorporate the "optimizations that other languages have had for 30 years" into that language that will satisfy the internet's peanut gallery?
Generally this is done by building the language on an existing compiler backend, and the ability to do this is in fact why the backend/frontend distinction in compilers was created to begin with.
> Go is the most successful language released in the last 20 to 30 years, so if this is meant as a critique I can't see how it's valid.
False dichotomy. A language can both be popular and have left very obvious optimizations on the table for years.
That compiler suite was very portable ("The compilers are relatively portable, requiring but a couple of weeks’ work to produce a compiler for a different computer."), a strength Go inherited from the get-go.
Also, 2 of the 3 Go designers (Pike, Thompson) were intimately familiar with that code base by virtue of having written it in the first place.
Some people just get twisted up over Go not picking LLVM for its better optimizations, as if in engineering you look at a single positive and ignore all the negatives. For LLVM the list of negatives is huge: slow compilation, a gigantic code base that would take ages to learn and contribute changes to, and an architecture that would preclude parts of Go's design (Go has a very smart and fast linker, whereas using LLVM would have forced them to use the LLVM linker; segmented stacks would not be possible; the GC would not be possible), etc.
Just to put things in perspective: implementing segmented stacks alone in LLVM would probably have required more man-hours than all the compiler work in Go 1. And it would realistically have required forking LLVM, because why would Apple devs, with bonuses tied to shipping what Apple needed, care about some new language from 3 guys at Google?
Can you explain why you think the claim is incorrect? I would think it is at least plausible. A huge portion of the languages in common use are all more than 20 years past their introduction, so Go is almost certain to be near the top among languages introduced after 2002. Or at least I would think so.
Even if you had said only the last 15 years, that would still include languages like Clojure, Rust, Typescript, Swift, and Julia. But from 20 to 30 years you start to get into the league of: C# and Java, R, Lua, PHP, Ruby... In other words, there is no way you can prove your assertion.
Just to note, I didn’t make the original assertion about the success of Go. I was only asking why your dismissal of the claim was so emphatic. I just stated that I found the assertion plausible where languages from the last 20 years were concerned and wondered why you were so certain Go could not be at the top.
Python was released Feb. 20, 1991. I checked that before I posted. And I am not the original poster and did limit my comment to 20 years. But, I did forget Java in the 30 year category.
I think these debates boil down to people just talking past each other about things that don't matter. "Blue is the most widely used color in company logos in the last 30 years" "Yeah well I like red better" Yeah, that's fine.
You’re probably right, I was thinking about amount of code in actual production use and limiting to languages from after 2002, but some people clearly had different things in mind.
Using existing compiler toolchains, obviously. In 11 years I think it's interesting to see how that has panned out. Go is very popular for a relatively new languages, yes.
Does anyone think that C and C++ are going to die?
Anyway, I'm actually glad that Go went down this path exactly because it allows us to look at the decision and see the costs. I'm very curious to see if we do hit that S curve, as another poster mentioned.
Depends on where one stands with regard to secure code.
I love C++, but secure like the ALGOL lineage it will never be, no matter how many fixes we throw at it, because that C copy-paste compatibility layer is never going to be thrown away.
I'm quite fond of secure code, but attackers don't really exploit memory unsafety in compilers, as they're rarely in a position to; and if they are, there are much simpler ways to execute code.
Except compilers aren't used in isolation: if one writes the compiler toolchain in C++, chances are other system-critical components are also written in C++, if nothing else for the comfort of single-language use.
Security must be applied end to end, not at isolated points.
Seems to be significantly more equitable than the React license that people were originally concerned about. That one terminated your rights to use React if you sued Facebook for patent infringement for any reason. This one only terminates your rights to use Go if you assert that Go itself infringes on your patents.
That also sounds pretty reasonable. You don't get to patent something, contribute code that uses the patent to Go, and then sue people using Go for violating the patent.
I like Go and I can program it very well. But for full-stack web dev, Rails is still my go-to choice. Less headaches. I don’t need to set up everything by myself (sql layer, middlewares, background jobs, validation patterns, i18n, authentication, authorization, security concerns, caching, mailing, migrations and a million other things).
There’s just nothing comparable to what Rails offers from a developer-productivity perspective.
Would I write a command line tool in Go? For sure.
After using Rails for 15 years, I'm looking for a future on Go. The niche where Rails is great is larger apps that don't need anything custom or deviating from the built-in Rails Way.
Simpler apps or services are simple in Go. Apps with heavy customization are also great with Go, and there's now plenty of libraries to help you outsource heavy lifting.
[0]: https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...