"If you look closely, those aren't angle brackets, they're characters from the Canadian Aboriginal Syllabics block, which are allowed in Go identifiers. From Go's perspective, that's just one long identifier."
Simultaneously amusing and disturbing. Is there an award for which one might nominate this person?
That's the kind of thing you see in a comment and then back slowly out of the room, never breaking eye contact with the code until you are far enough away to turn tail and run.
At the very least they could have used the cute Japanese 「quotation marks」 to avoid confusion.
This is the person you need. Be assured that they were the person willing to log into the live prod server to fix that script that was killing your startup's app at 2am on a Friday night in 2007, when everyone else was too gun-shy.
> A valid identifier must begin with a non-digit character (Latin letter, underscore, or Unicode character of class XID_Start) and may contain non-digit characters, digits, and Unicode characters of class XID_Continue in non-initial positions. Identifiers are case-sensitive (lowercase and uppercase letters are distinct), and every character is significant. Every identifier must conform to Normalization Form C.
I don't know about the standard, but both GCC and Clang accept ZWSP characters in the middle of variable names.
Clang logs a warning about potentially invisible characters every time you use them. g++ just compiles the code without warning, even with -Wall and -Wpedantic.
What clang _doesn't_ warn for, is the use of the left-to-right override character. This can be used to confuse the victims of your code even more.
I suppose there are many examples like this. Once I copied a piece of code from SO, but IntelliJ IDEA warned me that the semicolon at the end of the line was not correct.
Upon further investigation (I think IntelliJ IDEA reported the Unicode value), I found out that the 'semicolon' I copied was a Greek character (the Greek question mark, U+037E) that looks exactly like a semicolon.
I wonder how people on vim/emacs deal with situations like this.
Yes, but IntelliJ IDEA's linter showed me the error. Without a linter you'd need the compiler/interpreter to tell you something is not right.
And in my case, even if the compiler/interpreter reported invalid syntax, I would not have thought to check the Unicode codepoint, as the character appeared to look exactly like a semicolon.
vim user here. Most of us are running the same linters and language servers as every other major editor, which does a pretty good job of catching stuff like this. Modern vim is really just a C core with a web of JS, Python, Lua, and vimscript using the exact same third-party solutions as every other part of an "IDE". emacs is probably similar, but I have no personal experience with it.
Fixed in C++23 by adoption of P1949R7 [0], which didn't originally set out to fix that, but seems to have after a Netherlands national body ballot comment.
I'm actually curious: if Go normalized the characters, and if it had ended up using angle brackets for generics, would it treat those lookalikes as equivalent to angle brackets? Would he then not even have to change his code?
I guess that's a lot of ifs/ands, but it's interesting that one could have future-proofed their code if their bet on syntax paid off.
They're semantically different characters and Go supports multilingual identifiers, so I would not expect normalization to impact this. At most I would expect normalization to simply deal with ordering combining marks and standardizing on composed or decomposed forms of characters (where applicable.)
It might be useful for editors and compilers to check for tricky Unicode (lookalikes for common characters, atypical changes in direction, invisible spaces or other formatting control codes, etc.) in this era of copy/paste coding.
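A minimal sketch of such a check; the "suspicious" set here is a tiny made-up sample, not a real confusables database:

    package main

    import "fmt"

    var suspicious = map[rune]string{
        '\u037E': "Greek question mark (looks like ';')",
        '\u200B': "zero-width space",
        '\u202E': "right-to-left override",
        '\u1438': "Canadian syllabics PA (looks like '<')",
        '\u1433': "Canadian syllabics PO (looks like '>')",
    }

    // flagTrickyRunes reports any rune from the suspicious set,
    // along with its byte offset in the source string.
    func flagTrickyRunes(src string) {
        for i, r := range src {
            if desc, ok := suspicious[r]; ok {
                fmt.Printf("byte %d: U+%04X %s\n", i, r, desc)
            }
        }
    }

    func main() {
        flagTrickyRunes("x := aᐸbᐳ // looks like a<b>")
    }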
Codegen is a super flexible and versatile tool, and simply having a strong generics feature does not mean you can avoid it: at $DAYJOB we use codegen even for C++, e.g. for grpc/openapi types and for generating rapidjson streaming unmarshallers, not even counting Qt's moc/uic.
No, I do believe Go will be improved if it allows library maintainers to implement some things like new collection types and utilities using generics.
I for one won't be using more advanced functional coding though, it's a lot more difficult to read / parse and the language is not designed with functional programming in mind. I'm sure people are already working on a functional programming library so you can do map / reduce / filter and the like, but readability and performance will be dreadful.
Strongly agree. Some people like to be on this endless treadmill of cutting edge features, like in this case parametric polymorphism but I refuse to be dragged in. They just make the code more complex with no benefit.
Next thing you know they will try to force me into structured programming (I'm sure there are people already working on do...while). They can pry goto from my cold dead hands.
I've come to prefer code quantity over code complexity. I mean, what the fuck is
def traverseImpl[F[_], A, B](fa: Option[A])(f: A => F[B])(implicit F: Applicative[F]) =
fa map (a => F.map(f(a))(Some(_): Option[B])) getOrElse F.point(None)
This is probably less readable than it has to be due to the lack of whitespace. I don't know the language (Scala?), but I don't think it's too hard to work it out. First, the '[F[_], A, B]' part is the type parameters, where I assume 'F[_]' is just notation for a type of kind '* -> *' (i.e. a higher-kinded type). The next value parameter has type 'Option[A]', and after that we have 'A => F[B]'. This confirms my suspicion that 'F' is a higher-kinded type - and probably a functor or something. The 'f' parameter is a function for converting a value of type 'A' into a value of type 'F[B]', which looks like it's transforming a value and putting it into a functor or something. The final '(implicit F: Applicative[F])' hints at deep problems in the language, but I think it basically just says something like 'the higher-kinded type F must implement the trait/typeclass/whatever Applicative'.
That's the type. The expression makes little sense to me, but I suspect you removed a bunch of dots and maybe some parens, and it should actually look like this:
fa.map (a => F.map(f(a))(Some(_): Option[B])).getOrElse(F.point(None))
So this function essentially just operates on an option value inside of some functorial context. This seems like a building block function that you might use often, but probably not write very frequently.
(If you didn't remove anything, then I suppose this language has infix 'map' and 'getOrElse' operators. I think that's bad, but it's not related to the type system.)
Scala has infix everything 'operators': f(x) can be written as f x, and f.g(x) can be written as f g x, if the parser manages to figure out what it means (I'm sure they phrase that differently).
The idea, I think, is to make the REPL look more like a CLI, but it may also be a fight against punctuation.
It's only hard to read because you don't know the language; otherwise, it's fairly straightforward, although the Applicative type would require some research. I would much rather use a language like this, which can create enough abstraction to enforce consistency throughout the codebase, than rely on boilerplate conventions. Just had a bug this week in Go where a previously defined "err" variable was re-used instead of creating yet another "err" variable, which caused the actual error never to get checked.
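A hedged sketch of that class of bug (fetch/transform are hypothetical stand-ins):

    package main

    import (
        "errors"
        "fmt"
    )

    // Hypothetical helpers, just for illustration.
    func fetch() (string, error)             { return "data", nil }
    func transform(s string) (string, error) { return "", errors.New("transform failed") }

    func process() (string, error) {
        data, err := fetch()
        if err != nil {
            return "", err
        }
        result, err := transform(data) // err is re-used here...
        // ...but the corresponding check was forgotten, so the failure
        // is silently dropped, and this still compiles without complaint.
        return result, nil
    }

    func main() {
        out, err := process()
        fmt.Printf("%q %v\n", out, err) // prints "" <nil>: the error vanished
    }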
This function is certainly hairy, but its use in the codebase might very well be intuitive and simplify code that would otherwise be way less expressive. I personally think it can be worth it.
What is funny about it? People have worked around deficiencies of systems since forever, in almost every field. Is it something you have never seen in real life?
C++ and Java also used codegen before they got templates and generics respectively, but since Go's mantra is to relearn history, they had to have a go at it.
Isn't the "noise problem" with Go error handling the control flow rather than the lack of generics? You want to return early (and potentially add context to the error) if there was an error, otherwise continue on. How are you thinking of solving this with generics?
The previous try() proposal added new control flow. That's part of the reason it was criticized so heavily -- it hid a control flow construct in something that looked like a function.
"Isn't the "noise problem" with Go error handling the control flow rather than the lack of generics?"
It's both. Adding only concise flow control, such as with the ? operator, works poorly if all you can return is multiple non-generic values; various common permutations get awkward in that case. And as you point out Result<> doesn't achieve much without the concise operators. Rust has very successfully shown the benefit of the combination.
I wrote a comment here pushing back somewhat on the idea that Rust has much better error handling ergonomics (not correctness, where it clearly wins, but ergonomics) but thought better of it; debating Go vs. Rust is about the least productive thing we can do on HN.
I like what Chris Lattner said about language design on the Lex Fridman podcast: you know when you're doing something right because it "feels" right. Rust has a bunch of syntactic sugar of the kind that gets criticized in many languages, but to me things like error returns with ? "feel" right. I love Go as well, and its error handling makes a lot of sense in concept, but it just doesn't "feel" nice to look at or use.
I like that there's a language with such a strong vision, but dealing in absolutes has drawbacks; a little syntactic sugar would make error handling a lot nicer.
Eh, having used both, Rust's approach isn't any better. When you need to decorate an error, which is a lot of the time, Rust has the same ergonomics as Golang. They optimized for a generally rare situation.
From what I know of zig’s error handling, I really don’t think it’s better than result types.
It is less work, especially when using error unions (though I think that makes it easier to miss changes to error sets as they’re implicit), but since errors are literally just u16 they’re also deeply lacking in richness and composability.
It’s definitely an improvement over C’s error style, and thus perfectly understandable given zig’s goal, but I very much disagree that it’s “better than Result”.
It’s also extremely magical in that it needs special language-level support throughout in ways Result is not, but obviously that’s more of a matter of taste.
> It’s also extremely magical in that it needs special language-level support throughout in ways Result is not, but obviously that’s more of a matter of taste.
That's a peculiar observation; it's like saying Rust's borrow checker is magical because it needs language-level support. I mean, that's the point.
In any case re: zig error or rust's Result, I don't think either of us are discussing features and pros/cons, but just describing what we like the best. Not very objective an argument.
> but since errors are literally just u16 they’re also deeply lacking in richness and composability
> That's a peculiar observation, it's like saying Rust's borrow checker is magical because it needs language-level support. I mean, that's the point.
You can’t do borrow checking without language support (I think, at least not without a significantly more expressive type system) whereas Result is pretty much just a type, the features it uses are mostly non-exclusive, and those which are, are very much intended not to remain so (e.g. the `Try` trait for the `?` operator).
Not really? That’s just merging error sets, but you can’t say that you’ve got an error which originally comes from an other error, except by creating its dual and documenting it as such. Essentially your choices are full transparency or full type erasure.
> Not really? That’s just merging error sets, but you can’t say that you’ve got an error which originally comes from an other error, except by creating its dual and documenting it as such. Essentially your choices are full transparency or full type erasure.
What you just described are error traces. They're generated by Zig automatically for you, thanks also to the fact that errors are a special construct in the language.
Sum types are a powerful language feature, but they’re also a general language feature, they don’t exist for the sole purpose of creating a Result type (and indeed a number of languages have had the former and lacked the latter).
No, I mean that in contrast to the other side of the discussion (zig’s errors).
And affine types are not here for borrow checking, it’s closer to the opposite. Affine types are a formalisation of RAII, borrow checking is a way to make references memory-safe without runtime overhead which mostly avoids unnecessary (and sometimes impossible requiring allocations & refcounting) copies of affine types.
To clarify, I think parent means syntax error in the sense of "I'm parsing a (code) file at runtime and there's a syntax error", not a language syntax error.
I mean there's nothing stopping you in zig from creating a rich "error" struct and returning a union of the struct with whatever you would have done otherwise. You only lose the error return trace.
"I mean there's nothing stopping you in zig from creating a rich "error" struct"
Thus Rust's std::result::Result<T, E>, where the E in Err(E) is anything that implements the std::error::Error trait.
Rust got this right. It satisfies most use cases with the least possible noise and accommodates the weird ones easily, and does it in a std:: codified manner where no one has to wonder whether or not there is anything stopping them.
Have you seen what a "rich error struct union" looks like in zig? It's very easy and very legible code. I'd further bet that there are no cases when you're doing this that you want an error return trace. And the downstream handling code is very easy.
If there is a problem with how it's done in zig, it's a community one: maybe the pattern needs a megaphone beside it, like examples in the main docs or tuts in the various zig tut sites that have cropped up.
"Have you seen what a "rich error struct union" looks like in zig?"
No, and I'm not arguing that zig hasn't done well. I do argue that Go has not; your only non-weird choice is to fetter code with miles of "if err ==/!=" gymnastics.
Traces are frequently very useful; precise traces have saved me a lot of pain in life. Lack of traces precludes nothing for me, but I have enjoyed the benefit of them enough times to acknowledge their great value.
Traces are fantastic for unexpected errors. You don't need them, for example, when validating user input, which is the type of system where you want richer errors.
Early return is definitely a part of the problem. One thing you could do with generics is create option/result wrappers that would work with any code and hack together a try/early return solution using panic/recover, but you'd offend the Go community, and at that point you might as well just use Rust.
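For the curious, a minimal sketch of that (community-frowned-upon) panic/recover hack; the names try, handle, and parse are made up:

    package main

    import (
        "errors"
        "fmt"
    )

    type wrappedErr struct{ err error }

    // try unwraps a (value, error) pair, panicking on error.
    func try[T any](v T, err error) T {
        if err != nil {
            panic(wrappedErr{err})
        }
        return v
    }

    // handle converts the panic back into an ordinary returned error.
    func handle(errp *error) {
        if r := recover(); r != nil {
            if we, ok := r.(wrappedErr); ok {
                *errp = we.err
                return
            }
            panic(r) // not ours, re-raise
        }
    }

    func parse(s string) (int, error) {
        if s == "" {
            return 0, errors.New("empty input")
        }
        return len(s), nil
    }

    func run() (n int, err error) {
        defer handle(&err)
        n = try(parse("hello")) // ok
        n += try(parse(""))     // fails: control jumps to handle via panic
        return n, nil
    }

    func main() {
        fmt.Println(run()) // 5 empty input
    }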
Was the try() proposal for Go error handling?
I'm really curious what happened to the proposals for shorter Go error handling, rather than if err!=nil {...}
Generics took several iterations from the Go team to finally land on the current (pretty good, imo) design. I might expect error handling to also require multiple tries before it sticks. And we should probably wait for the dust to settle on generics a bit before proposing another design for error handling.
I don't see how it changes error handling. To implement a generic error handling function you would need a non-local return statement (panicing is not error handling). And a generic monadic Result type would make error handling more verbose due to its reliance on closures for subsequent actions.
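To illustrate the verbosity argument, a hedged sketch of a hand-rolled Result type (not a real library). Note that because Go methods can't declare their own type parameters, the chaining helper has to be a free function, and every step is a closure:

    package main

    import (
        "fmt"
        "strconv"
    )

    type Result[T any] struct {
        val T
        err error
    }

    func Ok[T any](v T) Result[T]         { return Result[T]{val: v} }
    func Fail[T any](err error) Result[T] { return Result[T]{err: err} }

    // Then chains a computation; a step that changes the value type
    // cannot be a method, so it lives here as a free function.
    func Then[T, U any](r Result[T], f func(T) Result[U]) Result[U] {
        if r.err != nil {
            return Fail[U](r.err)
        }
        return f(r.val)
    }

    func parseInt(s string) Result[int] {
        n, err := strconv.Atoi(s)
        if err != nil {
            return Fail[int](err)
        }
        return Ok(n)
    }

    func main() {
        // No early return for the caller; the chain just grows inward.
        doubled := Then(parseInt("21"), func(n int) Result[int] {
            return Ok(n * 2)
        })
        fmt.Println(doubled.val, doubled.err) // 42 <nil>
    }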
Does anyone know the impetus for using [] in generics? I know Nim opted for the same syntax and I assume it has some parsing/tokenizing benefit like the switch to using “fn” a la Zig or Rust. However, the syntax seems needlessly ambiguous with array notation sharing the same operator. I know I’m not alone because I’ve seen others mirror my concerns on Nim forums [0].
The original implementation used ( .. ), which I found an unreadable mess of parentheses soup, but luckily that was changed. The problem with < .. > is that it's ambiguous with the greater-than and lesser-than operators. From [1] (among many other discussions on this):
For ambiguities with angle brackets consider the assignment
a, b = w < x, y > (z)
Without type information, it is impossible to decide whether the right-hand side of the assignment is a pair of expressions:
(w < x), (y > (z))
or whether it is a generic function invocation that returns two result values:
(w<x, y>)(z)
In Go, type information is not available at parse time. For instance, in this case, any of the identifiers may be declared in another file that has not even been parsed yet.
They thought about generics for 12 years. It's unlikely that someone who thought about it for 3 minutes on Hacker News has some epiphany that wasn't taken into consideration.
Also, I think it is fair to challenge the decisions here, seeing as there have been a few things chosen in Go's history that ignore prior art from other programming languages.
Most (if not all) languages are compiled or interpreted in two stages: first the code is parsed (a.k.a. lexed, tokenized) to an internal representation (usually called "Abstract Syntax Tree", or AST), which just means translating something like ">" to "GREATER_THAN_SYMBOL", "5" to NUMBER, etc. and storing this in some data structure. This is also where you usually detect syntax errors like forgetting to use " to close a string and such.
In Go, you can do this fairly easily with the go/parser and go/ast packages.
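For example, a minimal sketch using those standard packages to parse a snippet and walk its AST:

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
    )

    func main() {
        src := `package p

    func add(a, b int) int { return a + b }
    `
        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, "example.go", src, 0)
        if err != nil {
            panic(err) // syntax errors surface here; no type info needed
        }
        // Walk the tree and print every identifier we encounter.
        ast.Inspect(f, func(n ast.Node) bool {
            if id, ok := n.(*ast.Ident); ok {
                fmt.Println(id.Name)
            }
            return true
        })
    }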
Then you take the AST and do something with it. That something can be compiling the code, but also writing some sort of tooling like a linter, or identifier rename tool, or generate documentation from it, or whatnot. When compiling the code you need the type information, for a lot of other purposes you don't really care.
It's pretty valuable to keep the parsing as simple as possible; it makes it easier to detect errors, improves the quality of the error messages, and makes it easier to write tooling. It also keeps the code a lot simpler, easier to understand and modify, etc.
> Most (if not all) languages are compiled or interpreted in two stages: first the code is parsed (a.k.a. lexed, tokenized) to an internal representation (usually called "Abstract Syntax Tree", or AST)
Lexing/tokenization doesn't produce an AST; it produces a token stream. Parsing (which may or may not be preceded by lexing) produces an AST.
Rust used [] for generics a very, very long time ago. A lot of people wish we still did. I personally am glad we do not, but that's only in the context of Rust. I think the choice the Go folks made is very reasonable.
Is it really the case that parsers can't tell the greater-than sign in "a, b = w < x, y > (z)" without type information? Or is it simply difficult to parse for some parsers?
So for C# it's resolved "by examining the token after the closing >: If it is one of (, ), ], :, ;, ,, ., ?, == or !=, the expression is parsed as a generic method call."
I assume this works reasonably well; however, it's not hard to see how this might be a problem in some cases where the parser gets it wrong.
The doc says it's actually impossible, and I'm inclined to believe so. Rust has a similar problem; in their case they adopted the "turbofish" operator, where the generic function call takes the form foo::<T>(x).
Eiffel is the first language I know of to use [] for generics. It's consistent with the interpretation of [] as "subscripting"; the generic is a family parameterized by types, and the subscript selects one element of the family.
That errs on the side of too much consistency for me. I'd like a nice balance: a small amount of syntax... but not too small. After all, morally x.a, x[a], and x(a) are all the same thing.
I don't know the answer to that but Scala is also a precedent for using square brackets for generics (AKA type parameters). However in Scala arrays don't use square brackets so that confusion is avoided.
> Because we will not know what the best practices are for using generics,
Well, you could... take a look at languages created these past twenty years that support generics, learn lessons from that, and see how these lessons could apply to Go.
But well, the Go team is not exactly known for paying much attention to the state of programming language theory, so I guess that's out.
Given that every language is different, this is basically reasoning by analogy. Sometimes analogies are useful for explaining things, but they're not a reliable way of determining what will definitely work.
Reasoning by analogy in a sneering way doesn't make it work better.
Programming languages differ, yes, but not so much that they face completely different design decisions. Also, it's not "reasoning by analogy", it's learning from others.
In fact, why stop at generics? Why not go back and question structured programming? Or strong typing? Because structured programming and strong typing have proven to be extremely useful. Same as generics in statically typed languages.
Maybe reasoning by analogy and learning from others are about the same thing?
How much you should rely on others' experience, and when you should consider it a positive example versus something to learn from by avoiding, is going to be a judgement call.
Trivia: parametric polymorphism (aka generics) was first done in ML, 46 years ago. WP cites this reference:
Milner, R., Morris, L., Newey, M. "A Logic for Computable Functions with reflexive and polymorphic types", Proc. Conference on Proving and Improving Programs, Arc-et-Senans (1975)
(And of course we've been able to write generic code in dynamic languages for a long time, as this is a static-typing concept.)
I think Go is all the better for not rashly copying every idea from every programming language. The delay on generics is a small price to pay (if indeed a “price” at all) to have such a simple, ruggedly utilitarian language which also happens to be an absolute joy to use.
You can do that, but that's still not as good as direct experience with Go generics in Go code. Those wouldn't be best practices, they would be guidelines.
How does that compare to other approaches for generics? I know that you can do monomorphisation, which usually increases the executable size and compilation time, and usually gives faster code, but I don't know about other approaches.
The two approaches are stenciling (what you call monomorphisation) and dictionaries. Stenciling/monomorphization increases binary size, compilation time and TLB/cache pressure. The dictionaries-only approach has more runtime cost. This hybrid is a middle ground of both.
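To illustrate the difference in rough Go terms (hand-expanded and purely illustrative; the real compiler works on its intermediate representation, not source text):

    package sketch

    // One generic function...
    func Min[T int | float64](a, b T) T {
        if a < b {
            return a
        }
        return b
    }

    // ...under stenciling/monomorphization is roughly as if the compiler
    // emitted a separate copy per instantiated type:
    func minInt(a, b int) int {
        if a < b {
            return a
        }
        return b
    }

    func minFloat64(a, b float64) float64 {
        if a < b {
            return a
        }
        return b
    }

    // A dictionaries-only approach instead emits one shared body that
    // receives type metadata (sizes, operations) as a hidden runtime
    // argument, trading code size for runtime indirection.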
The linked article is a proposal, but at least when I tried Go a couple of weeks ago it did not use that approach, and to the best of my knowledge that proposal has not yet been implemented. Instead, as of now, Go uses the straightforward approach of generating one implementation for every instantiation, just like C++ compilers do.
The upside is there is no runtime cost to using generics, the downside is there is significant compile time and link time cost to generics.
> the downside is there is significant compile time and link time cost to generics.
Are we talking statistically significant or productivity significant compile time cost? Go has a fairly fast compiler, it would be a shame if generics made that a less stand-out feature.
Yes, which I find unfortunate but I can respect that approach as it's likely the simplest way of implementing it and Go does want to avoid heavy abstraction in its type system.
The downside is you can't do something like have a function pointer to the generic itself or specify generic members of a dyn trait/interface or otherwise treat the generic as a first class value (although you can treat specific instantiations of it as first class). The generic is basically syntactic sugar for a macro like substitution system. C++ and other languages that take the monomorphization route all have this restriction, whereas languages with more "first-class" support for generics like C# don't and hence do allow the moral equivalent of a virtual template.
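A minimal Go sketch of that restriction (Identity is a made-up example):

    package main

    import "fmt"

    func Identity[T any](v T) T { return v }

    func main() {
        // f := Identity       // compile error: cannot use generic function
        //                     // Identity without instantiation
        f := Identity[int] // but a specific instantiation is first-class
        fmt.Println(f(42))
    }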
C# mostly gets away with it because it has JIT support for it; in the end you still need to generate code for every data-size it's used with. Obviously, if all data are passed by reference you'd only need to generate code for the size of the pointer.
Even in C#, generics are sort-of second class:
interface IMyLovelyADT { R Accept<R>(IMyLovelyADTVisitor<R> visitor); }
is not isomorphic to
Func<IMyLovelyADTVisitor<R>, R>
The interface is strictly more flexible because it's non-generic, see my comment at [0] for an actual code example (it's quite long) if you're interested, but the gist is "you need a non-generic wrapper interface IMyLovelyADT: it essentially does the same job as Haskell's forall, since generic types are not quite proper types in C#'s type system. Interestingly enough, upcoming Go 2's generics will not support this scenario: a method inside an interface will be able to use only the generic parameters that the interface itself declares."
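A rough Go analogue of the unsupported scenario (illustrative names only):

    package sketch

    // Fine: the type parameter is declared on the interface itself.
    type Visitor[R any] interface {
        VisitLeaf(value int) R
    }

    // Not fine: a method cannot introduce its own type parameter, so a
    // Haskell-style forall-accepting method won't compile:
    type ADT interface {
        // Accept[R any](v Visitor[R]) R   // rejected by the compiler
        Accept(v Visitor[string]) string // only a fixed instantiation works
    }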
They've been discussed as one of the several "feature proposals for Go 2.0" for a couple of years, the playground for them is called "https://go2goplay.golang.org/", and the issues with them go under the "cmd/go2go" tag on GitHub, so that's why I called them "Go 2's generics".
And weren't they supposed to be in Go 1.17 but then got delayed?
From the outside, adding generics to Go seems as controversial and slow as the Python jump to version 3, although Go generics haven't even landed yet.
Generics have the potential to impact a decent amount of the standard library. Although I’m not surprised that they are not doing a major version bump given the isolation of the feature at this stage, it will be interesting to observe its uptake in the community and whether fragmentation occurs over what is supported and what isn’t given their recommendation of isolating generics-related code in third party libraries.
I don't think it's anything like the Python jump from 2 to 3. That changed the behavior of a lot of code (unicode vs bytes, various syntax changes, import changes, etc), and there was no way to automatically upgrade a codebase (2to3 didn't really work).
With the addition of generics to Go, they've been careful to ensure it's fully backwards compatible, so all existing code will still work as is. Major version bumps are for incompatible changes (like Python 2.x to 3.x).
What they're recommending with isolating generic code in existing libraries is so users of older versions of Go (1.17 and prior) can still use those libraries, just not any new functions/types the libraries have added that use generics, and hence require Go 1.18.
The fact that they can release generics and NOT bump to 2.0 is a triumph of the language designers and implementers, a testament to how much effort has been put into keeping this commitment.
> We expect that some package authors will be eager to adopt generics. If you are updating your package to use generics, please consider isolating the new generic API into its own file, build-tagged for Go 1.18 (//go:build go1.18), so that Go 1.17 users can keep building and using the non-generic parts.
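For instance, a hedged sketch of what such a build-tagged file might look like (package and function names are made up):

    //go:build go1.18

    package mylib

    // Keys lives in its own build-tagged file, so Go 1.17 users can
    // still compile the rest of the package without it.
    func Keys[K comparable, V any](m map[K]V) []K {
        ks := make([]K, 0, len(m))
        for k := range m {
            ks = append(ks, k)
        }
        return ks
    }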
You're right. I maintain a very large code base in a private repo and a successful open source project. Generics will allow me to clean up a lot of code duplication internally, but given that Go modules are distributed as source code, this would force everyone to use Go 1.18.
My plan is to not use generics for quite a while, as it's pointless to maintain two separate implementations, one for 1.18 and one for < 1.18. That's difficult, because I really like the feature.
Upgrading to the latest release of the Go toolchain is generally a painless experience; the Go team has a good track record of enforcing backward compatibility. Hopefully this means you can assume other users will be able to use code targeting new features of Go 1.18 in pretty much the same time window it takes for other releases to supplant old ones, making this feature no different from previous (minor) language features that nevertheless would break users who cling to too-old toolchains.
I hope that the publicity around generics doesn't taint this release, causing people to unnecessarily procrastinate on a toolchain update they would otherwise have done if it weren't for generics.
Sometimes, people can't be on the latest version of software due to various compliance and regulatory issues, and updating anything to a new version requires some kind of re-certification or new auditing. So, when supporting such users, you need to work carefully on balancing their inability to update, versus yours to move ahead.
Hacking on some projects or experiments is one thing, but say you're providing code for the automotive industry or the payment card industry; you're in for a world of regulatory hurt.
Well for the users, they should be upgrading anyway.
I can't see a situation where someone would mirror your library in and not also have brought in, or be bringing in, a non-breaking language change in the new Golang.
The only change-management advice I could give, which you probably already know, would be to roll a month slower than Golang.
I would say, many people also look down on much more popular languages, such as PHP. Go is still an absolute niche language, even more niche than languages like Scala or R.
Niche in your niche perhaps, but there's a lot of code in the wild written in Go. Hell, all the modern devops tools are written in Go (Docker, k8s & co.)
Meanwhile, as a non Java programmer, I've never seen a Scala app. I know about R, but nothing on my PC is written in it.
Agreed that PHP is massive in comparison to all of them.
Sure, devops tools are usually more fundamental, whereas high level languages are used for things closer to the business. Twitter's backend for example is mainly powered by Scala, but you probably don't know about it, because it is not as visible to you.
Enumeration isn’t a compelling way to indicate relative frequency (in this case, frequency of usage of Go and Scala, respectively). An enumeration of size=1 is even less convincing.
Yep, Kafka is probably the most popular one. I've heard that the team is looking to replace the remaining Scala code in the project with Java once pattern matching and co. land. Spark is another beast that was written in Scala; many report a high cost for compatibility. Nowadays the Scala community is all about Typelevel and ZIO; if you are not a category-theory-minded person, you will have a hard time picking it up.
Go errors are actually less useful in this case, since they do not capture backtraces by default (if I remember correctly) and invite gratuitous re-wrapping. Most times I had to troubleshoot something, it was useful to know the lowest level where the error happened, not the arbitrary level the application/library author decided was appropriate and missed.
Agreed, there are tradeoffs, but at least Go's approach has more potential: there's nothing stopping Go from capturing stack traces and returning them on the error object. On the other hand, exceptions always have the confusing control-flow problem.
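A minimal sketch of what that could look like: a hypothetical custom error type that captures its stack with runtime.Callers at creation time:

    package main

    import (
        "fmt"
        "runtime"
    )

    type tracedError struct {
        msg   string
        stack []uintptr
    }

    func newTracedError(msg string) error {
        pcs := make([]uintptr, 32)
        n := runtime.Callers(2, pcs) // skip runtime.Callers and this constructor
        return &tracedError{msg: msg, stack: pcs[:n]}
    }

    func (e *tracedError) Error() string { return e.msg }

    // Trace renders the captured frames, roughly like an exception backtrace.
    func (e *tracedError) Trace() string {
        frames := runtime.CallersFrames(e.stack)
        var out string
        for {
            f, more := frames.Next()
            out += fmt.Sprintf("%s\n\t%s:%d\n", f.Function, f.File, f.Line)
            if !more {
                break
            }
        }
        return out
    }

    func main() {
        err := newTracedError("boom")
        fmt.Println(err)
        fmt.Print(err.(*tracedError).Trace())
    }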
Sigh. Go without generics was useful. We built useful things with it.
Go with generics will continue to be useful, and it'll make it easier to build some things that were irritating to build earlier.
The design of generics went through so many rounds of iteration and discussion to make sure it wasn't a detriment to the language - indeed, you can build everything that Go has already been used to build without ever knowing about or using generics.
Generics coming into a 1.x release also means that any code written under 1.x is expected to continue working perfectly, generics or not.
The process has made sure no damage will be done, and we now have a new toolkit to make solving some problems easier. I don't think anyone is trying to gaslight anyone by using generics.
That's an expected pivot. From "nobody needs generics" to "we absolutely needed 10 years to figure out the best way to do parametric polymorphism, which was already known for 35 years when the language was created".
And of course in addition to that, there will probably be claims that "no one claimed that generics aren't needed". I'm glad that we have the archives to show that this isn't true, and it's a good thing to keep in mind when opinions are being pushed on HN.
Often the people who expressed that opinion are different people than, for example, the person you responded to here.
Please remember that this is a forum with a very big group of participants with varying points of view and resulting opinions.
Even if he was previously a person who said Golang doesn't need generics, I struggle with your point of view. It seems you're of the opinion that nobody should be allowed to... change their opinion? Or at least that they have to be continuously told they're dumb because they held an opinion that turned out to be incorrect? We're all people and make mistakes/have incorrect assumptions about topics; why not just accept that and move on?
You might want to refocus on what kind of person you want to be, because both of your comments come off as very hostile
You're arguing for charitable interpretations of statements by people who claimed Go didn't need generics, or even that Go was better without them, saying that one should be able to change one's opinion without being called dumb. I fully agree.
Similarly, a charitable interpretation of what kubb wrote would be that they are referring to those who might have been dishonest in their defense of Go's lack of generics, which one might say kubb does indicate by using words like apologetes and zealots. The Internet is full of people who pick a team and will say dishonest things in perceived defense of it. I agree with kubb in this regard. That is the charitable interpretation of what kubb wrote, but instead you assumed kubb referred to everyone who ever voiced that opinion and suggested kubb should refocus on what kind of person they want to be.
The Internet needs more charitable interpretations, and HN in particular. Perhaps I failed to interpret you charitably now? Nuances get lost easily in online debates... :)
I sort of agree. The most charitable interpretation is that kubb is “nutpicking”—addressing the least articulate and worst arguments of a community. But he would do us all a favor to acknowledge explicitly the boundaries of his criticism. For example, I hold the position that generics aren’t necessary, but that they will make some code more clear and a lot of other code less clear (and this has long been my position)—does kubb’s criticism apply to positions like mine? Am I his “zealot”?
Moreover, using terms like “zealot” to refer to people with whom one disagrees is very likely to inflame the thread (as indeed it already has, to a degree), whatever kubb’s intention.
The term "gaslighting" has become overwrought to the point where it's hard not to immediately dismiss anyone using it.
It is not a "pivot" for someone to say, "X isn't necessary", and then say, "X still isn't necessary but at least it's implementation doesn't break things".
There's no gaslighting. The Go team position was always something like "We didn't find a good way to include generics in the language, but if we do we may add them". That's a reasonable position.
However, some people, referred to here as "apologetes and zealots", claimed that you don't need generics and that they were a bad thing, which is a very different thing and not the opinion of the Go team.
The conclusion here is that these people were wrong, just like the ones that said that it was impossible to do anything serious without generics. While those people were busy arguing with other people on the net, the Go team and community were slowly but surely refining their ideas on generics and how to make them fit into Go.
Surely you don't mean the Go team, which has had this in the language FAQ since 2009 [1]:
    Why does Go not have generic types?

    Generics may well come at some point. We don't feel an urgency for
    them, although we understand some programmers do.

    Generics are convenient but they come at a cost in complexity in the
    type system and run-time. We haven't yet found a design that gives
    value proportionate to the complexity, although we continue to think
    about it. Meanwhile, Go's built-in maps and slices, plus the ability
    to use the empty interface to construct containers (with explicit
    unboxing) mean in many cases it is possible to write code that does
    what generics would enable, if less smoothly.
I have heard even Java is going to catch up with modern times with Valhalla, and maybe even get futuristic later on, where a directory tree of source files can be compiled without weird twisting of the java command line.
[0]: https://www.reddit.com/r/rust/comments/5penft/parallelizing_...