The upcoming iterator design for Go 1.23 (gingerbill.org)
134 points by toprerules 3 months ago | 148 comments



Nice write-up! On a personal level, I feel indifferent about iterators or even generics. I can live with or without them, and I don't mind library authors using them.

I haven't had a single line of code broken by a Go version update in 7+ years, and scanning through Go source code is always a pleasure. No hidden magic. I don't read documentation either; I just surf through the source code by clicking with the CMD key.

My main KPI for Go is the amount of clicking required to understand internals ... very little compared to other languages.

If it's stamped by people like Russ Cox (rsc), I can sleep well too.


Haven't developed an opinion about iterators yet but generics are a godsend for writing libraries. The first time I attempted to create a database toolkit for Golang I quickly abandoned it because of limitations with the type system. My first attempt post-generics, REM, went much more smoothly. Been enthusiastically working on a follow-up privately. I honestly don't think I would be writing much Golang without generic types.


Go has plenty of "magic" all around. Take a look at goroutines: they are pure magic. There's also lots of implicit magic going on with magic comments, file name conventions, etc.


I agree about directives embedded in comments, especially things like //go:embed

But goroutines? They are a foundational language feature with clear syntactic demarcation and documented behaviour.

I think that in computer science, "magic" doesn't mean that the implementation is hard to understand. Magic means that the behavior is surprising, that there is misdirection involved, etc.


While I agree that Go has lots of magic, probably far more than the average programming language, it seems that Go has also successfully hidden most of it so far.


I worked at a company that used Go, but my position didn't, and that was painful, because Go's syntax is "we do things differently because we can". Whenever I needed to make a small change in existing code, it always turned into a bigger project because I never understood what the fuck was going on in that program.


Yeah, that's a curse of knowledge, to be frank: you can see a lot of magic once you have used many other languages, and it is generally a good instinct to be skeptical about that magic anyway (Go included).


My hot take is that "magic" is just a pejorative term to dismiss any abstraction that someone doesn't like. It's subjective, and IMO it's used mostly as a thought-terminating cliche rather than providing any value to a discussion.

Sometimes it seems like people throw around the word "magic" like it's some morally dubious shortcut used by programmers who lack the discipline to resist its temptations. I'd argue that making a good language that doesn't require using "magic" is actually _hard_; it needs to differentiate from existing languages enough not to be redundant but still try to fit into the non-uniform expectations of users.

Rather than saying that a language "has magic" or "doesn't have magic", I wish it was more common for people to directly state that a language works like they'd expect or point out the areas where it doesn't (which can be separated from a value judgment of whether something is "good" or not). If anything, I think that saying a language works like you expect is _more_ of a compliment than saying "it doesn't have magic" because it properly conveys the sense that our expectations are fairly narrow in comparison to the entire design space and require skill to identify rather than something obvious that people would only avoid by choice.


At first, even though I knew they were useful, I didn't have much use for generics. But they came in increasingly handy, and there is even code I could not have written without them.

Will probably be the same for iterators.


> Will probably be the same for iterators.

I think especially because people were already doing iterators, just in a not-really-specified way, as the discussion[1] for this change mentions. (I mean people were kinda doing generics too with code generation but that was probably less doable for library code...)

[1]: https://github.com/golang/go/discussions/56413


Yeah, a Cassandra library has an “iterator” that reads a row’s data into a struct pointer, and returns false if there’s no more data.


> ... No hidden magic ...

This has been changed since Go 1.22. Go 1.22 introduced magical hidden code: https://go101.org/blog/2024-03-01-for-loop-semantic-changes-...


Is it magic?

The reason they changed the behaviour is that the previous behaviour was surprising and thus "magical". The new behaviour is more consistent with what you expect the scope of a variable to be.

The fact that a compiler has to add some code to instantiate a variable is not magical. Otherwise everything compilers do would be magical.


I never doubted that it is a good change for "for-range" loops. The change to "for-range" loops didn't introduce hidden magic code.

But the change to "for;;" loops creates more surprises than before for a rare and tiny benefit. It introduced hidden magic code. Please read that article for what the hidden magic code is and for the surprising cases.


Yeah, I see; that said, if for;; behaved differently than for range it would cause a different set of surprises, so I guess consistency was deemed more important.


The cost of achieving that consistency is too large and the benefit is too small.


The surprise factor only affects the current generation of people who are used to the old behaviour, or who are used to other languages that have the same kind of for loop plus closures that can escape (basically only JS?).

But the internal consistency between for;; and for range lets you never doubt the rule again once you have learned it. Yes, there is the risk that you'll get it wrong once (if you are new to the language or are used to the old rules), but if the language authors had only fixed for range, then a lot more people would get for;; wrong just because they would never be sure which way it behaves.

So the only "solution" would be to never fix "for range" or disallow for :=;; ?

In any case, I think that if you really care about the old semantics, it's not a bad idea to make it obvious to the reader that you intend to directly or indirectly take a reference to the iteration variable and that it effectively survives the loop body:

    var i int
    for i = 0; i < 4; i++ {
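
For reference, here is a minimal example of the behavioural difference under discussion (a closure capturing the variable of a three-clause loop); which behaviour you get depends on the go version declared in go.mod:

    package main

    import "fmt"

    func main() {
        var fns []func()
        for i := 0; i < 3; i++ {
            fns = append(fns, func() { fmt.Println(i) }) // closure captures i
        }
        for _, f := range fns {
            f() // go <= 1.21 in go.mod: prints 3 3 3; go >= 1.22: prints 0 1 2
        }
    }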


The core problem here is that "Go promotes explicitness" ended with Go 1.21. Since Go 1.22, it is no longer true.

The change to "for-range" loops is good because no implicit code is introduced.

And the change to "for;;" loops is bad because implicit code is introduced. Implicitness often causes surprises. The implicit line

    pa_last, pb_last, pc_last = &a, &b, &c
in the new semantics is absolutely evil.

And the seriousness of this problem is that if you upgrade the Go version in your go.mod files, the behavior of your old code might change, and the change might not always be found in time.

> but if the language authors had only fixed for range, then a lot more people would get for;; wrong just because they would be never sure which way it would be.

This is just a theoretical problem; it was never proven (and now there is no chance to prove it).

> So the only "solution" would be to never fix "for range" or disallow for :=;; ?

I think it would have been just fine to only change "for-range" loops, just as C# has done. The assumed problem with "for;;" loops was never proven, or was shown to be tiny. IMHO, it was just rsc's personal will to make the change. This might be the worst decision made in Go's history.


As the sibling says, this is just a semantic change in the scope behaviour of loop variables. You need to add an extra line of code to emulate Go 1.22 semantics in Go 1.21 code, but there's not really any hidden magic if you just interpret Go 1.22 code according to its own defined semantics.


I know; the change to "for-range" is good. But it creates more surprises in uses of "for;;" loops. Please read the cases in that article.


It's a change in semantics. So it's surprising if you don't know about the change, and not surprising if you do know about it. I'm familiar with the kind of code that behaves differently under the new scoping rules. Looking at the examples in the post, I don't really share the intuition that the behavior is any more or less surprising for classic 'for' loops as opposed to 'for...range' loops.


If you can guarantee that every gopher knows the effects of the change. :D

The problem with the change is that if you upgrade the Go version in your go.mod files, the behavior of your old code might change, and the change might not be easily found in time.

I indeed haven't found any surprising cases caused by the change to "for-range".


> No hidden magic

That's the whole point of the article. This change trashes that. Now many loops are going to have hidden magic.


If iterators are hidden magic, how are the following not hidden magic:

    for i, v := range someSlice { ...
Shouldn't that be eschewed as magic, and instead we should write:

    for i := 0; i < len(someSlice); i++ { v := someSlice[i]; ...
They both de-sugar to the same thing (well, now they do, it used to re-use the same 'v' in the first one, so it used to be subtly different, but they changed the language's hidden magic).

How is the channel loop not hidden magic?

    for v := range channel
    // magic for
    for {
      v, ok := <-channel
      if !ok { break }
    }

The iterator proposal is adding some sugar which is _less hidden_ than the current 'for range' loops, and people are complaining that's magic?
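
For reference, here is roughly what the proposed range-over-func sugar expands to (a sketch based on the published design, not the exact compiler rewrite):

    package main

    import "fmt"

    // Count is an iterator function in the proposed form: it receives the
    // loop body as a yield callback and stops once yield returns false.
    func Count(n int) func(yield func(int) bool) {
        return func(yield func(int) bool) {
            for i := 0; i < n; i++ {
                if !yield(i) {
                    return
                }
            }
        }
    }

    func main() {
        // Sugared form (Go 1.23 range-over-func):
        for v := range Count(3) {
            fmt.Println(v)
        }
        // Roughly what it de-sugars to: the loop body becomes the yield
        // function, and a break would become `return false`.
        Count(3)(func(v int) bool {
            fmt.Println(v)
            return true
        })
    }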


> for i, v := range someSlice { ...

What a ridiculous comment: every single programming language has this or similar syntax for iterating over a slice.


All the languages I can think of that have iteration over a slice also have the ability for users to define iteration over custom collections, such as with generators or whatever... yet the comment I'm replying to is saying custom iterators, something every language has, is too much magic, so clearly "every language has it" isn't enough justification for something not to be magic for them.

Anyway, not every language has it. C has arrays, but doesn't have any iteration sugar for them, and the general attitude of gophers does usually seem to be "if C didn't have it, it's not simple, it's magic", and that Go should just be C but with a GC, goroutines, and builtin hashmaps.


Go always had hidden magic:

- Lowercase symbols are private to the package they are declared at.

- A function called "init" will get executed implicitly on startup.

- You can have multiple functions called "init" in the same package or even in the same file.

- Files ending with "_unix.go", "_linux.go", "_windows.go", etc. will only be compiled when compiling for the specified platform. The exact list of platforms is very hard to find in the documentation.

- There are a handful of magical built-in functions that have lowercase names.

- Some built-ins were generic long before generics were introduced to the language.

- The copy function is defined as "copy(dst, src []Type) int" (i.e. both src and dst have to be slices of the same type), but "As a special case, it also will copy bytes from a string to a slice of bytes".

There are many cases of magical behavior in classic, pre-generics Go. Sure, if you read the documentation you can learn about these things and you won't be surprised (although good luck figuring out the order in which init() functions are called, or finding out what happens when Go introduces a new compilation target platform that happens to have the same name as the ending of one of your source files). The thing is, if you read the documentation about iterators and see that you're iterating over the results of a function that returns another function (rather than a slice, string, map or channel), you also won't be surprised.

And unlike the old magic, the new magic is always transparent. You can always tell what an iterator function does by looking at its source code. But you can't exactly tell which suffixes trigger platform-specific compilation without looking at the Go compiler source code, and knowing where to look.
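
For example, the copy special case mentioned in the list above (a tiny self-contained sketch):

    package main

    import "fmt"

    func main() {
        // The documented special case: copy accepts a string source when the
        // destination is a []byte, even though copy is otherwise defined on
        // slices of the same element type.
        buf := make([]byte, 4)
        n := copy(buf, "hello")
        fmt.Println(n, string(buf)) // 4 hell
    }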


The attitude of 'generics for me, but no generics for thee' that the language designers had before they were introduced in the wider language is a particular pet peeve of mine.

For comparison, C++ is one of my least favourite languages, but at least they try to put the user of the language on an even footing with the designers when that's possible.


I will never understand why anybody gave a shit about this "not for thee" thing. I get being irritated that the language didn't have generics! I didn't care, but the complaint made sense. But the language having a small, capped number of useful generics seems to me, as a normie, as a good thing. It was always weird that people personalized this.


I think this grievance mostly comes from the core language team being reluctant to introduce generics early on[1]. The core team was never entirely hostile to generics (although some vocal parts of the Go community were), but they were quite dismissive of their usefulness. I think the arguments about "generics for me but not for thee" came to demonstrate the hypocrisy, when debates got heated and the aforementioned vocal parts of the Go community rose up with their usual Luddite themes of "we don't even need generics or any language feature cooked up by academic eggheads after the 1970s".

If we're fully honest, every language has irregularities that could be qualified as magic (and I've never claimed otherwise). For instance, Pascal has magical varargs for the builtin Write family of functions, and most languages without generics (such as Pascal, C and Java pre-1.5) still have generic arrays. Modern Java arrays are also reified and unboxed, while other generics are boxed and their types are erased at runtime. Go shouldn't be criticized for having exceptions for some language internals and other types of necessary "magic".

My personal beef is with the claim that Go is unique by not having magic, while most other language have "hidden magic". I believe that's utterly and categorically false.

My other issue is magic that is non-transparent and not well-documented (or hard to wrap your mind around), like the platform-specific source files in Go and multiple init() functions in the same package. I'm not looking forward to the time Go introduces a target platform that happens to be a common word and mysteriously breaks lots of programs...

[1] To be exact, they didn't claim that generics are bad, but that they require expensive trade-offs, and I remember one of them (perhaps Russ Cox?) saying that generics are mostly useful for custom collections and this is not a big-enough use case. This struck many of us as a little arrogant, since the "trade-offs" that were quoted only applied to Java and C++, two of the most atypical implementations of generics. None of the quoted issues applied even to ML and CLU, which introduced generics decades before Java and C++ (back in the 1970s).

https://research.swtch.com/generic


Thanks, I agree with most of your points.

Your footnote is especially noteworthy: most of the arguments against generics I saw from the Go authors struck me as arguments against C++-style templates only. And I can see why you don't want your language to become like C++.

But the authors also gave the impression that C++-style templates are the only kinds of generics they knew about and that they could even envision.

> [...] saying that generics are mostly useful for custom collections and this is not a big-enough use case.

Yes, all the while implementing their favourite collections generically as a hard-wired language built-in.

They claimed that their void-* style 'generics' (via the empty interface) could solve most problems, but they didn't even use them for their own key-value data structure.

As an example of their earlier 'generics' being good enough, they touted how well they can represent arbitrary sorting algorithms. But, of course, they could only represent single-threaded in-place sorting algorithms that way.


The Go designers were never against generics. They just didn’t want to rush a design in without careful consideration.

The real complaint should probably be “they took too long to decide”.


They were, though; they even acknowledged not having done the work properly with regard to researching existing implementations.

" In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."

-- https://go.googlesource.com/proposal/+/master/design/go2draf...


That statement doesn’t disagree with my comment.

There are also plenty of comments online from Cox saying he isn’t against implementing generics, they just want time to do it properly.


"Do it properly" is an educated way in office politics to hand wave issues.

There are enough discussions on go-nuts to the contrary, plus it isn't as if there weren't enough examples to choose from between 1976 and 2009.


Not always. And the fact that Go now has generics should be evidence enough that Cox was always open to the idea of generics even if he didn’t feel rushed into implementing it.


Cox is not part of the founding team, and Rob Pike has publicly expressed that he is not happy with the generics decision, but it wasn't his decision to take.

"Sydney Golang Meetup - Rob Pike - Go 2 Draft Specifications"

https://youtu.be/RIvL2ONhFBI?t=1018 (starts here)

https://youtu.be/RIvL2ONhFBI?t=1892 (he expresses his opinion here)


Pike isn’t the final word on Go either. Hence why we have generics.

I don’t really understand why you’re arguing about this. Go has generics now, isn’t it about time you moved onto a new soapbox?


Doesn't change the fact that the original authors weren't into generics; let's stop rewriting history here.

Don't worry, Go still has lots of stuff to complain about.


I think they are just addressing this statement you made earlier:

>The Go designers were never against generics

(Fwiw I think I agree with both of you in this situation)


There was nothing to "rush".

Generics, by the time the first version of Go rolled out, were a well-understood field with proven strategies for inference and lowering. Just not well understood by the Go designers, I suppose.


That doesn’t mean that zero research is needed to define the best syntax and implementation approach for, specifically, Go.

It just means it should have taken less time to decide on that.

Which comes back to my original point: Cox was never against generics despite the popular meme claiming he was. People’s real complaint should be the time it took to decide upon an official approach.

I get why some people are frustrated. But at the end of the day, most of the people who comment about it on HN aren’t people who do any development in Go to begin with. So it often just feels like people taking potshots to troll rather than an honest conversation about the merits of the article (eg in this instance, it has nothing to do with generics).


In theory. The non-ideal state of Java and (especially) C++ generics suggested that it might be unwise to rush into a particular set of implementation choices.


The non-ideal state of generics in Java mostly stems from having to maintain compatibility with early versions of Java that did not have generics.

If James Gosling had "rushed" and copied generics verbatim from CLU or Ada, Java would have had a very decent generics implementation. Not necessarily a perfect one, but it wouldn't suffer from the serious issues that C++ and Java suffer from (and that, more or less, ONLY C++ and Java suffer from).


In practice.

But I think this response showcases a particular trend within Go the language and its community: arrogance and insufficient acknowledgement of the outside world. There are myriad languages beyond Java and C++, both of which have rather non-standard "generics" implementations (type erasure and code templating). But, adhering to this trend, it seems the Go designers decided not to absorb much from the prior art. I know, surely the designers of a popular language should know better? That's what conventional wisdom would suggest, and yet here we are.

But hey, I should cheer: the bolted-on approach that will plague Go for years to come might just help companies realize sooner that Go is frequently a poor choice.


Every programming language is frequently a poor choice for some things and frequently a good choice for others.

As someone who's been writing software for 30 years in well over a dozen different programming languages, I can tell you that there's no such thing as a language that doesn't have any problems. And that's without even touching on programmers' personal preferences.


Meh, I couldn't give a stuff about the personality traits of the Go designers. I like the language, which I think is both better and worse for not having followed every modern trend in PL design. I don't know the designers personally. If they are arrogant, so be it.


"Lowercase symbols are private to the package they are declared at"

This isn't magic, it's just a clever convention.


It's clever, right up until you come up against the fact that many languages don't have uppercase/lowercase (e.g. Japanese) - and now you have to add an extra hack to work around your original hack.

Any time you add a clever hack that re-uses something you don't control directly (Unicode, file names, existing data formats, implicit behaviors of a system - or even explicit since you don't control who will change it later, etc), you're making a mistake that will come back to bite you eventually.

In fact, I dare to say: Any time you think you're being clever, you're not.

Most of the design mistakes of go were made under the assumption that all complexity can be reduced. There are certain kinds of complexity that can only be moved around, not reduced (e.g. init, os-specific compilation, packaging and such).


I wonder if users of other alphabets really use their alphabet to code in Roman-based languages. I mean, even the keywords are not transcribed, right? I guess they could probably prepend a v or V to their variable names? And especially if the code is to be shared with an international community? Wondering.


I've never seen that happen. English is my third language, and the only time I've seen people talk about programming in other languages was either when referring to very old esoteric languages, or, ironically enough, in English-speaking forums. It happens, and I've seen code written in French for example, but it's considered an anti-pattern almost everywhere.

I guess sometimes it makes sense for constants or context dependent variable names but again, still very rare.


I was once brought into a PowerBuilder project during the discovery phase, due to my French skills more than anything else, and naturally I was cheaper than getting an external translator.

Everything except API calls was in French.

I also have seen enough code during my lifetime with comments, and occasionally code as well, in Portuguese, Spanish, Italian, German.


That's interesting! Was it an older code base? I know France also has a few "domestic" platforms, I just didn't think about it for some reason. It's also probably a lot more common in anything that intersects with BA or that encodes "real life" rules. The other exception is comments; I also still comment my code in French whenever I need to refer to a design choice or whatever, since we usually discuss them in French!

I think your first point is exactly why it's not as common as it used to be (imo, I don't have any stats to back this up!). It makes hiring people, or getting support a bit harder, especially if you need consultants for example. Plus the docs are very often in English anyways...


Yes, going back to the PowerBuilder vs Visual Basic vs Delphi glory days.


It is a part of the language rules that you can't change even if you don't want it. It is magic in that sense.


I always thought that lack of "magic" in a programming language means that when you (as a human) read through the source code, you are able to follow execution to the end of the program. In a language that has "magic", by contrast, you reach a point where you don't know where execution will go next without the help of the compiler (or a heavy-duty IDE that translates for you).


That's very subjective I guess, because it would also depend on your familiarity with the subject matter. I can for example easily read and understand a recursive descent parser in any language because I have implemented it multiple times and can catch a common pattern. Unless the language in question corrupts that pattern (highly unlikely unless it's an esolang :-), that definition of "magic" should be largely independent from the language itself.


It's not about understanding the language; it's about how an element in the code can mean different things. Think of how in Ruby you can get monkey-patched behaviour that overwrites a method for some type, or even the init() function in Go that someone mentioned in the sibling comments, which gets executed at package load, so that when you encounter state that should be a zero value, it actually isn't.

If as a developer reading the code, there's no way to know if the compiler injects some behaviour in or around the thing that you're looking at, that's magic.


You also can't change the if statement, is that now magic too?


Yes, that's why I added "in that sense". As noted in the replies, I believe many naive notions of magic really depend on one's background and context.


It is not clever, especially for serialisation


A sufficiently clever convention is indistinguishable from magic.


except there's no smoke and mirrors for this character saving trick


Not at all. Have you ever tried to export or unexport an identifier module-wide? It's a huge fucking pain in the ass, versus a one-line change in other languages.


I would love a program analyzer able to generate a list of magic behaviors:

  $ penn-and-teller ~/projects/go
    - init() is implicitly called on program start [link to explanation]
    - xyz_unix.go will only be used when compiling for unix [link]
    - file.go:20:20 calls a built-in function [link]


That's called disassembler and CPU architecture manual for a specific generation (because the way it is executed is magic too) :)


> - Files ending with "_unix.go", "_linux.go", "_windows.go" etc. will only be compiled when compiling for the specified platform. The exact list of platform is very hard to find in documentation.

Sure, you won't find it in the Go 101 guides, but it is well documented in the "go build" CLI reference in the official Go docs site. See https://pkg.go.dev/cmd/go#hdr-Build_constraints and https://pkg.go.dev/internal/platform.


GP made the point a few lines below.

What if Go adds support for a new OS called "FoOS" and suddenly your files named "my_foos.go" stop being compiled anywhere else?

They should have forbidden using _ in file names except for an allow-list of suffixes


This has in fact come up before, and you may be surprised to learn the approach they take is most reasonable.


I'm the first to argue that Go is "most reasonable" all around.

Even things which are technically wrong and _could_ have been done differently are still reasonable.

First and foremost because sometimes the alternatives would have meant breaking backwards compatibility (or at the very least forcing people to hassle with migrating code with "editions", which are probably better left for more impacting problems than "what if a new OS comes around")

That said, I think it's important to call out design mistakes when one sees them (as long as one engages constructively with them instead of just throwing a random "Go is magic/sucks/etc" without putting things into context, like how do other practical languages fare on all the metrics combined)


Something I have found surprising and have tripped over is that the commands are not the same as the language, and the Go team generally does not scrutinize changes that break command invocations the way they scrutinize language changes under the Go 1 compatibility guarantee.

I think overall the semantic filename build constraints (`_GOOS_GOARCH`, as well as the `_test` suffix) provide real value, in that I immediately know the file is guarded by a build constraint, and that greatly aids my ability to read and browse code. If that information were not encoded in the filename but only in build tags in each file, it would be a fairly significant hit to my productivity. I can't see any alternative that is not more complex, and I have trouble finding that complexity justified.

I think there is a tradeoff here, the Go team knows it, and that in practice the tradeoff is worth it. There are many such things in Go, tradeoffs of purity and theoretical issues for the sake of practicality, and by and large they're okay.


Perhaps I wasn't clear.

I also like the use of the file name "pre-extension" to categorize files.

There was a way to have the cake and eat it too, which is: forbid _ inside file names other than the suffixes that have a well-known meaning, and treat the others as reserved.


To be honest that seems worse. It feels overly restrictive and surprising. What sort of error do I get? Is the file ignored?

Sometimes perfect is the enemy of good.


Well, if the file got ignored that would be very surprising and a source of frustration indeed

A clear error message that explains that go source files cannot have underscores in them other than the supported suffixes (and a link to a page that documents them).

If you do that since the beginning then it's easy and painless.

The problem is: what do you do when you haven't thought of the consequences of one convention? Do you fix it later?

I'm happy with the current Go trade-off. This detail isn't worth fixing.

But it's nevertheless interesting to use as an exercise to see how it could have been done in a way that preserves all the good properties and is also future-proof.


Also, frustratingly, "_unix.go" doesn't work IIRC, even though "unix" is a built-in build tag, because the filename suffix mechanism only works with platform names. You need to manually add "//go:build unix" to the file.
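
For example (the file and package names here are made up):

    // sockopts_unix.go
    //
    // Despite the name, the "_unix" suffix is NOT a build constraint: the
    // filename mechanism only recognizes GOOS/GOARCH values, and "unix" is a
    // build tag rather than a GOOS. The constraint has to be spelled out:

    //go:build unix

    package netutil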


For me init is magic not because it's implicitly called.

It's magic because unlike any other function it can appear in multiple files in the same package. And the order of execution depends on the names of the source files!


I guess my system got immune to such things after TS/JS coding, but if 20% of the community says it has "hidden magic", I'm not excited either.

Fast learning curve for new hires is a top golang feature for me too.


Why do people focus so much on that random side comment from Pike during one talk he gave a long time ago? It's hardly the language philosophy and it's really hard to believe that Pike actually thinks Google programmers are bad or even average given that he has seen the hiring process.


Pike's comment gets a lot of play because it happens to have great explanatory power for observed language design choices.

It's not "hardly the language philosophy"; it's the language philosophy stripped of the ego-fluffing layer of marketing to language end-users.


No it's not. Please read some more recent writing by Rob Pike, like here: https://commandcenter.blogspot.com/2024/01/what-we-got-right.... The money quotes:

> In short, Go is not just a programming language. Of course it is a programming language, that's its definition, but its purpose was to help provide a better way to develop high-quality software, at least compared to our environment 14 plus years ago.

> And that's still what it's about today. Go is a project to make building production software easier and more productive.

I think there is an idea that Go is "software engineering for the masses", and that's why the word "easier" is used, and not "simpler". "easier" is fine when you're starting out and you don't know what you don't know. As times goes on, you start understanding more because you have more experience, and so you may yearn for something simpler, with less abstractions, more control. Go isn't the best language for this, but it's also far from the worst.


Because it's straight from the horse's mouth, and it does in fact capture Go's philosophy perfectly. Go is, and was designed to be, a blub language.


> The key point here is our programmers are Googlers, they’re not researchers.

I take that to mean that they're searchers, not researchers. I.e., "Googlers" here means they use Google to search for answers, not that they're Google employees.


In this context, Googlers does in fact mean Google employees. The context from the surrounding talk[1] talks about Google's use cases such as concurrency being a key part of the language, etc.

[1]: https://www.youtube.com/watch?v=iTrP_EmGNmw (quote is at 20:30)


Inside of Google the term 'Googler' unambiguously refers to co-workers; not to people who just happen to use Google products. (Source: I used to work there.)


Sure, the syntax looks more terse than the usual Go syntax. That's unfortunate.

But the author of the article does point at an important reason to do it like this:

> Allow for clean-up with defer

If the range syntax were syntactic sugar for an iterator created as a local variable which handles the state, somewhat like this:

    iter := Iterator{
        Next: func() bool {...},
        Current: func() T {...},
    }
    for iter.Next() {
        loopvar := iter.Current()
        ...
    }

then there's no way (in the current language) to call its clean-up function on panic, whereas that comes naturally in the current proposal. Since these are functions we rarely write, I don't think the syntax will be a problem in practice. Perhaps they could add a Yield[T] type to the standard library to reduce some of the func-iness.

The only gripe people may have is when it turns out to be much less efficient than a regular range loop.
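
A rough sketch of the clean-up point, assuming an illustrative line-reading iterator (error handling elided): because the loop body runs inside the iterator's own function call, a deferred Close fires even if the body panics or breaks early.

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    // Lines is an illustrative iterator that owns a resource.
    func Lines(path string) func(yield func(string) bool) {
        return func(yield func(string) bool) {
            f, err := os.Open(path)
            if err != nil {
                return // error handling elided for brevity
            }
            defer f.Close() // runs on normal return, break, and panic

            sc := bufio.NewScanner(f)
            for sc.Scan() {
                if !yield(sc.Text()) {
                    return // the caller's loop hit break or return
                }
            }
        }
    }

    func main() {
        for line := range Lines("/etc/hosts") { // example path
            fmt.Println(line)
        }
    }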


I don't feel strongly about the PR. I think having a common way IS a GOOD thing, as I currently very much dislike the for row.Next() loop (which does not work with range). Having the option to make ALL iteration work with range IS a HUGE positive IMHO, regardless of implementation.


I’m using Go daily. Usually new features in Go are addressing some real need or pain point, or standardizing on something that’s become fragmented in the 3p ecosystem. So I’m a little confused with the proposal – maybe I’m living in a parallel Go universe, but I don’t think I have ever needed or even wanted iterators. Can someone point to a real world use case where these iterators really make things better?


Basically every custom data structure right now has some custom implementation of iterators. This will set a standard and make them usable with range loops. Even simple library methods like scanner.Scan, strings.Split, regex.FindAll or sql.Query should return iterators.
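
As a rough sketch of what that standard could look like for a hypothetical custom container (using the new iter.Seq type):

    package main

    import (
        "fmt"
        "iter"
    )

    // Stack is a made-up custom container.
    type Stack[T any] struct {
        items []T
    }

    func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }

    // All returns a standard iterator over the elements, top first,
    // usable directly in a range loop.
    func (s *Stack[T]) All() iter.Seq[T] {
        return func(yield func(T) bool) {
            for i := len(s.items) - 1; i >= 0; i-- {
                if !yield(s.items[i]) {
                    return
                }
            }
        }
    }

    func main() {
        var s Stack[string]
        s.Push("a")
        s.Push("b")
        for v := range s.All() {
            fmt.Println(v) // b, then a
        }
    }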


> strings.Split, regex.FindAll

But they already do: slices are iterable, and the input is bounded. Plus you get lots of other benefits, like indexability. What need would it solve?

> scanner.Scan, sql.Query

Right, these are not suitable for slices because they’re unbounded. But still, what’s the use case? You still have roughly the same amount of code, no? Even the same noisy if err != nil checks. Can you provide a snippet that highlights the benefits?


https://github.com/golang/go/issues/61405#issuecomment-16388...:

> Can you provide more motivation for range over functions?

> If the results can be generated one at a time, then a representation that allows iterating over them scales better than returning an entire slice. We do not have a standard signature for functions that represent this iteration. Adding support for functions in range would both define a standard signature and provide a real benefit that would encourage its use.

> There are also functions we were reluctant to provide in slices form that probably deserve to be added in iterator form. For example, there should be a strings.Lines(text) that iterates over the lines in a text.


> scanner.Scan, strings.Split, regex.FindAll or sql.Query should return iterators

Will never happen. Best you can hope for is new functions


I know. There are already proposals to add new functions to many of these packages that return iterators.


No, there aren't.




But is this a problem?


It's called out in the proposal, but generics allow custom container types to be created, like an ordered map. Iterators allow nice methods to be created by the designers of these container types and used in a way that is similar to the Go collections like map and slice. As mentioned earlier in the comments, someone brought up in-order tree traversal as well.

I don't see an issue with having the for range syntax that works magically for Go std lib containers extended to custom containers, especially since the Go std lib omits so many valuable containers.


The database driver has this row.Next() thingy that does not work with range.


This is the textbook definition of the Blub Paradox[1]. If you've mostly used Go (or moved to Go from JavaScript before it had iterators, or from Java where iterators are a royal PITA to write), it won't make sense. That's okay, since it is often hard to imagine how a feature will be used without getting used to it yourself. I'm pretty sure that most FORTRAN 66 developers coming to Go would be puzzled on why you need for loops at all, when you can write DO loops and specify the line number where the loops end (older Real Programmers[2] don't even understand why you need loops at all, since you can always GOTO anywhere you'd like).

So to cut it short, iterators, as Go implements them, should be useful at least for the following cases:

- Iterating over custom collections (e.g. trees, concurrent maps, queues) that are not provided by the language (and for which "for range" has no custom code inside the compiler).

- Iterating over a collection in a different way, for instance filtering out some elements or iterating over the collection backwards (as in the given example).

- Reification - if abstract iterators can be passed to functions (instead of actual slices), you can write functions that operate on a theoretically infinite stream.

- Chaining iterators. If iterators can be reified, you can also chain them together. This is somewhat cumbersome with the syntax Go 1.23 will offer, but in many languages you can write code like this:

  let incompatibleTransactions = user.Accounts
    .Filter(account -> account.Balance <= 0)
    .FlatMap(account -> account.PendingTransactions())
    .Filter(tx -> !tx.AllowCredit())
  
Once you know what each function does, it is clear that the code goes through all the accounts that have a zero or negative balance and returns all the pending transactions that do not allow credit from these accounts.

The equivalent imperative code is harder to figure out at first glance

  incompatibleTransactions := []Transaction{}
  for _, account := range user.Accounts {
      if account.Balance <= 0 {
          for _, tx := range account.PendingTransactions() {
              if !tx.AllowCredit() {
                  incompatibleTransactions = append(incompatibleTransactions, tx)
              }
          }
      }
  }
If the accounts and transactions collections are not built in, the iteration code would look even worse. This code is harder to read since it conflates the What (what we want to achieve with the code) with the How (how to implement this with a for loop). Obviously, this is what imperative programming is all about, and Go is indeed becoming less imperative.

[1] http://wiki.c2.com/?BlubParadox [2] https://sac.edu/AcademicProgs/Business/ComputerScience/Pages...
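
For what it's worth, the chaining style above can be approximated with Go 1.23 iterator functions, though you have to write (or import) the helpers yourself; Filter below is a hypothetical helper, not part of the standard library:

    package main

    import (
        "fmt"
        "iter"
    )

    // Filter wraps one iterator in another, yielding only values that satisfy keep.
    func Filter[V any](seq iter.Seq[V], keep func(V) bool) iter.Seq[V] {
        return func(yield func(V) bool) {
            for v := range seq {
                if keep(v) && !yield(v) {
                    return
                }
            }
        }
    }

    func main() {
        var nums iter.Seq[int] = func(yield func(int) bool) {
            for i := 1; i <= 10; i++ {
                if !yield(i) {
                    return
                }
            }
        }
        for v := range Filter(nums, func(n int) bool { return n%2 == 0 }) {
            fmt.Println(v) // 2 4 6 8 10
        }
    }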


I've used plenty of languages with iterators but have also never really missed them very much in Go. So I think this is a flawed analysis. The proposed Go iterators are just syntax sugar over some existing code patterns. They'll be nice to have but one can very easily do without them.

By way of analogy, consider that almost any language is missing lots of features present in some other languages. For example, Go does not have inheritance. If I do not miss that feature while using it, that does not necessarily mean that I'm a Blub victim.


GP here. Yeah I don't think I’m blubby in that sense. I used to work daily in Rust, including with custom collection types. And lots of other language experience with iterators. Go has maps and slices, and slices fit 99% of my use cases. You can index, reslice, concat, append, sort, sort by, clone, and yes, iterate with range (although the range syntax itself is merely a small convenience).

As for the chaining, yes, I do miss filter and map, but that is again totally doable with vanilla slices (they just haven't added those methods to the slices package).


Yup.

This is why I'm despairing so much over Go's popularity after using C#/F# and Rust, which have rich and idiomatic iterator APIs. Go's implementation is plain ugly, flawed, and, most of all, is going to be slow unless you write it in the most verbose way, due to the Go compiler's weakness.

There are languages that move us forward, Go sets us back by 20 years.


Some people enjoy expressiveness in their programming language. Some people don't. Some people like both, I'm one of them, and for some programs I choose expressive languages like Haskell or Common Lisp and for others I use unexpressive languages like Go or Zig.

Take these things a little less personally. Live and let live. People think differently from you, and that's okay.


This is not about personal choice but rather about Go's undeserved adoption in domains it has absolutely no business being applied to. It leads me to generally approach engineers promoting it as misguided and severely lacking in some area of knowledge the moment Go is suggested for use anywhere beyond light networked microservices or CLI utilities.

Instead, people shoehorn it into data processing and databases, a domain at which .NET is incomparably more adept due to the performance primitives it offers, if we're still talking about high-level-ish languages with automatic memory management. Of course, if there is no such requirement, there are many systems programming languages (like Zig, Rust and C++) that are better in every possible way at solving the task. And yet we get Go, and startups struggling to scale. But all the interesting problems Go is applied to, where the solution, if successful, succeeds despite Go and not because of it, would have benefited from C#/F#, which offer much better control and performance assurances without having to fight the tool.


Take these things a little less personally. Live and let live. People think differently from you, and that's okay.

I have meticulously benchmarked Go in previous roles and also used Go as a load generator and with smart allocation strategies the language is quite fast. I think you're letting your emotions cloud your judgement but that's nothing new in PL discourse. There's no better way to bring out those that identify with/against programming languages than PL discourse.


> I use unexpressive languages like Go or Zig

See, I would not call Zig unexpressive. I've written a number of iterator structs in Zig; because it has true optionals, this is easy, and using them is just:

    var iter = thing.iterateSomeWay();
    while (iter.next()) |item| { 
        // do stuff
    }
It's a low level language, yes, but an expressive one given that design constraint. Go just makes this stuff harder, because nil is just zero wearing a costume, an antipattern it inherited from C. In Zig every type can be nullable, and the compiler takes care of representing the null case distinctly from the load-bearing one.


Unfortunately, Microsoft is also to blame: instead of .NET, all their Azure contributions to CNCF are based on Go and Rust, and all their recent CLI tools are written in Go (older ones used Python).

Also by not doing enough to change the opinion UNIX-culture startups have of Microsoft; they are still making M$ jokes.

The same kind of startups that won't have any qualms adopting Swift, or dealing with its current poor state on GNU/Linux.


You have never used for..range in Go? That's an iterator.


I like iterators in JavaScript well enough. I understand the syntax of this just fine, it doesn't seem that ridiculous. It's definitely a little less simple than Go usually tends to be, but it's certainly not impenetrable. And you don't have to use it. I personally probably won't use it but I don't care that it's there.


"You don't have to use it" is probably the most common worst argument ever. The mere existence of the construct means it _will_ be used, and you will have to know about it and eventually use it.


They'll be used primarily by library authors but most people will end up consuming them a lot. They'll eliminate a lot of unnecessary intermediate slice allocations.


Whenever iterator design comes up, I always have to refer to munificent's excellent article: https://journal.stuffwithstuff.com/2013/01/13/iteration-insi...

tldr: The gold standard is if your iterator syntax / syntaxes can support these two types of iteration:

1. Interleaving two sequences

2. In-order tree traversal


Goroutines are the same: a bit of magic you just use and don't think much about.

I'd argue the primary magic in large-scale languages should mainly be in those two places: inversion of control and concurrency.


Why do those two places have a special need for 'magic'?

Especially concurrency is hard to get right, so it's probably better to make it as clear as possible as to what's happening?

Btw, would garbage collection count as 'magic' in your book?


Is this just a form of continuation-passing style? I get that it's a bit functional-y, and more complex than typical Go code (admittedly a very low bar), but I don't think it's too bad. It's no worse than async generators in Python, and it's downright elementary compared to anything involving template metaprogramming.

I don't mind the pace that Go has been moving at when it comes to new features. It still feels very conservative while still evolving.


I get iterators of this yield-sort, in other languages.

I still grumble every time I have to implement one. Something about the interface just bothers me. Feels sloppy.


That's the nature of the craft. If you go down the road of eradicating all the things that are a little bit sloppy and inelegant, you eventually end up with something like Haskell (with a lot of extensions) and lose most of other developers along the way.


See, iterators [edit: and “yield” in general] feel Haskelly to me.

Then again I can only understand key Haskell concepts when they’re explained using any of several other languages (then they’re typically very easy to understand) so maybe I’m not the best judge of what is or is not Haskell-like.


Kinda funny, given that the Haskell iterator solution is "convert your data to be a list, and the optimizer and RTS will do the right thing as if it were an iterator."


that sounds like a good thing


Right until you have to hire.


For all of Go's touted simplicity, this looks messy and hard to read.

Look at Rust iterators, which are just a simple and readable 'next() -> Option<Item>' that can be targeted with 'for item in iter' and chained iterator expressions, or C#'s yield return and IEnumerator<T> (which support cleanup on panic too), which work with foreach and the same kind of chaining. Both are faster, and easier to read and write too.


I’m happy about the inclusion of Iterators in Go. Even if I don’t write Iterators, it makes reading code more challenging and it prepares me to read other languages with Iterators


That is a terrible reason for a feature in a language.


It's a terrible reason to add a feature in a language. But I'm still happy it's there for that reason.


All of this is specifically about the "Go's Apparent Philosophy" paragraph.

Instead of pulling the same quote from Rob Pike over and over and over again, people should watch/read his recent talk about Go titled "What We Got Right, What We Got Wrong": https://commandcenter.blogspot.com/2024/01/what-we-got-right.... A few excerpts:

> Given the title of this talk, many people might expect I'm going to be analyzing good and bad things in the language. Of course I'll do some of that, but much more besides, for several reasons.

> But the real reason I'm going to talk about more than the language is that that's not what the whole project was about. Our original goal was not to create a new programming language, it was to create a better way to write software. We had issues with the languages we were using—everyone does, whatever the language—but the fundamental problems we had were not central to the features of those languages, but rather to the process that had been created for using them to build software at Google.

> The creation of a new language provided a new path to explore other ideas, but it was only an enabler, not the real point. If it didn't take 45 minutes to build the binary I was working on at the time, Go would not have happened, but those 45 minutes were not because the compiler was slow, because it wasn't, or because the language it was written in was bad, because it wasn't. The slowness arose from other factors.

> And those factors were what we wanted to address: The complexities of building modern server software: controlling dependencies, programming with large teams with changing personnel, ease of maintainability, efficient testing, effective use of multicore CPUs and networking, and so on.

> In short, Go is not just a programming language. Of course it is a programming language, that's its definition, but its purpose was to help provide a better way to develop high-quality software, at least compared to our environment 14 plus years ago.

> And that's still what it's about today. Go is a project to make building production software easier and more productive.

I will repeat it because I think it is very important:

> And that's still what it's about today. Go is a project to make building production software easier and more productive.

Another quote about Go being more than the language:

> A few weeks back, when starting to prepare this talk, I had a title but little else. To get me going, I asked people on Mastodon for input. A fair few responded, and I noticed a trend in the replies: people thought the things we got wrong were all in the language, but those we got right were in the larger story, the stuff around the language like gofmt and deployment and testing. I find that encouraging, actually. What we were trying to do seems to have had an effect.

And finally:

> Perhaps the most interesting consequence of these matters is that Go code looks and works the same regardless of who's writing it, is largely free of factions using different subsets of the language, and is guaranteed to continue to compile and run as time goes on. That may be a first for a major programming language.

> We definitely got that right.

So now that we know better what Go is about, we can try to see why iterators were added. From https://github.com/golang/go/issues/61405#issuecomment-16388.... I'll quote some parts:

> Can you provide more motivation for range over functions?

> The most recent motivation is the addition of generics, which we expect will lead to custom containers such as ordered maps, and it would be good for those custom containers to work well with range loops.

> Another equally good motivation is to provide a better answer for the many functions in the standard library that collect a sequence of results and return the whole thing as a slice. If the results can be generated one at a time, then a representation that allows iterating over them scales better than returning an entire slice. We do not have a standard signature for functions that represent this iteration. Adding support for functions in range would both define a standard signature and provide a real benefit that would encourage its use.

> There are also functions we were reluctant to provide in slices form that probably deserve to be added in iterator form. For example, there should be a strings.Lines(text) that iterates over the lines in a text.

To me this provides enough justification for adding the iterators. But that's not the important part. The important part is listening to what people say about Go, not just "Go the language", but Go as a whole. And listening to what people said recently, not what they said a decade ago. This is how you genuinely engage with other people, I think.


Thanks for providing the brilliant citations from Rob Pike! I like these ones the most:

> Go is a project to make building production software easier and more productive.

> Go code looks and works the same regardless of who's writing it, is largely free of factions using different subsets of the language

But I don't understand how these citations align with iterators in Go:

- Iterator functions are non-trivial to write and read because they return a function, which accepts a function, which accepts iterable items.

- The 'for ... range' loops with iterators are non-trivial to debug, since they __implicitly__ call the iterator function and then __implicitly__ pass the loop body, converted to an __implicit__ anonymous function, to the function returned by the iterator function (see the sketch after this list).

- Iterators do not replace existing functions - they add __new__ functions because of backwards compatibility. This complicates writing, reading and maintaining Go code, since there are multiple ways to do the same thing.

- Iterators encourage adding crappy iterator functions to the existing code, making it more complicated and harder to maintain.
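
To make the first two points concrete, here is the shape in question as a small hypothetical example (Pairs and the map are made up; the signature matches the new iter.Seq2 type):

    package main

    import (
        "fmt"
        "iter"
    )

    // Pairs returns a function that accepts the yield callback: the
    // "function returning a function that accepts a function" shape.
    func Pairs(m map[string]int) iter.Seq2[string, int] {
        return func(yield func(string, int) bool) {
            for k, v := range m {
                if !yield(k, v) { // stop when the caller's loop breaks
                    return
                }
            }
        }
    }

    func main() {
        m := map[string]int{"a": 1, "b": 2}
        // The compiler implicitly converts this loop body into the yield
        // callback passed to the function returned by Pairs(m).
        for k, v := range Pairs(m) {
            fmt.Println(k, v)
        }
    }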


thnx for compiling quotes! useful


You're welcome! I think as Go users we're very lucky to have all that information in the open, and that the people driving the language take the time to write it down, and discuss it with the community. I'll assume good intention and think it's not trivial to find, or the people writing articles/comments citing the same Rob Pike quote again and again don't know that he wrote and talked more about Go. I hope I can help elevate the debate a little by sharing these quotes.


This design is terrible. It feels extremely bolted on and poorly thought out.

Either give us magic or make it supremely readable. This accomplishes neither - it's a barely legible mess of questionable value.

To be clear, I'm not against features. But I am against anything that makes code this ugly by default.


I very much feel that both generics and deps were against the “spirit of Go” (trivial simplicity, stability, manageability) and a reaction to the perceived demands of “The Community” taking precedence over the opinionated, orthodox and even antisocial original design of the language. This just looks like another fluffy, friendly concession to “usability” and “what the crowd wants” at the expense of what set Go apart from the crowd.

But hey, maybe I’m just being a curmudgeon and it’ll be great.

As the article mentions, there was nothing stopping you from doing this sort of thing (and worse; channels as “iterators”, anyone?) in older versions of Go. But it would be silly to do, and you’d get (rightly) scolded for doing it.

Now I guess that kind of thing is encouraged. :)


The way "generics" were previously implemented via code generation was just gross; the new generics simply accept that the standard library didn't provide enough containers and probably shouldn't have to make every container type part of the stdlib. The next step in that logic is these iterators: why should the compiler magically support "for range" syntax on slices and maps, but not on a map or tree you made?

I don't understand the aversion to this. If you don't like it, don't use it; but this cleans up the multiple different loop patterns and collapses them into a for range syntax, so now you have one way to loop instead of multiple ways.

The actual implementation of this, making an iterator, is more of a fix for the people developing custom container libraries, which the majority of the Go community probably doesn't do. You can go on being a productive Go dev and literally never learn this syntax; it won't change your usage.


Jesus. Both of those examples are awful.

I don't mean that as a statement against the author, I mean it as a statement to the language designers. People are gonna start writing code like this, or worse, and Go is gonna end up like many other unreadable languages. Shame.


TBH, a lot of the complications in Backward() are because it's dealing with two iterators (one as an input, one as its output) and the input/output are both generic. This isn't a typical case.


Maybe it is just me, but I have a very hard time reading Go in general. I have a hard time with, e.g., type annotations without a colon or an arrow. The := operator does not read fluently, and I always have to think about what it does to the variable binding (in Python, := reads just fine).

Also, whenever I encounter multiple returns it feels very cluttered and I have to stop and think what the heck is going on.

And finally, the capitalization of identifiers does horrors to my understanding of the code I'm reading. I know what it is for, but as I'm reading code, whether a function being used is imported from somewhere else is not meaningful information, and it just confuses me.

I bet there is more, but these are some of the examples I’m consciously aware of


Those just sound like ways it's different from the language you're using currently?


I could have written this post, except for exactly everything would be inverted.


It's not just you.

There is something about Go that feels "off" for me as well.

I started picking it up but couldn't get past some of the syntax.


It's kind of funny how the church of absolute simplicity above all things has become so dominant.

As someone who has seen the cost of C++ template metaprogramming wizardry gone wild, I understand part of this reaction. I reckon we would have a talent shortage bigger by orders of magnitude if we required familiarity with Alexandrescu's Modern C++ Design and Herb Sutter's Exceptional C++ to onboard new programmers onto a project.

Yeah, there's definitely such a thing in programming as "too clever".

But should we go so radically in the opposite direction? Should we sacrifice any attempt at expressiveness on the altar of simplicity? Aren't we going too far the other way?


Sadly, it's not just Go. Everywhere you look, some people argue that there shouldn't be a learning cost for anything. The problem with this kind of reasoning is that while it sounds nice in theory, it obviously isn't practical. It's the perfect pretext. The distinction between what's advanced knowledge vs. what's a given is totally arbitrary and is often a proxy for personal preference.

Passing around functions or using them as return values is a common pattern in programming. It's also not the kind of knowledge that needs relearning either. You learn about it once and move on. If this can't be considered required knowledge for software professionals, I don't know what can.


In my view, aiming for simplicity is good, but only simplicity of the whole system, and superficial simplicity is often at odds with total simplicity.

The following code is composed from a small set of simple operations,

    transformed = []
    for (i = 0; i < len(list); i = i + 1)
    {
        el = list[i]
        temp = transform(el)
        transformed.append(temp)
    }
but together the simple operations form a complex operation. The effect of each individual line is immediately clear, but the effect of the whole procedure is (marginally, in this toy example) obscured. However, if we extend our small set of simple operations with complex but common operations, we can express complex operations with a conciseness that, overall, makes them simpler to recognize and reason about:

    transformed = list.map(transform)
The effect of the line now requires more background knowledge, but in return the effect of the entire procedure becomes more readily apparent.


Yes we are going too far. Go treats us like we are inept.


I dunno, I think go simply treats us as if someone else will be reading the code we write.


I’m capable of reading someone else’s Node, Python, Rust, C++, and Java. I’ve seen good code and bad code. I’ve also seen good Go code and bad Go code. I know this is what the designers of Go intended, but they accomplished it by dumbing down the language far too much.


Where "someone else" includes values of "oneself, but in the future".


We are; that is why, at the end of the day, languages like Go evolve into a kludge of poorly integrated designs, and the anti-complexity folks discover that those complex features exist for a reason and then have to try to retrofit them onto their earlier design decisions.

We have been through these reboots multiple times: Pascal vs Algol 68, C vs PL/I, Java vs C++, ....

Now compare where those "simpler" languages are 20 to 50 years later with their version 1.0, and think about how much better things could have been if those features had been there in version 1.0.

The Go folks thought they were special and that this wouldn't happen to them.


Considering that Rust is the most hyped language out there (e.g. topping SO surveys), and I definitely would not call many things in Rust simple, I wouldn't be worried about simplicity becoming overly dominant.


I guess this means we need to add a “no iterators” linter to golangci-lint.



