Hacker News

    > To minimize disruption, each change will require
    > careful thought, planning, and tooling, which in
    > turn limits the number of changes we can make.
    > Maybe we can do two or three, certainly not more than five.

    > ... I'm focusing today on possible major changes,
    > such as additional support for error handling, or
    > introducing immutable or read-only values, or adding
    > some form of generics, or other important topics
    > not yet suggested. We can do only a few of those
    > major changes. We will have to choose carefully.
This makes very little sense to me. If you _finally_ have the opportunity to break backwards-compatibility, just do it. Especially if, as he mentions earlier, they want to build tools to ease the transition from 1 to 2.

    > Once all the backwards-compatible work is done,
    > say in Go 1.20, then we can make the backwards-
    > incompatible changes in Go 2.0. If there turn out
    > to be no backwards-incompatible changes, maybe we
    > just declare that Go 1.20 is Go 2.0. Either way,
    > at that point we will transition from working on
    > the Go 1.X release sequence to working on the
    > Go 2.X sequence, perhaps with an extended support
    > window for the final Go 1.X release.
If there aren't any backwards-incompatible changes, why call it Go 2? Why confuse anyone?

---

Additionally, I'm of the opinion that more projects should adopt faster release cycles. The Linux kernel has a new release roughly every 7-8 weeks. GitLab releases monthly. This allows a tight, quick iterate-and-feedback loop.

Set a timetable, and cut a release with whatever is ready at the time. If there are concerns about stability, you could do separate LTS releases. Two releases per year is far too slow, I feel. Besides, isn't the whole idea of Go to go fast?




(Copying my reply from Reddit.)

>If you finally have the opportunity to break backwards-compatibility, just do it.

I think Russ explained pretty clearly why this is a bad idea. Remember Python 3? Angular 2? We don't want that to happen with Go 2.0.

>Additionally, I'm of the opinion that more projects should adopt faster release cycles.

I am of the opposite opinion. In fact, I consider quick releases to be harmful. Releases should be planned and executed very carefully. There are production codebases with millions of lines of Go code. Updating them every month means that no progress will be made at all. The current pace is very pleasant, as most of the codebases I work with can benefit from a leap to a newer version every six months.


> I think Russ explained pretty clearly why this is a bad idea. Remember Python 3? Angular 2? We don't want that to happen with Go 2.0.

The problem with Python 2/3 is that Python 3 didn't add enough new features to make people want to move to 3.

The problem with Angular 2 is that it just didn't know what it wanted to be.

If Go 2 doesn't break enough, yet still breaks, it will be no different from the Python 2/3 fiasco.

Go has serious design issues; the Go team's ignoring them hindered Go's adoption pace in the first place.


Breaking things isn't what makes people want to move; it's a cost, not a benefit. You need to offer benefits to get people to pay that cost. The Python 2/3 problem was that there was too much breakage for the perceived benefit (especially early on) for many users, not too little breakage.


Fwiw, I think breaking implies improving. Arguing that breaking doesn't inherently mean improving is... well, duh. So in the case of many people's comments here, "not breaking enough" means "not improving enough." I know this is an obvious statement, but I feel like you're arguing a moot point, i.e., being a bit pedantic.

As an aside, since you're making the distinction, can you have meaningful benefit without breakage? E.g., you're specifically separating the two, so can you have significant improvements without breakage?

It would seem that pretty much any language change, from keyword changes to massive new features, breaks compatibility.


> As an aside, since you're making the distinction, can you have meaningful benefit without breakage?

Sure, in two ways:

(1) Performance improvements with no semantic changes.

(2) New opt-in features that don't break existing code (for example, where code using the new feature would have been syntactically invalid in the old version, so the feature can't conflict with any existing code).

There would be no reason for SemVer minor versions if you couldn't make meaningful improvements while maintaining full backward compatibility.


Seems I can't edit my post, but:

I think I'm simply wrong here. I was envisioning "breaking" as being incompatible with Go 1. If Go 2 were a superset of Go 1, it would allow Go 1 code to run flawlessly in Go 2 and still allow any new keywords/features.

My assumptions were incorrect, and writing my reply to you sussed it in my head. Thank you for your reply, sorry for wasting your time :)


>As an aside, since you're making the distinction, can you have meaningful benefit without breakage? Eg, you're specifically separating the two - so can you have significant improvements without breakage?

One way is by having a new language feature which does not interact with anything else in the old version of the language, i.e. is orthogonal.


I think Python was too ingrained to pull a "Python 3"; Go is newer, and this could be more like a "Swift 3".

Break early and then stabilize for a long time; just make sure everybody knows the plan.


It's barely related to the topic, but another team in my company is thinking about rewriting a pretty large ObjC codebase into something more sustainable. They briefly discussed Swift, but decided against it precisely because of its relatively frequent breaking changes.

This emphasises even more the fact that large private codebases really dislike breaking changes.


Except that there should not be many more breaking changes in Swift. Nothing on the scale of Swift 2 -> Swift 3.


Adding generics isn't even backwards-incompatible. AFAIK Java 1.5 compiled Java 1.4 code just fine, and so did C# 2.0. The latter's use of reified generics (without cheating for builtins) led to the duplication of System.Collections, but builtins aside, as of Go 1.9 Go has under half a dozen non-generic collections (versus 25 types in System.Collections, though some of those didn't even make sense as generics and don't have a System.Collections.Generic version), so that's unlikely to be an issue.


> so was C# 2.0.

True, but C# also has nominal types, overloading, and explicit interface implementation. Adding generics to a language without those features, while not breaking existing code, looks very difficult to me.


They don't have to be Java-ish generics.

Think SML/OCaml/Ada/Modula-3 (modules parameterized by modules) or Haskell (typeclasses).


Actually that would be the easier way. It is also how CLU had them.


What makes it difficult?


They're trying to avoid creating the next Python 3.



