Hacker News
Proposal: Go 2 transition (github.com)
206 points by piinbinary 20 days ago | 169 comments



There seem to be a few standard ways to evolve to the next major version of a programming language:

(1) Announce the next major version to be far off into the future, and say it will be as backwards compatible as possible to keep developers supporting the old version, e.g. Go 1->2.

(2) Build next version to be a little different to current version, provide no facility for interoperating between versions, then expect developers to switch over because of the branding, e.g. Python 2->3.

(3) Announce next version to be far off into the future, and say it will be a totally different language and expect developers to switch because of the branding, e.g. Perl 5->6.

(4) Don't announce or build a new major version, and keep profiting from developers using the current version for as long as possible, e.g. Java 1.x.

(5) Announce next version to be very soon, but delay it for as long as possible with unofficial roadblocks and keep profiting from developers using the old version for as long as possible, e.g. Apache Groovy 2->3.


(6) Design the new version to be compatible with the old version at the module level, allowing the different versions to interface with each other, where individual modules can be upgraded in isolation without breaking changes to their consumers, e.g. Rust 2018.


It only works if everyone is obliged to build all their dependencies from source code, which is why Swift is having such a hard time coming up with an ABI that can survive language evolution.


This works on dotnet. The dotnet runtime is versioned separately from the language, so you can use a library written and compiled with C# 5 in a project written in C# 7, so long as they both target a compatible runtime. You can go even further and use a C# 7 library in an F# or VB project.


That is because at the binary level there isn't C#, F#, VB.NET, C++/CLI or whatever other language you decide to put on top; there is just MSIL.

With straight native code, the story is a bit different.


Different Go modules can use different versions of the language, per this proposal.


C# and JavaScript (mostly thinking of strict mode) fall into this category too.


C/C++ also. It amuses me that GP used Rust as the example for this family.

In fact C has an ABI and doesn’t suffer the problem that sibling commenter mentioned.


C doesn't do much evolution either, much less add generics or introduce new error-handling semantics...

And C++ just piles up features, supporting everything forever...


That's not really true. C++ makes changes after careful analysis, e.g. the repurposing of the auto keyword (which had been present since the days of K&R C!)

I'm not a language lawyer, but one could rattle off a long list of such changes.


The Python 2 to 3 transition is completely unlike what you describe, at least from my perspective. I regularly write pure Python with no translation, 2to3 or six module usage that runs on Python 2.6-3.7.


That's true now, but it was quite a bit less true back in the 3.0/3.1 days.


Any good resource for this? I'm currently stuck maintaining some Python 3 code that must run on CentOS 7 (only Python 2 available) and for this I maintain a large "translation" patch on top of the upstream source, which is painful to say the least.


Python 3.4 is available from the official CentOS repos, and 3.6 is even available in EPEL.


AIUI both are only available from EPEL.


I am not sure which one Ruby fits; I guess it is closest to (4), where a major version isn't really a breaking one.

I really don't like the idea of always prioritizing backward compatibility, though. Things need to move forward. A major version of a programming language should offer a major incentive for programmers to switch. For example, if you want a JIT that offers 2-5x the performance, it should only work with the new version going forward, which also gives a chance to clean up any debt accumulated in the language over the past decade.


Oh, Perl 5->6 interoperability was actually promised at the very beginning; it was framed as "automatic translation". I think the problem that can't be solved is that only Perl 5 can parse Perl 5! There are other projects trying to make this happen (like "use v5;").

Calling a Perl 5 module from Perl 6 can work, I believe (as can other languages!), which is somewhat of a miracle. But it's interfaced using Perl 6.


To elaborate a bit on this: the 'use v5' project is pretty much dead at this point in time. (see also my open letter to the Perl community: https://www.perl.com/article/an-open-letter-to-the-perl-comm... )

To call Perl 5 code from Perl 6, we currently have Inline::Perl5 (https://modules.perl6.org/dist/Inline::Perl5:cpan:NINE):

> Supports Perl 5 modules including XS modules. Allows passing integers, strings, arrays, hashes, code references, file handles and objects between Perl 5 and Perl 6. Also supports calling methods on Perl 5 objects from Perl 6 and calling methods on Perl 6 objects from Perl 5 and subclass Perl 5 classes in Perl 6.

In a similar vein, there are Inline::Python (https://github.com/niner/Inline-Python/blob/master/README.md) and other Inline modules (https://modules.perl6.org/search/?q=Inline ) that can be called from Perl 6.


Just make it backwards compatible! Use all the ingenuity available to figure out how to do such a thing.


But then you end up with C++ which is fine in some respects and totally dreadful in the others.


PHP: a good 4->5 transition; they almost made the Python 3 mistake with PHP 6, but sanely dropped it, and the 5->7 transition seems to be going along nicely...

JS is looking a lot like the C++ story. Totally different language.


The 5 -> 7 transition was great from the point of view of someone writing PHP, but it was significantly more painful for extension writers. The only way I could update my stuff was with many hours of careful spelunking through the PHP source looking for examples of the new APIs, then many more hours of fun with valgrind working out all of the ZVAL indirection and zend_parse_parameters changes I'd missed.

I'm fine with the burden of the upgrade being transferred to users of the C API - that seemed like a very astute trade to me - but the process could've been made a lot easier with a proper upgrade guide rather than a few hasty, incomplete notes in the wiki.


I like this proposal. For unmaintained code that needs fixing when incompatible changes are introduced, I wouldn't make the toolchain increasingly complex though. It'd be better to just fork it and have it fixed by volunteers (or bots!), then put it in some canonical location (e.g. github.com/go2compat/oldhost.org/olddir/...) so developers can just change the import path as a standard solution to such compile errors. This is more feasible now than 15 years ago since even unmaintained code is usually public on GitHub or similar. It would help to break as few things as possible in this case, but that seems to be the intention anyway.


The conclusion?

> A real Go 2 would, perhaps unsurprisingly, be harmful.

Nicely done!


Imma say here that while perl5->perl6 transition didn't work out very well, the perl4->perl5 transition was hugely successful.

Back to Go!


I think that was mostly because Perl pretty much died about the same time.


Reports of perl's death are greatly exaggerated.


It’s fun to use that quote, but is it actually true? At one time, Perl was extremely popular, and was basically the Python of its era. Now, I barely ever hear about it, and it mainly seems to be used as a cautionary example of what can happen to a language when it goes through a major update.


Yes, it is actually true. Both Perls have a regular release cycle and continue to be used in companies around the world. The number of developers isn't really growing for Perl 5, but it's not really shrinking either. eBay isn't dead just because Amazon and Ali became the dominant players, nor is Perl dead because Python and JS overtook it. The market just got larger.

Also, please try Perl 6. The marketing may be a cautionary example, but the language itself reflects 15 years of polish.


I’m not sure that is the case. I know a lot of Perl developers who jumped ship. Including me.

While I respect that Perl development is still going, I haven’t seen it used by anyone in production for at least 15 years.


We (FastMail) use it in production, and all backend development is in Perl for all the products. Booking.com, cPanel, Craigslist, and ZipRecruiter all have sizable Perl codebases, as do many others.


As a FastMail customer I use Perl then!

You should have skipped cPanel though from that list. That doesn’t build credibility ;)


You probably use it indirectly. It's a fair percentage of the git source tree. It's also used in other places you might not expect, like Proxmox, Bugzilla, SpamAssassin, MRTG, and Gnu Parallel. It's also the main language for booking.com.


If you have to enumerate examples, then it proves the parent's point...


It is more niche than it used to be. I'm not arguing that. However, "haven’t seen it used by anyone in production for at least 15 years" seems exaggerated. That's why I enumerated examples. Assuming they've seen one of the examples in production.


>However, "haven’t seen it used by anyone in production for at least 15 years" seems exaggerated.

I think by that they meant "where I've actually seen it" (e.g. companies the parent worked at or visited, friends, etc.), not that one couldn't find 10 or 1000 examples of businesses still using Perl across the whole internet.


No they aren't... I haven't seen a single Perl script in the last 5 years. Ask me how many Python scripts I've seen...


flamegraph.pl is a fairly common sight.

It would also likely be better (more readable and maintainable) and faster if written in something else. I did a partial Python port to understand what kind of munging the script actually does; the Python version was 50% faster and a bit shorter, though the latter may be because I didn't test/check all features, and some edge cases may have been missing from the non-default options.


pprof has a flamegraph and I'm fairly certain it isn't written in Perl.

Yep it is JavaScript: https://github.com/google/pprof/blob/master/third_party/d3fl...


Zombies can move, but they're still dead.


I switched to Perl in 2010 (from Progress) and stuck with it.

I couldn't be happier with the choice I made: I've been working remotely from Romania for a US-based company since 2015, making multiple times the national average income.

I am contacted quite often on LinkedIn for new Perl job opportunities.

I would say Perl is quite a vibrant, living language, with great career prospects for those who learn it.


I'm not sure how this would work for changes that would affect a whole program.

For instance, say Go wanted to introduce a moving GC, and say that required removing interfaces as map keys. How could a function that returned a map[interface{}]int in one module be called from another that was compiled with a later version of Go?

Would the whole program have to be compiled with the non-moving GC?

Perhaps the answer is that Go would never introduce such a change.


For this particular case they thought ahead and require implementations to work with a GC that might move pointers. You run into it with cgo, which, as a result, has quite arcane rules for passing data between the two: https://golang.org/cmd/cgo/#hdr-Passing_pointers

Like with randomizing map iteration order, they randomly enforce the rule to make sure that no one relies on it just happening to work, since Go's GC doesn't currently move pointers.


I don't understand - as you say, cgo requires extra rules. Are you suggesting that extra rules will apply every time you cross a particular Go version boundary? This proposal doesn't seem to suggest that.


Sorry for the confusion. The rules were put in place to leave open the possibility of a GC that could move pointers. It's not really related to this proposal directly, but it's an example of how you can define a feature in such a way as to allow future changes.


Go could introduce a moving GC without requiring that map keys not be interface types. The simplest approach would be to use read barriers as we already use write barriers, and forward pointers during the moving phase of the GC. (Efficiency costs might prohibit that, but it could be done.)

Another approach would be to use two different map implementations, and use a less efficient one for older code that used an interface type as a map key.

It's an interesting example, though. You're right: if there is an old language feature that can not be supported by a newer runtime, then there needs to be some sort of shim to let the older code keep working. That necessity may prevent us from making certain sorts of language changes.


Go doesn't have a stable ABI, AFAIK, so what's (probably) actually going to happen is that the same Go 2 compiler will compile both Go 1 and Go 2 packages against the same runtime.


But that would prevent the Go 2 compiler from using the given improvement to the runtime (i.e. moving GC).


The proposal document seems to be quite good.


IMVHO things change, hopefully for the better. Giving notice well in advance is a needed and good practice, but evolution must happen; if someone has trouble keeping up within a reasonable time, that is their problem, not something the Go developers can fix, nor a reason to slow evolution.

We all know how bad evolution is in the commercial world; forced evolution is a necessity. Software is not a product that can last forever untouched.


150 comments, and nobody is mentioning the maximum version thing?

Am I alone in thinking that's a terrible idea, that having module authors around the globe define a maximum version is going to result in masses of libraries pinned needlessly to old versions, and masses of unmaintained but otherwise solid and complete libs unpinned, despite them now needing a maximum version pin?


Remember that they are only pinned to an old language version. They will still work fine with new releases of Go; they will just be built with the old language semantics. So what's the harm?

I agree that unmaintained libraries that don't adapt to modules could be a problem. We'll have to see what happens.


Yes, --lang= or --std= would do the job. Even crosslinking should be easy since object code is the same.


This is how C/C++ does it, for example, and that design is a mistake. The version information should be attached to the source code, not the compiler invocation.

It is pointless to compile source code of a certain version (e.g. C++17) with the wrong options switch.


Don't be Perl 6.



Uh oh, they're still proposing to remove features. This is a terrible idea; it crippled both the Perl and Python transitions. If removing a feature means existing programs no longer compile, it's a new language. The automated backwards compatibility proposed might help, although it's not clear whether you can mix files and libraries from different versions of the language.


There's a distinct difference.

python3 code cannot import python2 libraries. That split the ecosystem.

Go intends for a go2 library to be able to import and use a go1 library.

This means there isn't a split in the ecosystem at any point in time.

It's fine to remove a feature as long as old programs still build because you must opt in to the removal via setting "version=go2" or whatever in go.mod.

This is distinctly different from the Perl and Python transitions because the new compiler can still build old code, including with removed features, until you opt into the new language version. And even once you do opt in, all your dependencies are fine whether they have opted in or not.
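A minimal sketch of how that opt-in might look, assuming the go.mod mechanism the proposal describes (the exact directive spelling for Go 2 is hypothetical; today's toolchain uses a `go 1.x` line in the same place):

```go
// go.mod for a module opting into the newer language version.
// Dependencies keep whatever older language version their own go.mod
// files declare, yet all are built by the same toolchain.
module example.com/mylib

go 2
```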


You can actually use Perl 5 and even Python code in Perl 6.


They do propose that, but with caveats. They mention removing string(i) in, say, Go 1.20. If a module notes that the maximum version of Go it works with is 1.19 (because it uses that feature), then the 1.20 compiler can compile that module in Go 1.19 language mode. Thus you can use 1.20 features in your own module and still depend on the module that uses the deprecated syntax, because the Go compiler can still operate on that old code.

This is different from python 3 because python 3 cannot run python 2 code, even if told that some library is python 2. That is where the migration pain came from.


I'm unconvinced by the max-version part, I can't see how it doesn't require either

A) being overly restrictive, and declaring the current released version as your max, breaking for a short time on each new lang version

B) being psychic, and knowing when (without the aid of semver) your package might break

An unmaintained package will not be updated with the max flag when the breakage occurs.

Any package always declaring the current version will have to be updated for every new release.

Both seem like a lot of admin.

I know using unmaintained packages isn't ideal in the first place, but it happens, and packages exist that are pretty much "finished" and very light on bugs, having been stable and heavily exercised for some time before losing their maintainers.


An unmaintained package doesn't need new features from then-future versions of Go, so it's fine if you declare the current version as your max. It doesn't break in the new version, it just doesn't have access to features introduced after the max version.


Won't you then lose out on performance improvements that don't break your code?


As outlined in the proposal, you wouldn't miss out on performance improvements since it would still be the same underlying compiler and the version just controls the syntax rules. (And even if that were the case, it's not a very bad worst-case scenario for an abandoned project.)


version of the language spec != version of the compiler

I.e. future compilers can understand previous version of the language spec.


The breakage you describe in A) shouldn't happen according to the proposal. The compiler for toolchain version X+1 should still be able to compile code that is language version X.


What did Python remove?


This is my attempt to be sympathetic to the GP's point of view, perhaps they have a different example in mind.

One example comes from the new-style / old-style classes distinction and the method resolution order when considering multiple inheritance.

Python 2.3 introduced "new-style" classes which inherit from `object` and use the C3 resolution algorithm [1]. For backwards compatibility reasons, "old-style" classes without an explicit `object` base class still used the old method resolution semantics.

Python 3 does away with "old-style" classes entirely and all classes must use the "new-style" semantics.

There's presumably a ton of code that doesn't specify `object` as an explicit base class, and thus may have subtly broken behaviour under "new-style". Given Python 3's inability / unwillingness to simulate the old behaviour, I'm unaware of any tool that can rewrite Python 2 to 3 in a bulletproof fashion [2].

This is not the same kind of breakage that other people mention, like `print` being a function or `reduce` being moved to `functools`. Those are simply backwards incompatible "movements", not outright removals, and thus can be rewritten by automated tools.

[1] https://www.python.org/download/releases/2.3/mro/ [2] https://portingguide.readthedocs.io/en/latest/classes.html#n...


See the release notes of Python 3.0:

https://docs.python.org/3.0/whatsnew/3.0.html#removed-syntax

The removals were actually an easier pill to swallow than the numerous subtle changes in behavior introduced by the big, deep switch to Unicode. I'm over it now, but that was painful!


Maybe not removed, but they certainly did make breaking changes to the language. Here are some of the most obvious:

python 2:

  print "foo"   # foo
  3 / 5         # 0
  apply         # <built-in function apply>
python 3:

  print "foo"  # SyntaxError: Missing parentheses in call to 'print'
  3 / 5        # 0.6
  apply        # NameError: name 'apply' is not defined


But at the same time, f-strings are soooo much better than the old way, or even .format().


These are precisely the kind of changes one would make to a language if you wanted to cause as much pain as possible. (I say this as someone who has been paid to write a syntax-driven code rewriting tool to go from one language version to another.)


For one, it removed the ability to do % formatting with bytes objects.

When working with a wire protocol like ssh, you need to work with sequences of bytes, not Unicode strings, and porting code that assembled them using % formatting was a huge pain; I tried and failed to port paramiko to python 3.


You might already be aware, but %-formatting with bytes is back as of 3.5.


Many things. The linked document notes that the print statement of Python 2 was removed from Python 3, harming migration.


The print statement to function didn't hurt migration. Nor did any of the other actual changes. What hurt the migration is a small but loud group of Python developers who think everything in the language was just fine, that not only was Python 3 unnecessary, it was actively bad and signaled a bunch of know-it-all theoretical type people from other languages coming in and taking over their own little corner of the wild west and imposing law and order where it wasn't asked for.

Actually, even those people aren't what hobbled the Python migration. What made it such a mess was that the core team didn't tell that crowd to fuck right off and get with the program or get left behind.

And even now with 2.7 nearing end of life, those same grumpy curmudgeons hang around web forums and reddit and blogs here and there, giving new people bad advice, complaining about how much better the old days were before unicode and async and types. I even saw someone griping about those new-fangled decorators and how monkeypatching was better.

The thing that made a mess out of the transition was the core team trying too hard to coax people who didn't want to move into coming along with them with a small bucket of carrots and no sticks. They tried to pull the bandaid off slowly and it took too long and still hurt like hell.


The Python 3 transition has taken 10+ years and millions of developer-hours. It is perfectly reasonable to question whether the benefits have been worth the costs, especially when compared to other platforms like JavaScript that have made much greater advancements during that time.


I couldn't agree more. There was a huge amount of wailing that, for instance, Python 3 suddenly made devs care about the difference between text and bytes, when Python 2 had let them happily write broken code without complaining.

Yes, you have to write correct code now. Yes, this requires changing some stuff. But yes, this is better for everyone in the (not so) long run.


Always make new mistakes :-)


I'm surprised there's no reference to rust epochs [0]. Sure, they haven't seen use in the wild to learn lessons from in that regard, but they're a very well thought out solution to an effectively identical problem.

They both ended up with very similar results (define a version in cargo.toml/go.mod, use that version of the language to build), but there's also more to it than that, and the rust proposal could help guide discussion of other possible issues too.

[0]: https://github.com/rust-lang/rfcs/pull/2052


(They’re called “editions” now, by the way https://blog.rust-lang.org/2018/07/27/what-is-rust-2018.html )


I suspect rust isn't referenced because these things play out on the scale of years, and rust is only 1 year in, so it'd probably be premature to judge. Sure, it's a good source of ideas, but it's too soon to really have enough data to judge the results. These other languages have over a decade behind each of them.


Rust has been stable for three years.


I'm not sure it's wise to bring Rust into these discussions; Rust is the language that needs nightly for a lot of recent libraries to work... so much for stability / backward compatibility.


I don't think the fact that Rust is introducing features at a brisk pace, and the fact that people start writing libraries around them before they're released, have anything to do with whether Rust's backward and forward compatibility policies are good or bad.

It seems unlikely, if Rust's policies do a good job of maintaining guarantees in a chaotic environment like that, that they would fail in a more placid environment.


I'm drawing a parallel between editions and Go's planned "go2" compatibility.

Whatever differences Rust has in its versioning, the situations are similar enough that I assert Rust's choices are relevant.

The fact that Rust itself may or may not have stability or backwards compatibility (I mean, I think it objectively does) isn't actually as important as the fact that the rust team wants to have those things and has thought hard about basically the same problem Go is tackling now, and has already taken the effort to write down its thoughts and conclusions.

If you want to claim I shouldn't bring up Rust's "editions" work, I'd rather you talk about the "editions" RFC rather than Rust's slightly different versioning scheme.


Which libraries are you talking about? Empirically, the majority of Rust users use stable. Additionally, the nightly/stable split exists to serve exactly this problem; if you need stability, you use stable, and the need for nightly drops each and every release.


I think the most popular web framework is rocket.rs, and it needs nightly. That's the latest example I have; honestly, when I looked a year ago, other libraries were using nightly too.


Rocket does, but there are other quite popular frameworks that do not.

A year is a long time!

dx87 20 days ago [flagged]

Why would you tell people that the recent libraries require nightly, when you admit that you haven't even looked at the library ecosystem in the past year? I honestly don't get the motivation that would make you want to come here just to post lies about a programming language.


Crossing into flamewar will get you banned here. Please post civilly and substantively, or not at all.

https://news.ycombinator.com/newsguidelines.html


Hey hey, calm down a little.

I use rust a lot, and it’s been heavily geared towards nightly for a long time.

The whole Rust nightly/beta/stable scheme hasn't worked well, because (AFAIK) beta usage is tiny and people tend to jump to either nightly or stable. It's only very recently that any effort has been made to change this, for the new edition.

...so, you know, the comment may not be entirely accurate, sure, but it’s our fault people have come away from rust feeling that way.

The OP is not at fault here, and definitely not, I think, willfully lying.

Take a deep breath; there’s no need for this sort of attitude, and frankly it reflects badly on the rest of us (rust users).


Please don't post personal swipes like "calm down a little" and "take a deep breath", even if someone else is wrong or misbehaving.

https://news.ycombinator.com/newsguidelines.html


Rust wouldn't be a better language if nightly were instantly made stable. Instead, we'd have a ton of half-baked features that we'd be supporting for eternity.

Rocket requiring nightly is unfortunate, and I wish it didn't, but aside from Rocket stable has been a good forcing function to get crate authors to not add nightly dependencies.

Anyway, from what I see, actix seems to get more buzz than Rocket these days, precisely because the former works on stable.


True, I’m not arguing that; but, for example, two days ago there was a big thread about using NLL to solve a problem.

It’s not like we’re past the nightly being the first toy people reach for to solve problems; that is still very much a thing.


You're getting pretty far away from the original claim though. Which widely used crates require nightly to take advantage of nll?

Some crates will always require nightly because there will always be some people that want to take advantage of the latest features. That's a good thing to a degree, because it encourages experimentation with said features before stabilizing them. This is completely different from implying that nightly Rust is required by a lot of popular crates, which just isn't true. Most people are productively using stable Rust.


wasn’t the entire wasm ecosystem on beta until 3 days ago?

/shrug

The OPs point about some/many/[??? numbers of] people not using stable isn’t that unreasonable imo...


> The whole rust nightly/beta/stable hasnt worked well, because (afaik) the beta usage is tiny and people tend to jump on nightly or stable. It’s only very recently any effort has been made to change this for the new edition.

My impression was that most people feel the Beta toolchain is doing its job just fine, because most projects that use CI include a Beta test run. I don't think the plan was ever to have people using Beta on the command line as their daily driver.


The soundness regressions in the last release were indicative of people not using the beta.

Notice especially the last month or so of actively asking people to try the beta.

If it weren't a problem, that wouldn't be happening.


Can you give an example of soundness regressions in the previous release? Neither the 1.29.1 release nor 1.29.2 are indicative of people not using the beta; the former concerned a potential security vulnerability in a stdlib API that had existed for several releases, and the latter concerned a runtime heisenbug that could neither be found by merely compiling with beta nor even reliably by running the code.

> Notice specially the last month or so of actively asking people to try the beta.

This is because for the last month the beta has represented the first release candidate for Rust 2018, which is more important than normal releases in all three of implementation scope, backwards compatibility concerns, and marketing potential.


Well, one of the major use cases for the beta channel is for the Rust teams to fix regressions before they hit stable.


> I'm not sure it's wise to talk about Rust in those discussions, Rust is the language that needs nightly for a lot of recent libraries to work

No, it doesn't.


[flagged]


There are proposals to add enums to Go [1]. Enums are a specific instance of sum types (aka discriminated unions), and while having simple typed enums would be nice, Go would benefit, in my opinion, from having true sum types, and this has also been proposed [2].

Unfortunately, due to Go's current semantics (e.g. zero values) and memory layout, adding sum types is not as straightforward as one might hope [3]. However, given that Go2 is allowed to break backwards compatibility, something might come out of the discussion.

[1] https://github.com/golang/go/issues/19814

[2] https://github.com/golang/go/issues/19412

[3] https://www.reddit.com/r/golang/comments/46bd5h/ama_we_are_t...


Explanation: your comment was unnecessarily hostile itself ("unprecedentedly stupid level"), and folks are following the time-honored tradition of downvoting and moving along.

It really looks like you’re trying to pick a fight. It’s cool if you want enums. I miss them too. That’s not the problem here.


The enum thing was just an example, that wasn't even the main point of my comment


This has in fact nothing to do with the story itself.


It would be fun to poll Go users on those points to see whether they like the changes, for example via GitHub login (restricted to users of a Go project that existed for a while before the poll started), or by putting the questions in the yearly Stack Overflow survey.


If you're frustrated by the lack of a strong type system in Go, then check out Rust. Rust has type parameters and type constraints that let you write generic code. The authors have also been super attentive to the good parts of Go. Rust has great tooling through Cargo, rustfmt for formatting, and rls for language server support.

Rust also adds real sum types and a tuple type. Multi-value returns are just tuples, they aren't a special case of the semantics. Result types let you avoid writing if err != nil everywhere, but we still believe in the mantra of not panicking for things that are not truly exceptional.

Rust is also closer to C++'s performance, and it's a strongly community-driven project. We also have great concurrency primitives. We have channels, but we also have futures, which can be scheduled on green threads in a CSP style just like Go, or run behind the Actor model, using async/await syntax with polling... there's a lot more flexibility there to align the language to your business needs.
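To make the generics point concrete, here's a toy sketch (the function name is my own illustration) of a trait-bounded generic function, checked at compile time with no `interface{}`-style casting:

```rust
use std::fmt::Display;

// Works for any type implementing Display; the bound is verified by the
// compiler, and a specialized copy is generated per concrete type.
fn join_all<T: Display>(items: &[T], sep: &str) -> String {
    items
        .iter()
        .map(|x| x.to_string())
        .collect::<Vec<_>>()
        .join(sep)
}
```

Calling `join_all(&[1, 2, 3], ", ")` yields `"1, 2, 3"`, and the same function works unchanged for strings, floats, or any user type with a `Display` impl.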


You took this thread on an offtopic tangent. When people do that by baiting the discussion into being about some hot generic controversy (Rust vs. Go! type system wars!), thread quality suffers. The generic baity discussions are all the same, and usually become flamey too.

Other users have started to complain about you doing this repeatedly, so we need to ask you to stop. No more of this, please, regardless of how much you like Rust.


So I am a C++ dev by day and I write some Go and Rust for side projects or smaller projects at work. Go is SOOOOOO much simpler to write than Rust (unless you get into crazy threading/data-race stuff). I like Rust (a lot), but the mental load is a lot more to take on than with Go.


An interesting comparison between experiences is that people describe Rust as having a lot of mental load at first, but eventually as having way less, since the compiler does so much for you.

On the topic of the original thread, I wonder why Rust wasn’t included; maybe it’s because our first release that’s achieving similar goals is happening six weeks from now, and they wanted to see how things played out in practice, not in theory.


Yeah, I agree: as I learned Rust I got used to it and slowly became more productive. But I am still nowhere near as fast at accomplishing things with it as I am with modern C++ or Go or Swift.


> people describe Rust as having a lot of mental load at first, but eventually as having way less.

Well, that's true for most non-trivial stuff one has to learn; it doesn't mean it's always a good idea though.

Just giving a different perspective to beginners.


Sure! You should use whatever tools you prefer. Rust isn’t the best programming language of all times, it has pros and cons like everything else.


> maybe it’s because our first release that’s achieving similar goals is happening six weeks from now

What's happening in the next release? Any backwards-incompatible breaking changes?



Yes, but it's also much more likely to result in a run-time failure. I'm not talking about memory safety here -- just the fact that Rust has (Algebraic Data)/Sum Types[1]. Of course these kinds of types "want" pattern matching, but it's not an absolute requirement.

Having ADTs/Sum types will improve the reliability of your software immensely just by being available. It's truly absurd just how much work they can do: They can replace "null", they can ensure that there are no surprises in a state machine where some state carries over unexpectedly, etc. etc.

[1] Not higher-kinded, but that's probably a distraction for the point I'm making.
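For instance, a toy sketch of the state-machine point (the `Connection` type and its fields are my own illustration): each variant carries only the data valid for that state, so nothing can "carry over unexpectedly", and the compiler forces every case to be handled.

```rust
// Each state owns exactly the data that is meaningful for it.
enum Connection {
    Disconnected,
    Connecting { attempts: u32 },
    Connected { session_id: u64 },
}

fn describe(conn: &Connection) -> String {
    // A non-exhaustive match here would be a compile error, not a
    // surprise at run time.
    match conn {
        Connection::Disconnected => String::from("down"),
        Connection::Connecting { attempts } => format!("retrying (attempt {})", attempts),
        Connection::Connected { session_id } => format!("up, session {}", session_id),
    }
}
```

There is no `session_id` field lying around while disconnected, and no null to check: the invalid states simply can't be represented.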


Yes, I know. I love sum types, and I love trait-oriented programming. And I definitely miss sum types in Go and C++. If C++ had sum types and no header files, that would probably hit my (very subjective) sweet spot.

Tangent: I wish Rust had optional chaining like swift, which I know is a limitation due to Option just being a normal sum type. Having to pattern match every optional is burdensome
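(Though, to be fair, the `?` operator and Option's combinators cover part of this without a full `match`; a sketch, with a made-up helper name:)

```rust
// `?` early-returns None from any function returning Option, so the happy
// path reads almost like Swift's optional chaining.
fn middle_initial(full_name: &str) -> Option<char> {
    let middle = full_name.split_whitespace().nth(1)?; // None if no middle name
    middle.chars().next()
}
```

So `middle_initial("Ada Byron Lovelace")` gives `Some('B')`, while `middle_initial("Ada")` is just `None`, with no explicit pattern match in sight.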


Heh, I was actually alluding to C++ with the pattern matching thing. std::variant<> is actually a bona fide sum type, but using it is horrendous because of the lack of pattern matching. You can use the "cppreference overloaded"[1] trick to sort of do basic pattern matching on the outer constructor, but it's still not particularly well supported by compilers.

[1] I'm using this as a proxy name for a trick I saw on cppreference where you can (via template tricks, why not?) write code very similar to pattern matching, but it's ultimately quite simplistic and brittle.


Ahhh, OK, I gotcha now. To be honest I have not used std::variant yet, as I am stuck on C++11 at work. I agree, though: at some point the lack of pattern matching like you have in Rust becomes a limitation.


Agreed. You could encode valid state transitions with phantom types, for example. Go's weaker type system means it has less power to encode what is and is not correct program behavior.
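A minimal sketch of that phantom-type pattern (the `Door`/`Open`/`Closed` names are my own toy example): the state lives only in the type parameter, so an invalid transition is a compile error rather than a run-time bug.

```rust
use std::marker::PhantomData;

// Zero-sized marker types for the states.
struct Closed;
struct Open;

// The state parameter exists only at compile time.
struct Door<State> {
    _state: PhantomData<State>,
}

impl Door<Closed> {
    fn new() -> Door<Closed> {
        Door { _state: PhantomData }
    }
    // Consumes the closed door; you can't keep using the old value.
    fn open(self) -> Door<Open> {
        Door { _state: PhantomData }
    }
    fn label(&self) -> &'static str {
        "closed"
    }
}

impl Door<Open> {
    fn close(self) -> Door<Closed> {
        Door { _state: PhantomData }
    }
    fn label(&self) -> &'static str {
        "open"
    }
}
```

Calling `open()` twice in a row simply doesn't type-check: `open` only exists on `Door<Closed>`, and the marker types occupy zero bytes at run time.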


I've found that Go works well for small code bases where duplication isn't too much of a worry. It makes sense. The authors of Go are the same people who advocate for small, focused tools: the unix philosophy.

When you grow to a certain size though, it becomes very messy. Go has very little facility for abstraction despite being a higher level language with a GC and a substantial runtime. I also personally find the error handling frustrating. Most of the time all you want to do with an error is send it to the caller, who has more context on how to handle it.

I still write Go alongside Rust, but I use it as a scripting language, for all the things I used to use Python for. Rust just has the tools I need to build abstractions, and lets me avoid continually rewriting data structures.


Yes, there is much more mental load paid upfront during development, but once you get the code working and satisfy the borrow checker, you have significantly stronger runtime guarantees about your application in terms of safe and proper concurrency.

This is the trade off.


I understand that. I posted my response because the OC was presenting Rust as a replacement for Go, but they are very different languages with different tradeoffs.


Actually, the Go team even bills Go as a systems language. I think if you're going to consider Go, you should also consider what you could get with Rust.

Sure, the upfront investment in Rust is higher (Go is almost too small), but the upfront cost of Rust is amortized via the hours you save with all the abstractions and performance you get right out of the box.


They’ve moved away from “systems” as a moniker, and Rust is slowly doing so too.

It’s just too confusing of a term these days, and nobody knows what it actually means, so it’s not useful.


I would say if one can write OSes, device drivers, debuggers, compilers, target bare metal deployments on language X, then it is a systems language, regardless of what label people put on it.


The problem with that is it's a narrowing of the 'historical' use of the term and just confuses things further. The Go (and now Rust, it seems) people are right to move away from it.


Even when Go started and used the term I don't think this is what they meant by it.


Well, given its use in Fuchsia's TCP/IP stack, file system management tooling, and the Android GPU debugger, not everyone at Google thinks like that.

Likewise those universities doing OS research in Go, submitting their findings to USENIX.


> you now have significantly stronger runtime guarantees about your application in terms of safe and proper concurrency.

I agree, but I think many people assume that this is the only benefit achieved. The presence of the borrow checker and rigorous memory safety requirements allows the compiler to take substantially more advantageous paths in allocation/deallocation compared to e.g. C. The benefit attained is similar to the benefit of switching from C to very well memory-managed C++, except it comes by default.

This is a significant advantage even in code that is not concurrent at all. I feel like it's a shortcoming of Rust's marketing, if anything, that the safety restrictions are often pitched only in context of thread-safety rather than memory-usage-optimization.


Yes, I agree with those points as well. Because Rust is now able to make such guarantees statically, this additional knowledge allows for some great optimizations under the hood.


Is there a reason this guy keeps posting about Rust, especially on articles about other programming languages (ex: also a recent Elixir post)? Overall across posts, I sometimes agree with this guy, but it's getting really obnoxious that people continue to leave such replies every language post.

I really like Rust and use it for major projects, but I fail to see how it's much on topic. I agree with other comments that it gets compared to Go all the time, often without prompting or any relationship to the articles. If you want to do that, why not write an article or a separate post? Hijacking the comments and/or shilling Rust is getting old.

I have to say that although I love Rust and somewhat despise Go, if it works for you, use it. Go is good at a lot of things - identify what they are and use it because it solves your problem. If another language does that better, use that instead. If you don't know what those things are, you probably need to spend some time not only reading, but writing actual code in that language.

Don't pick one language to solve all your problems and don't look for problems because you like a language or some specific features of a language. Consider the entire scope of your issues, both technical and non-technical. In that sense, I can see why a lot of people choose Go and it is just as valid as any other language choice.


I'm sure this was well-intentioned but it had the effect of turning half the comments on an otherwise interesting thread into a Go vs. Rust language battle, which is not the topic of the story we're commenting on.


I'm commenting on the pain points specifically being addressed in Go v2, so I feel it's appropriate to talk about the alternatives here. I also tried to focus on specifics.


This document is in fact not about the features in Go 2, but rather about the migration path between the two languages, whose features are (in the piece) an abstraction. Please don't start language wars.


Why do people feel the need to promote Rust on many Go topics? Also, can someone explain the "lack" of a strong type system in Go? I don't use interface{}, so where is the problem?


> Why do people feel the need to promote Rust on many Go topics?

It's more fun for some to do so; for example, the Go 2 proposal might be too boring to discuss.


I think since there are so many languages being rapidly developed these days (including c++), comparing language features is a good way to know how things are and aren't used in practice.

Rust just happens to be extremely popular right now, along with Python and Go.


I like Rust a lot, but let's be realistic here: "extremely popular" depends very much on which bubble one moves in.


That's true. I always assume HN is reflective of the community at large, but at work it's very different.


One limitation is that although "int" can be sub-typed, such a variable cannot directly index a slice.


I’m confused. First, there’s no notion of sub-typing in Go. Second, the closest thing I can think of what you mean does work: https://play.golang.org/p/IE9sg4CwCd_C


Rust doesn't have proper integer subtypes either. Ada does.


Aren’t Ada’s checked at runtime? You can do that in Rust.


Only if it cannot be proven at compile time.


Interesting. You wouldn’t happen to have a link to docs, would you?


The docs are the Ada language standard.

http://www.ada-auth.org/standards/index.html

Like with other languages, each vendor is free to do their best for optimizations, while keeping language semantics.

Besides AdaCore, there are a couple of others, but at enterprise budget levels.

http://www.adaic.org/ada-resources/pro-tools-services/


Thanks


Two very important things for me that Go offers are the garbage collection and the stackful co-routines (i.e. go-routines). Makes code so much easier to read and write.

If they would give me generics and ADT, I don't think I would ever use any other language.


Another quite fast language with GC (actually you can choose between several), generics, and ADTs (also pattern matching, via libraries) is Nim. People also use it for concurrent code, but I don't have much experience in that aspect.


Rust has the borrow checker, which gives you the safety of garbage collection coupled with the power of manual memory management. As of Rust 2018, Rust also has async and await, and futures can be cooperatively scheduled on a green thread model (but you're not limited to that model).


I personally prefer having a good GC because I need the speed of manual management in only a minuscule percentage of my code and it's much simpler to use a GC instead of constantly fiddling around with manual memory management, let alone the borrow checker and lifetimes. It's a matter of productivity and fun.


In addition, it's pretty easy to avoid heap allocations in Go, so you can do some basic (but effective) management without breaking a sweat.


Rust is a great language, and has its uses, but it's not a cure all. IMHO stackful co-routines are superior to stackless (i.e. async-await). Also if I am not mistaken the futures library is still in the nursery and subject to change.

When I write code that is going to run on the customer’s device, I choose Rust for the guarantees it offers me, (I know it will not crash because of my mistake). For server side code on the other hand, something easy going as Go is much preferred.


Futures are subject to some more change, though it’s getting close!

FWIW, async/await does produce a “stack full” coroutine: you get a single, exactly sized stack for each task, every time. This is made possible by coroutines being fundamentally stackless, but in the end, tasks still have stacks.


> FWIW, async/await does produce a “stack full” coroutine: you get a single, exactly sized stack for each task, every time. This is made possible by coroutines being fundamentally stackless, but in the end, tasks still have stacks.

I see, thanks for the info. I wasn't talking so much about the implementation as about the program control flow differences between the two models, go-routines vs async/await.


Cool cool. Both have advantages and drawbacks; I think both languages have made the right choice given their constraints.


While it gives the safety of a GC, it still doesn't give the ergonomics, especially when doing GUIs.


Rust and Go are very different languages for very different use-cases and with very different goals at every level of design.

Rust is a C replacement, Go is a replacement for everything else ;)


But simple reality checks reveal that Go is an evolution of C, while Rust grew out of frustration with C++.


Rust is not a good replacement for C++, though. It doesn't even have inheritance, let alone multiple inheritance and/or multiple dispatch.

It's kind of in-between C and C++ in terms of capabilities, and neither a subset nor a superset of Ada's features in terms of safety. Some of Rust's memory safety features are only now being added to Ada and not yet in the forthcoming standard, whereas Ada has many features that Rust still lacks (e.g. integer range subtypes, OOP with inheritance). Rust is less secure than Ada/Spark, though, and clearly not intended for high integrity systems.

It seems that the niche Rust is aiming for is "safe general systems programming for people who primarily use C and C++ but have never used much Ada or Haskell", and in that area it's quite successful.

Both Rust and Go are odd, because they have fewer features than C++ (plus added safety) and attempt to sell a lack of features as an advantage, whereas in reality no one forces you to use a feature, and their developers are merely pushing their own personal agenda about "what's the right thing to do". I personally don't like C++, but one of its benefits is that it leaves its users the choice of what's the right thing to do and offers as many zero-cost abstractions as possible. Rust and Go are far more patronizing.


What do you mean, Go doesn't have a strong type system? Are you referring to the fact that it allows interface{}?


I think it's more that it requires it in many cases, due to the lack of generics.


I always find workarounds or just write more code. It feels annoying sometimes, but when reading code I feel blessed that generics are not supported.


Reading redundant code is always more work than letting the computer handle those low-payoff details.

And nobody's going to provide their data structures specialized to every type you could possibly want. Case in point: container/heap from the standard library takes and returns interface{}.


Oh no. Believe me, reading code full of templates is not a pleasure. There is a reason Go is so pleasant to read.


C++ templates are hardly the best example of generics out there, and certainly not what you should be looking at from a Go perspective, seeing how Go is a higher-level language that doesn't concern itself with zero-cost abstractions anywhere near as much as C++ does. Look at C# or D generics instead.


Wow, that is a long piece of text. Can someone TLDR the actual conclusion from out of that verbosity?



