I'm a C++ developer, and Rust looks very exciting to me. Maybe I'm optimistic, but Rust really looks like it can be a serious C++ contender in the long run.
Talking with other C++11/C++14 users here, the general feeling is that, yes, C++ is a good language because of the performance you can get out of it and some of the nice features that have been added, but the language carries around a lot of cruft that is unavoidable due to its age and maturity.
C++ has always been an expert's language. These days it's the expert's expert language++. There is no shortage of landmines and gotchas in the C++ landscape.
Modern C++ is a new language, and it really does help, but for the first time there's another language that can actually fill those same shoes.
A really big deal that isn't really addressed by golang is the fact that you can produce shared libraries with rust, and call them from existing C/C++ programs through a C interface. That's a really big deal in places with a lot of legacy code.
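To make the shared-library point concrete, here's a minimal sketch of what the Rust side can look like (the function name `mylib_add` is made up for illustration; building as a shared library requires `crate-type = ["cdylib"]` in Cargo.toml):

```rust
// Exposing a Rust function with a C-compatible ABI. Once compiled as a
// cdylib, the resulting .so/.dll can be linked into existing C/C++ programs,
// which would declare it as: int32_t mylib_add(int32_t, int32_t);
#[no_mangle]
pub extern "C" fn mylib_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Called from Rust here just to show it behaves like a plain function.
    println!("{}", mylib_add(2, 3));
}
```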
Another big deal (from the scientific computing POV) is the lack of garbage collection. Yes, GC is helpful, but replacing C++ means squeezing every last bit of performance out of your hardware. Rust does not sacrifice in this regard.
I really hope that Rust has a bright future. It's almost like "C++ done right", but I would stay away from such a label since someone might Linus you and say, "there's no way to do C++ right".
I also view golang as the successor to Python. It's faster, modern, addresses concurrency, and makes things easier for developers. It takes a strongly opinionated stance, and that's exactly the right choice for many problem domains.
Python, unfortunately, is dead to me. The GIL is a deal-breaker, and even though we have lots of Python code in production, I can definitely see a future where Python is phased out in favor of something like golang.
Basically, I'm bullish on Rust + Golang, neutral on C++, and slightly bearish on Python/Ruby-like languages that do not address the concurrency problem.
I think C++ has a very bright future - due to two driving forces:
1) The C++ Standards committee has been doing a very, very good job. Aside from the big one (compiler enforced memory safety), some of the best Rust features are on their way into C++. For example:
- Traits: C++ concepts are on their way - there is even a working implementation in a gcc branch.
- Iterators: See Eric Niebler's range proposal and accompanying range-v3 library.
But there are a few things in Rust that would be very difficult if not impossible to ever add to C++:
The borrow checker - the Rust type system is much more powerful in lots of ways, of which the ability to check and enforce ownership of objects is just one.
Macros - C++ pre-processor macros aren't in the same league as a proper syntax tree based macro system like Rust (and Lisps etc.).
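A small sketch of the difference (the macro name is made up): Rust's `macro_rules!` operates on parsed expressions rather than raw text, so it avoids the classic double-evaluation and precedence traps of a C `#define MAX(a,b) ((a)>(b)?(a):(b))`:

```rust
// A recursive macro over expressions. Each argument is captured as a
// syntax-tree node ($x:expr), bound once to a local, and never re-expanded
// as text the way a C preprocessor macro would be.
macro_rules! max_of {
    ($x:expr) => { $x };
    ($x:expr, $($rest:expr),+) => {
        { let a = $x; let b = max_of!($($rest),+); if a > b { a } else { b } }
    };
}

fn main() {
    // `1 + 1` is evaluated exactly once, with correct precedence.
    println!("{}", max_of!(3, 1 + 1, 7, 5));
}
```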
Another thing that you've left off your list is modules, which has been proposed for C++. My suspicion is that the need to support the legacy is always going to make those features not quite as good as their equivalent in a language designed from fresh.
> Traits: C++ concepts are on their way - there is even a working implementation in a gcc branch.
Rust traits and C++ concepts are not the same thing at all. C++ concepts are sort of like template parameters with better (not complete) type checking. Rust traits are closer to interface declarations. Though it's really hard to do a description of either justice in an HN comment.
The rest of your bullet points are fair, if a bit optimistic. I don't see all the features you listed making it into C++ 2020. But I'm hopeful that many of them will be, especially sane libraries for extremely common problems like ASIO.
The closest comparison that comes to mind for Rust traits is Haskell type classes, except for some (important!) implementation details like Rust's static monomorphization. You even get something similar to existential types with trait objects.
Describing what they are may be too hard for a HN comment, but what they achieve is easier. Traits are the way Rust supports both dynamic dispatch and static dispatch in polymorphic functions.
C++ supports both kinds of dispatch too, but the means to achieve it are not the same across all scenarios. If you want to go from one to the other, you'll probably have to rewrite your code or create wrapper classes in C++. In Rust, you accept a &trait instead of constraining a generic type.
C++ handles static dispatch through implicit template interfaces today, in the future by concept maps (but you can implement an ersatz version of it today, described in a blog post I wrote on this very topic[1]). Dynamic dispatch is always through your class's vtable.
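As a rough sketch of the Rust side of this (trait and type names made up), the same trait definition serves both dispatch styles, and switching between them is a change at the call boundary rather than a rewrite:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

// Static dispatch: monomorphized per concrete type, like a C++ template.
fn area_static<T: Shape>(s: &T) -> f64 { s.area() }

// Dynamic dispatch: one compiled function, calling through a vtable.
// (In 2015-era Rust this was written `&Shape` rather than `&dyn Shape`.)
fn area_dynamic(s: &dyn Shape) -> f64 { s.area() }

fn main() {
    let sq = Square(3.0);
    println!("{} {}", area_static(&sq), area_dynamic(&sq));
}
```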
I don't think adding features helps much, it just continues to raise the barrier to entry; C++ needs a means of "banning" "unsafe" techniques; e.g. an option you can turn on that causes C-style casts or pointer arithmetic to be a compiler error.
I'm not sure that's actually true in the case of ThreadSanitizer - a well-designed data race detector can detect race conditions that didn't actually get triggered in that run. In any case, usually people try and exercise those rare code paths by running a fuzzer against sanitizer-instrumented code.
ThreadSanitizer can also only catch bugs that can occur in the code paths that are exercised – and running a fuzzer can certainly show the presence of bugs, but I'd be wary if someone claimed it can show the absence of them.
While it's true that C++ still has lots of great things, the features aren't always directly comparable, due to C++'s existing semantics and commitment to backwards compatibility.
For example, if you move a unique_ptr, it becomes null, and using it will cause a segfault. In Rust, moving a Box<T> works just fine, because using it after the move is a compile-time error.
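A minimal sketch of the Rust side of that contrast: the move is just a pointer copy, and there is no runtime null state because the compiler statically forbids touching the moved-from binding.

```rust
fn main() {
    let a = Box::new(42);
    let b = a; // ownership moves; no null state is left behind
    println!("{}", *b);
    // println!("{}", *a); // uncommenting this is a compile-time error:
    //                     // "use of moved value: `a`"
}
```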
From a blog post by Rob Pike, one of the designers of Go:
"I was asked a few weeks ago, "What was the biggest surprise you encountered rolling out Go?" I knew the answer instantly: Although we expected C++ programmers to see Go as an alternative, instead most Go programmers come from languages like Python and Ruby. Very few come from C++.
We—Ken, Robert and myself—were C++ programmers when we designed a new language to solve the problems that we thought needed to be solved for the kind of software we wrote. It seems almost paradoxical that other C++ programmers don't seem to care."
I don't think it's accurate to portray them as "C++ programmers", as if to imply they were experts or even proficient at it. Plan 9 doesn't even have a C++ compiler. From his Wikipedia bio, Ken Thompson "later refused to work in C++".
I think they were surprised that C++ programmers didn't like Go because Pike and Thompson weren't actually C++ programmers. Griesemer however appears to know C++ well.
From the point of view of programming language/compiler theory, Go is of course quite far from any dynamically typed VM-based language.
But from the point of view of where it would be a good fit in practice, the majority of large Python/Ruby projects, like those that are driving millions of websites right now, would greatly benefit from being rewritten in Go, without sacrificing anything important. That cannot be said about the C++ world — hardly anyone who chooses specifically C or C++ for a new project, over everything that is currently available, would consider Go an option.
I would disagree with it being a successor to Python; likewise, I disagree that it is close to C++. It is more like Java 1.0: a simple, fast, and approachable platform for building smaller server-side systems apps.
I just can't imagine anyone writing larger, monolithic apps with it. Whilst people have been doing it with C++ for decades. Then again Go is in its infancy so will be interesting to see which direction it does take.
Have you ever programmed C? Almost everything Go has is somehow related to C. Of Nim, Rust, and Go, it's Go that is the closest programming language to C. That said, Go is the least suitable C replacement of the three.
Yes. And it's less about the language and more about the platform.
There is an ecosystem around C/C++ for supporting large codebases. There isn't anything like it (yet) for Go. And to be honest given that there is a gap in the "market" for a microservices language I can't imagine much being built in the future.
> There is an ecosystem around C/C++ for supporting large codebases. There isn't anything like it (yet) for Go.
Sorry, does not compute. Go was specifically designed for large teams and big codebases.
While I find the new features of modern C++ interesting, the (lack of) tooling is what keeps me away. I simply refuse to write Makefiles or deal with preprocessor nonsense in 2015. "go build" just works. "import github.com/foo/bar" just works. "go test", "go test -bench" just work. Etc.
It's a successor to Python in that many of the non-Google big name early adopters are big Python shops - Cloudflare, Dropbox, etc. My company was almost all Python on the server side and is now almost all Go.
Don't think of "successor" as a similar-but-better language (like I consider C# to be to Java) but in terms of where the community comes from in general. Rob Pike has said how he expected Go to attract C developers and was surprised how it attracted Python-ers in droves.
It has static typing, but on the other hand it's got duck typing and a batteries included standard library. And in practice people tend to move to it from Python rather than C++.
The approach is closer to C++, but it hits an area where many developers find C++ lacks the kind of clean expressiveness many would like (being more low level than is needed) and Python, Ruby, etc., lack the performance and (in most implementations) support for hardware parallelism. The creators were people who, without Go, reached for C++ for those problems, but I think lots of other devs favored Python, etc.
Isn't Go more like the new Java? Python is probably still going to grow and Ruby possibly as well, while Java is disliked fairly heavily in many communities.
I did lots of programming in Java between 1995 and 2004 when the language had no generics. I did lots of programming in C++ prior to 1995 when it barely had generics. Now I do a fair amount of programming in Go and I don't really miss generics.
Well, just have a look at gosearch and see all the generic generators popping up in Go. Most developers want generics, period. By the way, while Java didn't have generics before Java 5, reflection and static polymorphism could make up for them. C++ has a preprocessor, so even without templates you can do generic programming.
Go has ... code generation that isn't even triggered by the Go compiler.
Java programmers don't hate Go, and it isn't classes or generics that make the difference. It is the fact that pretty much every single problem ever experienced has a Java library somewhere.
Java is a complex mess of a platform for a reason.
From my experience it definitely doesn't seem to be the case that Java programmers will hate Go. I personally am a "Java programmer" and would love to switch, although that has more to do with my dislike of Java. There was an internal talk in Amazon about Go and all the speakers who switched from Java loved it.
It is so embedded in core enterprise systems that even if the language were frozen in place it would still be used for the next decade or longer. Even more so given that Java/Scala is the lingua franca of analytics/big data, which is all the rage right now in your Fortune 500s courtesy of Hadoop.
A big part of it is also the huge amount of talent from countries like India and China who have been taught programming via Java 6/7.
Personally I tried to like Rust but I just found it endlessly frustrating to write code in. Yeah C++ has a lot of gotchas but they're gotchas I know. I feel like the Rust compiler is just yelling at me constantly and a lot of patterns from C++ land don't really translate well into Rust code.
> I feel like the Rust compiler is just yelling at me constantly
This is all about your frame of mind. When I started learning Haskell I felt the same way. Now I treat it differently. I ask the compiler questions and it gives me the answers. What is the type of this partially-applied function? Oh, that.
It helps to have an editor and workflow that supports this style of programming. Writing large amounts of code and submitting it in batch to the compiler is just going to give you a headache when you get dozens (or more) errors and warnings. With a tool like emacs flymake/flycheck, you can get continuous feedback and use hotkeys to jump to and quickly fix errors as they occur, rather than letting them pile up.
> a lot of patterns from C++ land don't really translate well into Rust code
Well of course. Rust is a new programming language, not a rehash of C++. To take full advantage of Rust you need to learn its patterns and idioms.
We would love to hear more about your experience, if you care to elaborate. Understanding where people are having trouble helps us direct our documentation efforts at making the language easier to learn for people from various backgrounds. :)
I've written about 10 thousand lines of Rust at this point and hope to write much more. After "suspending frustration" and internalizing the borrow checker and the idioms it just works for me now.
That said, I still get minor jets of frustration when this works:
let y = a.b.c;
But this does not:
let x = a.b;
let y = x.c;
And this works:
let x = foo.bar();
let y = foo.baz(x);
But this does not:
let y = foo.baz(foo.bar());
due to scoping and borrow checker rules. Papercuts, yes, but annoying nonetheless.
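For the curious, here's a runnable sketch of that second class of papercut (the struct and methods are made up). With the 2015-era borrow checker, nesting the calls as `c.push_twice(c.next())` was rejected because `c` was mutably borrowed for the outer call while also needed for the inner one; hoisting the inner call into a temporary was the classic fix (newer compilers accept more of the nested forms):

```rust
struct Counter { n: u32, log: Vec<u32> }

impl Counter {
    fn next(&mut self) -> u32 { self.n += 1; self.n }
    fn push_twice(&mut self, v: u32) { self.log.push(v); self.log.push(v); }
}

fn main() {
    let mut c = Counter { n: 0, log: Vec::new() };
    let v = c.next();  // take the first borrow in its own statement...
    c.push_twice(v);   // ...then start a fresh borrow for the second call
    println!("{:?}", c.log);
}
```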
That said, Rust (effortlessly) caught what could have been a very dangerous bug for me last night (I'm using a Rust wrapper around Lua): when you call "tostring" in Lua it returns you a reference to a string on the Lua stack. If you then pop the Lua stack the reference becomes invalid. If you then use the reference to the string, well... (This is akin to iterator invalidation.)
Rust's borrow checker flagged the error and its error reporting made the problem obvious. This will make up for a very large number of papercuts.
You've captured one of my somewhat late realizations about rust – some seemingly-simple refactors change semantics in ways you might not immediately expect, coming from most other languages. In addition to the ones you mentioned, I'm sometimes frustrated when trying to extract methods, sometimes because of moves and sometimes because of needing to write out a complex return value where the type was previously inferred.
But I have personally found it to be true that (as with any other language) you get a gut sense for the semantics of the language and stop fighting against them over time.
The core problem is that the set of safe programs is considerably larger than the set of programs which pass a tractable, maintainable, and (potentially) provably safe compiler (including borrow and type checkers).
You want to enlarge the latter set, since that lessens programmer frustration and increases expressiveness, but in the context of a non-garbage-collected language that often requires adding complexity to the type and borrow checkers which may in turn end up introducing bugs.
A discussion of various run-ins with the borrow checker and some details of attempts to reduce the friction:
'Both @pcwalton and @zwarich spent some time trying to actually implement this work (with a possible RFC coming hand-in-hand). They ran into some unexpected complexity that means it would take much more work than hoped. I think everyone agrees with you that these limitations are important and can impact the first impression of the language, but it's hard to balance that against backwards-incompatible changes that are already scheduled.'
[added on edit:]
In particular, here is a very nice statement of the issue(s):
> if it's yelling at you, you are probably doing something wrong.
That's not necessarily the case, any static type system must be inherently conservative and therefore must reject some otherwise valid programs. It also takes time to learn Rust's idioms for creating cyclic data structures, and surely we can still take steps to improve our documentation to that regard.
Often, it's not at all obvious what the type inference system is doing. The borrow checker has better error messages than the type checker.
As for cyclic data structures, one would often like to have a primitive which encapsulates a forward owning pointer and a backpointer tied to it, with the backpointer unable to outlive the forward pointer. That's enough to handle most tree-like structures, and doesn't require reference counts or reference-counted weak pointers. It's a compile-time concept checkable by the borrow checker. Something to think about for Rust 2.
Also, for Rust 2, I'd suggest exceptions. The "new error handling model"[1] is excessively verbose. Repackaging low-level errors for the next level up requires considerable coding. I understand the big problem with exceptions is what to do about unsafe code. I'd suggest "just say no". Unsafe blocks cannot raise exceptions. If an exception in safe code unwinds into an "unsafe" block, it becomes a panic. Unsafe code in Rust is usually to interface with non-Rust code. If you have unsafe sections in pure Rust code, you're probably doing it wrong.
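A sketch of the repackaging being described (all type and function names made up): each layer defines its own error type plus a `From` impl so lower-level errors can propagate upward — here written with today's `?` operator, which replaced the `try!` macro of that era:

```rust
use std::fmt;

#[derive(Debug)]
struct ParseError(String);

#[derive(Debug)]
enum ConfigError {
    Parse(ParseError),
    Missing(&'static str),
}

// The boilerplate in question: wiring the low-level error into the
// higher-level one so `?` can convert it automatically.
impl From<ParseError> for ConfigError {
    fn from(e: ParseError) -> ConfigError { ConfigError::Parse(e) }
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Parse(e) => write!(f, "parse error: {}", e.0),
            ConfigError::Missing(k) => write!(f, "missing key: {}", k),
        }
    }
}

fn parse_port(s: &str) -> Result<u16, ParseError> {
    s.parse().map_err(|_| ParseError(format!("bad port: {}", s)))
}

fn load_port(s: &str) -> Result<u16, ConfigError> {
    let port = parse_port(s)?; // ParseError auto-converted via From
    Ok(port)
}

fn main() {
    println!("{:?}", load_port("8080"));
    println!("{}", load_port("eighty").unwrap_err());
}
```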
The secret of exception sanity is a standard exception hierarchy, like Python's. Catching an exception class in Python also gets all its subclasses. "EnvironmentError" covers all I/O errors, network errors, and such, but doesn't cover internal program errors. New user-defined exceptions are added below the most appropriate part of the standard exception hierarchy. So Python usually doesn't have Java's exception hell. If exceptions go into Rust, do it that way. Require that all exception objects descend from some predefined exception object in the standard exception hierarchy.
An additional advantage of a standard exception hierarchy is that it leads to straightforward internationalization. If all the standard exceptions have translations, users get semi-reasonable error messages in their own language. If you have too many string-type error messages in a program, it's hard to handle multiple languages.
Go is starting to get exceptions through the back door. People are abusing the "panic"/"recover" mechanism in Go to get exceptions.[2] Don't go there.
> The secret of exception sanity is a standard exception hierarchy, like Python's. Catching an exception class in Python also gets all its subclasses. "EnvironmentError" covers all I/O errors, network errors, and such, but doesn't cover internal program errors. New user-defined exceptions are added below the most appropriate part of the standard exception hierarchy. So Python usually doesn't have Java's exception hell.
You know, Java also has a standard exception hierarchy. However, whether in Java or Python, it usually makes sense for the exceptions of a library NOT to be part of the standard hierarchy. For instance, SQLAlchemy's exceptions all inherit from SQLAlchemyError. It's the same thing in Java.
> As for cyclic data structures, one would often like to have a primitive which encapsulates a forward owning pointer and a backpointer tied to it, with the backpointer unable to outlive the forward pointer.
One can already do this with `&T` (for the backpointer) and `Box<T>` or `&mut T` (for the forward pointer). The hard part is letting you safely mutate something once the back reference has been added, without any runtime checks and with arbitrary data in the pointer. I don't believe what you've described addresses that issue, but maybe I'm missing something.
My point is that it's possible to extend the Rust model to check this at compile time. The borrow checker has to understand backpointers. The owned object cannot outlive the owner, so a simple backpointer is safe in that sense. But ownership can change, and when it does, the backpointer must also change.
The Rust ownership model can almost do this now. A move-type assignment to the forward pointer which causes a change in ownership would have to do a mutable borrow on the backpointer to get exclusive use, then change the backpointer to the new owner. That prevents a change to the backpointer while it's being used for something. The borrow checker would have to understand that those two pointers are tied together by an invariant. There would need to be syntax for back references, understood by initialization as well as moves.
It's such a common case that it's worth the special handling. It would let you do trees and doubly-linked lists without reference counts.
The Rust ownership model is very powerful. I was fooling around with something like this a decade ago as an improvement to C++, but all I expected to get out of it was the elimination of reference count updates when you passed an object to a function without giving the function permission to delete or keep the object. Rust pushes that model much harder, and it works. It can be pushed even further to make more common use cases compile-time checkable. This would cover more common idioms from C++.
Another possible extension is lifetimes on objects allocated with "new". If you need to add something to a tree, you want the new objects to have the same lifetime as the tree. If you could specify a lifetime in "new", resulting in an object being allocated in an outer scope, that would cover the idiom where you call some "add to data structure" function and it creates objects to be added to the data structure. Currently in Rust, you have to create new objects from within the scope that owns the data structure, then call some function to install them in the data structure.
The idea of creating an object in an outer scope seems strange, but when you create a closure, you're doing just that.
> One can already do this with `&T` (for the backpointer) and `Box<T>` or `&mut T` (for the forward pointer). The hard part is letting you safely mutate something once the back reference has been added, without any runtime checks and with arbitrary data in the pointer.
Can one? A &mut T and &T both pointing to the same memory and existing at the same time causes undefined behavior, and such a pairing could only be constructed via a compiler bug or (wrong) unsafe code.
My understanding is that one of the basic assumptions you must have when choosing Rust over C++ for a new project is that you're looking for long term maintainability over short-term ease of prototyping.
A lot of the yelling done by the compiler comes from real-world experience building gigantic piles of code that need to be maintained for years and years.
And the bit about patterns that don't translate well, that's easily fixed with more experience coding in Rust. C++ programmers know C++ so well because they've been doing it for decades and the language forces you to be on the watch for all the gotchas.
Rust is described as the successor to C++ so often that it's easy to end up assuming that it's a drop in replacement.
It's not. It's a very different language, but designed for the same use cases (mostly). There are many C++ patterns that become pointless, and new patterns you need.
It gets better. It really does. Yeah, you have to learn the lifetime formalism and how to express the stuff you want in it, but once you get your head in the game, learn the Rust patterns, it's no more frustrating than your average statically typed language. Or maybe it's a bit more of a pain than C++, but the payoff is worth it.
Servo's been looking pretty exciting for a while now. I believe they just passed a milestone where it now implements the CSS rules used by 50% of all webpages (according to a survey performed by Google), and its performance is still blowing other layout engines away if preliminary reports are to be believed (as ever, we'll see if that holds up as they implement more of the web). Definitely a project to watch.
Once my two layout PRs (border-collapse and transition) land, we will have at least some support for every standard CSS property used by 50% or more of Web pages, as measured by Chromium's usage counters. That does not mean that the implementations are bug-free or that all features of those properties work; for example, multiple backgrounds, elliptical radii, and many transition properties are not yet supported. But it's an important milestone nonetheless, IMHO.
Looking at the number of outside contributors shows a bright future for Servo even if Mozilla doesn't fund it beyond research. It is much much more friendly to newbies than trying to contribute to gecko or webkit/blink. And webkit started out entirely unfunded as KHTML.
I am excited for Servo. I think it has the potential to act as a real competitor to the current Webkit ubiquity, which, while not necessarily bad, probably isn't ideal.
Agreed. The web has always been healthiest where there are a handful of independent browsers in the mix. We're sliding down the path of a ChromeDesktop/SafariMobile duopoly which isn't good for the web.
I'd really like to see Servo start pressuring the big guys and possibly start winning some of the embedding battles.
Mozilla's experimental rendering engine, written entirely from scratch to avoid a lot of the problems that Gecko and other engines have because of their age. Along with that they're experimenting with parallelized layout and other things to get better efficiency and performance.
Yeah, once it's gotten to where it's stable-ish and can render most websites, I'm really wondering how hard it'd be to start making it work on the desktop as a replacement. I know there's the whole XUL issue, but I'm betting that's solvable since they don't have XUL on mobile/Firefox OS either.
A small electric engine, I believe. In this case probably a play on how we call web renderers "browser engines". Servo is Mozilla's next-gen browser engine.
That's very nice. This is great progress for Servo and Rust. It's a screenshot of a page rendered without any Javascript, though. Here's that page, run through our service which removes all Javascript, Flash, and embedded code, parses the HTML, and re-emits cleaned-up HTML.[1] (This tool is normally used to check how our crawler sees a page.) It looks exactly like the screenshot on Twitter, once you scroll down past the video they're not showing.
Now they have to interface Servo to all the ugly stuff - the JavaScript VM, plug-ins, etc.
Servo is already hooked up to a (very old) version of SpiderMonkey, but it's missing enough DOM features that most pages hit errors. This Steam page hits "ele.canPlayType is not a function", "document.write is not a function", and "link.href is undefined".
Last I heard, Servo does not plan to support plugins such as Flash. I'm not sure if the existence of Shumway changes this.
Fun times ahead if document.write isn't already supported. When I rewrote Gecko's HTML parsing, accommodating document.write was a (or maybe the) dominant design issue.
It's quite different from innerHTML, since document.write inserts source characters to the character stream going into the parser, and there's no guarantee that all elements that get opened get closed. There's even no guarantee that the characters inserted don't constitute a partial tag. So document.write potentially affects the parsing of everything that comes after it.
For this to work, scripts have to appear to block the parser. However, it's desirable to start fetching external resources (images, scripts, etc.) that occur after the script that's blocking the parser. In Firefox, the scripts see the state of the world as if the parser was blocked, but in reality the parser continues in the background and keeps starting fetches for the external resources it finds and keeps building a queue of operations that need to be performed in order to build the DOM according to what was parsed. If the script doesn't call document.write or calls it in a way that closes all the elements that it opens, the operation queue that got built in the background is used. If the document.write is of the bad kind, the work that was done in the background is thrown away and the input stream is rewound. See https://developer.mozilla.org/en-US/docs/Mozilla/Gecko/HTML_... for the details.
For added fun, document.write can write a script that calls document.write.
What a mess. To support a stupid HTML feature, the browser's parser has to be set up like a superscalar CPU, retirement unit and all. Hopefully the discard operation doesn't happen very often.
x86 CPUs have to do something like this if you store into code just ahead of execution. That was an optimization technique marginally useful in the 1980s. Today it's a performance hit. The CPU is happily looking ahead and decoding instructions, when one of the superscalar pipelines has a store into the instruction stream. The retirement unit catches the conflict between an instruction fetch on one stream and a store into the same location in another. The CPU stalls as instructions up to the changed instructions are committed. Then the CPU is flushed and cleared as for a page fault or context switch, and starts from the newly stored instruction.
Only x86 machines do this, for backwards compatibility with the DOS era. The same thing seems to have happened in the browser area.
(Prospective fetching should be disabled on devices where you pay for data traffic. Is it?)
Apparently a lot of older webpages do stuff like this. You could have a table being output by nested for loops for the `<table>`, `<tr>`, and `<td>` opening/closing tags. It's a small step from there to constructing tags bit by bit.
I don't see people doing it in modern websites (doesn't mean there aren't), but we sort of have to support all of the Internet, so...
It is rendered with JS, but we don't support all of the DOM yet so some attribute accesses fail, making most of the JS fail before it does something useful.
This is significant because there's an ongoing effort to implement the Chromium Embedded Framework (CEF) API on top of Servo, and Steam uses a derivative of CEF in its official client application. I'm guessing this is how the idea of using the Steam pages as a Servo testcase came to be, since Steam is at least a hypothetical embedding testcase.
While we are interested in embedding (and there are at least three proof-of-concept-phase browser shells for Servo as a result), there's no motive behind using Steam other than that Jesse's a gamer. :)
I don't care about the specifics of his language, but I think he brings up a very valid point, which I'd summarize as: we should design a language that is both performant AND fun to program in. Fun = type less boilerplate, make refactoring easier.
C++ is not that language, and from what I've read, Rust is not that language either. C++ is stuck with backward compatibility, so it can arguably never achieve those goals. Rust didn't seem to have those goals in mind; it had other goals, so it's not really a fun language either. Too verbose, or so I've read.
For a browser engine, "memory safe" as a design criterion massively outweighs "fun to program in" (although I do think Rust is pretty fun). Most critical browser engine vulnerabilities are memory safety issues, and we would not be doing our jobs if we wanted to have "fun" at the expense of security.
Jonathan Blow's language is not memory safe, and is not trying to be. That may well be fine for his domain, but not for ours.
Like I said, my point is not about the specifics of his language. It's about the ideas he discusses. I agree with him that anytime I'm typing 150 lines of boilerplate on C++ or refactoring things from global function to member function etc, all the busy work required to do that is depressing. I wish more language devs would take stuff like that into account when designing their new language.
I can't. But I can make an educated guess that if there are complaints about Rust being too verbose then it's going in the wrong direction from "fun" as defined by the video I linked to.