Two Weeks of Rust (matusiak.eu)
323 points by omn1 on Jan 12, 2016 | 343 comments



"No segfaults, no uninitialized memory, no coercion bugs, no data races, no null pointers, no header files, no makefiles, no autoconf, no cmake, no gdb. What if all the problems of c/c++ were fixed with one swing of a magic wand? The future is here, people"

It certainly is.

The advantages over C++ seemingly never end. I'd add to the list: rigorous portability, non-broken macros, ADTs and pattern matching, a careful language stability protocol, standard test and doc tooling, flexible namespacing via visibility and modules, the absence of vast lacunae of undefined behaviour, simple numeric promotion rules, meaningful compile-time errors for generic code. Plus things already mentioned in the post like Cargo.

People tend to focus on the safety side of Rust, but as impressive to me is just the... solidity on all these other axes. If you stripped out lifetimes and the borrow checker you would still have an incredibly compelling alternative to C++.


Languages come and go but C++ remains. I wonder if this one will be different.

I'm not trying to be a jerk, just voicing my concerns. Learning a new language and producing code in it is a huge commitment. It had better be around in 10 years. Will it still be as clean and simple, or will it have grown complex? What if Mozilla stops sponsoring it?

C++ is 35 years old. It has stood the test of time. I would not be surprised if it was still popular in 35 years' time, when I retire. There is so much C++ infrastructure code in this world that it's not going anywhere.

In the meantime C++ has evolved. I can't remember the last time I had an uninitialized-memory, null-pointer, or buffer-overflow bug. Those are low-level C problems. If you avoid C-style programming and stick with modern C++ (RAII, value semantics, STL, Boost, etc.) you won't get them.

C++ is changing fast. In the next few years we are getting high-level concurrency and parallelism support in the language and libraries (no more low-level thread problems), modules (no more preprocessor hacks), compile-time reflection (no more manual serialisation), concepts (no more ugly error messages), ranges (no more cumbersome iterators), and a whole lot more.

And finally there is the "C++ Core Guidelines" effort, which aims to be checkable by tools. So you get a warning when you are relying on undefined behaviour.

I think C++ is still the future.


> Languages come and go but C++ remains.

And those other languages have slowly eroded C++'s dominance. You have to remember that in the early '90s, C and C++ had nearly 100% market share for all applications. That dwindled with the rise of Java, then dwindled more with the rise of dynamic languages, and the mobile/modern era made it dwindle even more with Objective-C, Swift, and Go. Now C++ is a popular language for a certain class of applications, but by and large new programmers aren't even learning it anymore. That's a huge shift.

> In the meantime C++ has evolved. I can't remember the last time I had an uninitialized-memory, null-pointer, or buffer-overflow bug.

I do. https://bugzilla.mozilla.org/buglist.cgi?query_format=specif...

> And finally there is the "C++ Core Guidelines" effort, which aims to be checkable by tools. So you get a warning when you are relying on undefined behaviour.

As I've said before, I have questions about this tool: how much like C++ it will really be after ruling out so much code, and how they plan to statically guarantee noalias on function boundaries in the presence of shared_ptr.


True, C++ has slipped since its heyday. It's natural in a way. C++ as a general-purpose language can't compete with domain-specific languages in their own domains. Even if every domain got its own language, there might still be room for a generalist language that works across many domains.

But C++ also has some domains of its own where it is the king. Systems programming and any application where performance is a priority.

If C++ started slipping against new languages in those areas, then it might be a sign of the end for C++. I don't think it's going to happen for a very long time. It's hard to beat C++ when it comes to performance (while still having high-level abstractions). And any general-purpose language needs to know how to do "a bit about everything", so it would probably be just as complex as C++.

(Even though C++ has slipped in percentage terms, I bet there are no fewer C++ programmers today than there ever were; there are just more programmers in general.)

If something is going to kill C++, it's probably a language that scales very well to multicore. Maybe a functional language. I'd be happy if that happened, as that would be a revolutionary breakthrough.

On your final points: I think C++ programmers are slowly but surely moving away from C-style coding towards modern C++ (and we will see fewer of those types of bugs). It takes time though. I'd be happy if C++ at some point was sub-setted and the worst parts of C were deprecated. (That's the other aim of the "C++ Core Guidelines", as I understand it.)

About the aliasing problem: I'm not sure how it could detect that either. We will probably have to live with it. In general, though, smart pointers are still just pointers, and it's better to avoid them when one can (and use value semantics instead).

http://www.tiobe.com/index.php/content/paperinfo/tpci/index....


> It's hard to beat C++ when it comes to performance (while still having high-level abstractions).

Re: performance, Rust's doing pretty well (on a bunch of arbitrary microbenchmarks):

https://benchmarksgame.alioth.debian.org/u64q/compare.php?la...

Re: high level abstractions, Rust doesn't have many of the template shenanigans C++ does (but that's a good thing, IMHO). However Rust does have a phenomenal type system for a language with such predictable and solid performance. And it has many wonderful high-level abstractions (exhaustive pattern matching on algebraic data types, iterator syntax for collections, many functional idioms, an AST-based macro system, etc.).
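To make that concrete, here's a minimal sketch (the types and names are invented for illustration, not taken from the post) of the kind of exhaustive pattern matching on an algebraic data type, plus iterator syntax, being described:

```rust
// Hypothetical example: an ADT with exhaustive matching and iterator adapters.

enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // The compiler rejects this match if any variant is left unhandled.
    match *s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn total_area(shapes: &[Shape]) -> f64 {
    // Iterator adapters compile down to plain loops; no allocation here.
    shapes.iter().map(area).sum()
}

fn main() {
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { w: 2.0, h: 3.0 },
    ];
    println!("total area: {}", total_area(&shapes));
}
```

Adding a third `Shape` variant later turns every non-exhaustive `match` into a compile error, which is exactly the "meaningful compile-time errors" property mentioned above.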

> If something is going to kill C++, its probably a language that scales very well to multicore.

Rust has fantastic concurrency, and the libraries for it are still in relative infancy. The type system keeps un-synchronized shared mutable state from happening (unless you intentionally turn off certain checks using unsafe code, IIRC). There are freely available libraries for lock-free data structures and stack-scoped threads. As an aside, it's also really easy to use those libraries without having to vendor some random header files that may or may not conform to your coding style.
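As a small sketch of that point (function name and numbers are invented): the only way to hand mutable state to several threads in safe Rust is to wrap it in synchronization types like `Arc<Mutex<_>>`; passing a bare mutable reference across threads simply does not compile.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Shared counter incremented from several threads. The Mutex is not
// optional: without it (or another Sync-safe wrapper) this won't compile.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // Always the exact total; a data race here is a type error, not a bug.
    println!("{}", parallel_count(4, 1000));
}
```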

> Maybe a functional language. I'd be happy if that happened, as that would be a revolutionary breakthrough.

Rust also has some pretty cool functional syntax; although I'm not an expert on functional programming, I've been enjoying it quite a bit.

> About the aliasing problem. I'm not sure how it could detect that either.

Not sure how C++ intends to do it, but Rust does it with a strict ownership system that is fully enforced at compile time.
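A tiny sketch of what that rule looks like in practice (hypothetical function, not from the thread): at any point a value has either any number of shared (`&`) borrows or exactly one mutable (`&mut`) borrow, never both, and the compiler enforces this statically. That exclusivity is what justifies noalias-style assumptions.

```rust
// While `v` is mutably borrowed, no other alias can observe it.
fn push_and_sum(v: &mut Vec<i32>, x: i32) -> i32 {
    v.push(x);      // exclusive access: safe to mutate
    v.iter().sum()  // reborrow as shared once mutation is done
}

fn main() {
    let mut v = vec![1, 2, 3];
    let s = push_and_sum(&mut v, 4);
    // Holding a shared borrow of `v` across another call to
    // push_and_sum(&mut v, ...) would be rejected at compile time.
    println!("{}", s);
}
```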


Yeah, let's see how it plays out. I might have a look at Rust at some point.

On the benchmark though, it doesn't seem quite fair. At least the first one is comparing a single-threaded C++ program to a multi-threaded Rust program. And for the regex example the author doesn't use C++ std::regex and std::future (std::async) like he does for Rust. It makes me think a proper C++ implementation would do much better. (Personally I use the Parallel Patterns Library. I like it a lot. We will get something similar in C++ in a few years.)

I don't know much about Rust. It worries me, though, that the examples are still using threads and mutexes. I think we need much higher-level abstractions (like ppl::task, coroutines, etc.) to get better scaling. Also, lock-free data structures don't scale that well either, as they still require synchronization, and that hurts scaling. I think we need some kind of revolution in how we code to scale our programs to hundreds of cores.


  > And for the regex example the author doesn't use C++ 
  > std::regex and std::future (std::async) like he does 
  > for Rust.
Rust's standard library implements neither regexes nor futures, so the code you're seeing must be coming from third-party libraries. If anything, this comparison would favor C++, since its own libraries are likely to be more mature and optimized than Rust's.

  > It worries me though the examples are still using 
  > threads and mutexes.
You shouldn't be worried. :) C++ may require higher-level abstraction to make concurrency tenable, but Rust was designed as a concurrent language from the outset. Rust's type system prevents data races at compile time, so programming with raw threads isn't nearly as fraught as it is in every other language, and refactoring concurrent code can be performed with compiler-assisted confidence. I recommend Aaron Turon's blog post "Fearless Concurrency with Rust": http://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.ht...

  > Also lock free data structures dont scale that well 
  > either. As they still require synchronization and that 
  > hurts scaling
This is another assumption that I suspect that Rust obviates. Let me recommend another of Aaron Turon's blog posts, "Lock-freedom without garbage collection": http://aturon.github.io/blog/2015/08/27/epoch/


> hundreds of cores

That was the dream of the mid-2000s. But now hundreds of cores are never going to happen. The CPU vendors have decided that the industry has run out of time to parallelize their programs and are now refusing to scale up. Our hope for speedups now lies in using SIMD and GPUs effectively ("heterogeneous computing").


For what it's worth I actually think that Rust (and a smattering of other languages and tools) could push us to revisit the "hundreds of cores" thing again if these parallel-first/friendly tools get popular enough that CPU vendors see a market of highly parallel consumer-grade applications.


It is worth noting that the implementations for those benchmarks are contributed by a community, not all written by the same person (IIRC), so if someone is interested in using a language to the best of its hypothetical performance ceiling, they can definitely submit a solution.

I'm not sure about the details for the "fasta" benchmark, but I know it's at least partly I/O bound, and many implementations for the benchmark are single-threaded:

https://benchmarksgame.alioth.debian.org/u64q/performance.ph...

Notably, Rust also edges out a multi-threaded C implementation for that one. I'm sure they've done some very hairy optimizations to get to the top on that board, but it's cool to see that it's possible.

On the subject of regexes, I am not familiar with the C++ std::regex implementation, but it does look like (some rather slow) C++ implementations are using a boost regex library. Those are generally fairly "standard" in C++, right?


> Notably, Rust also edges out a multi-threaded C implementation for that one. I'm sure they've done some very hairy optimizations to get to the top on that board, but it's cool to see that it's possible.

The most important optimization I did was to avoid regex machinery as much as possible. In particular, Rust's regex library has very good support for prefix literal optimizations. For example, regexes like `abc[x-z]foo|bar` will compile down to an Aho-Corasick[1] automaton over the strings `abcxfoo`, `abcyfoo`, `abczfoo` and `bar` with the failure transitions completely evaluated. (The end result is an automaton represented by a matrix. This is memory intensive, so only prefix literals of a certain size can be accommodated. But you don't need a lot to realize huge gains!)

I wrote more about it here: https://www.reddit.com/r/rust/comments/39unje/ahocorasick_fa...

Hint: most of the benchmark game regexes are relatively simple and compile down to either simple `memchr` calls (most regex engines will do that, nothing special) or an Aho-Corasick DFA that completely avoids the traditional regex evaluation machinery.
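To make the prefix-literal expansion concrete, here is a toy sketch, emphatically not the regex crate's real internals, that expands a single simple character class into the literal alternatives an Aho-Corasick automaton would then be built over:

```rust
// Toy illustration only: expand one "lo-hi" character class into the
// literal strings the prefix optimization would feed to Aho-Corasick.
// The real regex crate handles vastly more cases than this.
fn expand_class(pattern: &str) -> Vec<String> {
    let (open, close) = match (pattern.find('['), pattern.find(']')) {
        (Some(o), Some(c)) if o < c => (o, c),
        _ => return vec![pattern.to_string()], // no class: already a literal
    };
    let class = pattern[open + 1..close].as_bytes(); // e.g. b"x-z"
    // This sketch only handles the simple "lo-hi" form.
    assert!(class.len() == 3 && class[1] == b'-');
    (class[0]..=class[2])
        .map(|ch| format!("{}{}{}", &pattern[..open], ch as char, &pattern[close + 1..]))
        .collect()
}

fn main() {
    println!("{:?}", expand_class("abc[x-z]foo"));
    // ["abcxfoo", "abcyfoo", "abczfoo"]
}
```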

In general, regex implementations differ dramatically in the optimizations they perform. It's hard to draw conclusions about them without explicit knowledge of how they are implemented.

[1] - https://github.com/BurntSushi/aho-corasick


I can't be sure about Rust, but C++ should at least be as fast as C for any fair benchmark.

About the regex example.

https://benchmarksgame.alioth.debian.org/u64q/program.php?te...

The Boost regex library is fast and widely used. It doesn't look like the example is using it though, but a library called "re2". Never heard of it before. Googling turned up this: https://github.com/google/re2


There are 4 C++ benchmark submissions for regex-dna. One of them does indeed use Boost, but it is significantly slower than RE2: https://benchmarksgame.alioth.debian.org/u64q/program.php?te... --- I don't know much about Boost's regex support, but after a quick search, it does support backreferences, so it's likely a backtracking implementation. It's no surprise to me that it is beaten handily by an implementation that uses DFAs (RE2, and in this case, Rust's regex library as well).


Re: regex, I was referring to some of the other C++ implementations:

https://benchmarksgame.alioth.debian.org/u64q/program.php?te...

https://benchmarksgame.alioth.debian.org/u64q/program.php?te...

I have no idea whether it's the regex implementations that are causing the comparative slowness, to be honest. Just seemed interesting that some C++ implementations using what I would think are common techniques fall behind many other languages' implementations.

Of course, benchmarks are always arbitrary, and some of them will just disadvantage approaches that would be perfectly realistic ways to solve "real-world" problems, so it's always a grain-of-salt situation. However I'm not sure it's necessarily fair to say that "C++ should be at least as fast as C for any fair benchmark." Doesn't that expose you to the danger of redefining fair benchmarks as "those benchmarks which reinforce my preconceived notions of which tools have what performance characteristics"?


Probably bad wording by me. C++ vs. C has been benchmarked so extensively that it would be big news if C were quicker than C++ at anything. C++ is often faster, though, because of inlining of templates. Finally, C is mostly just a subset of C++. It's difficult to see how compiling the same code would be slower with a C++ compiler.


Rust's ownership system makes shared memory much more safe than you'd expect. And you can also use shared-nothing approaches if you prefer those.


  > any general purpose language needs to know how to do "a 
  > bit about everything", so it would probably be just as 
  > complex as C++.
Unless C++ massively breaks backwards compatibility (which would be a death sentence for the language, IMO), new general-purpose languages will be capable of achieving an order of magnitude less complexity than C++ simply by learning from C++'s history and avoiding all of its unfortunate missteps. Rust is one such language, having studied C++ extensively.

  > If something is going to kill C++, its probably a 
  > language that scales very well to multicore.
Rust was conceived in order to write a browser engine with extreme fine-grained concurrency and parallelism, at every level, and capable of scaling from one core to as many as you can throw at it. See https://github.com/servo/servo/wiki/Design for a basic idea of its structure.

  > I think C++ programmers are slowly but surely moving 
  > away from C-style coding towards modern C++
This is not my experience. As far as I can tell, C++98 remains the most popular dialect of C++ used in a day-to-day capacity in industry. Even among the relatively "hip" Hacker News-ish crowd, I have seen just as many people advocating that C++ usage revert to the "C with classes" approach as I have seen people advocating modern C++.

  > [TIOBE link]
Even if you accept TIOBE as an authority on language popularity, you need only scroll down to the historical graph to see that C++ has eroded heavily over the past 15 years, whereas Java and C have more-or-less held steady. Even among the tier of languages that C++ leads now, PHP, Python, and C# have all held historical market share greater than C++ does today, which to me indicates that we are past the point in time where C++ can be said to hold a commanding position in the market. And even disregarding that it has Java and C to compete with, it would be supremely difficult for C++ to regain a commanding market position now that it has to compete with new languages on multiple fronts where it formerly dominated: systems programming (Rust), server programming (Go), application programming (Swift).

So no, C++ isn't going to "die" (what language ever truly dies? COBOL still powers all the world's banks, and I've personally been paid to write RPG-LE), but I wouldn't bet on its ascendancy.


>In the meantime C++ has evolved. I can't remember the last time I had an uninitialized-memory, null-pointer, or buffer-overflow bug. Those are low-level C problems. If you avoid C-style programming and stick with modern C++ (RAII, value semantics, STL, Boost, etc.) you won't get them.

While your small walled garden within C++14 is nice, clean, safe, and well maintained, massive parts of it are built ON THE VERY FOUNDATIONS YOU ARE ATTEMPTING TO AVOID.

This is the irony of C++. While there is a very nice subset of the language that does nearly all the same things Rust does, you still have 30 years of legacy lying around. You likely won't find a job that EXCLUSIVELY follows the C++14 guidelines. You WILL have to learn the legacy patterns, and work with legacy code, and make legacy mistakes.

>C++ is 35 years old. It has stood the test of time. I would not be surprised if it was still popular in 35 years' time, when I retire. There is so much C++ infrastructure code in this world that it's not going anywhere.

I don't debate this fact. But legacy C code never stood in the way of C++'s adoption. There are always gonna be MORE programmers tomorrow than today. People adopting a language is hardly a zero-sum game.


> [...] This is the irony of C++. While there is a very nice subset of the language that does nearly all the same things Rust does, you still have 30 years of legacy lying around. You likely won't find a job that EXCLUSIVELY follows the C++14 guidelines. You WILL have to learn the legacy patterns, and work with legacy code, and make legacy mistakes.

Large existing codebases are a strength of C++ though. Any new replacement language won't magically make those codebases go away.

> [...] But legacy C code never stood in the way of C++'s adoption. There are always gonna be MORE programmers tomorrow than today. People adopting a language is hardly a zero-sum game.

The killer feature of C++ was the seamless interoperability with C which allowed trivial use of C libraries, and, most importantly, easy piecemeal evolution of large C codebases.

For example, GCC is being ported to C++ with relative ease, while a rewrite in Rust would be a significantly more complex endeavor.

Then again, beyond lifetime checking, Rust and C++ are very similar languages, and seamless interoperability is not inconceivable. I believe that's the missing feature for Rust to achieve world domination.


>Large existing codebases are a strength of C++ though. Any new replacement language won't magically make those codebases go away.

Which I already said...

>>I don't debate this fact. But legacy C code never stood in the way of C++'s adoption. There are always gonna be MORE programmers tomorrow than today. People adopting a language is hardly a zero-sum game.

>For example, GCC is being ported to C++ with relative ease, while a rewrite in Rust would be a significantly more complex endeavor.

Source or opinion?

>Then again, beyond lifetime checking, Rust and C++ are very similar languages, and seamless interoperability is not inconceivable. I believe that's the missing feature for Rust to achieve world domination.

No, I agree. Seamless importing of C++ would be amazing. Name mangling in C++ is just a bit of an issue. Rust works great with C. The FFI is very stable. I use Rust libraries at work, as generating 32-bit DLLs is possible with nightly.


I haven't used it yet, but Rust supposedly works very well with C libraries, both calling into them and being called from them.


The hardest part is creating the comfortable, Rust-ish interface. Actually interacting with C code is trivial.
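A minimal sketch of what "trivial" means here (using the C standard library's `abs` purely as an example): declare the C symbol, then wrap the unsafe call in a safe, Rust-ish function.

```rust
// Declare a function from the C standard library. The declaration is
// trusted by the compiler, hence the call must be wrapped in `unsafe`.
extern "C" {
    fn abs(input: i32) -> i32;
}

// The comfortable, Rust-ish interface: a safe wrapper around the raw call.
fn c_abs(x: i32) -> i32 {
    unsafe { abs(x) }
}

fn main() {
    println!("{}", c_abs(-42)); // 42
}
```

Going the other direction (exposing Rust to C) is similarly mechanical: mark a function `#[no_mangle] pub extern "C"` and C code can call it.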


I've read your replies and I think you're arguing too passionately and missing facts.

It doesn't matter that there's much legacy C++ code. Two things will happen:

a) it will stay C++03, regardless of what changes in C++ or what other languages pop up. This kind of code is probably part of a working product which fulfills its mission and maybe gets small updates or fixes.

b) it will continue to be maintained, and it will be able to benefit from some or most of the changes coming in new C++ versions. It doesn't mean that it will be converted to super duper C++14, but that's not how software maintenance works anyway! And guess what, they probably won't be converted to Rust either, but they will be better due to the new standards. You are building a straw man when claiming that it's either Rust or failing to use pure C++14. Successful software tools offer backwards compatibility when making improvements and iterate instead of replacing everything. Just ask the Python and Perl teams.

And this is how C++ was able to build marketshare, by offering a transition path from C. What exactly is that path for C++ to Rust, rewriting everything?

Finally, new projects will use the new standards. My current project is C++11. The next one will be C++14 and we will absolutely make use of the new features where it makes sense.


> And guess what, they probably won't be converted to Rust either, but they will be better due to the new standards.

What we need is memory safety, and new standards don't provide that. Maybe in the future they will, but I have significant questions about how suitable the ISO Core C++ lifetime checker will be.

> Just ask the Python and Perl teams.

Python and Perl were not backwards compatible with anything when they were released.

Rust is not a new version of C++. It is a new language.

> And this is how C++ was able to build marketshare, by offering a transition path from C. What exactly is that path for C++ to Rust, rewriting everything?

No. It's a very comprehensive FFI, IMO among the best in any language. We even have (preliminary, quite hacky) C++ support.

We had to spend time developing this, because we actually use Rust for Servo and we had to develop incrementally, retaining large C++ components. It works great.


>> Just ask the Python and Perl teams.

> Python and Perl were not backwards compatible with anything when they were released.

I think the implication is that the teams will tell you that the upgrades to Python 3 and Perl 6 were problematic because they didn't offer backwards compatibility. If so, it's a bit silly, because the point of those upgrades was to advance the languages beyond what could be done with backwards-compatible iteration. Asking the teams would probably yield a bunch of people who were adamant that the decision was sound and it needed to be done, but there were missteps in execution. Asking the communities would likely yield a more polarized set of opinions.

That said, IMO using either of these items as examples to bolster an argument is almost always a mistake, since their own discussions are so large and polarizing that they rarely make a point more clear.


Python and Perl were not backwards compatible with anything when they were released.

Perl 5 (and older) was highly backwards compatible from a programmer standpoint. Much of Perl is an agglomeration of preexisting Unix command line tools and little languages into one language.


Well, it's closest semantically to awk, but awk is not syntactically compatible with Perl [1]. That's pretty similar to the situation with Rust and C++: semantically compatible but not syntactically compatible.

[1]: http://www.arl.wustl.edu/projects/fpx/references/perl/learn/...


"semantically compatible but not syntactically compatible" would have been a more impressive way of expressing it.

If Rust is highly semantically compatible with C++, then automated porting of C++ to Rust isn't such a far-out idea.


Definitely it'd be interesting. But I feel like such a thing is going to have the same issues I worry about with the ISO Core C++ lifetime checker: existing C++ code just isn't designed for lifetimes, so you will have to rework the logic a lot to get the static analysis to pass. When you have that much manual intervention, I wonder whether you're porting or really just rewriting—and if you're rewriting, syntactic compatibility doesn't matter so much.


On the plus side, 30 years of legacy lying around also includes a lot of mature and well-maintained libraries.


Which don't follow the much professed C++14 guidelines and can introduce the errors that Rust is designed to prevent.


I don't have an issue with C++ itself; it's just the meme that "C++14 will solve all of C++'s problems". It won't. C++'s problems are now locked in and fixed in place. They've been known for 25-30 years, and we'll deal with them for another 50+.

Yes, the tooling is great and there are amazing libraries, but there is still virtual templated necromancy, and there are still null pointers and use-after-frees. There still will be tomorrow, and there still will be in 50 years.


It's funny to see how these languages topics always explode. I read it when it had about 20 comments and then it predictably devolved into a mess of hype and comparisons.

Anyway, I think your strategy is sound. Rust is a young language and I would be pleasantly surprised to see code written today still compile in five years. I got bit by this with Swift v1 in a little app. When 1.x came, it failed to correctly transform the source and the whole thing was a major PITA to update. Completely soured me from using Swift.

I can understand why Mozilla wants to use Rust. It solves a problem for them, they largely control how the project evolves and can plan for it. To me, the big questions are what will happen to Rust if Mozilla changes strategies (Persona, Thunderbird, FirefoxOS) and whether Mozilla will continue to be a successful organisation. Personally, I root for them, in spite of their recent missteps.

I think Rust might be a good choice now for language aficionados and early adopters which can afford to waste time in exchange for other potential benefits. I would absolutely not use it to bring a project to market or for an OSS project which has a long term vision. In 5+ years I expect such a decision could be worth revisiting.


> Rust is a young language and I would be pleasantly surprised to see code written today still compile in five years.

We have had a code stability promise for months now in 1.0. So what you are saying here is that you do not believe that Rust will adhere to what it very publicly planned to do. Do you have specific reasons for this?

> To me, the big questions are what will happen to Rust if Mozilla changes strategies (Persona, Thunderbird, FirefoxOS) and whether Mozilla will continue to be a successful organisation.

Rust has a core team and community that overlaps a lot with, but is very much not identical to, Mozilla. Probably most Rust users aren't even Firefox users.

Rust is not "Mozilla's language". It's the Rust community's language.


> We have had a code stability promise for months now in 1.0. So what you are saying here is that you do not believe that Rust will adhere to what it very publicly planned to do. Do you have specific reasons for this?

Since 1.0, Rust has in fact had multiple significant stable-breaking compiler changes, not to mention that, among other things, adding any new method to a trait or type in the standard library has the potential to cause breakage. The official code stability promise has not been violated by these changes only because it's riddled with exceptions.


> Since 1.0, Rust has in fact had multiple significant stable-breaking compiler changes, not to mention that, among other things, adding any new method to a trait or type in the standard library has the potential to cause breakage. The official code stability promise has not been violated by these changes only because it's riddled with exceptions.

This is essentially the same as any other language. Adding a new method to traits or types can also break JavaScript, the other example you gave.

In fact, I know of one such example where code was broken in the wild because of new methods being added, and we did not revert the change. Marijn Haverbeke's winning js1k entry relied on the 4th and 7th letters (or something along those lines) of every CanvasRenderingContext2D method being unique, because it compressed code by renaming the canvas methods with what was then a perfect hash function. A new Firefox version was released months later that added new methods and broke the code. That was deemed an acceptable piece of breakage, and Rust's policies are very similar.

Anyway, the vast majority of breakage that we've seen so far was due to the libcpocalypse. This was an issue with people not specifying specific versions of libraries they depended on, not the language.


Here's another example with less trivial breakage, which was reverted:

https://www.fxsitecompat.com/en-US/docs/2015/string-prototyp...

Unless you mean to refute the idea that JS doesn't break anything ever - which it clearly does, occasionally, but proof-of-concept golf code that intentionally takes fragile shortcuts isn't the best example.

...But JS has moved away from monkey patching builtin objects for good reason, whereas Rust tends to be relatively promiscuous in adding methods to other people's types via trait impls. This isn't bad per se, but I do think it creates bigger compatibility risks.


I don't think this phrasing is particularly charitable, yet we don't need to argue about semantics, because we have numbers.

  > Approximately 96% of published crate revisions that build with the 1.0 compiler 
  > build with the 1.5 compiler.
https://internals.rust-lang.org/t/rust-regressions-2015-year...


I've seen it, and if that rate of 4% of code breaking in half a year were to continue, in the above-stipulated 5 years it would look pretty bad. Hopefully it will not, given that some of those regressions involved soundness issues and other things (integer fallback) that were "clearly wrong", which ought to be mostly weeded out before long. That said, the rate of new features in the language and especially libraries seems likely to increase a bit in the nearish future, given the size of the feature request backlog (or not - you'd know better than I).

But even at a rate of significantly less than 4%, there's still trouble. That figure is based on the number of root regressions, right? (If I'm wrong, disregard the rest of this paragraph, but I don't think so...) That is, the number of crates that break on their own accord, rather than those that won't compile because one of their dependencies is broken. If one application depends on dozens of crates (or is just really large), the chances of it breaking somewhere are much higher.

This can be partially mitigated by the crates themselves releasing upgrades, but look at Python 3 - there will eventually be a lot of code out there that's completely unmaintained, especially within the "dark matter" of internal or closed source applications. Also, a fix in the latest version of a crate will probably not be backported to long deprecated major versions of it - which means that if I dust off a crate that hasn't been touched in years and uses such a version, I'd have to either upgrade the dependency or do the backport manually.

And the ancestor post did say five years: last I checked on the lists, there was no real consensus as to when Rust 2.0 should be released, on a scale from "in a year" to "never". Maybe there is more consensus internally, but I'd be surprised if it didn't happen in that long...

edit: It's not an entirely fair comparison, but for what my ideal stability promise for a programming language would look like, one need not look very far - only as far as Mozilla's other programming language.


I'm convinced that the breakage in new C++ compilers is worse than Rust's breakage. New C++ compilers and language revisions break more than 4% of code. Likewise with many other languages, such as Ruby. We're just very up front about regressions when they do occur.

> edit: It's not an entirely fair comparison, but for what my ideal stability promise for a programming language would look like, one need not look very far - only as far as Mozilla's other programming language.

As a matter of fact, JS's stability promise is very similar. See my other post for an example in which JS was broken due to, essentially, one of the reasons you cited, and we lived with it. The JS stability promise is not "we will never break any JavaScript in theory with new browser versions", because that would prevent anyone from, as an extreme example, changing the sorting algorithm for Array.sort. It's "we will not break you in practice, except in a carefully delineated set of circumstances." (Actually, JS's stability promise is much weaker than that: its stability in practice is decided by browser consensus and market forces rather than a clear set of rules.)


You're correct about the impact of popular crates; this is why crater is so important. A nice benefit is that popular crates should also have more attention on them, since they're popular, so if they are hit by a soundness issue, it should be addressed quickly.

The core team's feelings on a 2.0 are much closer to the "never" end of the spectrum than the "in a year" side.


My only reason is that experience and surprising developments have made me into a skeptic when discussing technology. I was not aware of any promises.


Are there any large deployments of Rust outside Mozilla? (I'm not being sarcastic, just wondering who with resources besides Mozilla is incentivized to put money into Rust development.) Thanks!


The largest is currently Dropbox, who rewrote the way they store bytes on disk in Rust. It was deployed in late December; they have promised a report after it's been in production for longer than a few weeks.


thank you; I'll put a note on my calendar to check in a couple weeks

edit: Once you google for dropbox rust, you find some really interesting stuff; here's an overview

https://news.ycombinator.com/item?id=9724849


Author of that comment here, let me update it with some new links for you. :)

Putting Rust into production at Dropbox: https://www.reddit.com/r/rust/comments/3wrgl0/what_are_you_f...

Some preliminary feedback on using Rust at Dropbox: https://www.reddit.com/r/programming/comments/3w8dgn/announc...


"Rust is a young language and I would be pleasantly surprised to see code written today still compile in five years."

I'm myself willing to accept the promise of code stability, so I think you're looking at slightly the wrong problem. While I'd expect current code to continue to compile, I also expect what is regarded as good Rust now won't be in five years. That's not a bad thing, but it also means that hopping on the Rust bandwagon might be premature.

The worst case? How about Haskell, where new language features make old code look worse even as it still works? Or C++, with its accelerating rate of style changes?


That's an interesting point regarding changing idioms/culture.

I can't speak to Haskell, but I believe C++ changes can be adopted incrementally: we had C++98, then Boost established itself, some parts of Boost were included in TR1 and then became part of C++11. A similar process is taking place with other libraries such as Asio and Filesystem. I don't have the feeling that modern C++ style is in flux; its pillars continue to be RAII, generic programming and the STL.

I think Bjarne & co have explicitly stated that they want to grow the language while maintaining backwards compatibility as much as possible, and from my experience, the code I write today builds on patterns and idioms that started to crystallize a long time ago.


Yep, RAII was already a thing in the late '90s, for example.


That is an interesting point. I'd like to see pcwalton respond to it. That people are having problems with current Rust idioms means they might [rightly] change them. So, starting with Rust now could reinforce bad practices that come with a retraining or software re-engineering cost later. So, this is a real risk that I overlooked in prior discussions.

Good catch.


According to TIOBE, Rust is now more popular than Go. Both have corporate backing. Go is definitely used in production, and I wouldn't be surprised if people are dipping their toes in the water with Rust in production as well.

Even if it remains a small niche, I think we've seen that Rust is popular enough that using it long-term is fairly safe. Maybe you'll be one of only a few thousand shops who use it -- that's enough. The benefits of writing secure code vastly outweigh the drawbacks of using something unpopular.


TIOBE's good for trivia, but probably not for picking a specialisation path or choosing which tech to build a project on.

When choosing a language to learn, I consider:

* what a particular industry or project of significance is using

* job availability

* compatibility with my existing toolbox

* the power/fun factor

But that's far from the whole story, because programming languages are relatively fungible and investing in domain knowledge, people skills, architecture and many other things will bring bigger benefits than learning that brand new programming language!

And yet here we are, discussing how Rust will deliver us.


> * what a particular industry or project of significance is using

> * job availability

That just boils down to popularity once again. I prefer to pick languages based on suitability for the problem domain. For me, Rust is suitable in a way that C++ is not, due to memory safety among many other reasons.


I think we are mostly in agreement. If an industry is standardizing on a language, it's probably a good match for the kind of problems that they solve, i.e. for their problem domain.

Job availability is not just about the number of jobs, but also about project quality, companies and personal preference. E.g.: I don't have an interest in containers, and most Go jobs involve containers, so this is a minus for Go.

This works in the other direction too. Let's say I want to work for Mozilla; I notice they use mostly C++ & JS, so I decide to invest time brushing up my skills in those areas.


> programming languages are relatively fungible

I agree with that regarding learning them, but I disagree strongly when it comes to the end result.

I make a lot of my money fixing/replacing PHP projects. As has been pointed out many times before, there are issues with PHP that are guaranteed to cause bugs. The same is true of many languages.

But there is a spectrum! Some compilers/interpreters scream at you and terminate if you contradict yourself. Some of them even recognize security vulnerabilities.

I agree that people skills, domain knowledge, and architecture are hugely important, but great teams are still less effective when they use slow or confusing tools.

You might be the greatest contractor in the world, but if someone asks you to build a skyscraper with plywood and super glue, you're going to end up with something dangerous, fragile, and guaranteed to fall apart as soon as someone tries to use it.


>>many other things will bring bigger benefits than learning that brand new programming language!

The other narrative is also true. Using newer technologies is the best way to be a part of new projects and avoid working on legacy-maintenance kinds of projects.


It's interesting that a synonym for "legacy maintenance kind of projects" is "successful".


"And yet here we are, discussing how Rust will deliver us."

So why are you here, debating about Rust, if you already know your time would be better spent investing in domain knowledge, people skills, or architecture?


Spending, let's say, an hour on HN is not the same level of time investment as learning a programming language. ;)

I am not really discussing Rust per se, as I am not familiar enough with it, however, when I see biased or misleading statements I do sometimes feel obliged to offer a counterpoint.


IIRC, doesn't TIOBE have a problem with indexing Go because they only search for "Google Go" (or something restrictive like that)?


According to a post from a few years ago, they were using "Go programming", "Google Go", and "golang"[1].

You'd have a similar problem with Rust, Ruby, Python, Java, Ada, Julia, and probably tons of other languages. It must also be really difficult to differentiate between C, C#, and C++, considering most search algorithms will ignore the # and + characters.

1. https://groups.google.com/d/msg/golang-nuts/TtPzOvhG6bM/-XCr...


  >  I would be pleasantly surprised to see code written today still compile
  > in five years.
That's the goal, modulo soundness fixes. We don't currently have any plans for a 2.0.

  > what will happen to Rust if Mozilla changes strategies
Well, we're just starting the process of integrating Rust code into Firefox. Yes, Mozilla has killed off a lot of things, but I don't think Firefox is going away any time soon.


>Rust is a young language and I would be pleasantly surprised to see code written today still compile in five years.

Without changes to the code? Has there ever been a language where that has been true?


Many, I'm sure. Most Java from 5 years ago will still compile on newer javac versions, for example.


Not just most, I'm betting the overwhelming majority of Java code will compile. But all popular languages break small things with every release (and this is probably a good thing in the long run), so look hard enough and you're guaranteed to find code that won't work.

For instance, here's the compatibility guide to Java 8: http://www.oracle.com/technetwork/java/javase/8-compatibilit...


C#. Yesterday I rebuilt a project (with the latest compiler version) that I created 12 years ago.


Note that C# famously broke backwards compatibility when it added generics. I say "famously" because it opted not to follow Java's type-erasure approach (which was chosen so as to explicitly maintain backwards compatibility), and history seems to agree that the minor amount of pain back then was well worth the improvements that it brought to the language (especially relative to Java).


They also changed the foreach loop variable capture semantics in C# 5.

There are a few other little breaking changes as well.


BASIC. Fortran. Pascal. Oberon. Any language designed to be simple and map well to pseudo-code. Even the source-to-source converters tend to have few problems with those. One of reasons I recommended them for certain projects in the past where longevity and talent were concerns.


There was a time not too long ago when nobody ever said "C++ remains"; what they said was "Nothing can challenge C++". It was the "business programming" language. The fact that it's now "surviving" and "remaining" is a testament to how far it's fallen, not how successful it's become. It didn't rise from 0 to "used" over the last 5 years - that's something a new language like Go can claim with pride; no, it went from "used everywhere by everyone" to "still used, somewhat". C is still more popular, at least according to TIOBE.

Java stole its crown for "enterprise" programming, and Microsoft even pretty much abandoned it in favor of .NET. I'm not sure their massive push for "C++ is back!" that happened 'round C++11 has really met with that much success - the world moved on, honestly.

There are fewer and fewer niches for it - the web couldn't care less about it, heavy-duty "enterprise" programmers aren't giving up their safe, GC-based Java/.NET languages and giant third-party ecosystems, and even video games can now be written in a higher-level language where only a small core, which 99% of people never touch, is in C/C++.

The problem is, IMO, that it's too late for C++ to be a decent language. If the first iteration of it was C++11, then sure. But there are vast code bases out there written in the horrible mid-90's or late 80's dialect of C++ that aren't going anywhere. Even something like Qt isn't true "modern" C++, and don't get me started on MFC. Look at Google's official C++ guidelines - they barely allow any modern C++ into their code-base.


  > Languages come and go but C++ remains. I wonder if this one will be different.
As I said the other day, unfortunately, the only way to get an old language is to start with a new one, and then let time pass.

You're not _wrong_, exactly, but with this logic, no new programming languages should ever be made. There are certainly valuable things about using a truly mature ecosystem, but we also need to build better mature ecosystems.


> but with this logic, no new programming languages should ever be made

Well I mean after LISP it was all down hill. /s


Languages come and go but COBOL remains. I wonder if C++ will be different.

I'm not trying to be a jerk. Just voicing my concerns. Learning a new language and producing code in it is a huge commitment. It better be around in 50 years. Will it still be as clean and simple, or will it have grown complex.

Note: That the C++ and COBOL counterpoints look identical and laughable never gets old for me.


I actually laughed out loud. Well played.

Taking the comparison seriously though for a moment. In my opinion the difference is that C++ isn't just a better Cobol. It's not just the same thing re-imagined or cleaned up. There is a revolution between them. You can do things in C++ that you can't in Cobol.

I'm waiting for that revolutionary new general-purpose language that goes mainstream. I think it will be a language that scales much more easily to hundreds of cores. Maybe it will be a functional language.


"I actually laughed out loud. Well played."

It's a new meme of mine. Glad you had fun with it. :)

"I'm waiting for that revolutionary new general-purpose language that goes mainstream. I think it will be a language that scales much more easily to hundreds of cores. Maybe it will be a functional language."

I am too. I think Rust is a nice alternative to C++ for doing that sort of thing. Far as next step, I'm waiting too. I encourage you to check out Julia because it seems to have some of those traits. It's a LISP on the inside with a productive, imperative skin on the outside. Lots of potential for such a hybrid.

Also, check out ParaSail if you're interested in languages designed for easy parallelism. It's an Ada-inspired language with some interesting design decisions. Might be worth factoring into the next, ideal language. ;)


They do come and go. 90% of a language's success is legacy apps that use it and can't switch off. The other 10% is how powerful the hype train is for getting new apps written in it, which then form that language's own 90%. Rust can actually get that done. The community has the power, and they have the hype. It's happening.


The thing is, it's quite hard to ban unsafe practices from C++, so long as people (rightly) insist on backwards compatibility. The checkable guidelines sound great if people can be persuaded to use them and make them stick - but people aren't necessarily using even the safe features that already exist.

C++ isn't going to go away any time soon, but it might gradually fade into the background.


> compile time reflection (no more manual serialisation)

Are we? I've been hoping, but this wasn't on the agenda at the last C++17 meeting in Kona, Hawaii:

https://www.reddit.com/r/cpp/comments/3q4agc/c17_progress_up...


Which is quite interesting for someone like me.

Around 1994, C++ seemed like the path forward from Turbo Pascal: I favoured type safety, C was meh compared with Turbo Pascal, but Turbo Pascal was a PC-only thing.

Back in those days C++ was regarded the way Rust and other would-be C++ replacements are nowadays.

We were the hipsters of the '90s, with C devs targeting home systems slowly accepting that not every function needed to be an inline-assembly wrapper.

So it is interesting for a greybeard like me to see C++ being described today the way C and Pascal were, versus C++ compilers, back in the mid-90's.

Nowadays my area of work is dominated by JVM, .NET and the native languages of mobile OSes.

I wish Bjarne et al. success in pushing the "C++ Core Guidelines" forward, but they will not change the mentality of those who program C with a C++ compiler, which is what I usually see at typical corporations.


Learning a new language and producing code in it is a huge commitment. It better be around in 10 years.

10 years is a long time; a few weeks to learn a language is not.


    > no autoconf
Mentioning this in the same breath as "no segfaults" seems odd. Does Rust use some other build configuration tool, or does it just not need compile-time configuration detection? If it's the latter, how does that even work?

Are there no optional libraries? No outdated libraries whose version you have to detect? No obscure platforms whose architecture you have to detect for unsafe inline code etc.?

Because if you can have any of those or any number of other cases where you need compile-time detection of configuration options you're going to need something like autoconf.


> does Rust use some other build configuration tool

Yes. Rust ships with cargo, which is probably the best build configuration tool ever written, and I've tried a lot of them, for a lot of different languages.

It has clean, non-hacky solutions to all of the cases you listed, and a lot more besides those.


Oh, far from it. Last time I checked, Cargo didn't understand the difference between downloading things (dependencies) and compilation. Even its own build process didn't separate the two, and happily downloaded a random pre-built ELF binary to build itself.


I'm not sure what you mean by not understanding it. It is true that if you don't have the dependencies, "cargo build" will go get them, but if you do, then it shouldn't. There have been some bugs that caused it to over-download, but those were bugs.

  > happily downloaded a random pre-built ELF binary to build itself.
Yes, Cargo builds itself with Cargo, so you need to get a previous Cargo. The binary is very much not 'random'.


> It is true that if you don't have the dependencies, "cargo build" will go get them, but if you do, then it shouldn't.

The main problem is you can't download dependencies separately from compiling your code. It should never be the same step. Even pip had it mostly correct, by providing an option to disable network communication.

It is somewhat mitigated by documentation mentioning where the downloaded dependencies are put, so you can do all that manually. This sucks balls heavily.

> Yes, Cargo builds itself with Cargo, so you need to get a previous Cargo. The binary is very much not 'random'.

It's fetched outside of the regular download you do for the tarball. You have no control over where it is downloaded from. It means it's pretty much random for your purposes. Heck, it would even be unexpected, if I weren't used to developers not understanding the difference between building and downloading software.

But this is not the main point. Cargo should not need an already-compiled Cargo to build itself. This is ridiculous. I understand that every good compiler is built with the very same compiler you're building, but look at how they do that: by bootstrapping. Not so much with Cargo. Whoever thought this was a valid idea?


  > It should never be the same step.
This is just going to be a preference then. I much prefer not having to type two things to do what's conceptually one: please build my code.

"cargo fetch" gives you the download step as a separate command. I.e., you _can_ do it in two commands if you'd like.

  > It means it's pretty much random for your purposes.
We are just going to have to agree to disagree on this one. First of all, you _already_ need an equally 'random' binary: rustc itself. Which is also bootstrapped. Second, it's not clear to me why bootstrapping is valid for a compiler, but not for any other kind of project. Cargo is a build system, written in Rust, so it uses Rust's build system.


Related: we're wondering how to use Rust because currently all non-Rust dependencies in our org are pulled from:

- corporate source code control systems

- corporate central repositories

- caching/proxying immutable repositories

These ensure all projects are built from known sources. We _know_ we can get consistent builds.

When using Cargo:

- Project owners update projects and don't bump the version. New bugs / security problems could be injected even though we haven't changed a thing internally.

- crates.io isn't always up.

- Trust: we legally cannot trust the public crates.io repository (PCI compliance violations - 2015 rules (viral)). Besides PCI compliance, it's not possible for crates.io to guarantee perfect security (so many reasons, obviously).

* I'm hoping folks who have addressed this issue (or are addressing it, or are planning on addressing it) would comment.


> Project owners update projects and don't bump the version. New bugs / security problems could be injected even though we haven't changed a thing internally.

You can't update a crate on crates.io without bumping the version. Once a version is published, it cannot be removed. (It can be "yanked," but even yanking it does not make it completely inaccessible.)

> These ensure all projects are built from known sources. We _know_ we can get consistent builds.

Cargo isn't coupled to crates.io. You can run your own registry index. (Note the `[registry]` config section: http://doc.crates.io/config.html) All of the code that powers crates.io is open source. On top of that, crate dependencies can be specified via git URLs or locations on disk. Repeatable builds are well supported IMO.
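For concreteness, the `[registry]` override mentioned above looks roughly like this; this is only a sketch based on the linked config docs, and the index URL is a placeholder for wherever an internal mirror would live:

```toml
# .cargo/config -- point Cargo at an internal registry index
# instead of the default crates.io index.
[registry]
index = "https://git.example.internal/crates-index.git"
```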


Ahh PCI compliance :)

I think Burntsushi gave a good answer here. We want these features! It's a matter of getting the requirements correct, and then helping build them. We have some of this stuff already, and are working on what Firefox needs, which is very similar, but we would love for anyone who has a stake in this to tell us what they need, specifically. If that's you and/or your org, starting a thread on http://internals.rust-lang.org/ would be quite helpful.


You can use git or path deps if you want, or set up your own registry (there is code for it, but I don't think it's easy to set up yet; IIRC making it easier is planned - thoughts welcome!).

> Project owners update projects and don't bump the version.

You cannot update code in a crates.io dep without bumping the version. And new versions only get pulled in when you do a `cargo update` or you update a package which bumps the version number of its dependency.
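To make the alternatives concrete, here is a sketch of a `Cargo.toml` mixing the three dependency sources (the crate names, URL, and revision are invented for illustration):

```toml
[dependencies]
# Exact pin from the registry; only changes on an explicit `cargo update`.
foo = "=0.3.1"
# Fetched from a specific git revision, bypassing the registry entirely.
bar = { git = "https://git.example.internal/bar.git", rev = "4c59b70" }
# Vendored locally on disk, fully under your own source control.
baz = { path = "vendor/baz" }
```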


> This is just going to be a preference then.

No, it is not.

* You don't always have an internet connection when building your code.

* You don't always have access to the website the code is downloaded from (websites break, and companies have different policies about internet access), and even if you do, you don't always have direct access (e.g. you're behind a proxy).

* And lastly, people build code for various purposes, including building packages. Building a package should be a repeatable operation and should always use only clean source (no artifacts from previous builds), and downloading random things on every build from a clean source is a very easy way to break the process.

>> It means it's pretty much random for your purposes.

> We are just going to have to agree to disagree on this one. First of all, you _already_ need an equally 'random' binary: rustc itself.

No. I downloaded the Rust compiler myself (or had it installed from a package). I controlled that; it didn't hit me in the face unexpectedly.

> Which is also bootstrapped.

s/also//. Cargo is not bootstrapped, because it needs a pre-built Cargo to build itself. Compare this to the Rust compiler: you start with nothing but a C (++? I don't remember) compiler and end with a Rust compiler. No intermediate download involved.

> Second, it's not clear to me why bootstrapping is valid for a compiler, but not for any other kind of project. Cargo is a build system, written in Rust, so it uses Rust's build system.

It's not invalid, quite the contrary. It's just that Cargo doesn't bootstrap itself out of clean code. It's that simple. I wouldn't have as big a problem with it if its build process produced an intermediate, crippled Cargo binary.

I would then complain about requiring external dependencies to be downloaded (in contrast to being included), but that would be a difference in strategy preferences.


> * It's not always that you have internet connection when building your code.

Cargo does not require this.

> * It's not always that you have access to the website code is downloaded from (websites break, and companies have different policies about internet), and even if you do, you don't always have it direct (e.g. proxy).

Supporting local proxies is on the Cargo roadmap.

>> Compare this to the Rust compiler: you start with nothing but a C (++? I don't remember) compiler and end with a Rust compiler. No intermediate download involved.

This is incorrect. Rust has been self-hosting for years. Before it was bootstrapped, it was written in OCaml.

> It's not invalid, quite the contrary. It's just Cargo doesn't boostrap itself out of clean code. It's that simple. I wouldn't have as big problem with it if its build process produced an intermediate, crippled Cargo binary.

Cargo is basically just part of the Rust compiler. Rust needs Rust to bootstrap itself, like tons of other languages. So Cargo needs Cargo to bootstrap itself. This really isn't a problem.

The fact that you didn't even know that Rust is self-hosted is proof that it really doesn't matter to the user—it was so invisible you didn't notice it!


"Before it was bootstrapped, it was written in OCaml."

That's been my exact recommendation for getting compilers started in a robust way. You people keep surprising me in pleasant ways. :)

Curious: did OCaml's clean syntax and ease of decomposing functions make the transition to Rust easier vs a language like C++ or Java? I predicted ML languages should have that benefit, but I couldn't test it on small projects.


Back when I was in the university (90's), Lisp, Prolog and Caml Light (OCaml's precursor) were forbidden to be chosen in compiler design classes, although we used them a lot in other classes.

The reasoning for the rule was that they made our compiler assignments too easy.

In my case, we ended up using the recently introduced Java with JavaCC and NASM macros for the generated bytecode.

Nowadays I would only use something like C++ for the runtime library, or for reusing existing tools, e.g. LLVM.


That's so funny. More support for my recommendation. Since you mentioned it, my main recommendation today if someone wants to get somewhere is to use OCaml but target LLVM. I'd like to see LLVM re-implemented in OCaml and developed in parallel. Doubt that will happen, but using OCaml to generate LLVM code seems quite doable.


>> Compare this to the Rust compiler: you start with nothing but a C (++? I don't remember) compiler and end with a Rust compiler. No intermediate download involved.

> This is incorrect. Rust has been self-hosting for years.

Incorrect in that Rust actually downloads something, as steveklabnik pointed out. I admit that I expected more from Rust, and I'm as disappointed with the compiler as with Cargo.

But your premise that a self-hosted language automatically means bootstrapping can't be done using C or C++ is false. Compare that with OCaml and GHC.

> The fact that you didn't even know that Rust is self-hosted is proof that it really doesn't matter to the user—it was so invisible you didn't notice it!

It could be read that way, if it were a fact. I did expect Rust to be self-hosted; it's just that, customarily, the bootstrapping compiler is written in something else.

Look at how OCaml or GHC (Haskell) are built. Both allow using a C compiler to compile them (even though they generally advise using pre-compiled compilers).


> Both allow using a C compiler to compile them (even though they generally advise using pre-compiled compilers).

How many people actually do this, though?

We had this conversation early on in Rust's life and the consensus was that we could make a non-bootstrapping compiler in theory, but that'd be asking us to do a huge amount of work (writing a separate compiler!) for something that very few people are going to use in practice. There are so many more important things that we could be (and are) working on than something that's basically just for purism, because most people just "apt-get install rust" or "brew install rust" and don't care how it's built.


> How many people actually do this, though?

Enough to warrant this being kept for the eight years since I first saw it.

> We had this conversation early on in Rust's life and the consensus was that we could make a non-bootstrapping compiler in theory, but that'd be asking us to do a huge amount of work (writing a separate compiler!) for something that very few people are going to use in practice.

Note that it's not necessary to have a full-blown compiler. A subset of the language would be enough, if it allowed compiling the compiler. And how many of Rust's features are embedded in rustc, anyway?

> There are so many more important things that we could be (and are) working on than something that's basically just for purism, because most people just "apt-get install rust" or "brew install rust" and don't care how it's built.

So you basically say "fuck you" to distribution developers and all the sysadmins that care about their systems, am I right?


> So you basicaly say "fuck you" to distribution developers and all the sysadmins that care about their systems, am I right?

Your aggression, insulting tone and condescension (throughout your comments in this thread) is not appreciated. Please stop.


> And how many of the Rust features are embedded in rustc, anyway?

All of them. This is not an exaggeration.


Where are you getting your C compiler from?


A few things:

1. If an offline build does not work, that's a bug. This use case is one that we explicitly want and need, and do already support.

2. You are incorrect about Rust. Rust is written in Rust, not C++, and so building Rust involves downloading a binary of a previous Rust. That's what 'bootstrapping' means.


I thought that rustc was bootstrapped from the old C-based compiler (allowing you to build clean from source) multiple times (C -> Rust 0.x? -> Rust 1.0)?

From what I last saw of Cargo, there's no clean path that lets you build from source without pulling binaries (and it's really painful if you want to use Cargo on a platform without prebuilt binaries, like the RPi).


There's never been a C-based compiler. Rust started out as OCaml. A true bootstrap would take weeks of non-stop compiling. You'd end up compiling rustc itself about 900 times, ignoring all the different LLVM builds, I think. Rust bootstraps from a binary snapshot of itself, and so you would have to work through the list of every snapshot ever (about 300, but you need to build rustc 3 times for each snapshot to properly bootstrap that copy). The snapshots are listed here: https://github.com/rust-lang/rust/blob/master/src/snapshots....

In the past the snapshots were taken almost weekly as massive language churn dictated. Now they're quite rare.


The situation with Cargo would be exactly the same: before Cargo existed, it couldn't use Cargo, because Cargo didn't exist. You can bootstrap your own Cargo from that time period if you'd like, but that doesn't mean it's easy. It's no different than rustc, whose snapshots you can bootstrap from the OCaml compiler.


> I thought that rustc was bootstrapped from the old C-based compiler (allowing you to build clean from source)

What's with all the baseless assumptions of the initial Rust compiler being written in either C or C++ in this thread?

Contrary to apparently popular belief, C and C++ aren't some primordial lifeform compiler language which all language implementations have to be initially written in. Believe it or not, you can write a compiler in any (general purpose) programming language! Amazing, I know.


1. Funny. Last time I built Cargo it insisted on downloading Cargo, and I've never seen even a hint that it can build itself without that. I needed to download dependencies manually, compile them manually, and compile Cargo's code manually as well.

> 2. You are incorrect about Rust. Rust is written in Rust, not C++,

And how does that render me incorrect? GHC is written in Haskell, but you can build it with a C compiler (and then recompile again, now with the Haskell compiler).

But indeed I was wrong. I remembered incorrectly that I had built Rust cleanly and without network activity. I actually downloaded the sources and issued the compilation command, without any regard for packaging, so it could do any stupid stuff like downloading things while compiling.

> and so building Rust involves downloading a binary of a previous Rust. That's what 'bootstrapping' means.

No. "Bootstrapping" means building a compiler using a compiler when you don't have that compiler. As I said elsewhere, compare that to OCaml and GHC, which can be built without downloading OCaml or GHC, respectively.


I don't think we're going to come to any sort of consensus here, so I'm going to drop off. Have a good day!


Going to have to agree with dozzie here, he's right on almost all counts.

There's a ton of things I like about Rust, but the points he outlined above are things that I've seen very much impede the adoption of Rust into environments that have similar restrictions (no downloads during the build, building from source without binaries) for security and reproducibility reasons.


> There's a ton of things I like about Rust, but the points he outlined above are things that I've seen very much impede the adoption of Rust into environments that have similar restrictions (no downloads during the build, building from source without binaries) for security and reproducibility reasons.

They're all on the roadmap for the very near future, because Servo (and Rust-in-Gecko) needs them. There's nothing in the design of Cargo that prevents these from being solved.

Personally, I love Cargo. Autoconf is miserable and I would hate to go back to it.


As I said to the parent, if offline builds do not work, please file bugs. Integrating Rust code into Firefox requires the same restrictions that you're talking about, and is a use case we explicitly want to support, and already do, to a certain extent.


Reproducibility is the thing that bothers me most about auto-downloading build systems. On the other hand, Cargo (IIRC) doesn't do what Maven does: if you say you want version X, you get version X or anything newer (unless you jump through hoops that I've never seen anyone jump through).

[Security just scares the piss out of me in general; downloading binaries is probably the smallest problem.]


Cargo follows semver by default. Specifying version "1" means "1.x.x, <2".

Pinning versions is simple. e.g., "=1.2.3".

You can read more about it here: http://doc.crates.io/crates-io.html
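For concreteness, those two styles of version requirement look like this in a Cargo.toml (the crate names here are just illustrative):

```toml
[dependencies]
# caret-style semver range: any 1.x.x, i.e. >= 1.0.0, < 2.0.0
serde = "1"

# exact pin: this version and nothing else
rand = "=0.3.14"
```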


And even when you have a range, Cargo.lock means that you get reproducible builds (in this sense) automatically.


No autoconf seems like a given. The whole reason for autoconf is to deal with the myriad different versions of C compilers/environments, a problem a brand new language with only a single compiler will not have.

We will see what happens when a vendor writes their own Rust compiler that has its own idiosyncrasies.


Rust build scripts for crates that rely on external (i.e. C ABI) libraries are completely naive, and rely almost exclusively on either the automake tools, cmake, or (most often) simply on the right version of the library dependency already being magically present on the system thanks to apt-get or some other user configuration.

Compiling pure rust relies on the single compiler toolchain which is assumed to be present by cargo, and universal on all platforms. That's pretty solid.

...but Rust = no autoconf is somewhat misleading; Rust that uses C libraries has to solve all the same problems that anything else using C libraries has to, but unlike Python or Node, there is no help (i.e. gyp or distutils) for you if you need to do this.

Perhaps eventually something clever will come along to fill the gap, but it's something of a pain point at the moment, especially on windows.


In addition to what eatsfoobars said, you can also use attributes for conditional compilation depending on a number of things (pointer size, platform, etc.) e.g., https://github.com/BurntSushi/walkdir/blob/master/src/same_f...
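A minimal sketch of such attributes (the function name here is made up for illustration):

```rust
// Conditional compilation via cfg attributes: exactly one of these two
// functions is compiled, depending on the target's pointer width.
#[cfg(target_pointer_width = "64")]
fn pointer_width() -> &'static str {
    "64-bit"
}

#[cfg(not(target_pointer_width = "64"))]
fn pointer_width() -> &'static str {
    "not 64-bit"
}

fn main() {
    println!("compiled for a {} target", pointer_width());
}
```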


The system Rust uses is closer to imake than autoconf: the compiler knows what system it's building for and sets attributes to let the code know what it should use. (The imake style is a little less flexible than autoconf, but no less insane. Six of one, half a dozen of the other.)

For example:

    cfg_if! {
      if #[cfg(windows)] {
        mod windows;
        pub use windows::*;
      } else {
        mod unix;
        pub use unix::*;
      }
    }
The 13 struct stat's in libc: https://github.com/rust-lang-nursery/libc/search?utf8=%E2%9C...

Cargo supports versioning libraries (and seemingly does so sanely!). Obscure platform support is obscure.


Build scripts[1] written in Rust are used for this.

[1]: http://doc.crates.io/build-script.html


I like C++ a lot (minus the C parts) and was once upon a time a C++ hipster (if that had been a thing in the mid-90s).

However I enjoy even more memory safe system programming languages.

What I love about Rust, besides its ML influence, is that it's making younger generations aware that systems programming languages don't necessarily have to mean unsafe-by-default, or the primitive toolchains of C and C++.


Did you not find that C++11 fixes many of the shortcomings of older C++?


Only for skilled developers that care about safety.

You cannot take C out of C++, so how much help is C++17 if some cowboys in the team keep doing C style coding?

Yes, I know all about static analyzers and code reviews, but the sad truth is that in many companies there are lots of developers and managers who don't care one bit about them.

This is especially true at companies whose main product isn't software.


Ugh tell me about it. My colleague(s) and superior(s) all write C++ like it is C from 1989. Coding standards exist, but they basically enforce indentation rules.

Const-correctness, use of references ("those things with ampersands by them...." said colleague A), exceptions are all unknown to them.

Object-slicing is a risk ("what's object slicing??").

Pointers are thrown around, added to multiple lists, no RAII, no initialisation of variables, use of memcpy everywhere, no copy constructors, C-style casts EVERYWHERE (including casting to siblings and their descendants...). Arrays are used everywhere, occasionally vectors are used but used like arrays (which isn't so bad), but they're used alongside arrays instead of replacing them; naked new/delete, use of malloc and realloc periodically; for loops are used instead of STL algorithms.

And this is a NEW codebase.

I am desperate to leave.

Does anyone need a C++ developer? Please, someone help!


Rust is still miles ahead of C++11 and even C++14 in terms of safety. On the other hand, C++ still has the upper hand in the type-level programming department, especially now that Hana exists and will be added to the next Boost release: https://github.com/boostorg/hana .


The issue, I think, is that C++11 doesn't fix those shortcomings; it provides better alternatives (though without much of the compiler support Rust has, at least without adding more tools on top), but you have to know about the issues with the originals and find out that the alternatives exist.


I see your point. I suppose C++11 and onwards did it quite well in that it is backwards compatible for the most part, and didn't orphan old codebases.


> no gdb

Just in case it's taken the wrong way - I don't know what the author meant, but gdb works very well with Rust. Actually it "just works" for the debug builds.


They probably meant not having to use it to debug any of the common C/C++ problems.


"Rigorous portability"?

There's an old adage: there is no portable software, only software that has been ported. What is the status of Rust on Windows? I understand there are ARM versions; are there any other supported architectures?

I like Rust a lot. It's my go-to systems language at the moment. But let's do keep the hype train somewhere near the tracks.


See Pascal/P. Wirth wisely started with a stack machine (P-code) that could map efficiently to about any architecture and be re-implemented by non-compiler experts. He then targeted a Pascal compiler to that. I believe he targeted a standard library to it. The result was Pascal/P applications ran on 70+ architectures without modification outside of system-specific code like I/O.

So, it's definitely achievable. Was done by Wirth at least twice (Pascal/P & Lilith) plus System/38 (later AS/400) at microcode level. Just takes modularity and a good intermediate language. Your adage is still true in that most screw up portability by using ineffective practices or not using effective ones. I rarely believe it when I see the word in a feature list.


Rust works well on Windows. You can find details about platform support here: https://doc.rust-lang.org/book/installing-rust.html#platform...


Cross-compiling to MIPS and PowerPC Linux? Cool!


I haven't really been keeping tabs on it but I've heard from a former coworker that cross-compiling for ARM Cortex-M parts is fairly easy too.


  No...gdb.
Enjoyed all this in Haskell for almost 18 years now. 10 years for sure; I don't remember when the --make switch in ghc was introduced.


ADT -> Abstract Data Type for those who wonder.

It means no need to explicitly define the type of a variable; the compiler infers it based on context.


In this case, I think they meant algebraic data type, since they mention it next to "pattern matching".

An algebraic data type is generally made up of two things: 1) structs or tuples, which are useful and present in most (but not all) languages; C/C++ has them, for example; and 2) tagged or disjoint unions, which are different from C unions in that you can tell which one of the union choices is being used (a variant is a particular tagged union).

Tagged unions are super-useful for pattern matching, so you get code like:

    match rv {
        Ok(_) => {
            None
        }
        Err(CacheError::KeyNotFound) => {
            Some(Resp::NotFound)
        }
        Err(ref err) => Some(from_cache_err(err))
    }
which has a nice syntax, is type-safe and can be compiled reasonably efficiently.

The algebraic part means that these two features can be used in any combination with each other, and also refers to the correspondence that, roughly, 0 is the empty type, 1 is the unit type, structs correspond to multiplication (and get called product types), and unions to addition (so are called sum types).

Why? Because if I have a tuple of a union of A or B or C, and another union of X or Y, I have 6 possible values in total, and (1+1+1) x (1+1) = 6.
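That counting argument can be checked directly in Rust (the type and function names here are invented for illustration):

```rust
// A sum type with three unit variants: 3 possible values.
#[derive(Clone, Copy)]
enum Abc { A, B, C }

// A sum type with two unit variants: 2 possible values.
#[derive(Clone, Copy)]
enum Xy { X, Y }

// Enumerate every inhabitant of the product type (Abc, Xy).
fn count_pairs() -> usize {
    let mut count = 0;
    for &a in &[Abc::A, Abc::B, Abc::C] {
        for &x in &[Xy::X, Xy::Y] {
            let _pair: (Abc, Xy) = (a, x);
            count += 1;
        }
    }
    count
}

fn main() {
    // (1+1+1) x (1+1) = 6, matching the arithmetic above
    assert_eq!(count_pairs(), 6);
    println!("{}", count_pairs());
}
```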


You probably also want to complete your toolkit with functions which are then exponentiation.

Also, in any ADT system 0 should be exactly the empty sum and 1 exactly the empty product. This is what makes things hang together properly.


0 and 1 are just idiomatic though, correct? The important part, from my knowledge of abstract algebra, is that for any operation ⊕ there exists an identity such that, for all x in your domain, x ⊕ identity = x.


Oh sure, you can call them whatever you like. But if you have an ADT system which has a unit type that isn't quite the empty product then you'll be in for an interesting time at least.

E.g., this is true in Haskell and it causes there to be some "extra structure" which often has to be smoothed over. Lots of library functions exist just to do this smoothing.


And differentiation, which is useful for functional update of persistent data structures. https://chris-taylor.github.io/blog/2013/02/13/the-algebra-o...


That's nice, but I wouldn't call it particularly core to ADT systems. It also requires quite a bit more machinery to embed directly since you need to be able to talk about functors/containers.


In this context, ADT probably means algebraic data type: https://en.wikipedia.org/wiki/Algebraic_data_type. And I don't think your definition for abstract data type is right either: usually the language feature that automatically determines types from context is called type inference.


Besides what evanpw already answered.

Abstract Data Types were introduced by languages like CLU, Mesa and Modula-2, among others.

It means having modules that only export the type names, but not their definition, while exposing functions/procedures to operate in such types.

It is not the same as the ADTs in functional languages, although they are quite similar.


They're not similar at all. Algebraic data types are finite sums and (tensor) products. Abstract data types are existential types, which are infinite sums indexed by a kind.


The point of my remark is how they look in terms of usage for the humble programmer.

They are similar in having a limited (non-extensible) data structure with a set of functions that operate on it. Depending on the FP language, ADTs can also partially hide their implementation, like the Abstract Data Types in modular languages.

I could also happily discuss the math theory and the denotational semantics of them across languages, but I don't think it would help many readers.


What makes them weak?


When you unpack a weak existential type repeatedly, every time you get a fresh new type. When you unpack a strong existential repeatedly, you always get the same type.

Anyway, my original comment was wrong: in ML, every time you project a type component from a module, you always get the same type. So it's more like a strong existential. My bad.


Ah, gotcha. Thanks!


There are two conceptual errors here:

(0) Rust doesn't really have much support for abstract data types. What Rust has is something not too unlike Haskell: you can hide `struct` fields [though not `enum` constructors], but the `struct` remains concrete. With a bona fide abstract data type mechanism, as in ML, you can give a module a signature that reveals the existence, but not the representation, of its type components. I'm guessing this much representation hiding isn't fully compatible with Rust's goals, which require knowing the sizes of types statically, unless they're behind a layer of indirection.

(1) Abstract data types have no relation whatsoever to type inference. [Nor do algebraic data types for that matter.] An abstract data type is an existential type: a dependent pair consisting of a type `T`, and a value whose type is `F T`, for some type function `F`. The closest thing Rust has to existentials is trait objects, but these can't do everything that existentials can do.
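As a sketch of the field-hiding Rust does support, per point (0) above (module and names invented for illustration):

```rust
// The type name `Stack` is exported, but its representation (the `items`
// field) is private to the module -- callers can only use the public API.
mod stack {
    pub struct Stack {
        items: Vec<i32>, // not visible outside this module
    }

    impl Stack {
        pub fn new() -> Stack {
            Stack { items: Vec::new() }
        }
        pub fn push(&mut self, x: i32) {
            self.items.push(x);
        }
        pub fn pop(&mut self) -> Option<i32> {
            self.items.pop()
        }
    }
}

fn main() {
    let mut s = stack::Stack::new();
    s.push(1);
    s.push(2);
    assert_eq!(s.pop(), Some(2));
    // `s.items` would be a compile error here: the representation is hidden,
    // but the struct itself remains a concrete, statically-sized type.
}
```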


I'm neither an ML nor Rust expert, but wouldn't Rust trait objects (which are basically type erasure wrappers over traits) approximate abstract data types?


With the caveat that there can only be one instance of it.


I'd like to echo the author in that Cargo is the best language package manager/build system I've had the pleasure to use.

However, there is one serious problem with Cargo's packages: they simply aren't authenticated. Cargo's developers acknowledge this issue and there is a proposal to amend it by adopting TUF [1], but the issue doesn't seem to enjoy the attention it deserves.

As a person living in a country that is increasingly edging towards the Kazakhstan route, where all secure traffic will be MITMed, simply relying on transport security as a way to secure crates is not sufficient. And that's ignoring compromises due to server security failures. At present, my only options are to include the entire dependency tree in the crate or avoid dependencies altogether.

Even ignoring my problem (which wouldn't affect a large number of Rust developers), mixing transport security with authentication and integrity shouldn't happen in vital infrastructure like a language's package system.

There are other problems, like not representing the system libraries that a crate depends on in the project manifest, but unauthenticated crates are a far more serious problem.

[1]:https://github.com/rust-lang/crates.io/issues/75


Is there a language package manager that you would consider a good role model in this respect?


Not the OP, but both rpm and deb have a pretty good validation story. Deb repositories (== package hashes) are gpg-signed for verification, but are usually provided over http for better caching. Nothing stops you from putting them on https either. Rpms can be signed at the package level as well.


I'm not aware of any language-specific package manager that gets close to having developer-package authentication, but Chromium's add-on system comes pretty close, as it requires add-ons to be signed by the developer's private key (at least, that was the case before I stopped developing for it). I believe Android's packages are secured in a similar fashion.


IIRC Common Lisp's asdf-install did use GPG signing from developers; IIRC-again, most people disliked it. Now quicklisp exists, which does _not_ use GPG; a quick google didn't reveal any info on package authentication. Maybe a lisper can chime in?


npm allows using git+ssh URLs which would work with ssh keys, but I'm not sure if that scales to whole organizations.


I've been working with Rust for 2 months. In my opinion, pluses:

+ Compiler with very helpful error messages;

+ Cargo is the best package manager in the universe;

+ Result<> is the most correct way to return results from a function, I dreamed about it;

+ Community of smart and friendly people.

Minuses:

- Functions are not first-class citizens - you can't return a function as a result;

- Violation of the "explicit over implicit" rule in method declarations: the self/&self argument is implicitly omitted in the call, but exists in the signature. Because of this, for example, you can't use a method of a class as a request handler in Iron; you need a closure wrapper with the same order and count of arguments. Pure ugliness;

- Often you need to just add a lifetime annotation to satisfy the compiler, without doing anything with that lifetime anywhere else in the code;

- Extremely verbose syntax;

- Rust has exceptions (panic), and you can't catch them. And with the promised panic::recover you'll have limitations on supported structures;

- Error handling encourages copy-pasting;

- Syntax is not intuitive, especially when traits and lifetimes come into a declaration - in such cases you can look at your own code and think "I hope at least the compiler can read it";

- Nested structures are a pain;

- Lack of optional and named arguments, lack of predefined values in structures;

- Too few modules for the web. Just 1 module for MySQL (thanks to its author - actively maintained);

- Usage of safe code can lead to out-of-bounds errors, with thread panic (hello to vec macros).

And I still think it's the best language we have :) Biggest advantage - most errors can be found at the compilation step. But the price is all of these minuses.
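A sketch of the Result-based error handling praised above (the `?` operator shown here was still the try! macro at the time of this thread; the function is invented for illustration):

```rust
use std::num::ParseIntError;

// Errors are ordinary values: the signature says exactly what can fail.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // propagate the parse error to the caller
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("twenty-one").is_err());
}
```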


> - Functions are not first-class citizens - you can't return a function as a result;

Of course you can: http://is.gd/YASF8T

One issue is that closures have anonymous types you can't name, so to return closures you need to use trait objects, which means dynamically-sized types, which means you have to box them

- Rust book on the subject: https://doc.rust-lang.org/book/closures.html#returning-closu...

- Example of returning a boxed closure: http://is.gd/pJIqiu
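Since is.gd links tend to rot, a sketch of the boxed-closure approach (written with today's `dyn` syntax; in 2016 it was spelled `Box<Fn(i32) -> i32>`):

```rust
// The closure's concrete type can't be named, so we return it as a boxed
// trait object; `move` makes the closure own the captured `x`.
fn make_adder(x: i32) -> Box<dyn Fn(i32) -> i32> {
    Box::new(move |y| x + y)
}

fn main() {
    let add5 = make_adder(5);
    assert_eq!(add5(2), 7);
}
```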

> Violation of the "explicit over implicit" rule in method declarations: the self/&self argument is implicitly omitted in the call, but exists in the signature.

That's a debatable assertion: in a method call, the self/&self is the subject, the part before the dot.

> Because of this, for example, you can't use method of class as request handler in Iron, you need closure wrapper with same order and count of arguments.

By "class" what do you mean? Because AFAIK it works on struct methods http://is.gd/xu6HRc

> Rust has exceptions (panic), and you can't catch them

That's because Rust doesn't have exceptions, it has panics, they're not a general-purpose error-reporting or control-flow mechanism.

> lack of predefined values in structures

http://doc.rust-lang.org/std/default/trait.Default.html

> Usage of safe code can lead to out-of-bounds errors, with thread panic (hello to vec macros).

Safe Rust doesn't mean no panics. `panic!` is safe Rust.


println() is not an Iron request handler; you gave the wrong example. Signature matters.

I don't see how you can declare a field of a structure to have the default value 65535, even with the Default trait.

I posted an opinion, not to argue, and it's really frustrating to see downvotes for an opinion. Make the discussion more constructive.


> println() is not an Iron request handler; you gave the wrong example. Signature matters.

println is not relevant here; it only shows the result of the call. The point is that you can use a method as a function. I expect the issue is elsewhere (but not having used Iron, I wouldn't know where).

> I don't see how you can declare a field of a structure to have the default value 65535, even with the Default trait.

Implement Default, then call it when instantiating the struct: http://is.gd/2YSoKd
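In case the link rots, a sketch of that pattern with an invented struct, combining a Default impl with struct-update syntax:

```rust
#[derive(Debug, PartialEq)]
struct X {
    i: u32,
    j: u64,
}

impl Default for X {
    // the "predefined value" lives here rather than in the declaration
    fn default() -> X {
        X { i: 65535, j: 0 }
    }
}

fn main() {
    // override only the fields you care about; the rest come from default()
    let x = X { j: 7, ..Default::default() };
    assert_eq!(x.i, 65535);
    assert_eq!(x.j, 7);
}
```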

> I posted an opinion, not to argue, and it's really frustrating to see downvotes for an opinion. Make the discussion more constructive.

Er… I didn't downvote you? I just replied to some of your objections which I assume is what you expected?


> Implement Default then call it

It's far from "default values in struct declaration". I mean this:

struct X { i:u32 = 5, j:u64 }


> the point is that you can use a method as a function

And still: no. When the signature is declared strictly (and in reality that's all cases except println! and format!), you can only pass a function with a matching signature. self/&self changes the signature.


> That's because Rust doesn't have exceptions, it has panics, they're not a general-purpose error-reporting or control-flow mechanism.

Oh, that's another minus: love to redefine well-known terms. Panics are exceptions, they behave exactly as exceptions, just unhandled. All the rest is just casuistry


> Oh, that's another minus: love to redefine well-known terms. Panics are exceptions

The point of calling them panic is so they're not interpreted as exceptions and people used to exceptions-as-an-error-reporting-mechanism don't try to use them so.

> they behave exactly as exceptions, just unhandled

The "unhandled" part is exactly why they're not called exceptions.


Redefining well-known terms gives nothing except confusion. And https://github.com/rust-lang/rfcs/pull/243 is another step in this direction (besides the "?" idea). If ordinary error handling ends up being called "exceptions", then we will have a complete mess with the "exceptions" term in Rust.


I don't think they've redefined any terms. When someone says a language "has exceptions", I think most people assume they can be caught as a way to recover from errors. Rust's panics are not that, so they used a different term to avoid confusion.


That someone just needs to say "Rust has exceptions which can't be handled". Ridiculous enough, but it's true. They made the confusion bigger, in my opinion. When even the standard library's code uses panic as a way to report an error, we can't say "nobody uses panics as exceptions".


We use panics for logic errors. This is an important distinction. We will never panic if a file couldn't be opened, or if a string couldn't be parsed, because these are expected errors that you should explicitly handle.

Catching panics should never be part of an application's normal control flow in Rust, unlike e.g. Java and C#. These languages use exceptions to indicate any error condition, and therefore writing correct error handling code requires try/catch.

Exceptions in Rust are used as a "soft abort". A real abort tears down the entire process, and the OS handles releasing all associated resources. But if you're writing a program that has logical tasks that resemble processes, the OS can't individually tear those down for you. When an exception (panic) is thrown in Rust, the task has hit a situation that cannot be recovered, and tears itself down so the rest of the application can reclaim the resources.

The places in Rust where exceptions are thrown are usually places where the programmer has chosen not to handle a logic error -- usually indexing out of bounds, integer overflows (sometimes), or forcing an optional value. Things that would be RuntimeExceptions (unchecked exceptions) in Java.


Exceptions that can't be caught are conceptually different, and different in practice, from exceptions that can be caught. To avoid confusion, it helps to call different things by different names. I'm not sure why you are so adamant that two things that are similar in one aspect, but absolutely different in another, need to be called the same thing.


Not only me :) I admit it's arguable question and I think people with different opinions about this question will never understand each other. So let's agree to disagree. Rust FTW :)


Fair points, though many of these aren't as bad as they seem.

> Violation of "explicit over implicit" rule in methods declarations: argument self/&self is implicitly omitted in call, but exist in signature. Because of this, for example, you can't use method of class as request handler in Iron, you need closure wrapper with same order and count of arguments. Pure ugliness;

You can call methods with explicit self too.

http://is.gd/dcLAZq

In this case you would pass `Foo::bar` down to iron. Assuming the arguments are in the right order.

> and you can't catch them

You're not supposed to. Panics are to be used as little as possible; only for irrecoverable things or for application-level panicking.

> Nested structures are a pain;

Could you expand on this point?

> Usage of safe code can lead to out-of-bounds errors, with thread panic (hello to vec macros).

It's idiomatic to use iterators as much as possible, or use the monadic indexing methods (which return Option).
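A sketch of both alternatives, using a plain Vec:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Checked indexing returns Option instead of panicking.
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(99), None); // out of bounds: no panic

    // Iterators avoid indices (and the bounds question) entirely.
    let sum: i32 = v.iter().sum();
    assert_eq!(sum, 60);
}
```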


I'm not sure why this was dead, but I vouched for it. Nothing here seems remotely worthy of flagging IMO.


> You can call methods with explicit self too.

The point is "signatures are different". You can't use this function in a place where you need a signature without &self.

> You're not supposed to

It makes the lives of web servers harder.

> Could you expand on this point?

To parse JSON with nested objects you need to declare a lot of structures. When APIs are difficult, people even use code generators.

> It's idiomatic

It's not an argument. What works should work. Idiomatic or not - that's another question.


> The point is "signatures are different".

The signature is fn(&Foo, u8). If that's not in the right order, you can always put it in a closure -- this is not a Rust-specific problem.

> It makes the lives of web servers harder.

I don't see why web servers need exceptions. You do have catch_panic if you want to prevent the application from ever crashing; that's what it's for (rewrapping fatal errors at the top level or preventing panics from leaking through FFI)

> To parse JSON with nested objects you need to declare a lot of structures.

Fair enough. I think there was a nested hashmap based JSON library somewhere?


> you can always put it in a closure

That's where I started. And it's a Rust-specific problem because of the implicit "self" argument, needed to pass a reference to the instance.


No, the implicit self can be made explicit in the caller. Or you can have a non-self function.

This is no different from having a function which takes parameters in a different order from what's expected, and is not specific to Rust.


No, the caller doesn't care about "self" and should not. It's Rust-specific ugliness, and "different order of arguments" is absolutely not the same as "implicit first argument".


I think I really need to see an example of what you're talking about.

"implicit first argument" can be made explicit, we've established that. If the signature is still different that's an issue with order of arguments.

If you don't want it to take self at all, well, it's a method, that's how methods work, they have a receiver. In all languages.


I really wonder how it can be unclear. Iron expects a handler with the signature "Fn(req: &mut Request) -> IronResult<Response>", which means you just can't use any method of an instance, because all of them have the crappy "&self" in the signature (otherwise there'd be no point in making them methods of some object). And yes, you can add a closure wrapper, add the "move" keyword and satisfy all the "move" requirements. But that's exactly what I'm talking about.


Yes. And this problem is isomorphic to the problem where you have a freestanding function; i.e.

struct Foo {..}

impl Foo { fn handle(&mut self, req: &mut Request) {..} }

is isomorphic to

struct Foo {..}

fn handle(foo: &mut Foo, req: &mut Request) {..}

And of course it is. The handle function needs a `Foo` to store state or whatever. If it didn't, you wouldn't have implemented it as a method at all, you would have implemented it as a non-self function (`impl Foo {fn handle(req: &mut Request)}`, which can also be passed directly as `Foo::handle`). Where would the Iron function get this receiver from? You have to pass the receiver down with the function, so you need a closure (anything which is callable and has an associated state, i.e. more than just a function pointer)

This has nothing to do with `self`. It has to do with the fact that methods need a receiver, which is a general fact of life with methods.

So your handler needs a receiver with some extra state that you're using. You would need to pass down the receiver regardless, and the signature doesn't support arbitrary receivers.

This is a problem in Rust and Python, as you correctly noted. It's a worse problem in Java where the stdlib callable doesn't even allow arguments, so you have to implement an interface (FWIW this solution is available to you in Rust too, either directly implementing `Fn(..)` or if Iron decides to accept a special trait instead), which is much more typing. PHP needs you to construct an array thing to make a callable closure, which is arguably just as 'bad' as Java.

C# has type safe callbacks which allow you to specify arguments, however methods don't work. You need additional boilerplate for it (http://stackoverflow.com/questions/4015451/pass-a-method-as-...)

C++ has function pointers, but methods don't work as function pointers (since you still need there to be a closure under the hood, and C++ doesn't do that kind of implicit stuff). C++ Concepts seems to have a "Callable" concept, but I didn't look into whether or not it comes with auto-closureifying syntax.

So really the issue has nothing to do with `self` being used for method signatures, it has to do with whether or not `object.method` is automatically turned into a closure. Rust tries to avoid such magic, so it doesn't have this feature. It could, with relative ease, without getting rid of `self`.

Two languages which do seem to do this are Javascript and Go. In Go (http://play.golang.org/p/jsKtkbD7vr), `foo.handle` is automatically turned into a `func(*Request)`, which is even more magical, since IronRun expected a `func` and not an `interface Callable` or whatever.

Javascript does the same, `obj.handle` will auto-create a closure for you.

Yes, it would be slightly neater for Rust to auto-closure `obj.handle`, but this sounds a bit too magical for Rust (which tries to be low level and make costs explicit). This has nothing to do with `self`, and this isn't a problem localized to Rust.


Thank you for the detailed explanation. Every language has some magic. Some languages have too much magic (like Ruby), and I hope Rust won't go that way. But when magic saves a couple of lines, I think it's ok; I'm with the authors of Java, C#, etc. on this. I prefer to call it syntactic sugar, and to be fair, Rust has examples of such sugar. So in my opinion a "sugared" (or magic) "this" keyword inside the method body is the more optimal way to pass the receiver to a method: it lets the programmer skip the closure boilerplate.


When you make a decision to include some magic, you have to look at the overall milieu of how it can be used and abused, not just the specific use case that it makes awesome. It may "save a couple of lines" but that's not all it does.

As far as auto-closureifying methods, you might have a lot of people doing things like `let x = vec.len` and getting confusing errors when trying to use `x`, and other errors due to the implicit moving or borrowing of `vec`. Currently when you make such mistakes (which is common) the compiler tells you exactly how to fix it because taking the value of a method is illegal. If this was legal, it may cause errors further down the line which are not clearly tied to this and you can't have reasonable error messages. Additionally there's no way to specify this moving or borrowing, so an arbitrary choice has to be made, which would be implicit and opaque. Explicit closuring is just a few more characters and not a bad price to pay for avoiding this class of errors.

The reason we have explicit `self` (which again, is different from the above and completely orthogonal to the Iron issue) is due to ownership/borrowing again. You want to be able to specify how a method takes its receiver (self/&self/&mut self/Box<Self>). It could be done via extra syntax but that gets uglier, really, and in a language like rust where ownership is important, you don't want ownership implications to be kludgy and confusing. Having a syntax which mirrors argument syntax is really clearer.


> but that gets uglier, really

is an argument. Maybe I just can't imagine how ugly it would look without the "self" argument. Thank you for the explanation. At this point I almost agree with you. Well, I agree, but I still can't say "oh, I love that closure boilerplate!" :)


Again, the self thing and the closure thing are completely orthogonal. Unrelated. Each sugar could exist independently of the other.


There's nothing Rust-specific about methods having a self argument. All languages with methods have such a thing. I'm really unsure of what your specific complaint is here.


All languages? Maybe Java, or JavaScript, PHP, C#, Go? Or just Rust and Python?


Yes, all languages, though in Rust and Python they are explicit rather than implicit at the definition site. And note that although Rust makes them explicit for technical reasons (allowing you to control the manner in which the receiver is taken), their explicitness in Python is only because Guido wanted people to understand that methods require a receiver parameter in order to operate, a misunderstanding which appears to be the root of your confusion.


Have you compared with a modern typed functional language e.g. OCaml, F#, Scala, Haskell? I'm pleased that Rust is making a lot of these functional things more mainstream, but for normal application code where GC is acceptable Rust adds a lot of unnecessary overhead.

(Personally I use Scala, but using Maven for dependency management rather than SBT. It doesn't tick all your boxes (in particular I'm thinking of one specific member of the "community") but it's close, and deals with a lot of your minuses)


You can pass methods as functions or closures via `Type::method`. The first argument will be the self argument.

I don't see much of an alternative to panic on bounds checked indexing error, short of not providing indexing at all or developing a system of proven indices that would be extremely unfriendly to newcomers, see this project: https://github.com/bluss/indexing


> I don't see much of an alternative to panic on bounds checked indexing error

Returning an Option. That's what Elm does:

    head : List a -> Maybe a

    get : Int -> Array a -> Maybe a
It tends to drive people towards not indexing collections (which is often good).


Rust's std::slice has a `get` method that does exactly that: http://doc.rust-lang.org/std/primitive.slice.html#method.get


Idiomatically you should be using iterators or monadic indexing (i.e. the methods which return an Option). Regular indexing is exposed for convenience, but I don't think it should be used as often.

Perhaps we should have a clippy lint for this.


Lol, you must live in a different world, lots of code is using regular bounds checked indexing and it is completely idiomatic. So idiomatic in fact that there is no nonpanicking slicing method for slices or strings, for example.


You don't need one, since you can use `.chars().nth()` (indexing a string needs you to iterate over it anyway, because utf8). Though I do feel that having a direct get() API would be nice

I have seen quite a bit of regular `[]` indexing, however it's almost always in very obvious situations, and most of the places where you might index in C/++ (iteration, or with a bounds check) I've seen people using iterators or the monadic way.

But yeah, it's not that unidiomatic. It's still used a lot, just not to the extent it's used elsewhere.

(For example, we have thrice as much indexing in our DOM binding generation python code (`components/script/dom/bindings/codegen`) in Servo than we do in all of the non-test rust files (`components/`) in the main repo combined)


Don't. This would be a terrible lint. It's perfectly reasonable to expect an index to be in bounds for various reasons.


It could be a pedantic lint (off by default).

There may be ways to lint in case of an `if x.len()... x[...]` though. That should be done monadically; given that panics aren't supposed to be caught in most situations, we should minimize the number of ways programmer error can lead to a panic.

Looking through servo's code a lot of the cases are where we use fixed-length arrays, or when dealing with graphicsy buffers. One use case which needs indexing is when we use an index as an "id". No easy way to fix that without introducing Rc or something.


Dependent typing, dude. It's the way of the future.


That alternative is already present, yet panicking indexing is still listed as a drawback. So the only remaining alternative would be to not provide indexing at all.


No, you can't do it when you need &self.


You can, e.g.

  struct Foo;

  impl Foo {
      fn bar(&self) {}
  }

  fn main() {
      let x = Foo;
      Foo::bar(&x);
  }
What you may be encountering is you can't pass a `Foo` when a `&Foo` is expected (why I needed to write &x), because the types differ. Calls with method syntax don't see this because they automatically borrow the receiver if necessary, but explicit function calls do not do this sort of borrowing and so the types need to line up right.


You have me guessing you want a bound method where self is implicitly supplied. Sure, there's no sugar for that.


The "return function as result" is being worked on: https://github.com/rust-lang/rfcs/pull/1305


Technically it already works, but you have to either return an actual function, or return a boxed closure (because trait object)


Looks like my information about returning functions is outdated (although issue in tracker isn't closed). Thanks to all who pointed it. Now I love Rust even more and sure with time all other minuses will be polished out :)


[deleted]


Integer overflow is different than out of bounds access, which still panics in release.


I stand corrected thanks


Violation of the "explicit over implicit" rule in method declarations: the self/&self argument is implicitly omitted at the call site, but exists in the signature. Because of this, for example, you can't use a method of a class as a request handler in Iron; you need a closure wrapper with the same order and count of arguments. Pure ugliness;

Maybe that's ugly, but it's explicit. The method is statically compiled into the program to take the receiver as an argument. What you want to do is specify the receiver argument on the caller side and the remaining arguments on the callee side, which is impossible to do. The closest you can get is to create a closure on the caller side and pass that to the callee. Languages that avoid this, like say, python, are just implicitly creating that closure. Creating a closure implicitly goes against the design goals of rust.

At least rust doesn't have javascript's abominable semantics.


I never found the borrow checker to be that hard to work with. For a start, if you're developing a systems language in a modern way, (say C++11/14) you're already used to thinking in terms of move semantics, const refs and shared vs unique pointers.

Secondly, Rust's compiler gives INCREDIBLY detailed explanations of what the borrow checker doesn't like, where the conflicting scopes begin and end, and helpful suggestions of what to do to make it work.


> For a start, if you're developing a systems language in a modern way, (say C++11/14) you're already used to thinking in terms of move semantics, const refs and shared vs unique pointers.

I think this is the difference when coming from a primarily GC'd (So I suppose "non systems") language. Coming from C# I never even thought about the difference between storing a concrete type in an array vs storing an abstract type or interface in an array. In C# so long as they are reference types, there is very little difference. Each entry would be a pointer to the heap regardless. Moving to a language that encourages stack allocation and structs on the stack instead of on the heap, and actually has you motivate each pointer is a huge difference.

The same goes for ownership: it's very easy to forget it with a GC'd runtime (this is just sloppy, but the GC will unfortunately help you be sloppy). Way too often the code you write in a GC'd language will have convenient pointers both up, down and sideways in any data structure, just in case. Writing a doubly linked list in C# is no harder than writing a singly linked list. In Rust the singly linked list is lesson 1, and the doubly linked one is much later.


Agreed. As a one time hardcore C developer who went soft and has used Java for well over a decade, I'm struggling with the borrow checker. I was hoping that Rust would allow me to switch back to developing native apps without the memory management headaches. However, the constraints of the borrow checker are not coming easily to me.

I'm sure I'll get it with time (if I don't give up first), but often I just don't understand what the error messages or the referenced help mean.

A big problem for me is that the language syntax alone doesn't map to valid code. You edit your code removing the compiler errors until it is syntactically correct, then the borrow checker kicks in and tells you that the whole thing is wrong. So you end up deleting a whole module and starting again. Errrh!

Also - with all the changes to the languages, most of the answers on stackoverflow should be tagged as out-of-date.


Feel free to come ask questions in the various IRC channels on irc.mozilla.org (#rust, #rust-beginners, and dozens more for specific topics). :)


On one hand it indeed feels like a more strict version of move semantics in C++; on the other hand it also prohibits what is a central idea of the STL: having multiple mutable references to the same object (almost all algorithms operate on at least two iterators from the same container). It seems there is no place for an STL-like library in Rust, which is quite regrettable.


Most algorithms that I use from the C++ stdlib take `container.begin()` and `container.end()`, and use the latter to check for the end condition in the main loop. Given that Rust has the `Iter` trait which allows you to iterate straight through any type that implements it, I do not see the point in having multiple mutable references. I mean, sure, it is useful sometimes, but it also brings in a lot of headaches with it :)


Right, in most cases you could replace begin(), end() with whole container / range, when considering arguments to the algorithm function. Though, there are some exceptions, like std::rotate, or just cases where you want to place result in the same container.

Looking from the perspective of implementation of those algorithms, it is no longer that simple and single iterator is rarely sufficient, consider: std::unique, std::reverse, std::partition, std::sort, std::inplace_merge to name a few, where there is much more to it than just checking for end.


Some of those do show up on iter.

https://doc.rust-lang.org/std/iter/trait.Iterator.html

But some of the more specialized ones don't work using just iter. But on the other hand, no iterator invalidation.


The next version of Rust has _significantly_ updated documentation here. A link for anyone reading before the next two weeks: https://doc.rust-lang.org/beta/std/iter/trait.Iterator.html


The crucial difference I had in mind is that those from C++ work in-place. Returning a new collection as a result poses no problem for either of those languages. Writing a specialized version for each collection is also possible, but with the STL you don't have to do that.

Maybe it would be possible to implement those algorithms in Rust in terms of ranges like in D, instead of iterators? I will have to try and see.


The problem with Ranges is you can't trust anything they tell you about bounds, and they can't trust you about bounds, so every access has to be checked. This hurts in algorithms like binary search and sorting (which are some of the few algorithms that don't work for iterators).

Ultimately though, there just hasn't been a lot of demand for ranges. Iterators and slices do most of the work people care about. We actually had tried to make iterators more like ranges back in the day, but we tore it out because no one cared. To this day you can't sort a VecDeque, and no one has ever bothered us about it.


I did indeed look a little bit about previous attempts at collections in Rust, but didn't find too much about ranges. What would be good keywords and place to look for, any hints?

It seems to me that you can go quite far with mutable but mutually disjoint ranges, which is more or less what slices do for vectors currently. They are also safe, because you can only split them into subslices (which borrow ownership), but not extend them (which could potentially create overlapping ranges). Moreover, as long as there is any range into a given container, you can't perform operations that could invalidate derived ranges (things that reallocate the vector, etc.)

Range checking doesn't seem that bad, because in most cases it is not about trusting what a range tells you, but the range trusting itself, which is fine. Take your example of binary search: if a range knew how to split itself in the middle, it wouldn't really have to do any bounds checking. For sorted ranges, maybe operations like lower_bound or upper_bound would be great primitives, or a single operation that encompasses both situations:

  let (lower_range, equivalent_range, upper_range) = range.split(&value);
Of course, in general as you point out, you would have to pay a price of bounds checking for random access.

As far as I can see, this should be sufficient to implement the algorithmic part. What I still find hard is modifying collections based on a resulting range. For example, how to write an equivalent to the following C++ code, which first moves consecutive duplicates to the end of the array, and then erases them from the container:

  std::vector<int> v { ... };
  v.erase(unique(v.begin(), v.end()),
          v.end());
I have a few ideas, which mostly boil down to the following: create a description of the operation to be executed, which runs only after all ranges have returned their ownership over the collection. But so far this interface has not been fully satisfactory.


Multiple mutable references can be problematic though: http://manishearth.github.io/blog/2015/05/17/the-problem-wit...

(Rust has Cell/RefCell if you need this, though)


Thanks for the link. In C++, determining whether something is safe is indeed sometimes quite non-trivial; my favourite (but non-practical) example is a non-empty std::list<int> x, used in the following way:

  x.remove(x.front());
Comment for non-C++ programmers: the front method returns a reference to the first value in the list, and remove takes a const reference to a value and removes all elements that compare equal to it. The problem is of course that the first element, compared with the provided argument, would compare equal and be subsequently deleted, making the reference invalid. As a side comment, I will note that the above is in fact required to work, though it is probably not something you should write.


I like this example a lot :)


The submission and expanded discussion at /r/rust: https://www.reddit.com/r/rust/comments/40d8ca/two_weeks_of_r...


In 2016 I would like to see a Rust frontend for GCC, with LTO support and all the list of target backends GCC support.


There used to be a gcc front end during Rust's pre-1.0 days, but it seems that the language was too unstable for it to keep up.

A gcc front-end would be great, if only to show that the language spec is mature and complete enough that other implementations can be built on it.


> with LTO support

rustc already supports lto (lto=true in cargo manifest, or -C lto in rustc)
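For reference, enabling it through the manifest looks like this (a release-profile sketch):

```toml
# Cargo.toml
[profile.release]
lto = true
```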

> all the list of target backends GCC support.

Which targets are you interested in specifically?


I'm a little skeptical after first reading "Disclaimer: I'm digging Rust. I lost my hunger for programming from doing too many sad commercial projects. And now it's back. You rock, Rust!" and then seeing comparisons to Python in the text where it seems like the author might prefer writing in Python, especially with all of the Rust cons listed.

Is the author saying Rust is a breath of fresh air from C++ or from all other programming languages? Ruby was my breath of fresh air for a while, but lately I've not been enthusiastic about any language or framework, really, so I curious how great Rust really is these days.


As somebody who comes from the Python community just like the author, I can relate to some of his points:

"In Python I try to avoid even having dependencies if I can, and only use the standard library. I don't want my users to have to deal with virtualenv and pip if they don't have to (especially if they're not pythonistas)"

This rings so true to me. Package management in Rust is much better than in Python.

Also, Rust feels a bit like "pythonic systems programming". I mean that it just feels like the right tool for the job. And it might feel a bit verbose in the beginning, but in reality it's explicit and precise. It feels pretty ergonomic just like Python.

Isn't it fantastic that people start comparing Rust to more high level languages like Python and Ruby instead of C++ or C? It says a lot about the language design and its goals.


Am I the only one who doesn't find Python's dependency management a problem?

"pip install -Ur requirements.txt" and you're done. A virtualenv is one extra line.


Very easy when you're in control of the machine (and are a developer). A while back I had written a tool in Python purely for myself (mistake #1), and relied heavily on dependencies which needed to compile C extensions (mistake #2), and then it turned out to be very useful for others on my team (mistake #3), which consisted of mostly non-developers (mistake #4) using Windows/Mac (mistake #5). I've since left, and while they supposedly have a dev who can help them install the tool, I'm still running it for them on a regular basis because I don't want to be mean and they haven't sorted it out.

Python's dependency management is great for devs who have a GCC toolchain, but it really sucks when you need to share code with non-developers.


It's good to be nice like that but you should still be getting a consultancy fee.


> but lately I've not been enthusiastic about any language or framework

That's sad... There are so many languages out there, and not even one seems interesting or fascinating to you? Take a look at my list[1], maybe you'll find something you'd like ;-)

Anyway, Rust and Python are too dissimilar to compare them and decide which is more of a "breath of fresh air". It would be more meaningful to compare Rust with C++ and D. I wish someone did a blog post about it.

[1] https://klibert.pl/articles/programming_langs.html


I also tried rust for a couple of weeks, but holy hell is it ugly. Why improve on c++ and not improve the syntax or containers? It's like python never existed.


If by that reference to Python you mean you want significant whitespace, I think that would make it unnecessarily difficult to work with different scopes.

  fn something() {
      foobar();
      {
          bar();
          foo();
      }
      quux();
  }
That is much easier to understand, and easier for tools to work with (they can actually automatically indent your code without breaking it), than:

  def something():
      foobar()
          bar()
          foo()
      quux()

edit: And how would you even add a scope right after another block (like if)? That would require you to have a keyword anyway.


> It's like python never existed.

Whitespace / indentation used for scope delimiting in Python is a bad idea. No, thanks. I'm glad Rust isn't using anything like that.


I agree that whitespace scope delimiting is a bad idea, but I was thinking more about general syntactical clarity. Just because C++ uses every possible typographical symbol doesn't mean rust should. Simple example which had me WTFing:

    use std::collections::{HashMap, HashSet};
Are all those ::s and {}s adding to clarity?

Why are there no easy to use initializers for maps? It's 2015! I have to do:

    let mut map = ::std::collections::HashMap::new();
    map.insert("a".to_string(), "b".to_string());
    ...
or some such nonsense. Oh, right, I have to write a macro and basically learn another language. Also, using the macro system for basic things like println just seems bonkers to me; use macros when you really, really need to do something special, not everywhere.


You personally don't have to write the macro: cargo makes it super easy to use ones others have written, e.g. https://crates.io/crates/maplit/.


Yeah, I also miss initializer lists[1] in Rust. Not sure why they weren't implemented, but it's a very good feature which is pleasant to use in C++11 and higher. It's way neater syntax than writing different macros like for vector and other containers. Is it something that developers just never got to implement, or they didn't want to do it for some reason?

[1]: https://en.wikipedia.org/wiki/C%2B%2B11#Initializer_lists


Println is a macro because it needs to be: it does compile-time checking of the format string.


Do you think it's feasible to implement initializer lists in Rust, or there is some language limitation which makes it hard?

I.e. I'd prefer to write:

    let v: Vec<i32> = {1, 2, 3, 4, 5};
Instead of

    let v = vec![1, 2, 3, 4, 5];


Initialisation from initializer lists in C++ basically relies on the = operator being able to do arbitrary work, e.g. allocations and generally manipulating the heap in arbitrary ways. Rust tries to avoid implicit performance sinks like that, and so doesn't have constructors in the same manner as C++.

That said, in future, it is likely to be possible for things like `Vec::from_array([1, 2, 3, 4])` and `HashMap::from_array([[1, 2], [3, 4]])` to work.


Rust doesn't need to take the C++ approach to implement it, but it can still use the same syntax which is simply much neater. HashMap::from_array([[1, 2], [3, 4]]) looks horrible in comparison.

Exact implementation is another question. That's why asked, whether it's feasible or it's something hard to fit into the language.


I don't see how Rust can get the same syntax as C++ without having the implicit performance sink I mentioned: creating vectors and hashmaps needs to allocate, and so letting this happen implicitly is not great. In fact, the syntax was pretty much the whole meat of my comment.


Why can't Rust do what it does already with macros, except using the simpler syntax described above? I.e. that syntax can mean explicit allocation (it's just a short form for it).


I'm not sure, other than that because {} evaluates to a value, this already looks very close to valid Rust; I thought it was for a minute, but realized that it's not quite.

I'm not sure I personally feel that the first is particularly better than the latter, but it should have to be by a large margin to justify adding new syntax to the language.


I can explain why I think initializer lists style is better. You can have a standard / uniform syntax for initialization. And it's using already defined syntax for types to actually indicate what you are creating. Compare it to the current one. It's necessary to create a new syntax for each type (that you want to enable for such initialization). I.e. a macro. This macro also is different from already existing syntax which indicates the type (i.e. you have Vec and you have vec!). That's why I think current approach is more cumbersome and initializer lists style is more streamlined and elegant.

> it should have to be by a large margin to justify adding new syntax to the language.

C++11 put some effort into simplifying some cumbersome syntax constructs. It makes things easier to read and write which pays off.


There's always Nim if you want nice syntax but then you're working with a garbage collected language without a borrow checker.


If syntax is still verbose (it's ok for me) at least it is clear. There is no ambiguity unlike c++.


How does Rust interoperate with C++? For example, would it be easy to integrate V8 into a Rust project?


With specific C++ features there is still a lot of work to do, however the C FFI is solid and simple to use if that is enough for you (see e.g. https://doc.rust-lang.org/book/ffi.html).

Interfacing Rust and python, ruby, node can be quite simple in fact (micro demo for node: http://calculist.org/blog/2015/12/23/neon-node-rust/)


Not well. Interoperating with C++ is extremely difficult as the calling conventions are complicated.

See this from the "D" documentation: "Being 100% compatible with C++ means more or less adding a fully functional C++ compiler front end"

https://dlang.org/spec/cpp_interface.html


When it comes to shared objects (.so, dll, dylib, etc.) not even C++ interoperates with C++, when two different compilers are involved. Name mangling is not standardized, so everyone does their own thing. This breaks binary compatibility.


It doesn't. C++ is too complex. Rust interoperates with C only.


Aside from the other comments, https://github.com/michaelwu/rust-bindgen/tree/sm-hacks somewhat works for binding to more complicated C++ code.

C code works fine, as always.

