* A faster grep ( https://github.com/BurntSushi/ripgrep )
* A GPU accelerated terminal emulator ( https://github.com/jwilm/alacritty )
* A web browser ( https://servo.org/ )
* A containerization system ( https://github.com/tailhook/vagga )
* An operating system ( https://github.com/redox-os/ )
* An extremely fast text editor ( https://github.com/google/xi-editor )
And be faster and safer than C/C++.
I get that it may be hard to read if you're not familiar with the language, but so are * and & if you're not familiar with them in the context of pointers. Sometimes, however, a language feature calls for a special symbol, as is the case here.
The usage of apostrophes in English also doesn't make much sense for an outsider, but they're very much a necessary part of the language and very easy to use if you're an English speaker. Same applies for Rust.
I'm a beginner to Rust and this was my exact reaction (Too much complex syntax!!), but one conclusion I have come to since is that one of the really nice things languages like Python do is simply not deal with a whole bunch of CS issues (e.g., everything is a reference to an object), or be very opinionated about them (ownership/lifetime is bound to scope, with no way to extend or change it). By doing this, not only is the language simpler, it also needs fewer symbols.
What Rust does is track reference lifetimes at compile time, giving you certainty about who can safely "own" or "borrow" memory in every single line of code, without any runtime pointer indirections or other slowdowns. The language is built around this feature at every level, with "lifetimes" being a syntactic construct at the level of type and mutability.
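To make "lifetimes as a syntactic construct" concrete, here is a minimal sketch; the function name and strings are illustrative, not from the comment above:

```rust
// `'a` says: the returned reference lives no longer than either input.
// The compiler checks this claim at every call site, at compile time.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("borrow");
    let b = String::from("checker!");
    let winner = longest(&a, &b);
    // The compiler has proven `winner` cannot outlive `a` or `b`,
    // with zero runtime bookkeeping.
    println!("{}", winner);
}
```

Note that the annotation adds no runtime cost at all; it exists purely so the compiler can verify who is borrowing from whom.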
Imagine if you wanted to safely parse user-submitted JSON, maybe cache derived products from that JSON, and then make sure that when the input buffer is released, you weren't holding any handles into strings inside it. The only safe way to do that in any other language is to proactively copy the strings, or rigorously reference-count the input buffer. But Rust has your back here. If you use zero-copy deserialization from Serde ( https://github.com/serde-rs/serde/releases/tag/v1.0.0 ) then the compiler will refuse to compile if you are using any of that data longer than the lifetime of the original buffer, and do so without needing to add any runtime bookkeeping.
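Serde's real zero-copy API is more involved, but the ownership relationship it leans on can be shown with a hand-rolled sketch (the `Record` type and `key=value` format here are made up for illustration):

```rust
/// A borrowed "parse result": every field is a slice into the input buffer,
/// so parsing copies nothing.
struct Record<'buf> {
    key: &'buf str,
    value: &'buf str,
}

fn parse(input: &str) -> Option<Record<'_>> {
    let (key, value) = input.split_once('=')?;
    Some(Record { key: key.trim(), value: value.trim() })
}

fn main() {
    let buffer = String::from("name = ripgrep");
    let rec = parse(&buffer).unwrap();
    println!("{} -> {}", rec.key, rec.value);
    // drop(buffer); // compile error: `buffer` is still borrowed by `rec`
}
```

Uncommenting the `drop` line is exactly the "holding handles into a released buffer" bug described above, and the compiler rejects it outright rather than letting it become a use-after-free.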
Yes, it's an annoying language to learn because of that "fight with the borrow checker." I LOVE that the language designers and steering committee are so open to quality-of-life improvements for newbies, like that string warning. The language will only get easier to learn over time. It may never be what you use to make your next app, but if you're doing systems programming, it's the wave of the future.
Given this, I wonder if the (seemingly) added complexity of Rust could result in more attack surfaces of other kinds.
I don't know anything about Rust mind you.
Other non-garbage-collected languages (i.e., those with manual memory management) lack Rust's memory-safety semantics and are thus subject to segfaults, buffer-overflow exploits, etc. Rust is extremely "safe" since it prevents these types of errors at compile time.
It's just like typed and untyped languages. Typed languages require more up front work in that you must define all the types and data structures and which functions can accept them. This is more work than just creating them ad-hoc and using them as needed, but it prevents certain types of errors by catching them at compile time. The ownership and lifetime information for variables is loosely equivalent to that. It prevents certain types of problematic usage. It isn't perfect, and sometimes you have to work around its limitations, but the same could be said of most type systems.
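The analogy can be made concrete: just as a type error stops you from passing a String where an integer is expected, an ownership error stops you from using a value after giving it away. A small sketch (the names here are illustrative):

```rust
// `consume` takes ownership of `s`; the String is freed when it returns.
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("owned");
    let n = consume(s);      // ownership of `s` moves into `consume`
    println!("{}", n);
    // println!("{}", s);    // compile error: value borrowed after move
}
```

The commented-out line is the "problematic usage" the ownership system exists to prevent; in C++ the equivalent (reading from a moved-from or freed object) compiles fine and fails at runtime, if you're lucky.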
There are plenty of primers on this feature of Rust, I advise you to take a look, you might find it very interesting.
(However, you're correct in that it's not undefined behavior to mess up locking in Rust, at least not without an `unsafe` block involved.)
You know all this, of course. I'm just commenting for others' sake.
Rust is like the safety mechanism on a sawblade that shuts off once it realizes it's cutting into your finger.
Clang is a competitor to GCC.
Rust's main benefit is in the compiler front end itself (the type system and borrow checker), not in optimization and codegen.
If I understand correctly, no other languages offer the same assurances. I remember Godbolt (the Compiler Explorer) is a nice way to explore how code compiles to assembly, which you can then compare.
 Many compilers have much more than two stages. For example, Rust has another intermediate representation called MIR.
So it's theoretically possible to express such a program in assembly language, but it's not something humans could realistically produce without tools such as Rust.
It's not a set law, but more expressive type systems almost always increase the class of properties that can be "easily" proved in a language. I work on a verification tool for C/C++ programs, and we constantly struggle with the languages themselves. Pointer arithmetic and aliasing dramatically complicate any possible analysis, and these problems are only exacerbated at the lower IR/assembly level.
I wouldn't recommend anyone write real code using LLVM IR, but it's not as bad as you'd expect.
Yes, the reason is things like garbage collection and the language runtime. Every program does ultimately run some form of machine code, but the amount and type of code generated can vary very widely, even before considering things like VMs, where you have another couple of layers of abstraction that slow things down.
Of course you can use some other compiled language instead of Rust, I think the choice boils down to productivity and ecosystem.
EDIT: where I said "some other compiled language" I should have really said "some other compiled and non-garbage-collected language"
Node.js, Java, and Go use garbage collection for use cases where programmers do not have to manage memory themselves.
If Node/Java/Go use GC (or VMs), then aren't they more safe than Rust?
Nope, it largely depends on the language design.
Yes and no; memory allocation is one of the issues with GC. Go is generally safer for networking, but not for security, where Rust can manage memory securely.
In fact, we don't have to bother much with GC, because it depends on the programmers and on job availability. GC was created on the idea that managing memory by hand is hard for large-scale projects. It works well for Azul Systems, and they have recently advertised for an LLVM engineer to extract more performance where the JVM could not.
Go has the advantage of memory safety (via GC), plus better concurrency safety, which is lacking in Rust.
There are concurrency safe languages, but not mentioned in this thread.
This is not borne out by practical experience. While we've not experimented much with Rust, our Java deployments are significantly larger than those in native languages like Go (and I expect Rust would actually be a bit smaller still, since it requires less runtime than Go).
JVM deployed binaries are large, not especially because the bytecode is large, but because you have to ship all the bytecode for all your code and all its transitive dependencies; there's no linker and the semantics of the language make it essentially impossible to statically prove that individual functions or classes aren't needed. You can trim it down with tools like Proguard, but that's a non-trivial undertaking and prone to error, which again you won't know until runtime.
Plus the drawback that you need a relatively large VM to run a JVM binary, but you can run Rust binaries completely standalone (out of a scratch container if you want).
> Go has the advantage of memory safety (via GC), plus better concurrency safety, which is lacking in Rust.
I'm curious what you mean by "better concurrency safety". My understanding is that Rust attempts to statically prove that concurrent accesses are safe (e.g. https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h..., especially the section on locks). Go does nothing of the sort - it provides some nice concurrency primitives and tooling to detect data races, but the compiler does nothing to actively prevent it.
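A minimal sketch of what "statically prove concurrent accesses are safe" means in practice. Shared mutable state must be wrapped (here in `Arc<Mutex<_>>`) or the program simply does not compile; the function and counts below are illustrative:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_sum(n_threads: usize, per_thread: usize) -> usize {
    let total = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *total.lock().unwrap() += 1; // the only way in is through the lock
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}

fn main() {
    // Had we tried to share a bare `usize` across the spawned threads,
    // the borrow checker would reject the closure outright -- exactly the
    // compile-time check that Go's compiler doesn't attempt.
    println!("{}", parallel_sum(4, 1000));
}
```

The Go race detector can catch the unsynchronized version of this at runtime, if the race happens to fire during testing; Rust refuses to build it in the first place.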
Rust however prevents these kinds of errors at compile time.
Which things were you thinking of that Rust was lacking here?
Much better would be a proper type system to get rid of those at compile time, of course. Look at Pony.
And a better memory-safety system than RC.
As for Pony,
> Pony-ORCA, yes the killer whale, is based on ideas from ownership and deferred, distributed, weighted reference counting.
That's its GC.
More technical explanation:
"In practice, methods are not compiled the first time they are called. For each method, the JVM maintains a call count, which is incremented every time the method is called. The JVM interprets a method until its call count exceeds a JIT compilation threshold. Therefore, often-used methods are compiled soon after the JVM has started, and less-used methods are compiled much later, or not at all. The JIT compilation threshold helps the JVM start quickly and still have improved performance. The threshold has been carefully selected to obtain an optimal balance between startup times and long term performance."
Are there any examples you could list?
Rust is a modern C/C++ replacement, or at least tries to be. That's no mean feat.
With that said, it's a very interesting project. Especially with regards to its rendering engine.
Those systems failed market adoption mostly due to politics and company acquisitions than technical hurdles.
Modula-3 is a good example of how to support all the scenarios required by a systems programming language with GC; unfortunately, Compaq buying DEC, followed by HP buying Compaq, killed all the work the SRC group was doing.
For embedded work, check the Oberon compilers sold by Astrobe for the ARM Cortex-M3, Cortex-M4, and Cortex-M7, as well as Xilinx FPGA systems.
Rust should be beating the majority of those languages in well-implemented comparisons.
Refcounting is often one of the slower ways to implement a GC. It also has other issues, like long GC pauses when a large structure goes out of scope.
For fairness you need to compare Rust to D or Pony or SBCL, which also come close to C++ (and are sometimes even faster), plus added concurrency safety or memory safety, which can be circumvented in Rust.
In idiomatic Rust code, you typically have very few reference increments/decrements, making it faster and more efficient than both GC and traditional RC-based approaches.
The reasons for this are that objects are often allocated on the stack, passed by reference (a safe pointer), and directly integrated into a larger structure, requiring fewer heap allocations and very few – if any – refcounted objects. In Rust, unlike C or C++, you can do this safely and ergonomically because Rust enforces clear and well-defined ownership semantics.
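A small sketch of that pattern; the geometry types are made up for illustration:

```rust
struct Point { x: f64, y: f64 }

struct Path {
    // Points are stored inline in the Vec's buffer: one allocation for
    // the whole path, zero refcounted objects.
    points: Vec<Point>,
}

// Borrowing: no Rc, no copy, no increment/decrement -- just a pointer the
// compiler has checked.
fn length(path: &Path) -> f64 {
    path.points
        .windows(2)
        .map(|w| ((w[1].x - w[0].x).powi(2) + (w[1].y - w[0].y).powi(2)).sqrt())
        .sum()
}

fn main() {
    let path = Path {
        points: vec![
            Point { x: 0.0, y: 0.0 },
            Point { x: 3.0, y: 4.0 },
        ],
    };
    println!("{}", length(&path)); // `path` is only borrowed, never refcounted
}
```

In a typical GC'd or Rc-heavy design, each `Point` might be its own heap object with its own header or count; here the entire structure is one contiguous buffer that callers borrow.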
Unsubstantiated criticism usually gets downvoted on HN, Rust or not.
Please elaborate. In particular, what RC and concurrency problems?
To wit, https://github.com/ggreer/the_silver_searcher (aka "ag") is about as fast as ripgrep, but written in C.
This is not to say that Rust does not have benefits. But the benefit is not "speed", but "speed plus security".
But nobody said otherwise? I don't understand your point. Speed + safety is indeed precisely the point. I would implore you to do your own comparative analysis by looking at the types of bugs reported for these search tools. (I can't do this for you. If I could, I would.)
> What makes ripgrep fast (AFAIK) is mainly using mmap() instead of open()/read() to read files,
I think you're confused. This is what the author of the silver searcher has claimed for a long time, but with ripgrep, it's actually precisely the opposite. When searching a large directory of files, memory mapping them has so much overhead that reading files into intermediate fixed size buffers is actually faster. Memory maps can occasionally be faster, but only when the size of the file is large enough to overcome the overhead. A code repository has many many small files, so memory maps do worse. (N.B. My context here is Linux. This doesn't necessarily apply to other operating systems.)
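The fixed-size-buffer strategy looks roughly like this (the buffer size and the newline-counting "search" are illustrative, not ripgrep's actual code):

```rust
use std::fs::File;
use std::io::{Read, Write};

fn count_newlines(path: &str) -> std::io::Result<usize> {
    let mut file = File::open(path)?;
    let mut buf = [0u8; 8 * 1024]; // one small reusable buffer; no mmap setup cost
    let mut count = 0;
    loop {
        let n = file.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        count += buf[..n].iter().filter(|&&b| b == b'\n').count();
    }
    Ok(count)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("rg_demo.txt");
    File::create(&path)?.write_all(b"one\ntwo\nthree\n")?;
    println!("{}", count_newlines(path.to_str().unwrap())?);
    Ok(())
}
```

For thousands of small files, the per-file cost here is just an `open` and a couple of `read`s into a buffer that's already hot in cache, whereas each `mmap` pays for page-table setup and teardown that the small file never amortizes.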
> and relying on Rust's regex library that compiles regexes to DFAs which can run in linear time.
There's no confusion that such things can't be done in C. GNU grep also uses a lazy DFA, for example, and is written in C.
The "linear time" aspect doesn't show up too often, and none of my benchmarks actually exploit that.
There's a lot more to the story of how ripgrep beats the silver searcher. "About as fast" is fairly accurate in many cases, but to stop there would be pretty sad because you'd miss out on other cool things like:
- SIMD for multiple pattern matching
- Heuristics for improving usage of memchr
- Parallel directory iterator (all safe Rust code)
- Fast multiple glob matching
And yes, I could have done all of this in C. But it's likely I would have given up long before I finished.
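The memchr heuristic mentioned above can be sketched like this: instead of scanning for the pattern's first byte (often a common one like a space), scan for its rarest byte and verify around each hit. Real ripgrep uses the vectorized memchr routines; plain `position` stands in for them here, and the function is illustrative:

```rust
// Find `needle` in `haystack` by scanning for the byte at `rare_index`
// (assumed to be the needle's rarest byte) and verifying each candidate.
fn find(haystack: &[u8], needle: &[u8], rare_index: usize) -> Option<usize> {
    let rare = needle[rare_index];
    let mut offset = rare_index;
    while offset < haystack.len() {
        // Candidate positions: occurrences of the rare byte.
        let hit = haystack[offset..].iter().position(|&b| b == rare)? + offset;
        let start = hit - rare_index; // where the full match would begin
        if haystack[start..].starts_with(needle) {
            return Some(start);
        }
        offset = hit + 1;
    }
    None
}

fn main() {
    let hay = b"the quick brown fox";
    // 'q' is rarer than most bytes, so false candidates are rare too.
    println!("{:?}", find(hay, b"quick", 0)); // Some(4)
}
```

The payoff is that the inner byte scan (the `position` call standing in for memchr) runs vectorized over long stretches of text, and the expensive verification step only fires at the rare byte's few occurrences.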
 - http://blog.burntsushi.net/ripgrep/
 - https://github.com/mysql/mysql-server/blob/5.7/.gitignore