Hacker News
Rust 1.17 (rust-lang.org)
551 points by steveklabnik on Apr 27, 2017 | 319 comments

Hey, folks! I have never used Rust, but all the buzz around it made me curious, so please forgive my ignorance on the matter. I have a few honest questions about it, and I'm pretty sure they have been answered before, so feel free to point me to those resources; I'd appreciate it very much.

First of all, what's the point of Rust? OK, it's a systems language, but what does that even mean? You can write a system in any language, right? Python is a scripting language and lots of people use it every day to write systems. What problems is it trying to solve? Also, how does it compare to Go? (I haven't used either one.) It seems to me that both Go and Rust are mainly used by C/C++ folks who want a new shiny language that somehow resembles the syntax, structure, or functionality of the former while adding some modern features (I might be totally wrong in presuming this).

From the perspective of other languages I know a little, such as Ruby, Python, or JS, its syntax seems really bloated to me with all those special symbols. Do you feel more productive writing software in it than in, let's say, JS? Please keep in mind that I really don't know Rust (nor have I ever tried a statically typed language) and I don't mean to offend anyone.

Thank you guys!

Because you can build things like:

* A faster grep ( https://github.com/BurntSushi/ripgrep )

* A GPU accelerated terminal emulator ( https://github.com/jwilm/alacritty )

* A web browser ( https://servo.org/ )

* A containerization system ( https://github.com/tailhook/vagga )

* An operating system ( https://github.com/redox-os/ )

* An extremely fast text editor ( https://github.com/google/xi-editor )

And be faster and safer than C/C++.

Off-topic: using symbols makes it hard to read for an outsider.


These symbols are in fact very important in Rust and reading the book makes their usage very clear.

I get that it may be hard to read if you're not familiar with the language, but so are * and & if you're not familiar with them in the context of pointers. Sometimes, however, a language feature calls for a special symbol, as is the case here.

The usage of apostrophes in English also doesn't make much sense for an outsider, but they're very much a necessary part of the language and very easy to use if you're an English speaker. Same applies for Rust.

To add to that point...

I'm a beginner to Rust and this was my exact reaction (too much complex syntax!), but one conclusion I have come to since is that one of the really nice things languages like Python do is simply not deal with a whole bunch of CS issues (e.g., everything is a reference to an object), or be very opinionated about them (ownership/lifetime is bound to scope, with no way to extend/change it). By doing this, not only are the languages simpler, but they also need fewer symbols.

Is there a reason why all the above software cannot perform as "fast" or "safe" as Rust when written in other programming languages? After all, every program compiles down to machine code/assembly.

As soon as you need to be able to access low-level pointers for performance, you run into a problem: you can easily end up holding onto a reference to memory that's been cleared, or otherwise invalidated, by some other piece of code. This type of bug is insidious, and can easily be missed by the most stringent unit tests and even fuzzing. Of course you can try to get around this with abstractions on top, as every higher-level language does, and as you can do in C++ if you're willing to build reference-counted pointers around absolutely everything... but these are not zero-cost abstractions.

What Rust does is track reference lifetimes at compile time, giving you certainty about who can safely "own" or "borrow" memory in every single line of code, without any runtime pointer indirections or other slowdowns. The language is built around this feature at every level, with "lifetimes" being a syntactic construct at the level of type and mutability.
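A minimal sketch of what that compile-time tracking looks like (my own illustration, not from the thread): a function returning a borrowed string has to declare, via the lifetime parameter `'a`, that its result borrows from its inputs, and the compiler then checks every caller against that contract.

```rust
// `longest` promises its result lives no longer than the shorter-lived input.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let s1 = String::from("hello world");
    {
        let s2 = String::from("hi");
        let result = longest(&s1, &s2);
        println!("{}", result); // fine: both borrows are still alive here
    }
    // Using `result` outside this scope, after `s2` is dropped, would be
    // rejected at compile time: no dangling reference, no runtime cost.
}
```

There's no refcount or runtime check anywhere in that program; the check happens entirely in the compiler.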

Imagine if you wanted to safely parse user-submitted JSON, maybe cache derived products from that JSON, and then make sure that when the input buffer is released, you weren't holding any handles into strings inside it. The only safe way to do that in any other language is to proactively copy the strings, or rigorously reference-count the input buffer. But Rust has your back here. If you use zero-copy deserialization from Serde ( https://github.com/serde-rs/serde/releases/tag/v1.0.0 ) then the compiler will refuse to compile if you are using any of that data longer than the lifetime of the original buffer, and do so without needing to add any runtime bookkeeping.
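Here's the same zero-copy idea in plain std Rust, with a toy `fields` function standing in for a real deserializer (serde does this far more generally; this is just a sketch): the parsed values are slices borrowing the input buffer, and the compiler refuses any use of them after the buffer is gone.

```rust
// Toy "parser": returns string slices that borrow the input, copying nothing.
fn fields(input: &str) -> Vec<&str> {
    input.split(',').collect()
}

fn main() {
    let buffer = String::from("alpha,beta,gamma");
    let parsed = fields(&buffer);
    println!("{:?}", parsed); // slices into `buffer`, zero copies
    drop(buffer); // release the input buffer
    // println!("{:?}", parsed); // error[E0505]: cannot move out of `buffer`
    //                           // while `parsed` still borrows from it
}
```
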

Yes, it's an annoying language to learn because of that "fight with the borrow checker." I LOVE that the language designers and steering committee are so open to quality-of-life improvements for newbies, like that string warning. The language will only get easier to learn over time. It may never be what you use to make your next app, but if you're doing systems programming, it's the wave of the future.

So, I recently saw a study claiming that memory-management-related issues constituted 25% of attacks; the rest were other vectors. I may be misremembering, but in any case, simply getting rid of memory faults does not get rid of all attack vectors.

Given this, I wonder if the (seemingly) added complexity of Rust could result in more attack surfaces of other kinds.

I don't know anything about Rust mind you.

Rust is "fast" because it runs close to the bare metal, like C or C++. It doesn't feature garbage collection, many of which stop the world to clear memory.

Other non-garbage-collected languages (i.e., those with manual memory management) lack Rust's memory safety semantics and are thus subject to segfaults, buffer overflow exploits, etc. Rust is extremely "safe" since it prevents these types of errors at compile time.

So why does C, for example, lack Rust's memory safety semantics? Is it something to do with the design of the language itself? Can Rust predict user input, whereas C cannot?

Because Rust (the language) makes you reason about and define the concepts of ownership, borrowing and lifetimes of variables, and enforces this to the degree that certain classes of bugs are not possible. C does not require (or natively support) that, so this information is not available to the compiler.

It's just like typed and untyped languages. Typed languages require more up front work in that you must define all the types and data structures and which functions can accept them. This is more work than just creating them ad-hoc and using them as needed, but it prevents certain types of errors by catching them at compile time. The ownership and lifetime information for variables is loosely equivalent to that. It prevents certain types of problematic usage. It isn't perfect, and sometimes you have to work around its limitations, but the same could be said of most type systems.

There are plenty of primers on this feature of Rust, I advise you to take a look, you might find it very interesting.

There is a lot of "undefined behavior" (UB) in C, including straightforward stuff like overflowing signed integer addition. More insidiously, multithreaded code can be quite hard to write in C, because it's very very very easy to trigger UB in your multithreaded code. For example, if you have a shared variable that's protected by a lock, it's pretty easy to accidentally forget to lock the lock (or lock the wrong lock) before accessing the variable, and now you've invoked undefined behavior. Rust doesn't allow you to make those mistakes.

To be clear, Rust's model of locking data rather than locking code is really lovely, but that doesn't mean that it's not possible to mess up locks: Rust only prevents data races, not race conditions in general.

(However, you're correct in that it's not undefined behavior to mess up locking in Rust, at least not without an `unsafe` block involved.)

True, you can certainly deadlock in Rust or do other logic mistakes. What I was trying to get at is you cannot access data protected by a lock without holding the lock, and you cannot leak any references to that data past the unlock either, so you cannot stray into undefined behavior by accessing the same data from multiple threads without appropriate locking/synchronization (like you can do oh so easily in C). At least not without an `unsafe` block.
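A minimal sketch of that "lock the data, not the code" point: the integer below lives *inside* the `Mutex`, so the only way to touch it is through the guard returned by `lock()`. Forgetting to lock isn't a latent race; it's a compile error, because there is no other path to the data.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The counter is only reachable through the Mutex.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Acquiring the guard is the only way to get at the i32.
                let mut n = counter.lock().unwrap();
                *n += 1;
            }) // guard dropped here, lock released
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // 4
}
```
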

You know all this, of course. I'm just commenting for others' sake.

They're different languages with different philosophies. C provides a set of very powerful and potentially dangerous tools (direct memory access and management, for instance), and does not police how you use them. Rust wants you to carefully explain what you want to do with those tools via its ownership system, unless you opt for "unsafe".

Rust is like the safety mechanism on a sawblade that shuts off once it realizes it's cutting into your finger.

Not sure which C compilers you're referring to. If you mean Clang, it is also based on LLVM. https://clang.llvm.org/

Clang is a competitor to GCC.

I did not mention compilers in my comment. Do you mean that if I use LLVM to compile a C program, then I get the same assurances as when I compile a Rust program?

LLVM doesn't speak C; it is an optimization and codegen layer. Both rustc and clang output to LLVM.

Rust's main benefit is in the compiler itself, not optimization and codegen.

LLVM is a compiler construction toolkit, not a compiler.

If I understand correctly, no other language offers the same assurances. I remember Godbolt is a nice way to explore how code compiles down to assembly so you can compare:

https://rust.godbolt.org https://gcc.godbolt.org https://go.godbolt.org

Help me out. What is compiler construction?

Most compilers can be broken up into two steps, which is what we call a front end and back end. [1] The front end of a compiler does syntactical (parsing + lexing) and semantic analysis (type checking, etc). The back end of a compiler takes in an intermediate representation of the code, performs optimizations, and emits the assembly language for a target CPU. Clang is an example of a front end and LLVM is the back end for Clang. Clang and rustc both share LLVM as a back end, meaning they both emit LLVM IR.

[1] Many compilers have much more than two stages. For example, Rust has another intermediate representation called MIR.

Many thanks, kind stranger.

If your team of programmers tried to build something like Servo in assembly, you would never complete the task because the task is so laborious and error prone. Finding and correcting all the bugs in your hand-written assembly isn't feasible.

So it's theoretically possible to express such a program in assembly language, but it's not something humans could realistically produce without tools such as Rust.

It might be able to, but there are no guarantees. That is the real benefit of strong and expressive type systems: they can prove properties of the program at compile time. An equivalent program could be written in C, brainf*ck, or even on a Turing machine, but it gets harder and harder to prove properties like memory safety the less structured the language is.

Let's say you are using LLVM to compile a Rust program, and an "equivalent" C program. You can compile both of them down to IR, and then enforce type safety at the IR level. Doesn't that ensure that you can prove properties about the program at compile time?

Possibly, but types can encode far more than just the structure of data. Rust, for example, uses types to encode lifetime and ownership information. Haskell uses the IO monad to encapsulate non-determinism. Neither of those have equivalent concepts at the IR level.

It's not a set law, but more expressive type systems almost always increase the class of properties that can be "easily" proved in a language. I work on a verification tool for C/C++ programs and we constantly struggle with the languages. Pointer arithmetic and aliasing dramatically complicate any possible analysis, and these problems are only exacerbated at a lower level IR/ASM level.

You can't enforce type safety at the IR level. LLVM IR has very little in the way of type information.

I've been manually writing some LLVM IR recently to prepare for a project involving JIT compilation, and LLVM's type system is actually shockingly expressive. The majority of the problems I run into are the fact that you have to copy-paste more often and that leads to errors.

I wouldn't recommend anyone write real code using LLVM IR, but it's not as bad as you'd expect.

The hard part is getting the "equivalent" C program. :)

Just to add the same links for your question:

https://gcc.godbolt.org https://rust.godbolt.org

> Is there a reason why all the above software cannot perform as "fast" or "safe" as Rust when written in other programming languages? After all, every program compiles down to machine code/assembly.

Yes, the reason is things like garbage collection and language runtimes. Every program does ultimately run as some form of machine code, but the amount and type of code generated can vary very widely, not even considering things like VMs, where you have another couple of layers of abstraction that slow things down.

Yes, in theory all programs in all languages eventually run as machine code. However, some programming languages (e.g. Python) will do things that you can't get away from, like reference counting. So you'll be running extra machine code and you can't "turn that off". The designers of those languages have made valid trade-offs that result in that.

The GP comment mentioned Ruby, Python and JS, which are not suitable for this kind of thing.

Of course you can use some other compiled language instead of Rust, I think the choice boils down to productivity and ecosystem.

EDIT: where I said "some other compiled language" I should have really said "some other compiled and non-garbage-collected language"

Rust is based on LLVM, just like Swift, Pony, Crystal, etc., to compile to native code. Java uses bytecode, which the JVM then needs to translate to machine code.

Node.js, Java, and Go use garbage collection for use cases where programmers do not have to manage memory.

You could use LLVM to compile any language to LLVM IR, and then to machine code using a backend. Does that mean every language has the same properties as Rust?

If Node/Java/Go use GC (or VMs), then aren't they more safe than Rust?

> You could use LLVM to compile any language to LLVM IR, and then to machine code using a backend. Does that mean every language has the same properties as Rust?

Nope, it largely depends on the language design.

> If Node/Java/Go use GC (or VMs), then aren't they more safe than Rust?

Yes and no. Memory allocation is one of the issues with GC. Go is generally safer for networking, but not for security, where Rust can manage things securely.

This is a good read on Java vs Rust: https://llogiq.github.io/2016/02/28/java-rust.html

https://news.ycombinator.com/item?id=14173716 https://github.com/stouset/secrets/tree/stack-secrets

In fact, we don't have to bother much with GC, because it depends on the programmers and job availability. GC was created based on the idea that managing memory is hard in large-scale projects. It works well for Azul Systems, and they have recently advertised for an LLVM engineer to bring more performance where the JVM could not.

Parts of that are completely untrue. The JVM has the advantage of selectively JIT-compiling hot or small functions, whilst Rust has to compile everything ahead of time. Rust binaries are thus big, whilst you don't ship JVM binaries, you ship small bytecode.

Go has the advantage of memory safety (via GC), plus better concurrency safety, which is lacking in Rust. There are concurrency safe languages, but not mentioned in this thread.

> Rust binaries are thus big, whilst you don't ship JVM binaries, you ship small bytecode.

This is not borne out by practical experience. While we've not experimented much with Rust, our Java deployments are significantly larger than other native languages like Go (and I expect Rust would actually be a bit smaller than that since it requires less runtime than Go).

JVM deployed binaries are large, not especially because the bytecode is large, but because you have to ship all the bytecode for all your code and all its transitive dependencies; there's no linker and the semantics of the language make it essentially impossible to statically prove that individual functions or classes aren't needed. You can trim it down with tools like Proguard, but that's a non-trivial undertaking and prone to error, which again you won't know until runtime.

Plus the drawback that you need a relatively large VM to run a JVM binary, but you can run Rust binaries completely standalone (out of a scratch container if you want).

> Go has the advantage of memory safety (via GC), plus better concurrency safety, which is lacking in Rust.

I'm curious what you mean by "better concurrency safety". My understanding is that Rust attempts to statically prove that concurrent accesses are safe (e.g. https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h..., especially the section on locks). Go does nothing of the sort - it provides some nice concurrency primitives and tooling to detect data races, but the compiler does nothing to actively prevent it.

Go, in my understanding, is not memory safe even with its gc, and also doesn't have "concurrency safety" because of this. That's why https://golang.org/doc/articles/race_detector.html exists.

Rust however prevents these kinds of errors at compile time.

Which things were you thinking of that Rust was lacking here?

The go race detector is still miles better than enforcing manual mutexes and locking in concurrent Rust code.

Much better would be a proper type system to get rid of those at compile-time of course. Look at pony. And a better memory-safety system than RC.

You don't always need those; it depends on what you're doing. And furthermore, that _is_ the compile-time system that enforces things. Go's race detector can only prove the presence of races, not their absence.

As for Pony,

> Pony-ORCA, yes the killer whale, is based on ideas from ownership and deferred, distributed, weighted reference counting.

That's its GC.

What you mentioned about JIT is exactly what came to mind when IBM explained what they like about Swift compared to Java on the JVM.

More technical explanation: https://www.ibm.com/support/knowledgecenter/SSYKE2_7.0.0/com...

"In practice, methods are not compiled the first time they are called. For each method, the JVM maintains a call count, which is incremented every time the method is called. The JVM interprets a method until its call count exceeds a JIT compilation threshold. Therefore, often-used methods are compiled soon after the JVM has started, and less-used methods are compiled much later, or not at all. The JIT compilation threshold helps the JVM start quickly and still have improved performance. The threshold has been carefully selected to obtain an optimal balance between startup times and long term performance."

> Go has the advantage of memory safety (via GC), plus better concurrency safety, which is lacking in Rust.

Are there any examples you could list?

One of Go's GC issues is discussed in this thread with stouset: https://news.ycombinator.com/item?id=14174500

Java bytecode looks rather large to me.

It's only fair to also write "Also slower, but definitely safer, than C++".

Rust should not be slower. If it is, that's a bug; please file it.

Just like with Javascript, everything must be rebuilt again!..

Except Rust has incredible memory safety, without cumbersome and slow GC, and runs close to C++ speeds, unlike the heavy, wasteful, and slow frameworks you mention like Electron or HTML5.

Rust is a modern C/C++ replacement, or at least tries to be. That's no mean feat.

Plus alacritty looks great and is much faster and lighter than most other terminal emulators, ripgrep is a great replacement for grep and can be a serious improvement when you are searching through millions of lines on thousands of giant files. To mention just the two of them...

Alacritty is full of render issues and missing features. It's fast because it barely does anything, and doesn't even do it correctly. Just check the ever-growing laundry list of issues it has on GitHub.

With that said, it's a very interesting project. Especially with regards to its rendering engine.

I only use ripgrep these days.

Except that there are other languages like Go, Crystal, D, and Nim that all offer memory safety (thanks to their GCs), with a light GC (like reference counting). And based upon the same benchmarks that Rust participates in, they are close or even faster at times, with similar or better memory usage.


GCs are nondeterministic amongst other issues. Baking one into the core of a systems language likely isn't a good idea.

It worked out alright for Algol 68RS, Mesa/Cedar, Modula-2+, Modula-3, Oberon, Oberon-2, Active Oberon, Component Pascal, Sing#, System C#.

Those systems failed market adoption mostly due to politics and company acquisitions than technical hurdles.

Some applications are easier without a GC. Embedded work comes to mind. Realtime guarantees for systems with GC are also somewhat tricky (though not impossible to achieve).

True, but just because a systems language has a GC doesn't mean one has to use it everywhere, that it is the only mechanism for allocating memory, or that it has to run all the time.

Modula-3 is a good example of how to support all scenarios required by a systems programming language with GC, unfortunately Compaq buying DEC followed by HP buying Compaq, killed all the work the SRC group was doing.

For embedded work check the Oberon compilers sold by Astrobe for ARM Cortex-M3, Cortex-M4 and Cortex-M7, as well as, Xilinx FPGA Systems.

Baking in memory-unsafe languages with limited type systems, and blocking libraries for the hard parts, is not a good idea for systems either.

Try these benchmarks instead: http://benchmarksgame.alioth.debian.org/u64q/rust.html

Rust should be beating the majority of those languages in well-implemented comparisons.

> light GC ( like reference counting )

Refcounting is often one of the slower ways to implement a GC. It also has other issues, like long GC pauses when a large structure goes out of scope.

I hope you do realize that a good GC is usually faster than refcounting, and comparing it to slow GCs does not prove anything.

For fairness you need to compare Rust to D or Pony or SBCL, which are also close to C++ (even faster), plus added concurrency safety or memory safety, which can be circumvented in Rust.

> I hope you do realize that a good GC is usually faster than refcounting, and comparing it to slow GCs does not prove anything.

In idiomatic Rust code, you typically have very few reference increments/decrements, making it faster and more efficient than both GC and traditional RC-based approaches.

The reasons for this are that objects are often allocated on the stack, passed by reference (a safe pointer), or directly integrated into a larger structure, requiring fewer heap allocations and very few, if any, refcounted objects. In Rust, unlike C or C++, you can do this safely and ergonomically because Rust enforces clear and well-defined ownership semantics.
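A small illustrative sketch (the names are made up): borrowing a slice costs nothing at runtime, and refcounting only enters the picture when you explicitly opt into shared ownership with `Rc`.

```rust
use std::rc::Rc;

// Pass-by-reference: no copies, no refcount traffic, just a pointer.
fn total(values: &[u64]) -> u64 {
    values.iter().sum()
}

fn main() {
    let values = vec![1, 2, 3]; // one heap allocation (the Vec's buffer)
    println!("{}", total(&values)); // borrowed: no clone, no refcount

    // Shared ownership is opt-in; only Rc::clone touches a refcount.
    let shared = Rc::new(values);
    let alias = Rc::clone(&shared);
    println!("{}", total(&alias)); // deref coercion: &Rc<Vec<u64>> -> &[u64]
}
```
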


You haven't actually demonstrated any problems, just repeated that they exist.

Unsubstantiated criticism usually gets downvoted on HN, Rust or not.

> I do know rust, their RC problems, their unsafe problems, their concurrency and locking problems and their type system problems.

Please elaborate. In particular, what RC and concurrency problems?

Yeah, hard to imagine that things can improve over time!

try ripgrep, it'll blow your socks off.

What makes ripgrep fast (AFAIK) is mainly using mmap() instead of open()/read() to read files, and relying on Rust's regex library that compiles regexes to DFAs which can run in linear time. Those are things that you can do just as well in C.

To wit, https://github.com/ggreer/the_silver_searcher (aka "ag") is about as fast as ripgrep, but written in C.

This is not to say that Rust does not have benefits. But the benefit is not "speed", but "speed plus security".

Here's a much more thorough post on why it's fast. mmap is only a small part of it (a bunch of the grep implementations mmap)


That's a great blog post. The `linux_literal` section is particularly interesting re: mmap.

Ripgrep only uses mmap() when it is searching very few files. For anything beyond that it uses intermediate buffers.

I switched from ag to rg a while ago. The speed benefits are noticeable.

> Those are things that you can do just as well in C. [..] But the benefit is not "speed", but "speed plus security".

But nobody said otherwise? I don't understand your point. Speed + safety is indeed precisely the point. I would implore you to do your own comparative analysis by looking at the types of bugs reported for these search tools. (I can't do this for you. If I could, I would.)

> What makes ripgrep fast (AFAIK) is mainly using mmap() instead of open()/read() to read files,

I think you're confused. This is what the author of the silver searcher has claimed for a long time, but with ripgrep, it's actually precisely the opposite. When searching a large directory of files, memory mapping them has so much overhead that reading files into intermediate fixed size buffers is actually faster. Memory maps can occasionally be faster, but only when the size of the file is large enough to overcome the overhead. A code repository has many many small files, so memory maps do worse. (N.B. My context here is Linux. This doesn't necessarily apply to other operating systems.)

> and relying on Rust's regex library that compiles regexes to DFAs which can run in linear time.

There's no confusion that such things can't be done in C. GNU grep also uses a lazy DFA, for example, and is written in C.

The "linear time" aspect doesn't show up too often, and none of my benchmarks[1] actually exploit that.

There's a lot more to the story of how ripgrep beats the silver searcher. "About as fast" is fairly accurate in many cases, but to stop there would be pretty sad because you'd miss out on other cool things like:

* SIMD for multiple pattern matching

* Heuristics for improving usage of memchr

* Parallel directory iterator (all safe Rust code)

* Fast multiple glob matching

In many cases, this can make a big difference. Try searching the MySQL server repository, for example, and you'll find that the silver searcher isn't "about as fast" as ripgrep. (Hint: Take a peek at its .gitignore file.[2] This has nothing to do with memory maps, SIMD or linear time regex engines.)

And yes, I could have done all of this in C. But it's likely I would have given up long before I finished.

[1] - http://blog.burntsushi.net/ripgrep/

[2] - https://github.com/mysql/mysql-server/blob/5.7/.gitignore

A lot of people describe Rust as a systems language (and it is!) and often that's followed up as a "better C". But it's also more than that!

Despite Rust looking a lot like C++ and often occupying a similar role to C, its heritage is just as much that of an ML, and it shows in the semantics and type system. So while a lot of people do use it as a safer C, I often find it to be a "pragmatic Haskell".

My company does much of our back-end in Rust, and we've found it to be an excellent application language. A lot of this is the type system and the tooling. Cargo, the package manager and build system is excellent, builds are far more reproducible and I've spent far less time debugging builds (as opposed to code) than just about any other language I've used. Rust's type system is great if you like that kind of thing, and I find Rust to be particularly easy to debug and refactor. RAII and destructors/drop semantics come in handy more often than you'd think, and the "fearless concurrency" thing is real.

That said, lifetimes do occasionally cause some pain, the learning curve isn't the kindest, and the patterns are different from a lot of other languages. Personally though, I've found Rust to be very productive after getting over the initial learning hump and understanding the borrow checker better. This gets more true the larger the project gets, as Rust tends (in my experience) toward the trade-off of putting a fair bit of complexity up front in exchange for a slower accumulation of complexity as the project increases in size and scope. It's a trade-off that I'm often happy to make, but it's always a judgement call.

May I ask what kind of backend you're developing? I've always been curious about development speed for Rust/Go in terms of JSON APIs, i.e. whether it's remotely feasible to, say, replace Node.js with Rust. I'd expect a drop in development speed but an increase in stability and correctness.

npm has started writing new backend services in Rust instead of node. (Not all new services, but they have multiple ones in production now.)

You'd want to check out https://tokio.rs/ and the (released very soon, but not quite yet!) hyper 0.11 that integrates with it.

I've been developing a small self-contained web app to serve as a scoreboard for roller derby. It consists of a server written in Rust that mostly serves a bunch of static pages and javascript files, as well as a JSON API that the javascript calls into to provide the dynamism. Thus far, serde-json has been an amazingly low-friction way to design JSON APIs, and Rocket has been a relatively convenient way to organize web apps. The closest competitor to my app is written in Java and needs about 200 MB of memory at steady state just on the server. My version runs with an RSS of about 3 MB and considerably better performance.

If you're looking for a replacement for node, I'd take a much deeper look at Elixir/Erlang than Rust (first up at least). You get easy concurrency, full functional language, good libraries (Phoenix is a great web framework), stability etc. Rust looks super cool, but there are high-level languages which can also offer good web performance.

This is true if you're agnostic to or prefer dynamic typing. If you're using Node, that's probably true, but at the high level I tend to think of Haskell, OCaml, F#, and Scala as the comparisons.

As someone who likes neither dynamic typing nor Ruby's syntax, I do agree that Elixir looks very cool, and I have considered it for jobs that favor development speed over compile-time correctness.

Not having a decent type system (dialyzer is slow and does not have parametric and bounded polymorphism) is a massive strike against Elixir.

Sure! This is us: http://www.litmushealth.com/

Our APIs are JSON and Avro based, if that's what you're asking.

The company does two major things: connect to sensor systems and collect data, and provide ML/Data Science/(other buzzwords for math) tools for clinical trials and medical studies. In many ways this means Rust is a "reasonable" but not necessarily "ideal" fit -- we could certainly afford the overhead of a GC, so the no-GC aspect of Rust doesn't really buy us anything major, though it does provide some nice performance and memory usage in the parts below us in the stack (Iron/Hyper, soon Tokio). That said, the more we get into the "big data" and ML stuff, the more value there is in having a good C FFI and good performance profiles.

We get a lot of value from the other aspects of Rust though. Traits (typeclasses) are powerful and flexible. More importantly, they promote that old, but good, Java ideal of "programming to interfaces" without all the other baggage. Rust's support for FP is quite good (though the lack of HKTs means there are no monads, at least not "actual" monads a la the Haskell Monad typeclass). The fact that Rust closures are actually traits means you can make some really smooth interfaces that transition from just a static function to a closure and finally to a 'proper' object.
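A tiny sketch of that closures-are-traits point (my own toy example): since plain functions and capturing closures both satisfy the `Fn` traits, one generic function can accept the whole spectrum.

```rust
// Generic over anything callable as Fn(i32) -> i32.
fn apply<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(x)
}

fn double(x: i32) -> i32 { x * 2 }

fn main() {
    let offset = 10;
    println!("{}", apply(double, 5));         // a plain function: fn items implement Fn
    println!("{}", apply(|x| x + offset, 5)); // a closure capturing `offset`
}
```
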

The Macros 1.1 release recently and the Serde library hitting 1.0 have been a big deal. JSON APIs are trivial to write and get all the benefits of strong typing and Option/Result on the deserialization side.
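The underlying idea can be sketched with just the standard library (a toy example with made-up field names; serde's derive macros automate this for real formats like JSON): failed deserialization surfaces as a `Result`, not a runtime surprise.

```rust
// Sketch of typed deserialization using only the standard library.
// serde generates this kind of code for you; the point is that bad
// input becomes an Err value you must handle, not a crash later on.
#[derive(Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn parse_point(s: &str) -> Result<Point, std::num::ParseIntError> {
    // Expect "x,y"; each parse returns a Result, propagated with `?`.
    let mut parts = s.splitn(2, ',');
    let x = parts.next().unwrap_or("").trim().parse()?;
    let y = parts.next().unwrap_or("").trim().parse()?;
    Ok(Point { x, y })
}

fn main() {
    assert_eq!(parse_point("3, 4").unwrap(), Point { x: 3, y: 4 });
    assert!(parse_point("3, four").is_err()); // bad input is an Err, not a crash
}
```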

One mistake I think people make with Rust is thinking that because you can do things in a really efficient style, you should. This leads to thinking that Rust isn't as good for "applications" work because you have to think about heap vs. stack allocations and lifetimes and such. But if you were going to use a GC, you can probably afford dynamic dispatch and some extra allocations anyway. Relax, clone the data/box the trait, and come back later if the performance matters. If you wrote the thing in Python, you weren't going to get a choice about boxing things and optimizing would be way harder anyway. Basically, if you treat Rust like an impure (and strict) Haskell or OCaml/F#, it'll actually work pretty well once you learn some idiosyncrasies.
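A minimal sketch of the "box the trait" approach (hypothetical types, not from any real codebase):

```rust
// "Box the trait": trade a little dynamic dispatch for simpler code.
trait Shape {
    fn area(&self) -> f64;
}

struct Square { side: f64 }
struct Circle { radius: f64 }

impl Shape for Square {
    fn area(&self) -> f64 { self.side * self.side }
}
impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    // Each call goes through a vtable, like a virtual method in Java --
    // overhead you were paying anyway in a GC'd language.
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Square { side: 2.0 }),
        Box::new(Circle { radius: 1.0 }),
    ];
    println!("{}", total_area(&shapes));
}
```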

Sorry, bit of a long tangent, but yes, like I said in the last comment, Rust asks a little more up front and in return you get better stability, correctness, and refactoring ability. Testing is easy, especially with unit testing built into the compiler flags and Cargo. Cargo really helps with the "works on my machine" issue and deployments. Pulling in libraries is painless, and while the ecosystem is still small compared to (for example) Java, it's growing, enthusiastic, and so far has covered what we need most of the time.

Hey I know I'm late with this response but I just wanted to say thanks for the thorough response!

Could you please explain why and how the Rust compiler is better than compilers for other languages? After all, every language compiles down to machine code/assembly. So, the reason that one language is "better" than the other has to do with the fact that the compiler for one language is "better" than another.

It's not that the compiler is better, it's that the language requires you to specify some things other languages don't (or infers a default if not defined), and uses that to enforce specific conditions. It's very similar to using types in languages compared to not having types.

In the same way that type information can be used by the compiler to optimize out some operations, the type information and the variable lifetime and ownership information/rules allow for specific optimizations that are harder to assume in C.

That is, you can do the exact same operations in C and Rust, and because you were forced to define more constraints in Rust (or defaults were inferred without you defining them), the compiler can make stronger assertions about what is being done, so it allows for the possibility of more optimizations.

Well, one major advantage for the Rust compiler is that it compiles Rust code, which I happen to have a decent bit of.

Joking aside, I'm not arguing that the Rust compiler is "better" than any other compiler. The Rust team seems to have done a great job with it, but I'm neither a PLT guy nor a compiler writer so I can't profess an informed opinion there. What I am interested in though is the Rust language semantics (which the compiler enforces, just as every correct compiler enforces the semantics of the language it compiles). Rust-the-language provides benefits that we (my company's engineering team) finds to be valuable for writing our software -- a strong, flexible, static type system; Option/Result types; the ownership model that promotes good practices and prevents a lot of concurrency bugs that a GC would not help with; drop semantics which allow interesting and powerful techniques with smart pointers. As a bonus, Rust-the-ecosystem includes some excellent and ever growing tooling that lets us focus more on solving development and business problems.

As for Rust-the-compiler, others have already done a better job than I think I can, so I'll let those comments speak to the issue.

Other people accomplish great things with other tools. Rust works really well for us though, and I like to share that.

A C/C++ compiler will compile your code as long as the syntax is correct. Rust will refuse to compile code when it can detect that the code will not work or is not safe, even though the syntax is correct. It can guarantee the program is memory safe (no double free, use after free, null dereferencing, use of uninitialized memory, or out-of-bounds array access).

This makes large refactorings so much easier. It also makes working on large projects with lots of people much easier.

I have to say that, despite working with only a few compiled languages, errors and warnings emitted from rust's compiler are far more helpful than any other I've used.

The rust compiler rejects more programs which helps prevent wasting time testing at runtime.

Man, I really didn't expect all those nice and thoughtful responses. Thank you so much, everyone who replied. It was really kind of you. Now I have a much better idea of what Rust was designed for and its strengths. I've learnt so much here. Also, thanks for the articles and videos you shared; I'll definitely take a look at them all.

I'm glad I didn't let the fear of being bashed keep me from asking. I'm not sure if I'll ever need to use Rust, but it surely has the coolest people around it. :) Thank you guys!

You will rarely get bashed in HN, and those cases will be either when you are disrespectful or when you make a joke a la reddit. Otherwise, people are more than happy to help, at least in my experience over the past four or five years or so.

Even jokes aren't necessarily frowned upon, as long as you contribute something meaningful as well. A joke prefixing or postfixing a comment of substance (even if it's just a question) doesn't garner many down votes, in my experience. I think people are wary of this turning into reddit where you sometimes get chains of joke comments swamping discussion. Which is fine, because we have Reddit for those that want that, and HN for something different.

"Systems language" usually refers to lower-level programming. For example: an OS kernel or its drivers where you're poking at hardware and need control over memory layout, a library to be called from many other languages that thus needs to avoid a large runtime and garbage collector, or high-performance infrastructure like web servers or game engines. Generally things you've had to use C or C++ for, in the past.

Go is sometimes called a systems language, but in a much narrower sense than Rust. For example, it is garbage collected and has a runtime, so it's not great at being FFI'd into. It seems to be quite useful for distributed applications.

Rust, on the other hand, is closer to C++ in that it has no real runtime and no GC. On top of that, it brings compile-time memory safety and lots of other modern features to systems programming. One of its biggest applications is making Firefox safer and faster as part of the Quantum project (https://wiki.mozilla.org/Quantum).

Part of why Rust is more symbol-heavy than Python or Javascript is it has to express things in more detail, giving you the control required to do systems programming. So it's not necessarily a direct competitor to JS, since it can work in areas JS simply can't.

In some ways, though, that can make it more productive- static typing is like compiler-enforced documentation and tests, making it easier to keep track of a large codebase.

>> Rust, on the other hand, is closer to C++ in that it has no real runtime and no GC. On top of that, it brings compile-time memory safety and lots of other modern features to systems programming.

I think that's it in a nutshell.

I want to write a program in a language that allows modern programming construction... Cool: Python, Ruby, JavaScript (ES 6+), Haskell, Elixir

The language must be fast... OK, so maybe Java or C#?

I need to control the response times and can't have GC pauses. It needs to be really fast... Assembly!

Stop... OK, OK, you're kind of stuck with C or C++.

Sigh. I want a modern language that compiles to bare metal, doggone it. What should I do?

This is what Rust is. Memory safety without GC, and a modern language without an interpreter or runtime. Plus all the package management/build tool/library ecosystems that all those 1990s languages have. (Would you believe that Java, Ruby, and Python are all more than 20 years old and that Java is the youngest of the three?!?)

Think of Rust as a better C - when you need low-level hardware control like C enables, Rust gives you that + guarantees about never having memory segfaults or concurrency errors that C can't. If your Rust program compiles, it won't break for those reasons (unless you intentionally enable and use unsafe code, but not by default).

Think of Go as a better Python (scripting) or Java (application) language - simpler than Java, faster and with better concurrency than Python. Like Python and Java it's garbage-collected; like Java it's statically typed, but its type system is weaker than Rust's, so it can't offer the same level of guarantees - though it is possibly faster to develop in as a result. It's still young and thus lacks the extensive libraries of either Java or Python, but it's growing fast.

I managed to crash compiled Rust programs in my brief experience with it.

What kind of crash? Rust's memory model guarantees memory safety, meaning you can not have: use after free; double free; buffer overflows; index out-of-bound issues; etc.

What it doesn't prevent is crashes. Rust programs will crash (panic) if they try to do any of the above, or if they blindly unwrap() Result or Option types.

Now Rust can segfault, etc., just like Java, Node, Go, Ruby, Python, PHP and every other "memory-safe" language when using unsafe or native extensions.

I think it was an index out of bounds type error. Can't quite remember.

To be clear, it was my inexperience with Rust, rather than the language, that was the problem. I found it amusing, as I had read a few comments saying that "if it compiles it will run".

The person you were replying to said it will run without certain types of errors, not "it will run".

Yes, that would be a panic.

> index out-of-bound issues

You sure about that? You can still get Rust to panic if you try to index a vec past the end...

A panic is significantly different than an actual out of bound access; the latter is the cause of a lot of issues, but a panic is preventing those issues.

Ah I get you! You can't read past the end of an array, but you can get an error.
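Concretely, the two behaviors can be sketched like this:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // .get() returns an Option, so an out-of-range index is a
    // recoverable None rather than a crash or a wild read.
    assert_eq!(v.get(1), Some(&2));
    assert_eq!(v.get(10), None);

    // Direct indexing is bounds-checked: it panics with
    // "index out of bounds" rather than reading past the allocation.
    // let oops = v[10]; // would panic at runtime if uncommented
}
```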


So isn't the memory model more to do with the compiler or operating system rather than the programming language? Do think it's possible to write a C compiler that checks for use after free, double free, buffer overflows, index out-of-bound issues, etc.?

Surely if one programming language can do it, another can?

A C compiler that checks for all of these things at runtime is Valgrind. It's used when testing.

A C compiler that checks all of these things at compile time would no longer be a C compiler.

Compile-time guarantees require you to change the language and restrict what is allowed to compile. Rust does just that; it's a different language. You could write an extension to C like Cyclone (or follow the ISO C++ Core Guidelines for C++) that makes it safer via compile-time checks. You would likely need more annotations, and most existing C programs would no longer compile.

(There is the ergonomic benefit of being able to transition from a C codebase to a, say, Cyclone one, though)

It's not possible to write a C compiler that guarantees what Rust guarantees. There are just too many implicit assumptions about pointers. Even if you check most cases, weird stuff like XOR linked lists will trip you up. (And sure, if you changed the language you could make it safe -- but then it wouldn't be C anymore.)

> Do think it's possible to write a C compiler that checks for use after free, double free, buffer overflows, index out-of-bound issues, etc.?

No, due to the way the language semantics works.

There are static analyzers that do something like that, MISRA-C, High Integrity C, Frama-C, but you are literally using C with Ada semantics at that point, thus it is almost like another language.

Of course, and many programming languages offer features similar to Rust's. I've not used Ada myself, but people claim it offers similar safety features.

From my experience though, Rust is the first language that I've had the pleasure to use which offers both the safety guarantees of Java (and more, no data races!) and the low level features of C.

It still leaves me in awe what has been accomplished with Rust, and I've been using it for more than two years now.

For arbitrary C code, I believe checking for those issues is equivalent to the halting problem. Or at least uncomputable.

Rust doesn't guarantee no crashes. It guarantees protection against certain errors e.g. use-after-free and out-of-bounds indexing, some of which can't be done statically at compile time, so if Rust didn't crash e.g. when you index an array out-of-bounds, it wouldn't be doing its job.

It'll also let you do whatever you like if you write `unsafe` code.

It's not actually "whatever" you want. It drops a number of restrictions, but it still maintains some guarantees that C doesn't, for instance.


And unsafe is tagged. You don't have to look at the entire codebase, just the few unsafe blocks. Those bugs are often subtle. Any way to restrict where to look is super helpful.
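A minimal illustration of how the tagging works (hypothetical example):

```rust
// A tagged unsafe block. `get_unchecked` skips the bounds check, so
// the caller must uphold the invariant -- and the `unsafe` keyword
// marks exactly where a reviewer needs to look.
fn second(v: &[i32]) -> i32 {
    assert!(v.len() > 1); // we uphold the invariant ourselves here
    unsafe { *v.get_unchecked(1) }
}

fn main() {
    println!("{}", second(&[10, 20, 30])); // prints 20
    // Outside the block, the compiler's full guarantees apply again.
}
```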

Sure - I bet you've seen a panic - but I'm relatively sure you've not seen a segfault, buffer overflow, or corrupted memory issue. A panic tells you where the issue is (or close to it) - the other issues could only show signs that they exist far away from where they were caused.

You are not alone. Being memory safe doesn't mean bug free or panic free. This does not invalidate any of the comments, it is a fairly good low level language with a compiler that helps you to catch nasty issues at compile time.

The main complaint imho is that despite the many speed improvements, it's still not the fastest of the compilers. But it's getting there!

One of the big gains they still have to make, IIRC, is incremental compilation. That will be significant given the good developer practice of breaking source code into multiple files.

I personally don't think that compilation units should be driven by files. Too often have I been forced to keep something in the "wrong" place in C++ because of the compilation model. (This happens in rust too, due to other reasons)

Instead, the compiler should figure out what needs to be recompiled and do just that. This can mean benefits like not needing to recompile everything because you added a comment to a header file.

Unfortunately, doing that at a particularly interesting scale - so I could have, for example, automatic compilation as I type into my editor, meaning hitting "run" would be near-instant - means rearchitecting the compiler to behave closer to Roslyn, which is a huge task.

Oh, sure, it's kinda a pipe dream to get something like this for C++ (use ccache instead). But Rust is still forming its story here (we have experimental incremental compilation which does this!) and I see no reason to constrain it by tying it to files :)

Rust cannot protect you from all forms of crashes, even with only safe code. OOMs and stack overflows, for example, are easy to create deliberately, and possible to create accidentally. Rust focuses on safety in the sense of correctness, not on being crash-proof.

Much ground has been covered here already, but let me add something concrete. I'm a scientist who lives in Python. However, sometimes I need code to be fast and precise. In Python we resort to things like numba or cython or weave to achieve that.

So why am I interested in Rust? Well, if you read around a bit in the literature, you'll find about a million ways to shoot yourself in the foot with C. I have been bitten by that in practice, with subtle memory allocation errors in C code I had to use, which manifested non-deterministically and which I failed to debug without help.

I am very interested in a language that allows me to write fast compiled code, and to do so safely, exactly because I'm not a C wizard or systems programmer with a lot of experience writing mostly bug-free code. I don't _like_ writing C code; it feels brittle. In Python, the interpreter takes care of everything for me. When I go down to C (as cython or weave allow me to do) things become brittle. (And numba, while I love it and use it a lot, is an exercise in constrained writing.) Rust is not brittle.

In Rust, mathematics does for me what the Python interpreter does on the Python side. That's massive, and it's elegant.

It probably helps that I have written more papers including categories than programs involving C though.

> OK, it's a systems language, but what does that even mean? You can write a system in any language right?

When people refer to systems languages, they usually refer to languages that need to do lower-level interactions with hardware. For example, calling to your graphics card manually rather than using a high-level GUI toolkit. Or mmap'ing memory, or using internal JITs. This means that you need clear C and assembly interoperability and generally want to avoid garbage collection.

Or to put it more succinctly, deterministic behavior. Important for anything time(r) critical. You don't want a GC pause to hit at the wrong time...

> You don't want a GC pause to hit at the wrong time...

The military don't seem to have any issue using real-time Java to control weapon systems.

Aonix, nowadays part of PTC, has lots of customers. From missile systems to battle cruiser targeting systems.

So apparently Java real time GCs don't have any issue on the battlefield.

Battlefields are glacier slow compared to games, vr, trading, data processing, etc.

Tell that to a missile radar system that needs to evaluate in real time what the missiles it is tracking are actually going to hit.

For example, NORMANDIE ballistic missile tracking.


I have other examples, like computer-aided targeting systems for battleship gun turrets. You don't want the computer to stop and think instead of live-tracking what those turrets are supposed to hit.

I'll answer the "what's-it-for" question first: basically, a systems language (as loosely distinguished from a scripting language, or languages with a runtime) is, to me, a language that lets me take control of the bare metal. With Go, for example, I can't control directly whether a value lives on the stack or the heap. With Python, I can't get assurances that variable access in loops won't have a layer of indirection that may hurt performance. With Java, I don't necessarily control the layout of structures in memory, which may be important for things like managing memory pages. A systems language lets me manipulate memory and machine state directly.

With all the power that a systems language gives, comes great risk. To err is human, and most system languages do not prevent humans from erring in ways that can cause significant undefined effects. The usual suspect is, of course, memory safety, but this can also mean things like losing sight of an algorithm because you're carrying too much intellectual overhead when doing the aforementioned memory or system state manipulation. Computer science has lots of very well-studied approaches to solving particular parts of problems, but there's a gap between those approaches and the expressiveness of systems languages to implement them in a way that is fault-free, or doesn't make things like Hoare analysis too difficult to manage.

I am a neophyte using Rust, but it has me excited because it closes that gap in a really elegant way. I just published a libnss plugin that queries Azure Active Directory for user information[1]. Without going into too much detail, it's a set of functions that takes pointers to memory, and stuffs them full of data that it gets from an OAuth2-protected REST endpoint. That's a looooot of "range" in terms of function, and the surface area for bugs in a language like C is huge. On the other hand, doing FFI in runtime-based languages is usually very hard and can be relatively slow, which matters in something as basic as user information retrieval. Rust made writing it extraordinarily easy. I have to "speak C" on one end, but I can handle HTTP calls with the ease and low-risk usually only found in languages like Python. At the same time, if there is a bug that effects memory, I can trust the compiler will catch it, which frees me up to think about structure, maintainability, extensibility, &c.

[1] https://github.com/outlook/libnss-aad
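For the curious, the shape of such an interface might look roughly like the following hypothetical sketch (illustrative names only, not the actual libnss-aad code):

```rust
use std::os::raw::{c_char, c_int};

// Hypothetical sketch: an extern "C" function that fills a
// caller-owned buffer, the way NSS entry points must.
#[no_mangle]
pub extern "C" fn fill_username(buf: *mut c_char, buflen: usize) -> c_int {
    let name = b"alice\0"; // in reality: fetched from the REST endpoint
    if buf.is_null() || buflen < name.len() {
        return -1; // signal that the caller's buffer is too small
    }
    // The only unsafe part is the raw-pointer write; the HTTP, JSON,
    // and error handling above it can all stay in safe Rust.
    unsafe {
        std::ptr::copy_nonoverlapping(name.as_ptr() as *const c_char, buf, name.len());
    }
    0
}

fn main() {
    let mut buf = [0 as c_char; 16];
    let rc = fill_username(buf.as_mut_ptr(), buf.len());
    println!("{}", rc); // prints 0 on success
}
```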

Modula-3 is a systems programming language with GC and you can control precisely where the data goes.

There are lots of other examples.

Very cool! Are you planning on writing Pam authentication against aad as well?

Sorry for not responding more quickly. I don't have a use case at present, so probably not. Adding capabilities to NSS solved the problem I was having (if PAM authenticates a user that NSS returns ENOENT, OpenSSH refuses to permit login). I'm not opposed to it, though, and I'd be interested to hear what needs you might have.

In the previous discussion on Rust Fireflowers https://news.ycombinator.com/item?id=13272474

One of the successful stories with Rust I read https://users.rust-lang.org/t/success-story-new-rustacean-be...

In another blog there are benchmarks showing Go outperforming Rust, but I believe the Rust code didn't use buffered reads. http://126kr.com/article/5cikidl5bhe

The code there is reading into a buffer; I'm guessing optimisations weren't turned on.

Good to know. Indeed, optimization wasn't turned on; with it enabled, the Rust version appeared to come close to the Go timing. I have tweeted him.

Rust's goal is to make it easier to write fast, reliable software.

It's not just targeted at C/C++ folks. The Rust team often hears from people who have been pulling their hair out trying to get a particular component written in a scripting language (Ruby, Python, JS) to go fast enough, which entails getting very intimate with the runtime systems for those languages. They're often surprised that they can instead write that core component in Rust, without giving up any developer productivity (or sometimes gaining it, compared to hand-optimizing their code), and get something much faster than what they were able to achieve in a scripting language.

There was a post just the other day on the Rust reddit telling this story: https://www.reddit.com/r/rust/comments/67m4eh/rust_day_two_a..., with a choice quote: "I feel like I discovered a programming super power or something. I mean I did expect it to be faster, but not this much faster this easily!" You can also check out the videos here: http://usehelix.com/resources

Rust also makes it much easier to write reliable software. The best example of this is with multithreaded programming (which is something you might reach for for speed), where Rust gives you guarantees not found in any other major language: you can know for sure that data is owned by one thread and not accessible by others, and the compiler will check this for you. You can know for sure that you only access lock-protected data while holding the lock. Rust will check thread safety for you, and more. You can read more about the concurrency benefits here: http://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.ht...
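A small sketch of what that looks like in practice (standard library only): shared state lives behind an `Arc<Mutex<_>>`, and the compiler won't let you touch it without taking the lock.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Shared state is wrapped so the compiler can see it's protected:
// there is simply no way to reach the i32 without locking the Mutex.
fn parallel_count(n_threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // Mutating the i32 without the Mutex would be a compile error.
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", parallel_count(8)); // prints 8
}
```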

Relative to Go specifically, in addition to the reliability improvements, Rust supports a wider range of "systems" use cases. Unlike Go, it does not require a garbage collector or other runtime system, which makes it very easy to write small Rust components that can be used with a large script, or to write extremely performance-sensitive or low-level code like a browser or game engine, an operating system, or a bare-metal web server. One reason that Dropbox switched to Rust from Go for one of their core components is greater control over memory footprint, which allowed them to reduce their server memory needs by 4x, leading to significant financial savings.

The slogan I've been using for Rust personally is: Confident, Productive Systems Programming. You can write fast code. You can write low-level code. You can do it with high productivity, thanks to the work on modern tooling and ergonomics. But most importantly, you can do all of this fearlessly, because the Rust compiler has your back.

"systems" is an overloaded term and I wish Rustaceans would stop using it or add some qualifiers to manage the ambiguity. I mean, GoodbyeEarl says it right there:

You can write a system in any language right?

Sure, if you're a kernel developer (or Rust user) you might know what "systems language" means more precisely, and that phrase makes it easier for you to communicate with other kernel developers, but it loses its value as a "distinct meaningful element of speech" (define: word) when the context includes almost anyone else.

Hell, even the definition of "System programming language" on Wikipedia[1] is ambiguous enough that many GC'd or even scripting languages would fit it.

So let's stop using the bare term "systems language" to describe Rust. It deserves better. Newcomers faceplanting into the 10-story granite wall that is the borrow checker is bad enough; there's no reason they should also be confused by the tagline on the homepage.

You can write a system in any language right?

[1]: https://en.wikipedia.org/wiki/System_programming_language

> Hell, even the definition of "System programming language" on Wikipedia[1] is ambiguous enough that many GC'd or even scripting languages would fit it.

And the definition of race car is ambiguous enough that you can take your 70's pinto onto the track if you want. In some contexts that might even be entirely acceptable. But if you look at the current uses of the term and what it applies to, and why, it becomes fairly obvious what class of vehicles it's referring to. There are, of course, more specific terms (Indy, Nascar, Formula 1, Rally) for more specific meaning. If someone says they are building a race car, you have a pretty good idea that they are building a vehicle to compete in a race, and speed and handling are important, but specific details might change depending on the type of racing.

> Newcomers faceplanting into the 10-story granite wall that is the borrow checker is bad enough, there's no reason they should be confused by the tagline on the homepage.

Then again, should newcomers come under the impression it's as easy as Python or JavaScript, and then, when they hit that brick wall, think it's just them, or that the language is poorly designed because it isn't as accessible? "Systems language" is the closest term I can think of that approximates what to expect out of languages in this category. More explanation might be nice, but I think "systems language" signals the right things for people who know the term, or are willing to look it up.

> If someone says they are building a race car, you have a pretty good idea that they are building a vehicle to compete in a race, and speed and handling are important, but specific details might change depending on the type of racing.

Right, but there's nothing like that for a "system programming language". What does that tell me? I might assume it meant manual memory management or a language suitable for writing a kernel, but the Go people call that a "system programming language" even though neither of those things applies.

Embedded programming, kernel programming, high-performance computing. I view all of these as variations on systems programming.

> What does that tell me? I might assume it meant manual memory management or a language suitable for writing a kernel, but the Go people call that a "system programming language" even though neither of those things applies.

I'm not sure picking one relatively odd member of a category and concluding that since it doesn't match in some aspects that the category doesn't exist is useful. To stay with the theme, you might encounter 24 Hours of LeMons[1] at some point, and I don't think that makes the idea of race cars invalid or useless (there's a reason I said a 70's Pinto might be acceptable).

You can do some systems programming with garbage collected languages, or even dynamic languages in a pinch. It's not ideal in the vast majority of cases, but there are certain situations where it's not entirely unacceptable either (running the webserver in the same language as your dynamic site is implemented in is one).

1: http://www.cnbc.com/2016/06/19/how-an-amateur-launched-a-mil...

> I'm not sure picking one relatively odd member of a category and concluding that since it doesn't match in some aspects that the category doesn't exist is useful. To stay with the theme, you might encounter 24 Hours of LeMons[1] at some point, and I don't think that makes the idea of race cars invalid or useless (there's a reason I said a 70's Pinto might be acceptable).

Is Go an odd example then? What's a typical example? Better, out of the 10 or 20 most popular programming languages, which are "systems languages" and which are not? I genuinely don't know how to tell, other than maybe "no GC". As far as I can tell the term mainly just means "language I like".

In the context of contemporary languages I would consider systems languages, having a GC definitely marks one as abnormal, and Go is abnormal for a systems language. GC'd systems languages were apparently much more common in the past (though I suspect those languages paired fine-grained memory control with their GC). Go is an interesting mix of low-level, close-to-the-metal features with a few aspects that are not low-level (channels and GC).

I've always argued that Go is wrong for calling itself a systems programming language. Unfortunately, entities with enough clout can single-handedly redefine language that has specific meaning to a minority of people, but where most people have no need for that language.

I wouldn't disagree with this. But the golang.org website doesn't mention "system" on the homepage, or documents page. You have to go all the way to reading the spec to see the first (and only) mention of Go describing itself as:

> a general-purpose language designed with systems programming in mind

And I'm not sure this definition is actually unrepresentative.

Contrast this with rust-lang.org where "system" is the fourth word on the homepage. People like to blame Go for people being confused about what "systems language" means, but it seems to me like this is just poor branding on Rust's part.

Go rolled back a lot of their use of the systems programming label, E.g. the first version of the website http://web.archive.org/web/20091111073232/Http://golang.org has it in the heading.

Sure, but before even the release of Go 1.0 [1] (which also doesn't mention "system") it's gone. Surely you wouldn't attribute the entire confusion around what a systems language is to a version of a pre-release language's website that made that statement for less than a year [2], almost seven years ago.

I don't want to think about the mental gymnastics it would take to come to that conclusion.

[1]: https://blog.golang.org/go-version-1-is-released

[2]: http://web.archive.org/web/20101001012723/http://golang.org/

When Go was announced, the creators made a huge deal around it being a "systems programming language", which many of its users - who have never had a reason to need a systems programming language, and so have no idea what one is - have parroted ever since. Just because they no longer have it on (most of) their site doesn't mean it didn't become a buzzword within the language's community.

I would say that the term "race car" is recognized by the entire population, plus it's self-descriptive if you know the meaning of both of the component words, which are also very well known and easily defined.

It's self-evident that this is not the case for "systems language", else we wouldn't be talking about this. By itself, "system" means almost nothing and it doesn't help one understand the phrase "systems language" at all without prior knowledge. Take a moment to dwell on the mental context of someone that would say this and what it means to them:

You can write a system in any language right?

> should newcomers come under the impression it's .. easy

Newcomers that don't understand the term at first glance come with no impression because it's ambiguous.

> More explanation might be nice

If you have to reference a footnote to describe the word every time you use it, the word has lost its value as a word. You might as well call it a "peloozoid programming language" [1]. Just use a better word, or find a qualifier to tack in front of it to make it clearer. And before you dispute the phrase "every time you use it", I would like to take a second to reference nearly every Rust post on HN and the forums where people are confused by the term "systems"; this theme might be more common than the (tiring) Rust-vs-Go theme.

This whole paragraph almost comes across as intentionally exclusionary. The Rust community is making huge efforts to make Rust more approachable and ergonomic, especially to non-systems programmers†. This effort can extend past the language itself into the way it describes itself.

I'm not delusional, there's no chance that the 4th word on rust-lang.org will change because I made a comment on HN. But I can wish.

† This is one of the great strengths of Rust! You don't need years of experience with manual memory management to avoid footgunning yourself; Rust has your back. With its FFI and safety story, it's in a great position for dynamic language devs to write extensions for their language with minimal experience and investment.

[1]: http://www.wordgenerator.net/fake-word-generator.php

> Take a moment to dwell on the mental context of someone that would say this and what it means to them: You can write a system in any language right?

I understand the point, but not every concept is easily explainable or should be known to everyone as common knowledge. That's wasteful. If someone is confused about a term or how it applies, or if it seems like it's being used in a different context, then they can look it up or ask, like they did here. If I had seen a suggestion that better encapsulated Rust, I would be on board with you in saying it should change, but I haven't seen that yet.

> If you have to reference a footnote to describe the word every time you use it

You don't, unless you are exclusively talking to people about it who have no industry knowledge or training, and then I think the term is probably either not something they care about or something they should learn about. We don't ask doctors to change their industry jargon for non-doctors, and doctors that don't know it are expected to learn it. Communication is a two-way street.

There's a middle ground between making up new words and dumbing down all communication to the simplest words in the language. Neither is particularly useful in the long term. I don't know if Rust has hit a good middle ground, but I don't have a better solution than what I see them doing, and I don't see what I consider better solutions being presented.

You're not wrong; we've always been unsure if "systems" is a good way to describe Rust. It has a lot of advantages, but also a lot of downsides, many of which you discuss here.

Different languages are targeted at different purposes. Most people don't like JS as a language, but it's necessary to take full advantage of what a web app can do, because today browsers run JS and not much else. Some then choose to write their server in it too, to decrease the number of languages they need to maintain in their code base and to be able to reuse code between the backend and frontend of their web apps.

One of the ways languages differ is how much of the hardware they abstract. Assembly is literally a list of instructions that manipulate registers (components of the processor) holding one number at a time, plus jumps between those lines. Registers and memory just hold ints, so the programmer has to remember which ints are plain numbers, which are addresses pointing to an array or to another number (pointers), and so on. C abstracts away a ton of those instructions: you still have to keep track of what memory you're using, but not which register holds that memory's address or how to allocate registers, etc. Static typing remembers for you how each of those ints should be interpreted. Python abstracts even more, so you don't have to keep track of your memory at all and the code reads close to abstract logic. In exchange, a large piece of software has to run alongside your Python code to figure out how to use the registers and assembly instructions to compute your functions. OK, it doesn't do that directly, but indirectly that's the point of what is called an interpreter.

There are advantages to more abstract languages like Python: you don't have to think about as much, and there are generally more safety guarantees. However, sometimes you can't afford the time or space overhead of the interpreter; this is a commonly cited reason people move away from Ruby. And sometimes you need minute control over your data: if you are writing an OS or a device driver, for example, you need to give very specific instructions to the processor. Languages that are well suited to the latter cases are usually called systems languages.

Rust and Go are examples of languages that try to fit between C and Python on this spectrum. They want to give some of the abstractions and safety guarantees that Python provides, without the overhead, while still enabling precise control of the hardware if you need it. They also provide abstractions that may speed up your code, because the language designers have thought through some complex algorithms and code better than you may be able to.

Hope that helped, and let me know if you have any questions.

Languages that bind you to a garbage collector aren't always well suited for "systems programming". You might not want to pay the memory or latency cost.

It's not always clear cut though. I think Go is a great language for writing system tools, and yet it has a gc.

> You can write a system in any language right? Python is a scripting language and lots of people use it everyday to write systems.

Systems here means things like an operating system, a device driver, video games or a web browser: things that need fast execution, direct access to hardware, and the ability to manually manage memory. Stuff that C/C++ are normally used for.

Could you do it in Python? Not really, since Python has garbage collection, a very high-level of abstraction and is extremely slow for these kinds of tasks.

> What problems is it trying to solve?

It's trying to make it possible to write programs that require manual memory management, but that are checked for correctness of that manual memory management automatically, so that you can avoid many bugs that are related to memory and are commonplace in programs written in C/C++, like referring to a certain object that is in fact no longer present etc.
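
To make that concrete, here's a tiny sketch of my own (the function name is mine, not from any Rust docs) showing the ownership model that makes this checking possible:

```rust
// Every heap value has exactly one owner; passing it by value moves
// ownership, and the compiler rejects any later use of the old binding.
fn take_ownership(s: String) -> usize {
    s.len() // `s` is dropped here -- its buffer is freed, no GC involved
}

fn main() {
    let greeting = String::from("hello");
    let n = take_ownership(greeting);
    // println!("{}", greeting); // would not compile: use of moved value
    assert_eq!(n, 5);
}
```

The use-after-free bugs that plague C/C++ become "use of moved value" errors at compile time.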

> Also how does it compare to Go? It seems to me that both Go and Rust are mainly used by C/C++ folks that want a new shiny language that somehow resemble the syntax or structure or funcionality of the former and at the same time add some modern features

Go is not really used by C/C++ programmers, as it does have a garbage collector, just like Python. Unlike Python however it is way faster and unlike Java, it is compiled to a native executable, thus it is used by many Python/Ruby/Java programmers who want a faster/native language with similar capabilities.

Go is mainly used for writing web applications and command line apps, just like Python.

> Do you feel more productive writing software in it than in let's say JS?

Rust targets a very different market from JS, but it has a very strong type system and catches a lot of errors at compile time, which to me makes it a lot more productive than JS is, just by not having things explode on me at runtime.
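
As one illustration of "errors caught at compile time" (a toy example of mine, not from the release notes): Rust has no null, so absence must be handled explicitly.

```rust
// A possibly-absent value is an Option, and the compiler won't let you
// use the inner value without handling both cases.
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    match first_word("hello world") {
        Some(w) => assert_eq!(w, "hello"),
        None => unreachable!(),
    }
    // Forgetting the None arm is a compile error, not a runtime crash.
    assert_eq!(first_word("   "), None);
}
```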

As for the syntax, it actually feels very neutral once you spend a couple of days with it and read the book. Apart from the 'lifetime annotation, there aren't really any other weird symbols for the category of language Rust is in.
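
For reference, the 'lifetime annotation in question looks like this (my own toy example):

```rust
// The 'a ties the returned reference's lifetime to the inputs', letting
// the compiler prove the result never outlives what it borrows from.
fn longer<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    assert_eq!(longer("hello", "hi"), "hello");
}
```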

For example, things like https://github.com/redox-os/redox or https://github.com/servo/servo would be very hard to write in both Python and Go, because they require direct access to hardware, the fastest possible execution and manual memory management. They could have been written in C/C++, but would then contain many memory management errors (given it's done manually by the programmer, and programmers are fallible humans), which could lead to crashes or serious security problems. Rust aims to prevent these drawbacks of C/C++, while still maintaining all of their benefits, i.e. speed, manual memory management etc.

Here is a good overview of how Rust is by design fixing some problems which other languages don't.


It's using a somewhat historic version of Rust, but the ideas explained are still quite applicable.

> You can write a system in any language right? Python is a scripting language and lots of people use it everyday to write systems.

It's a systems language in the respect that you get full control over the resources that are used or not used, and you can reason about exactly what code runs (or check the generated output). You can write a webserver in Python, but unless you are serving a dynamic site implemented in Python, you are likely paying a steep cost for that, at least relative to a language with much finer control such as C/C++ or Rust (or many others in this category).

> Also how does it compare to Go?

My understanding of Go is that it aims to fit a similar niche, if not entirely overlapping. Sort of a C/C++ with some changes to make it more modern/easier to use, some language functionality that makes concurrent programming easier, and memory management to ease the pains of manually dealing with memory. Sort of a hybrid of C and Java, where Java's VM is reduced to be very simple and embedded with every binary you create. At least that's how I understand it, not really having done much in it.

> From the perspective of other languages I know a little bit such as Ruby, Python or JS its syntax seems really bloated to me with all those specials symbols.

In many cases it's hard to compare from a dynamic language to a low-level compiled language. It's sort of like comparing a Cessna to a Fighter Jet. If you just need to get from point A to point B and time isn't too much of a factor, a Cessna is fine, and comfortable, and doesn't require much training or diligence in its use. When you need to get there fast, and you might also need to fight a battle or two, a Cessna is not sufficient, but you might make do if you bolt a few missiles and guns on it if you don't expect much of a fight. That's probably the equivalent of writing your own module or library in C/C++ and calling to it from Python.

The symbols allow for greater control and specification of the problem, and when your goal is already to make a very robust system, are something you might end up needing anyway. Imagine you are telling a friend how to go pick up your drink at Starbucks, and he's unfamiliar with the area. You can give very general directions and trust your friend is smart enough to figure it out, and in many cases they will, after taking some time to reason out the ambiguities they are presented with in the real world. Sometimes your drink comes back cold or wrong. Alternatively, you can write very detailed directions for exactly where to go, what to expect to see, and what to say. This takes time up front, but you have higher assurance that you will get your drink in a timely and correct manner. Having a shared shorthand you and your friend know might make that list of directions a bit hard to read from the outside, but it can add to accuracy and/or reduce the number of steps if it's a well defined shorthand.

>> Also how does it compare to Go?

> My understanding of Go is that it aims to fit a similar niche

Not really. Go is used to replace Python. Rust is used to replace C/C++.

> Not really. Go is used to replace Python. Rust is used to replace C/C++.

I think that's fairly simplistic. Go is often used to replace Python, but what it aims for might be slightly more ambitious. You generally don't hear about people or companies replacing services they've written in C or C++ with Python (implementing new things where nothing existed yet, sure, but generally not replacing), but you do hear that with Go, because the performance is close enough that the trade-off makes sense in more situations.

On a scale of 1 = C/C++/low-level and 10 = Python/high-level, I would hazard that Go aims for the 3-8 range and Rust aims for the 1-5 range. Quite a bit of overlap, but they each go a bit farther in certain directions that makes them a bit closer of a fit in those areas.

No GC, with the power of C/C++ and safety guarantees. Personally, I think that if you haven't worked on a high-performance system where every byte matters, you won't appreciate Rust that much. I can write Redis, for example, in Rust with a guarantee of no memory leaks! C/C++ can't guarantee that.

Rust doesn't guarantee no memory leaks any more than C++.

While it is not part of the memory safety guarantee of safe Rust, in practice it is much harder to accidentally leak memory in Rust than in C++. There is no counterpart to the C++ new operator in Rust; no commonly-used language feature or standard library function puts the burden of explicitly freeing memory on the programmer.

For those who are interested in why leaking memory isn't disallowed by Rusts safety guarantees, have a look at the documentation of the (AFAIK) only safe function in the standard library that leaks memory: https://doc.rust-lang.org/std/mem/fn.forget.html
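
Both of the safe leaking routes discussed in this subthread can be shown in a few lines (a sketch of mine; `Node` is a made-up type):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Safe Rust can leak without mem::forget too: a reference cycle of Rc
// pointers never reaches refcount zero, so the nodes are never freed.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(Some(a.clone())) });
    *a.other.borrow_mut() = Some(b.clone());
    // `a` is owned once by the binding and once from inside `b`:
    assert_eq!(Rc::strong_count(&a), 2);
    // When `a` and `b` go out of scope, the counts drop to 1, never 0.

    // mem::forget leaks even more directly, and is also perfectly safe:
    std::mem::forget(String::from("never freed"));
}
```

Neither creates a dangling pointer, which is why leaking doesn't violate the memory safety guarantee.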

Actually... the counterpart to the C++ new operator would be the perfectly safe `Box::into_raw(Box::new(whatever))` - along with its counterparts on other smart pointer types - and there may be a heap module one day[0] - but your point stands that this is not the "default" way of allocating.

[0] https://doc.rust-lang.org/alloc/heap/
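
A small sketch of mine showing that pattern and its cleanup:

```rust
fn main() {
    // Box::into_raw is the closest thing to C++'s `new`: it hands back a
    // raw pointer and makes the programmer responsible for freeing it.
    let p: *mut i32 = Box::into_raw(Box::new(41));

    // Touching the raw pointer requires `unsafe`, which is the point:
    // manual memory management is opt-in and clearly marked.
    let boxed = unsafe {
        *p += 1;
        Box::from_raw(p) // rebuild the Box so it gets freed normally
    };
    assert_eq!(*boxed, 42);
}
```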

It actually does, in a certain way: if at some point the lone pointer (the owner) to that memory goes out of scope, then the memory is deallocated. Happens a lot.

I'm well aware of that. Safe Rust, however, allows you to put that pointer in an Rc cycle, or call mem::forget on it, etc.

Hard to believe that this is the 17th minor release since Rust 1.0. :) Lots of good ergonomic improvements in this one, but for those of you who are itching for incremental compilation, a beta is now available for testing using the CARGO_INCREMENTAL environment variable: https://internals.rust-lang.org/t/incremental-compilation-be...

Why is it so hard to believe? Don't they release a new version every month like clockwork?

It's every six weeks rather than every month, and the fact that it's like clockwork is what's so impressive! I've actually implored the Rust devs before to slow the release cycle for their own sake, it seems to be quite brutal. They must be masochists, because at least some of them have told me they enjoy it. :)

There's a lot of "not finished so wasn't included in this release" in the release notes.

With, say, a 12-week cycle instead of 6 weeks, I'm waiting 3 months for my feature to land in the next release, and I feel a lot more pressure to get it done now than I would with a 6-week cycle where missing a release isn't a big deal.

Assuming the team has the release process well automated and cutting a release doesn't add a lot of work, there's not much to lose with this cycle.

Missing a release should never be a worry. People can always wait.

having stable release with no regression > releasing fast

"should" is a nice ideal, but people will feel pressure, especially for tooling like a compiler that other teams depend on, where slipping schedule means slipping their schedule too. Shipping more often lessens this, by meaning a missed release costs much less time.

Empirically, Rust has done a pretty good job of shipping fast and not releasing regressions, so it seems like this isn't mutually exclusive.

I think that those few people who rely on the newest-possible features in the compiler will be compiling with a nightly build anyway (possibly pinning to a known-good version until the next release).

That's actually somewhat a meme within Rust, at least as far as my impression from the outside goes. There apparently were times where most interesting crates required some unfinished feature and thus a fairly recent nightly build of the compiler and/or Cargo. I believe that problem has decreased in the meantime, but as I said, I'm looking into Rust from the outside only and don't have much own experience.

According to the developer survey last year, most of the community is using a stable compiler in some form[0]. I can only imagine that the ratio has improved since then, especially with custom derives in 1.15. Additionally, I semiregularly see people say that they don't use nightly in production, even if they're using it for preview/experiments/testing.

[0]: https://blog.rust-lang.org/2016/06/30/State-of-Rust-Survey-2...

Maybe for a language but sometimes you need to release /now/. Sometimes we need to incur tech debt. It's a tradeoff between principle and reality.

You never need to release now.

Well, unless you made a huge regression that's killing your users... which ironically is a recurring issue alleviated by releasing slower and testing better, not by releasing faster :D

Actually, releasing faster might help avoid regressions since a faster release cycle requires you to focus more on continuous testing practices. With, say, one release every 2 years, you can easily put off testing until a few weeks before the release, at which point regressions are found and panic ensues. If you release every few weeks, you need to be testing all the time to find regressions in time, so you will be forced to automate it.

See, for example, this recent article on why web browsers switched to a more rapid release schedule: https://arstechnica.com/information-technology/2017/04/mozil...

It's easy to release often when the releases are small. And smaller releases encourage adoption. You don't want a Python 3 situation where your release is so big your user base refuses to upgrade and you get stuck in a hellhole of perpetual legacy maintenance.

The size of the release is not the issue. The issue is the amount of incompatible changes. They basically dropped the language and made a new one.

That seems fairly hyperbolic, as it's my understanding that some Python 2 modules were able to run on Python 3 with no change, and quite a lot more were able to run with some small set of changes. That doesn't sound like making a new language to me.

Six weeks, but your parent has been involved with Rust for a very long time, possibly longer pre-1.0 than post-1.0. For those of us in that category, it can feel like time flies! While 1.0 was almost two years ago, it feels like yesterday sometimes.

I guess it's just because it took a long time to reach the 1.0 milestone, so 1.17 "feels" hard to believe.

Because managing to ship features and documentation while maintaining a high level of quality at this speed is amazing.

I don't know what geniuses you work with, but the team I work in is generally incapable of anything remotely close. Or maybe I'm just a bad programmer amazed at an ordinary day of work.

> String concatenation appends the string on the right to the string on the left and may require reallocation.

I don't really know Rust, but isn't that weird? In most languages I know, I would expect, for example, Vector3(1, 2, 3) + Vector3(2, 3, 4) to return a brand-new Vector3(3, 5, 7), with Vector3(1, 2, 3) unmodified. Unless I do vector1 += Vector3(2, 3, 4), at which point I expect vector1 to be modified without creating a new vector.

Why doesn't Rust create a new String and concatenate into this new owned String? I guess that's why beginners get it wrong - I would

Because the left-hand side is consumed by the + operator, so its allocation can be reused. Most languages don't have the concept of ownership, so it works differently in other languages.

If you want to get the semantics of other languages, just clone the left-hand side with ".clone()".
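
A quick sketch of mine showing both behaviors side by side:

```rust
fn main() {
    let a = String::from("foo");
    let b = String::from("bar");

    // `a + &b` would consume `a`; cloning first keeps `a` usable, which
    // matches what + does in most other languages (at the cost of one
    // extra, now *visible*, allocation).
    let c = a.clone() + &b;
    assert_eq!(c, "foobar");
    assert_eq!(a, "foo"); // still valid after the concatenation
}
```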

This is kind of exactly why Rust is worth having. There's plenty of other languages that do the same thing as Java, and it's the right choice for a lot of use cases. Rust, instead, goes for the memory efficient design that doesn't create junk on the heap. Not always the right choice, but well worth having the option.

Java is a bad example since it could (theoretically) do all kinds of optimizations based on runtime profiling under the hood. It also has features like deduplication, so your string might use no additional memory at all.

> Java is a bad example since it could (theoretically) do all kinds of optimizations based on runtime profiling under the hood

Isn't that true for every language?

No offense intended, but this strongly reminds me of the "Sufficiently Smart Compiler" [1]

The Java community had more than enough time to implement that, especially compared to a young language like Rust, so there's probably a reason they don't have it yet.

[1] http://wiki.c2.com/?SufficientlySmartCompiler

They have implemented it.

javac compiles chained string concatenation to a sequence of StringBuilder calls, and HotSpot knows how to specifically optimize those:


That said, I'm sure there are common cases it doesn't catch.

Not really, you've identified something where the compiler spots a special case pattern and rewrites it, something available to any compiler writer who is sufficiently chummy with the standard library. I'm not saying it isn't useful, but it's nothing like the runtime optimisations the OP was talking about. It can't even handle a for loop...

Closest real world example to a sufficiently smart compiler is GHC, and that can be plenty dumb at times.

Another example is the query planner of a good relational database (PostgreSQL, etc.).

Here, SQL can also be viewed as a lazily evaluated programming language, due to its declarative nature and the fact that the execution order is not defined in the code; instead, a (hopefully optimal) execution order is determined by the query planner.

> Not really, you've identified something where the compiler spots a special case pattern and rewrites it

Which, string handling being specifically optimised by the compiler, is exactly what was being discussed upthread?

What was being discussed upthread went a lot further than what you're talking about. The string concat thing is a specific piece of ordinary static analysis that any language with a compile stage can perform. It's also pretty trivially defeated in ways that crop up in real code. I'm not saying it isn't useful, I'm saying it's nowhere close to what would be required to render the performance costs of doing strings the Java way equivalent to the Rust way. Therefore, to go even further upthread, Rust still provides a meaningful choice, trading Java's convenience for Rust's performance.

It is possible that adding two str slices like ("a" + "b") could be implemented in the future to produce a new owned String, but there is a technical obstacle to doing this in the near term:


The basic problem is that str slices exist in the "core" library, which can be used on targets without an allocator (for example bare-metal embedded systems), while the owned String type exists only in the "std" library, which requires an allocator. Rust has rules that prevent one library from stomping on another library's types in certain ways that could conflict with future API additions, and those apply to the standard libraries here. (One solution would be to allow the author of libraries A and B to declare that A is allowed to treat types from B as though they were its own.)

It's simple: just let string literals possibly refer to Strings if type inference deems it necessary. Failing that, it should consider them &'static strs. Kind of like what happens for numeric literals right now.

Rust's + implementation consumes the String on the left, so the default behavior doesn't do any allocation unless the string needs more capacity. This makes it impossible to "get it wrong" in the way you seem to be suspecting.

The part beginners tend to get wrong is trying to add a &str to a &str, which just doesn't have a + implementation to begin with, so they get lost.
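
A minimal sketch of mine contrasting the case that works with the one that doesn't:

```rust
fn main() {
    let owned = String::from("Hello, ");
    // String + &str compiles: the owned left side is consumed and its
    // buffer reused, reallocating only if it lacks capacity.
    let greeting = owned + "world";
    assert_eq!(greeting, "Hello, world");

    // let bad = "Hello, " + "world"; // does not compile: no Add for &str
    // The usual fix is to make the left-hand side owned first:
    let fixed = "Hello, ".to_owned() + "world";
    assert_eq!(fixed, "Hello, world");
}
```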

This is actually being discussed on /r/rust, see https://www.reddit.com/r/rust/comments/67rjii/question_about...

Turns out my assumption (like C++, make expensive things more verbose) isn't the complete picture. The big blocker is a technical issue preventing `"" + ""`

See https://github.com/rust-lang/rust/issues/39018#issuecomment-...

So I should preface this by saying that the Add impl was generally controversial and almost not even implemented, but ended up happening.

In a systems language, allocations matter. As such, people are generally really against hidden allocations in Rust, moreso than in other languages. This totally could have been done, but by requiring a String on the left, you are already in "this is heap allocated" land, and that is much more palpatable for Rust's audience.


=palpable + palatable. Pedantry aside though, I like the word. Maybe it describes something that can be palpably palatable.

Ha! Nice catch. Good thing I'm not like, a person who's paid to write words or anything...

You wrote a word! A new word!

or, palpatable could mean 'able to be palpated', which would mean able to be touched carefully with the intent of inspection, especially in medicine.

For someone coming from C++, it will feel similar to std::string("foo") + "bar", which one gets used to pretty quickly.

Because &str is just a string view. If you add vectors, you are adding owned types where you own the allocations. &str is not owned, but String is.

We could make the addition operator implicitly allocate here. It's a bit of a philosophical question if implicit allocations like that are desirable in rust. I lean towards being okay with the implicit allocation.

You could maybe introduce a lint saying "possibly unnecessary allocation" when people `+` `&str`s together. Then beginners can still use the syntax that they're familiar with (and that basically every other language uses) but there's still notification somewhere that they can likely structure their code to be more efficient.

If I understand the ownership rules correctly, it would actually violate those rules to have the "familiar syntax". Also it seems to be Rust's thing to make things explicit, e.g. `.clone()`, not having `++`-style operators etc.

In this case, due to the rules, it would be possible; the call to clone would be inside the method call, so you wouldn't see it. (With other signatures it may not, but Add returns a value, not a reference.)

In other words, it would violate the conventions, but not the rules.

(Okay, so there's a _really_ obscure rule here that still prevents this, but it only matters because it's the standard library. It's not actually the signature.)

I would definitely side with it being okay. I like how strings are modeled, but that doesn't mean it isn't painful sometimes. Making some common string operations like &str + &str defined but linted is a good balance of ergonomics and efficiency (it'd be a rare bottleneck, but the lint would make it visible).

It is fine, because you can't accidentally use the modified string on the left.

Ah yes, this is what I was a bit confused about. Rust's ownership system, though, makes it so that what would, I'm sure, be a very subtle and annoying bug can't happen. The error is really helpful too.

  fn main() {
      let hello = "Hello".to_owned();
      let world = "world";
      let hello_world = hello + world;
      print!("{}, {}", hello_world, hello)
  }

  4 |     let hello_world = hello + world;
    |                       ----- value moved here
  5 |     print!("{}, {}", hello_world, hello)
    |                                   ^^^^^ value used here after move
    = note: move occurs because `hello` has type `std::string::String`, which does not implement the `Copy` trait
The docs are pretty awesome as well [0]. They also point out most of this and list all the types which implement the Copy trait.

> Copies happen implicitly, for example as part of an assignment y = x. The behavior of Copy is not overloadable; it is always a simple bit-wise copy.

> Cloning is an explicit action, x.clone(). The implementation of Clone can provide any type-specific behavior necessary to duplicate values safely. For example, the implementation of Clone for String needs to copy the pointed-to string buffer in the heap. A simple bitwise copy of String values would merely copy the pointer, leading to a double free down the line. For this reason, String is Clone but not Copy.

> Clone is a supertrait of Copy, so everything which is Copy must also implement Clone. If a type is Copy then its Clone implementation need only return *self (see the example above).

I haven't really used Rust at all, but the more I look at it the more I want to. I probably won't be given the opportunity anytime soon, though. Such is life in the world of frontend development.

[0] https://doc.rust-lang.org/std/marker/trait.Copy.html
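
The quoted Copy/Clone distinction in a few lines (my own toy example; `Point` is made up):

```rust
// A type whose fields are all Copy can itself derive Copy.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point { x: i32, y: i32 }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p;                    // implicit bitwise copy; `p` stays valid
    assert_eq!(p, q);

    let s = String::from("hi");
    let t = s.clone();            // explicit deep copy of the heap buffer
    // `let t = s;` would instead move `s`, invalidating it afterwards.
    assert_eq!(s, t);
}
```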

webassembly is our hope. compile rust down to browser bytecode.

a man can dream.

It's because you don't have a String to begin with, just two const memory values (which most of the time are baked into the program binary in C/C++; not sure about Rust).

which most of the time are baked into the program binary in C/C++, not sure about Rust

Pretty much the same, string literals are of type &'static str, which is a string slice with a static lifetime. Since literals are not `mut`, you cannot mutate them.
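
A small sketch of mine showing the literal type and the explicit step to get an owned, mutable copy:

```rust
fn main() {
    // A literal is a borrowed slice into the binary's read-only data,
    // valid for the whole program, hence the 'static lifetime.
    let s: &'static str = "hello";
    assert_eq!(s.len(), 5);

    // Getting a mutable, heap-allocated copy is an explicit step:
    let mut owned: String = s.to_owned();
    owned.push('!');
    assert_eq!(owned, "hello!");
}
```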

Yeah, semantically you're right. I'm talking about the actual binary representation. Some systems/runtimes copy them to the heap etc., while others let you reference the actual memory in the loaded binary.

Back when I did more embedded stuff we cared, since it could increase your runtime memory by a non-trivial amount depending on what the platform did.

In Rust's case, to be clear, they're stored in .rodata. That's where the &'static str points to.

The auto copy to the heap behavior is controversial, desired by many for ergonomic reasons, disliked by others for implicit heap allocation. We'll see if it gets added in the future or not!

Yeah, you'd find me in the latter camp for sure :).

Thanks for the details, it's appreciated.

This is how concat() works in the C library, except that in the case where the original string is too long it will overwrite random memory instead of reallocating the original string.

You probably mean strcat() (which should really not be used, now that there is strncat()); there is no concat() in the standard C library. strcat() does not intercept the syntax of an operation (+) that does not mutate its arguments in any other context, and you also cannot do strcat("hello,", " world"), since the first argument must be mutable.

I was interested in learning Rust as a safer alternative to C a few years back, and have been observing its progress from afar. I'm much less optimistic about it now, though I only have a layman's understanding of it. The language seems to be more cumbersome to write than many other languages due to its design choices. It may be safer, but not as "ergonomic" subjectively, which ultimately affects productivity. It's hard to think of a good reason to use it, other than the safety features.

I'm far more productive in Rust than I am in C. The amount of time I spend writing for loop boilerplate alone is worth the price of admission.

Pretty much all of the ergonomics improvements this year have been improving on things you can't do in C to begin with. Like automatic serialization (serde). Or const generics. Or error handling via ?. Or pretty much any reasonable string manipulation ever.

There's a legitimate debate to be had with the ergonomics of C++ vs. Rust (though I think Rust wins there too). But not with C. C, as a language that makes you jump through hoops to even get a hash table, doesn't even come close.
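As a small illustration of the "reasonable string manipulation" point above (my example, not the commenter's): split, transform, and re-join in a couple of lines, with no manual buffer management, all from the standard library.

```rust
// Split a comma-separated string, uppercase each field, and re-join.
// In C this would mean strtok/manual buffers; here it's three adapters.
fn main() {
    let csv = "alice,bob,carol";
    let shouted: Vec<String> = csv.split(',').map(|s| s.to_uppercase()).collect();
    assert_eq!(shouted.join("+"), "ALICE+BOB+CAROL");
}
```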

I suppose pretty much everyone here already knows, but when making this specific type of comment ("I'm far more productive in Rust than C"), can you please disclose that you are one of the core people involved in Rust/Servo?

It's almost like Chuck Moore coming on HN and saying he finds Forth to be the most productive language. Of course it is, for you. (Nothing against the productivity of Forth or Rust.)

I'm a relative beginner in both Rust and C and I'd like to say that, as much as the learning curve for Rust could be better, even for a beginner, Rust is light years ahead of C, especially if you've used a more modern language. And using a hash table as an example is actually pretty sharp. Someone used to python might not know how much of a pain any type of container is in C. You could reasonably make the case it's easier to use hash tables in assembly -- the C language does you essentially no favors.
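To make the hash-table comparison concrete, here is a sketch of a word count using the standard-library `HashMap` (my example; doing the same in C means writing or vendoring a hash table first).

```rust
use std::collections::HashMap;

// Count word occurrences with the entry API: or_insert(0) inserts a
// zero the first time a key is seen, then we bump the count in place.
fn main() {
    let mut counts: HashMap<&str, u32> = HashMap::new();
    for word in "the quick brown fox jumps over the lazy dog".split_whitespace() {
        *counts.entry(word).or_insert(0) += 1;
    }
    assert_eq!(counts["the"], 2);
    assert_eq!(counts["fox"], 1);
}
```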

Rust is quite amazing, though. Very slick, to the point where I use it and I don't care that much about things running fast or with low memory requirements, I just want something with the syntax and comprehensibility of python/ruby/etc (at least compared to erlang/elixir, which I'd otherwise use -- not having local mutable variables makes things a bit more puzzling for no reason like three times a day) and the sweet tools from haskell, and Rust fits the bill.

I've personally experienced very tangible productivity gains once I got past the (very steep) learning curve.

The compiler will catch many errors that other languages consider logic errors; if your source code compiles, it tends to run as expected.

As for the disclosure: I'm a normal software programmer, and haven't written a single line of code for rust's compiler and tooling.

People write better C than C++ because they are less tempted to over-architect. We've all come across terrible meta-templated C++ code with abuse of OOP, all in the name of "information hiding". The main problem with C is the puny standard library. It would be nice to have a few more built-ins.

I think C++ without inheritance would be a very nice language.

C has so many design flaws, from the lack of reasonable iterators to null pointers to the preprocessor to bizarre type syntax to switch fall through to terrible error handling to no generics, etc. The attempts to patch the more broken parts of its design (e.g. ergonomics favoring signed induction variables in for loops) have resulted in more problems (e.g. signed overflow being undefined behavior). As a language from 1978, it was good in its day, but it's not worth saving anymore. It should become a respected piece of history.

I am always startled by people who hate inheritance "lock stock and barrel". I have long-running memories of being utterly appalled by multiple inheritance - all the way back to having to learn how to answer those damn "how many pure virtual base classes can dance on the head of a pin?" type questions in my 3rd-year C++/soft-eng class.

But single inheritance, used judiciously, has really led to some vastly better designs in my experience and doing away with inheritance would generally mean those designs would just be redone with a clumsy "let's fake up inheritance" interface.

I think you can write bad code in any language. Bad C++ is especially egregious, as it's possible to make the bad code practically incomprehensible even at a surface level. However, I suspect that carving feature after feature away from C++ leaves a language that is now just as good for writing toy programs ("hey, look how elegant the language is now that pesky inheritance/template/exception/<insert hated feature of choice> is gone"), but will now allow you write verbose and shitty (but shallowly more understandable) code in place of the C++.

I do find the sheer size of C++ objectionable (the usual arguments about how nearly all C++ programmers seem to program in subset "house styles" of the language). I am generally very suspicious of efforts to replace C++ that come from people who don't seem to understand it at all, however. Generally I like Rust in this regard as the Rust approach seems to display a mature understanding of what was actually good about C++ (constrast: Go).

What do you want to use inheritance for? In my experience, it is usually better to use composition and sum types.

Inheritance is usually really great when you want to extend a type outside the context it was created in. GUIs are a very clear example: composition may work, but it is greatly tedious (reimplementing the whole interface, exposing it, and rewiring it), and sum types are inapplicable right off the bat.

C was already quite poor in 1993 when I got to compare it with Turbo Pascal 6.0 for MS-DOS, my main tool at the time.

Over all these years, the amount of UB, memory corruption issues and security exploits has only grown.

We need a major IoT meltdown to get rid of it, unfortunately it won't happen until the industry lets UNIX clones go.

> We need a major IoT meltdown to get rid of it, unfortunately it won't happen until the industry lets UNIX clones go.

Rust isn't going to stop IoT makers from leaving ports open and default passwords set.

No, but it may protect (to some degree) those that do use good authentication practices from also being compromised through exploitable libraries.

Bad defaults is a problem with IoT devices, but far from the only problem.

No, but there is a whole class of errors related to UB, memory corruption and data conversions that just by avoiding C will never happen.

You don't even need Rust, Algol 68 would be enough.

> unfortunately it won't happen until the industry lets UNIX clones go

Redox is a Unix-like system.

UNIX like is not the same as being UNIX.

UNIX is married with C by design, just how browsers are with JavaScript.

Actually you are right in one thing, if Redox offers POSIX support then it needs to sandbox them.

In the context you said "Unix clones", it sounded pretty obvious you were including Linux in there. Linux is a "Unix-like" rather than an Unix.

I was including Linux there, as it follows a standard UNIX architecture.

There is a big difference in having POSIX support and being a UNIX.

Windows, IBM i, IBM z/OS, Unisys ClearPath MCP, Green Hills INTEGRITY OS, Genode and many other OSes have POSIX support, yet their architectures have nothing to do with UNIX.

But absolutely no one forces you to use inheritance.

Not using a feature X yourself means you might still need to deal with code written by somebody who uses the feature.

In lesser languages this tends to translate to "you need to deal with code using bizarre pre-processor coding and/or external code generators". glib*-stuff is a neat example of the former (although a lot of its code and most of the architecture is overall bizarre, not just the pre-processor parts), while things like Qt and libraries parsing input with generated parsers/lexers are typical examples of the latter.

> The amount of time I spend writing for loop boilerplate alone is worth the price of admission.

This can be solved with a templating system, not a new language.

A templating system is a new language.

An extremely minimalist one that will transfer to any language, including creating Rust boilerplate.

Obviously, I wasn't implying for loop boilerplate is the only problem with C.

> It may be safer, but not as "ergonomic" subjectively, which ultimately affects productivity.

For my first large project in Rust, I ran into several issues where the type system was complaining that I couldn't take the references of certain variables in certain ways. Most of the time, it was clear to me (eventually) that what I wanted to do was actually unsafe. It takes some time, but a lot of the pain involved with starting out with Rust is realizing that the patterns you've been using all along in C/C++ were fundamentally unsafe from the beginning.

The rest of the time, though, what I wanted to do was safe, but I couldn't figure out how to structure the code to get Rust in a position where it could guarantee that safety.
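A hypothetical minimal example (mine) of the kind of pattern meant above: holding a reference into a `Vec` while growing it is quietly dangerous in C/C++ (a push may reallocate, leaving the pointer dangling), and a compile error in Rust.

```rust
// The borrow checker rejects keeping a reference into `v` across a push:
fn main() {
    let mut v = vec![1, 2, 3];
    // let first = &v[0];
    // v.push(4);             // error[E0502]: cannot borrow `v` as mutable
    // println!("{}", first); //   because it is also borrowed as immutable

    // The safe restructuring: finish with the borrow before mutating.
    let first = v[0];
    v.push(4);
    assert_eq!(first, 1);
    assert_eq!(v.len(), 4);
}
```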

> It may be safer, but not as "ergonomic" subjectively

Perhaps what we really need for the engineering that really calls for this class of product (a systems language), is not more ergonomics, but an acceptance that some small sacrifice in ergonomics and possibly productivity is a small price to pay to make some real headway in the problem of actually providing correct software? Cutting of entire classes of errors and subsequent exploits is an enormous gain for such a small price.

Put another way, every bug you allow through that could have been found with more diligence is really you offloading development time, effort or cost onto your users. That's not to say unlimited time should be spent during development to find bugs, but there is a trade-off happening, and I would hazard that most developers spend less time than they should. In these terms, I think Rust is an amazingly productive language regardless of whether it seems harder to use and makes you feel somewhat less productive.

Which languages are your point of comparison? Its a different conversation to have to talk about C, C++, D, Swift, Nim, Crystal, or Julia?

Which ergonomic issues are your main concerns?

Some things I can think of

- Libraries focusing on optimal performance over ergonomics

- Lifetimes

- Results not being quite as ergonomic as exceptions

While it has rough spots now, I'm hopeful for the future.

The impression I've had watching Rust's evolution is they want to push the language to its limits, find the weak points, find ways of fixing it, and repeat. They don't want to speculatively throw in features. For example, we started off with try!() and later got `?`.
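A small sketch (my own) of the try!()-to-`?` evolution: both perform the same early return on error, the operator just reads better when chained.

```rust
// Equivalent early-return on error, old and new style. In the Rust 1.17
// era one wrote:
//     let n: i64 = try!(s.parse());
// which the `?` operator later replaced:
fn double(s: &str) -> Result<i64, std::num::ParseIntError> {
    let n: i64 = s.parse()?; // on Err, returns the error to the caller
    Ok(n * 2)
}

fn main() {
    assert_eq!(double("21").unwrap(), 42);
    assert!(double("x").is_err());
}
```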

Improving the documentation would also improve people's perceptions. I've gone through waves of feeling like Rust's error handling is close enough to the ergonomics of exceptions. When I had surface knowledge, it seemed great. I then started using it and running into problems. I then came across random blog posts and it became easy again.

For a further discussion on documentation, see https://www.reddit.com/r/rust/comments/67fb8e/rust_tribal_kn...

The next step is creating ergonomic libraries. ezstring is an example of people experimenting with this.

ezstring: https://github.com/Storyyeller/easy_strings

The language is hard to learn, so it is very tedious in the beginning. But once you "get" it, it's a very productive power tool.

It's sort of like SVN vs git. When you start you might wonder how can anybody manage to commit a line of code with it, but once you know it, you can't believe you've lived without rebase.

I've been programming in C for 17 years, and in Rust for 2 now. In the first year I had to get help from StackOverflow or IRC for every function I wrote. Now I'm past that, and I'm much much more productive in Rust. I have full control of memory, but no `free()` to write! Strings are easy! Dependencies are easy! I can iterate over BTreeMap with 1 line of code (which inlines to code as efficient as I'd write manually).
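The one-line `BTreeMap` iteration looks something like this (my illustration): the for loop borrows the map and yields (key, value) pairs in sorted key order, and the compiler typically inlines it down to a plain tree walk.

```rust
use std::collections::BTreeMap;

// Build a BTreeMap from tuples, then iterate it in one line.
// Keys come out in sorted order regardless of insertion order.
fn main() {
    let m: BTreeMap<&str, i32> = [("b", 2), ("a", 1), ("c", 3)].iter().cloned().collect();
    let mut out = String::new();
    for (k, v) in &m { out += &format!("{}={} ", k, v); }
    assert_eq!(out, "a=1 b=2 c=3 ");
}
```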

It's great, once you know it.

> It may be safer, but not as "ergonomic" subjectively, which ultimately affects productivity.

Tit for tat. When you gain particular categories of guaranteed correctness there are things you needn't worry about, and that saves you lots of trouble down the road.

How much ergonomical pains such a gain is worth is obviously debatable, but I think it's absolutely reasonable to say that trading guaranteed correctness for comfort more often than not is going to be the right thing to do in systems-level programming. And the lower-level you go, more so.

I just spent an entire week debugging a timing-dependent stack-smashing bug that Rust would have never allowed. On the other hand, closures in Rust are really awkward. So... maybe.

Personally, I find closures in Rust to be much less awkward than lambdas in C++. What issues do you have with them?

It's the combination of the lambdas and the borrowing system. You have to move all resources into them and then you can't use them from anywhere else.

What I mean is, it's not the closures exactly that are the problem. It's just that closures are one of the times it would be really really convenient if you could have multiple mutable references to something. You end up having to use `Rc<RefCell<>>` as described here:


This is also mentioned in the Tock PDF: https://sing.stanford.edu/site/publications/levy-plos15-tock... in section 3.2. I'm not really sure how they overcame that in Tock. I guess using RefCell?

Anyway it all seems very awkward.
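For readers following along, a sketch (mine) of the `Rc<RefCell<>>` workaround under discussion: when two closures both need to mutate the same value, `Rc` provides shared ownership and `RefCell` moves the aliasing check from compile time to run time.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two closures sharing one mutable counter. Each clone() bumps the
// Rc refcount; borrow_mut() panics at runtime if borrows ever overlap.
fn main() {
    let counter = Rc::new(RefCell::new(0));

    let c1 = counter.clone();
    let incr = move || *c1.borrow_mut() += 1;

    let c2 = counter.clone();
    let add_ten = move || *c2.borrow_mut() += 10;

    incr();
    add_ten();
    assert_eq!(*counter.borrow(), 11);
}
```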

Ah, I see. I think my perspective tends to be outside the norm when it comes to things like this, since I learned Rust before learning C++, so the Rust way feels "natural", for lack of a better word. I find myself wishing I could do things the Rust way in C++ far more often than vice-versa.

They invented some new interior mutability things, if you check the recent Tock discussion on HN via search you should see some things, and a link to Reddit about it.

I agree 100%. The Rust syntax is so different from C, C++, Go and Java that it's very difficult for me to use. I love the idea of Rust, but I wish the syntax was more similar to traditional languages. If it were, I think it would be a very easy transition for many of us.

That...seems very surprising to me. I've used almost exactly the same set of languages, and I found the Rust syntax incredibly familiar. There are times (especially if you use a lot of boxing and ref-counting) when you can get a touch of operator soup, but that's not that often. Anyway, the languages feel very similar to me.

A lot of the compromises made in Rust syntax were to make it much, much easier for the compiler to reduce passes to compile into a syntax tree.

There are certainly some things that should just be obvious now, though. Using the C++ namespace separator of :: is just ludicrously obtuse for an extremely common symbol, using & for refs is archaic (I mean... you could have just called it a ref. ref T instead of &T. And then just ref(foo) or foo.ref to get a ref to foo.) And the syntax for lifetimes is horrible, with the 'a and the fact that in practice it's about as significant as naming parameters in a C++ function definition in a header (i.e. pointless: the words are discarded by the compiler every time).

And I certainly would have argued for whitespace significance, at least as an option, instead of classic semicolon spam.

Given all that, Rust is still, IMO, the best programming language we have right now for native performance, period. We can criticize what we don't like, but we should at least agree it's the best we have.

> Using the C++ namespace separator of :: is just ludicrously obtuse for an extremely common symbol

Obtuse? It's sort of common. Multiple languages use it. Personally I like the namespace separator being distinct from the object/struct element accessor. At least they didn't copy PHP in using backslash...

> using the & for refs is archaic (I mean... you could have just called it a ref. ref T instead of &T. And then just ref(foo) or foo.ref to get a ref to foo.)

Please, let's not have more cases where things look like functions but aren't. Functions should look like functions. Operators should look like operators. You might have to learn a small set of operators for a new language. This is front-loading some confusion and learning pain for ease of use later. It may make the language harder to read for people that don't know the language, but anyone reading the source code with the aim to make some changes should probably know the operators at least. I'm not sure giving it a slightly more expressive name but making it look like something else actually makes life easier for those people.

> And I certainly would have argued for whitespace significance, at least as an option, instead of classic semicolon spam.

Optional whitespace significance? That sounds like a recipe for disaster, IMO. I can see a case for one or the other, but mixing of modes like that when it deals with something that is by nature somewhat invisible (or meta), seems prone to problems.

Also, examining why Python has had problems with functional style workhorses like map might shed some light on the negative aspects of whitespace dependent syntax.

Commonality doesn't make a shift-accessible character twice in a row less obtuse. It is like if < and > were the bitwise shifts and >> and << were the logical - it would make no sense because the more common behavior was using more characters.

Of all the glyphs used in the language, :: is probably in the top 5 for frequency of use (probably higher than even * in Rust, considering the infrequency of pointer usage and that multiplication isn't that common an operation in general code) but uses two glyphs instead of one.

It would have required different design constraints starting out to figure which glyph would be best suited instead of ::, but :: is still really ugly in all the languages that use it when its such a common symbol. I wouldn't argue overloading . is right, but I'm having a hard time seeing why a single colon doesn't work - foo:bar:baz. Maybe the parser finds scoping and explicit type names on definitions (ie let foo: i64 or in function definitions) require too many passes to distinguish?

Valid points. A single colon would suffice, but I suspect in C++ they wanted something a bit more distinct from the semicolon. For Rust, I imagine at least part of the reasoning was keeping with what was familiar with C++, and after that the type names probably did make it complex, if not intractable (without reserving specific names).

Lots of languages with whitespace significance have no ergonomic issues with functional style. Haskell and the ML family come to mind.

Actually, looking at Haskell, the optional whitespace (either indentation or braces, newline or semicolon) looks like it solves that problem quite well. I don't have much experience with optional whitespace, so I can't comment on whether it neatly solves the problem, or creates additional problems in readability as close to or greater than the original problem. My gut feeling is still that it might be messy.

Agree that making keywords look like functions is weird and bad, but I also loathe languages that insist on using special chars for things that are better expressed with words. C# is my daily driver and I like how it calls pass-by-reference stuff "ref". And likewise, while I can see the appeal of making namespace and member lookups distinct, I've never liked the readability of C++ :: syntax, and I learned C++ fairly early on in my career.

> C# is my daily driver and I like how it calls pass-by-reference stuff "ref".

`&` isn't a parameter annotation in Rust, it's two unary operators, two type annotations (general) and a pattern-matching operator.

The latter of which `ref` also is, incidentally, it means "capture a value by reference" in a pattern: https://play.rust-lang.org/?gist=30b436d83701f74b986aea978fa... (& meaning "capture a reference by its target" in that context, which is logically the structural inverse of the operator)
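Since the playground link may not be handy, here is a minimal sketch (mine) of the two pattern-matching roles mentioned: `ref` binds by reference, while `&` on a pattern dereferences ("captures a reference by its target").

```rust
// `ref` in a pattern binds by reference; `&` in a pattern peels one
// layer of reference off the matched value.
fn main() {
    let pair = (String::from("hi"), 5);

    // `ref` binds s as &String, so the String is not moved out of `pair`:
    let (ref s, n) = pair;
    assert_eq!(s.len(), 2);
    assert_eq!(n, 5);

    // `&` matches through a reference; x gets the pointed-to i32:
    let r: &i32 = &42;
    let &x = r;
    assert_eq!(x, 42);
}
```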

Backslash would be amazing in Rust. Imagine:

    use .\greetings\hello;
    use ..\foo;
    use \bar;
right now these are:

    use self::greetings::hello;
    use super::foo;
    use ::bar;
I think the PHP way would have been way better

Backslash is almost universally associated with "escaping" the next character in a string. The only place I've ever seen it used as a separator is in Windows, and my understanding is that it causes enough grief when used in strings that people dislike it there as well. The only reason Windows uses backslash is because DOS 1.0 didn't have hierarchies and already used slash for command switches, so they used backslash so it wouldn't conflict.

What is wrong with ::? It's visually distinct, isn't easily confused for other things, and unused for anything else, which pretty much hits all the criteria I care about for this feature.

I'm seriously asking, why do people dislike ::? Is there an actual basis, or do people just not like the look?

Those paths are actually valid Windows paths. I don't like `super` and `self` instead of `..` and `.` - it's just extra stuff to remember.

Also, because of how the metaphor for paths is broken, the relative paths in Rust are not intuitive. That's why you HAVE TO put `self::foo` because the default is not the same as what you'd expect if it were a path.

It's overly complicated compared to just letting you use actual Windows paths.

Except not everyone uses Windows. Why not a regular slash instead of a backslash, since even on Windows they sometimes allow that because it's standard for the web? Because modules and namespaces don't always correspond to files. Having the namespace separator be the same as a file path separator may imply a connection to the filesystem hierarchy which does not exist, or is far removed.

> Why not a regular slash

because how do you parse

    let x = a/foo();
is that a::foo() or a / foo()?

If you use \ it's unambiguous, a\foo() corresponds to a::foo() while a/foo() is division

Exactly. Forward slash has problems. Backslash has problems. Associating the concept with file paths too much has problems (depending on the language and the module implementation).

So, why again is :: such a bad choice, given all this?

Backslash does not have problems. It's a single character and looks like windows paths. :: is two characters and is only familiar for C++ programmers (and means something else in PHP)

Backslash is the single most problematic visible character to represent in lower ASCII. It's the go-to solution for representing the non-visible characters when you need them in a string representation in most languages I can think of. Many errors have been found in string handling routines because of incorrect escape handling. I wouldn't willingly use it as a special character in any language I had input in designing.

Even for Windows paths, backslash has caused a lot of problems in various tools[1][2][3]. Just google "backslash path error" for many pages of problems. Even early DOS developers weren't happy with the need for backslash, and made regular slashes work in paths. That doesn't work everywhere in Windows, but it works in a lot of places. Following Microsoft's mistake because of an accident of history (they had already used forward slash for command flags before implementing path hierarchies, so needed a different character) and dealing with the same problems they have is not a point in backslash's favor.

> :: is two characters and is only familiar for C++ programmers

It's familiar for whoever learns it for the language they are learning. That it does not necessarily have an association with concepts from other languages or systems that don't apply cleanly (such as the namespace corresponding to the path, which may not be true) is a positive.

1: https://pythonconquerstheuniverse.wordpress.com/2008/06/04/g...

2: https://github.com/rstudio/shiny/issues/1009

3: http://www-01.ibm.com/support/docview.wss?uid=swg21635233

That's \ inside of paths and strings. Nothing to do with \ used in a language. Nobody has an issue with \ used in PHP. The only problem is maybe pasting code on a blog or something, not really a problem when programming in PHP.

Everything eventually gets represented by a string somewhere. Paths aren't inherently strings, but that's how they are often represented in code. You won't be defining your namespaces as a string in normal usage, but some things do need to represent source code, so why choose a character that makes their job harder? There are parallels. It's a downside of using backslash. It's not guaranteed to cause a problem, but the potential is there, waiting for anyone who needs to deal with representing it in a language.

By string, I of course speak of string literals. The only place where this would be an issue is writing Rust compiler error messages. I don't consider that a strong argument for why the path analogy should be broken. Because the path analogy is broken, modules in Rust have strange rules. For example,

    use foo::bar;
doesn't actually use a relative "path" from the current module

you need to use

    use self::foo::bar;
I think this is a confusing default, and it would have never ended up this way if Rust used Windows-path-looking namespace syntax

if you're writing a parser and manipulating actual arrays of bytes, you don't have an issue with backslash, it's just another character to you

> I mean... you could have just called it a ref. ref T instead of &T. And then just ref(foo) or foo.ref to get a ref to foo.

Ugh, no. There are references all over the place when I write Rust code -- it would be a serious bummer for me if annotating those types and making those references took up 4 characters for "ref " instead of just the 1 for &.

You can have both, but you use a lot of vocabulary all the time in rust (clone, let, mut, static) and nobody is harping on how the most common words need to be made glyphs to save 2-3 characters per use.

I'm curious: what is so different about the syntax? It seems very similar to me.

I think the thing that puts me off the most is the lifetime specifiers:

  let foo: &'static str = "foo";
Whenever I see an apostrophe, my mind defaults to frantically searching for its counterpart at the end of the string they're enclosing. Just like for parentheses; imagine how confusing it would be if ' would have been ( instead.

  let foo: &(static str = "foo";

This particular convention is inherited from the ML family, where it has been very popular for a very long time.

For what it's worth, there are many cases even in the natural language where ' is not supposed to have a "closing" counterpart - indeed, the "it's" in this very sentence is one such example. Then you have the use of ' to denote feet and minutes of arc. Now, technically these are all actually different symbols, with their own Unicode codepoints etc - but in practice, people use ' for all of them, and it doesn't seem to be confusing anyone. You could treat the ' in Rust as something along these lines.
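A typical use of that ML-inherited tick, for anyone who hasn't seen one (my example): `'a` names a lifetime parameter tying the output reference to the inputs, not the start of a character literal.

```rust
// `'a` says: the returned &str borrows from (and lives no longer than)
// whichever of x and y it came from.
fn longer<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    assert_eq!(longer("short", "longest"), "longest");
}
```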

> Whenever I see an apostrophe, my mind defaults to frantically searching for its counterpart at the end of the string they're enclosing.

In fairness, you should never be searching for matching apostrophe. Single quote, maybe. The fact that it's an apostrophe should also trigger that it might have something to do with possession... (or that something was omitted).

Granted, lifetime syntax has no analogue in the languages the comment I was replying to mentioned. That doesn't seem to be enough to qualify as "so different" though. But maybe it is?

The syntax is really the same as C-style languages, minus the extra lifetime bits.

Most of the changes in this log, and in past change logs I have read, agree with you. Ergonomics is mentioned often, and many of the changes work toward it.

They have chosen safety first and appear to be working on ergonomics without sacrificing that.

Ergonomic improvements are a major focus for the Rust team's 2017 roadmap, both in language changes and tooling. But they'll never sacrifice the core safety principles of the language to do it.

> I was interested in learning Rust as a safer alternative to C a few years back

Well, there you go. It's an alternative to C. Those tend to get messy (see also, C++).

