Announcing Rust 1.7 (rust-lang.org)
483 points by steveklabnik on March 3, 2016 | 131 comments



Not mentioned in the release, but in the extended release notes:

"Soundness fixes to the interactions between associated types and lifetimes, specified in RFC 1214, now generate errors for code that violates the new rules. This is a significant change that is known to break existing code, so it has emitted warnings for the new error cases since 1.4 to give crate authors time to adapt."

https://github.com/rust-lang/rust/blob/stable/RELEASES.md#co...

I think this is a fantastic example of the even-handed approach in the compiler stability promises/plans, and it's great to see one of the first real tests of those promises go so well.


To elaborate, we use a tool called Crater to estimate the potential impact of breaking changes. It attempts to compile all of the packages on crates.io with a new version of the compiler to determine any regressions, effectively using Rust's public package repository as an extended test suite.

Here's Crater's regression report for the linked soundness fix: https://gist.github.com/nikomatsakis/2f851e2accfa7ba2830d#ro... . It detected four root regressions, which means that there were four packages that were relying on unsound behavior. This isn't necessarily the full extent of the regressions, however, because if we ship a compiler that breaks those packages, then any other packages that have the formerly-unsound packages as dependencies will obviously also break.
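To make the root-vs-transitive distinction concrete, here's a toy sketch (not Crater's actual code, which is far more involved) of the classification logic described above: a "root regression" is a package that built with the old compiler but fails with the new one on its own merits, rather than because a dependency broke first.

```rust
// Toy model of Crater-style regression triage. All names are
// illustrative; real Crater builds every crates.io package with two
// toolchains and diffs the results.
#[derive(PartialEq, Debug)]
enum Outcome {
    RootRegression,  // the package itself relied on the old behavior
    DependencyBroke, // broke only because a dependency is a root regression
    Ok,
}

fn classify(old_ok: bool, new_ok: bool, deps_ok: bool) -> Outcome {
    match (old_ok, new_ok, deps_ok) {
        (true, false, true) => Outcome::RootRegression,
        (true, false, false) => Outcome::DependencyBroke,
        _ => Outcome::Ok,
    }
}

fn main() {
    assert_eq!(classify(true, false, true), Outcome::RootRegression);
    assert_eq!(classify(true, false, false), Outcome::DependencyBroke);
    assert_eq!(classify(true, true, true), Outcome::Ok);
}
```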

Having a concrete list of regressions also allows us to go through the ecosystem and submit PRs ourselves to bring the affected packages back to building, which is usually quite easy. Crater is a really, really great tool.


> ...effectively using Rust's public package repository as an extended test suite.

This is one of the coolest and most practical things I've read in a while, seeing what happens to stable(ish) real wild code. So many practical applications and analytics are coming to mind in many areas.

Thank you. Sparks my interest in Rust again.


You should also have a look at http://stackage.org (quite similar, but for Haskell, and curated; a "try to compile all" feature also exists IIRC).

Not sure how much it is used to test the impact of language changes.


It regularly is.


I believe Perl and CPAN originally coined this approach. I'm glad to see from the parent and other comments that this approach seems to have caught on in other ecosystems as well.


Scala also has something similar, called the community build: https://github.com/scala/community-builds It does not contain everything, but (open source) authors are encouraged to add their libraries to the mix.



I think that something similar is also done by the Chicken Scheme folks:

http://wiki.call-cc.org/eggref/4/salmonella#introduction

http://tests.call-cc.org/


This has been one of the many benefits of quicklisp (a repository for common lisp packages); there is a test-grid maintained by Xach and he reports any breakages when there is a release candidate of a new sbcl.


Does Crater ever find unstable tests that don't pass consistently on a good baseline? If so, do you just blacklist those packages and/or feed back improvements?

NM: after reading more carefully, it's clear that at least in the case where Rust causes the "regression" you feed back the improvements, so likely the same for unstable tests.


Well, the "tests" here aren't actually tests—they're whether the package compiles or not. If the compiler is being inconsistent, that's a pretty big bug ;)


Ah, I thought it was doing "cargo test" to see if the compiled code had a functional regression. But that would make it challenging to investigate regressions (especially in the face of unstable tests).


For those that don't follow Rust very closely, I sent in numerous (well, something like "30") fixes to various libraries after this landed, and the vast majority of them involved adding ": Sized" in the correct place. So while it was true that it was breaking, it was easy to update.
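An illustrative sketch only (not code from one of the actual affected crates): RFC 1214 tightened the well-formedness rules around associated types and lifetimes, and the typical fix was a one-line bound written out explicitly in the right place.

```rust
// Hypothetical example of the style of fix described above.
struct Wrapper<T: ?Sized>(T);

fn consume<T>(w: Wrapper<T>) -> T
where
    T: Sized, // the kind of one-liner ": Sized" fix mentioned above
{
    w.0
}

fn main() {
    assert_eq!(consume(Wrapper(5)), 5);
}
```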

We take breaking changes ridiculously seriously, and soundness is basically the only case where they make sense to do.


Why does Rust not use semver to clearly communicate when breaking changes do take place?


Disclaimer: I'm not a Rust author, but I actively follow the development process.

Rust does use semver. But as the concept of a "breaking change" is – as I've understood it – ill-defined in the spec, the Rust project uses the following heuristic:

* If it's a safety/soundness-related bugfix, it's considered a minor (1.x.0-changing) change even if it makes some existing crates stop compiling. Because the crate was relying on broken/unsound functionality, that's acceptable.

* The impact of all such breaking changes is measured and assessed against the ecosystem using the Crater tool.

* Even if crates stop building, the changes should be such that it's trivial to fix them. (In the best case, one-liner type annotations etc.)

Btw, Rust sometimes deprecates APIs that have been supplanted by better alternatives. Deprecations are warnings, and they are allowed to go away only on an x.0.0 version change. Additionally, careful consideration of stability and breaking changes goes into the design of new language features.


Rust does use semver and takes it very seriously. RFC 1105, for example, documents exactly what is considered a breaking change to stabilized library APIs. The Rust team has an explicit allowance for breaking changes related to type soundness or memory safety without performing a semver bump. This is the first such change; there is one more coming in the future.

Both changes are very small; they do not impact most users and the users who are impacted can easily fix them unless they are exploiting the existing "bug" to do something unsound.

https://github.com/rust-lang/rfcs/blob/8e2d3a3341da533f846f6...


So, this is kind of a complex topic. The sort of TL;DR is, Semver itself does not actually specify things, it just says "compatibility". Specifically,

  > MAJOR version when you make incompatible API changes,
But does not define what "compatible" means.

In a statically typed language, virtually any change, including an addition, can be construed as a "breaking change". For any proposed change to Rust, I could write code that that change would break. Adding a new method is a breaking change, for example, because I could have written a method with that name previously.

Note that this is also consistent with other languages: Java, for example, regularly makes very small breaking changes, even though it's widely regarded as a language which takes compatibility extremely seriously.

So, with that definition of 'breaking' not being particularly useful, we've done what you're supposed to do with Semver: lay out what 'compatible' means. The primary documents for this are https://github.com/rust-lang/rfcs/blob/master/text/1122-lang... and https://github.com/rust-lang/rfcs/blob/master/text/1105-api-...

I quote:

  > This RFC proposes that minor releases may only contain breaking changes
  > that fix compiler bugs or other type-system issues.
Without a formal language spec, and with certain bits of the language being generally under-specified, you can language lawyer forever about a term like "breaking". That's why we went through all of the possible ways that we could change the language, and spelled out what kinds of versions we're allowed to make those changes in. I would argue that Rust is far more clear about its version guarantees than many, many other languages.

It's important to remember that semver is a _social_ tool to communicate roughly what has changed. If this blog post were about a Rust 2.0, you would rightfully assume that it's going to be tough to upgrade to. That's not the case here. The actual impact of this particular change to the ecosystem is virtually nothing.


> Adding a new method is a breaking change, for example, because I could have written a method with that name previously.

Then you should have used a way to namespace your method, as is done in Objective-C with category methods.


All methods are namespaced but there is sugar which can cause a conflict because it searches the namespaces in scope for methods.

Imagine the trait (namespace) `Baz` defines a method `bar()`, and `Foo` is a type which implements `Baz`. `foo.bar()` is sugar for `<Foo as Baz>::bar(foo)`. Imagine `Foo` also implements `Quux`, another trait in scope, and `Quux` is extended to also define `bar()`. Now `foo.bar()` is ambiguous, because it could also mean `<Foo as Quux>::bar(foo)`. As you can imagine, though, this happens very rarely.

You can't avoid a conflict unless you require specifying the namespace at every call-site, which is in my estimation a much greater burden than editing your code to use the explicit namespace form if a conflict arises because of a new method in a dependency.
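The ambiguity above, and the explicit (fully qualified) form used to resolve it, can be sketched like this:

```rust
// Two traits in scope that both define `bar()`, as described above.
trait Baz {
    fn bar(&self) -> &'static str { "Baz" }
}
trait Quux {
    fn bar(&self) -> &'static str { "Quux" }
}

struct Foo;
impl Baz for Foo {}
impl Quux for Foo {}

fn main() {
    let foo = Foo;
    // foo.bar(); // error: multiple applicable items in scope
    assert_eq!(<Foo as Baz>::bar(&foo), "Baz");
    assert_eq!(<Foo as Quux>::bar(&foo), "Quux");
}
```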


We have a module system, but that doesn't help. You can still define methods on arbitrary types, subject to the coherence rules.

You can still write out a disambiguated form to distinguish the two, but it's not the default way. That's one of the reasons why adding a defaulted method to a trait is a minor change. See https://github.com/rust-lang/rfcs/blob/master/text/1105-api-... for more.


> Then you should have used a way to namespace your method, as is done in Objective-C with category methods.

Categories do nothing to protect against breakage due to adding methods.


It does.

Remember, any bugfix is a breaking change if someone was relying on the buggy behavior.


As mentioned in the OP, Rust really hasn't added any language-level features since 1.0, but looking forward there were two major features whose RFCs were accepted this cycle: impl specialization (https://github.com/rust-lang/rfcs/pull/1210#issuecomment-187...) and the `?` operator (https://github.com/rust-lang/rfcs/pull/243#issuecomment-1805...). The former will keep code from having to pay a de facto performance penalty for being generic (preliminary implementation at https://github.com/rust-lang/rust/pull/30652), and the latter is purely an ergonomic change to make our `Result`-based error handling more lightweight (preliminary implementation at https://github.com/rust-lang/rust/pull/31954).
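The ergonomic difference is easy to see in a sketch (the `?` operator was only accepted, not yet implemented or stable, at the time of this thread; on a modern compiler it works as shown):

```rust
use std::fs::File;
use std::io::{self, Read};

// With try!, each fallible line below would read e.g.
//     let mut f = try!(File::open(path));
// The accepted `?` operator shrinks each early return to one character:
fn read_config(path: &str) -> io::Result<String> {
    let mut f = File::open(path)?;
    let mut s = String::new();
    f.read_to_string(&mut s)?;
    Ok(s)
}

fn main() {
    // A missing file surfaces as Err rather than a panic.
    assert!(read_config("/no/such/file/hopefully").is_err());
}
```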

I'm personally hoping the latter one manages to get into the beta release for the next cycle, so that I can use `?` rather than `try!()` for my Rust tutorial at OSCON this year. :)


Well, beta for the _next_ release is already branched; it would be the one after that, at the earliest.


1.8-beta has branched, yes, but if it stabilizes this cycle it would make it into 1.9-beta (which would progress to 1.9-stable the week after OSCON), and our beta releases have proven themselves stable enough that I'm perfectly happy to use them in tutorials.


I would be surprised if ? were stabilized in the next 6 weeks after how contentious the RFC was.


Yes, I expect it to move more slowly than an immediate stabilization as well.


Alas, you're probably right. :) A man can dream!


Nice! I haven't used rust yet, but I'm really excited about it. I think Mozilla Research is doing some of the most groundbreaking work in CS lately.

The best part about rust is seeing the equally incredible work being done by the Servo team. To get a glimpse into some of their best work, here's a video where they talk about using the GPU to get better performance rendering a DOM than most native GUI toolkits can achieve on a sophisticated layout:

https://air.mozilla.org/bay-area-rust-meetup-february-2016/#...


Yes, absolutely, if you haven't seen the WebRender video already you need to watch it. I haven't been this excited about a new technology in a while.


I've been devoting a lot of my daylight hours to writing Go, but with Rust at 1.7 and Go at 1.6 I may have to rethink that. ;)


Why not both? :) Dropbox has an enormous amount of infrastructure in Go, but some of their most performance-sensitive software is in Rust. There's no need to dedicate yourself to a monoculture.

In any case, let's not put too much stock in version numbers. :P


I second using both. Go has moments where it shines and same with Rust. No reason to pick a side and commit...not sure why so many feel they have to...

I don't think anyone would argue with writing a web server in Go right now compared to Rust, given the maturity of the Go ecosystem around writing web servers.


That's one really nice thing about Go: you have an HTTP stack and a JSON encoder/decoder integrated into the standard library, so you don't have to worry about which one to use if you're happy with the basic one.


But if you go to Rust, your Go skills will start to rust. :)


And if you go with Go, your Rust skills will go away?


Clever, you deserve an upvote for that :)


Sorry, attempted snark at the version numbering. In all honesty, I think Rust looks quite interesting; it's just that I want more experience in Go before I look at anything else.


Why would you abandon Go for Rust without reason? Go does certain things better than Rust and vice versa.


Go Rust!


Also at http://rust.godbolt.org/ should you wish to see the assembly code generated too.


Thanks. It's interesting to see how the generated code changes across rustc versions!


llvm version changes too, so it is also that.


Thank you, Rust team! It's like Christmas every six weeks! :D


There was a recent effort by Google to speed up SipHash, it's called HighwayHash: https://github.com/google/highwayhash - it includes a compatible, optimized SipHash implementation (1.5x speedup), a variation on that that is not compatible but similar, SipTreeHash (3x speedup on top of optimized implementation) and a new algorithm, HighwayHash, that yields another factor of 2-3. I haven't yet used it myself but it looks interesting and might be relevant to Rust?

Edited to add that Google's implementation targets Haswell and later (AVX2); I don't know about the performance of an implementation targeting older CPUs, which would of course be very relevant for Rust.
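For context on why this matters to Rust: HashMap hashes every key with SipHash by default, and the hasher is a pluggable type parameter, which is where an optimized implementation could be dropped in. A sketch using today's std names (DefaultHasher postdates this thread, and std only documents it as SipHash-based currently):

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher; // SipHash-based in today's std
use std::hash::BuildHasherDefault;

fn main() {
    // Behaves like HashMap::new(); the third type parameter is the seam
    // where a faster SipHash (or HighwayHash-style) hasher would plug in.
    let mut map: HashMap<&str, i32, BuildHasherDefault<DefaultHasher>> =
        HashMap::default();
    map.insert("siphash", 1);
    assert_eq!(map["siphash"], 1);
}
```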


I believe this is being discussed here: https://github.com/rust-lang/rust/issues/29754


Oh nice, I wasn't aware that this is being discussed already. Thanks!


I wish the Rust team invested more into tooling and maybe introduced an officially supported IDE, because in the end tooling is more important than the language itself. Java is not the best language, but it's the obvious choice for many people because of its IDE integration.


> maybe introduced officially supported IDE

I would much rather they don't do that and instead offer tools that can answer IDE-like queries (e.g. "Who calls this function?", "Where is this identifier defined?", etc.) and let people integrate that into Emacs, Vim, Atom, and other editors and IDEs. This brings all the communities together and doesn't dictate one particular tool over all others.


They have:

https://www.rust-lang.org/ides.html

>We propose a new tool, an 'oracle' which takes input from the compiler, maintains a project-wide view of the code and its type information, and provides data about the project to IDEs and other tools. The oracle is a long-running daemon and presents an API via IPC.

Which is a great idea. But it is not implemented yet (AFAIK).

Personal opinion: at this pace, Rust will absolutely be one of the best languages in terms of usage/tooling within the next 5 years.


Similar to that, I really like how in IntelliJ I can use my Maven and Gradle build scripts as the IDE project's property file. My source checkout doesn't need to include any IDE-specific files, but the project can still be instantly imported into an IDE.


Speaking as someone who uses Java every day and has built a company that mainly uses Java, I can tell you the Java tooling is fairly irritating (and I like Java).

Java doesn't have really good canonical tools that ship with the JDK, nor is there necessarily an industry-preferred choice (Javadoc is probably the only exception). This of course includes more than just build tools. The fragmentation is huge.

For example, Java doesn't have a defined format tool like gofmt or rustfmt, nor does it have a preferred build/dep system like Cargo (there are several: Maven, Gradle, Ant). Even more annoying, unlike Rust's Racer, code completion and refactoring tools for Java are tightly coupled with the IDE, and there are really only two choices: morbidly obese and broke (Eclipse), or trust-fund expensive but works (IntelliJ).

With Racer you can use whatever editor or IDE you like. I wish I had something like that for Java.


> trust fund expensive but works: intellij.

I think calling IntelliJ "trust fund" expensive is rather extreme. My company pays substantially more for my Visual Studio license than I do for my personal JetBrains All-Products subscription; at $150/yr (which breaks down to $12.50/mo), it saves me far more than 5 hours of time across all the work I do in IntelliJ, DataGrip, PyCharm, WebStorm, and soon Rider (7 seconds to open a solution that takes Visual Studio 30 seconds to load, and without the constant lockups; now I just need it to be able to run/debug ASP.NET apps and run NUnit tests).


Yes, I was sort of exaggerating, because it's the main complaint people have about IntelliJ.

We too use IntelliJ as well as JRebel, and these tools pay for themselves fairly quickly.


IntelliJ is free as in beer and speech (Apache Licensed) for the Community Edition.


We do invest a lot into tooling. There just hasn't been a huge amount of work put into IDE integration yet. There's lots of other types of tooling: Cargo, rustfmt, clippy, Racer, good error messages…


The only "critical" thing missing from this list would be a rustfix. Sadly, this is probably only worth it once the AST/compiler plugins are stabilized.


Well, rustfix is useful when we're breaking code, but we aren't doing that anymore, other than deep soundness fixes in the type system which would be very hard to automate a solution to.


As others have mentioned, we have lots of tooling that isn't specific to any IDE. For more information about IDE support in Rust, see this page: https://www.rust-lang.org/ides.html

TL;DR: There exist Rust plugins for Eclipse, IntelliJ, Visual Studio, Atom, Emacs, Sublime Text, Vim, and VS Code.

We're also working on ways to improve the compiler to better support the IDE use case: https://github.com/rust-lang/rfcs/pull/1317


I followed https://areweideyet.com/ and picked Atom, and it is far from what I would like.

I will try to give Rust another chance in half a year and see if it has improved.

But I would suggest looking at Swift's SourceKit https://github.com/apple/swift/tree/master/tools/SourceKit or OmniSharp http://www.omnisharp.net/. SourceKit provides really nice Swift support in Xcode. If you can deliver the same thing built on top of VS Code or IntelliJ IDEA, with an integrated debugger, I am sold.


What was missing for you in Atom?


I would like to see:

- Be able to see inferred type

- Be able to see documentation for types and methods

- Better auto completion: it was sometimes suggesting nonsense, and after selecting a suggestion it placed the cursor in the wrong place

- Integrated debugger

- Go to definition (including Rust std)

- Syntax checking

- Integrated formatter


The last three points on your list work in Atom :)

> - Be able to see inferred type

https://github.com/phildawes/racer/issues/304

> - Be able to see documentation for types and methods

https://github.com/phildawes/racer/issues/415

> - Better auto completion was sometimes suggesting non senses and after selecting it placed a cursor on wrong place

Seems like a bug, never experienced this myself though.

> - Integrated debugger

I'm currently using nemiver and atom-gdb to set breakpoints from within Atom and launch nemiver. But you're right: A great integrated debugger would be awesome :)

> - Go to definition (including Rust std)

This works with atom-racer: Ctrl+Shift+P and type "racer find definition"

> - Syntax checking

https://atom.io/packages/linter-rust

> - Integrated formatter

https://github.com/rust-lang-nursery/rustfmt/blob/master/ato...


Thanks for the reply!

> This works with atom-racer: Ctrl+Shift+P and type "racer find definition"

Nice, unfortunately it works only for my methods and not for std.

> - Syntax checking

Oh, you're right; I had the wrong path to Cargo, so it didn't work out of the box. Still, it's not like in an IDE: you don't see errors as you type, you have to save first.

> - Integrated formatter

Cool, thanks!


> Nice, unfortunately it works only for my methods and not for std.

That's strange. It works for me for std, too. Have you set RUST_SRC_PATH?


> I wish Rust team invested more into tooling

Among similarly new languages, I think the strength of the tooling is one of the notable positives of Rust.

> and maybe introduced officially supported IDE

While I get that some programmers prefer IDEs and that IDEs might actually be generally preferable for some Rust use cases, I think there is a lot more utility in the Rust team focusing on lower-level tooling (which IDE and enhanced-editor-plugin developers can leverage) rather than focusing on an officially supported IDE.


> I think there is a lot more utility in the Rust team focusing on lower-level tooling

Agreed, I hope that a good "Language service" API is on the horizon, so that tool authors don't need to reinvent the wheel in order to support parsing, error display, highlighting, completions and so on.

A step in the right direction is the possibility of getting machine-readable (json) error messages from the compiler.

Being able to just call the compiler as a library to produce ASTs from source would be very handy, though it would probably complicate the compiler to provide useful output from broken/incomplete source, which is critical for tool use.


That'd be great to have and I think we'll see one in the future. No idea about the timeframe though.


It's being worked on, but it will be a bumpy road if the compiler simultaneously is going to be made more incremental.

https://github.com/rust-lang/rfcs/pull/1317

Nim does a good job with its IDE support:

http://nim-lang.org/docs/idetools.html


We have been working hard and investing a lot into tooling, it's just not done yet. https://www.rust-lang.org/ides.html is a rough outline of what's in the pipeline for IDEs specifically.


>tooling is more important than the language itself

What makes you say this? Many programmers don't use IDEs or anything.


You don't use any tools for package management, documentation, linting, testing and debugging?


I think the conflict here is that, for the "non-IDE" programmer, Rust's tooling is best-of-class (IMO). If your idea of an ideal programming environment is running vim/emacs/sublime/atom and maybe a terminal, Rust's tooling is great.

However, some programmers like an IDE more. Which is fine, but for this type of programmer, Rust still has a long way to go. There are some immature plugins for existing IDEs, and even a couple "rust-specific" IDE projects, but nothing near what (say) Java or C# have.


As a vim user I can attest to this. Rust tooling is better for me than any other language I have ever used. Only thing I ever wish for is a REPL, which is a big ask for a language like Rust.


While it's not official, rusti[1] by murarth is a functioning Rust REPL. IIRC, it's used to drive the Rust Playground[2] (don't quote me on that).

[1]: https://github.com/murarth/rusti [2]: https://play.rust-lang.org/


Rusti isn't 'there yet,' unfortunately. It is forced to recompile the entire project for every line entered. As far as I know, I also can't do things like set 'breakpoints' that will pause execution and drop me into the REPL, like Ruby's binding.pry.

This is a very hard feature for compiled languages; it's just a con of this execution model (which has a lot of pros).


I believe the playground is just a well-sandboxed Rust compiler. It doesn't have any of the fancy features of a REPL, such as allowing you to refer to the state of prior runs.


Rust has tools for package management (Cargo), documentation (rustdoc), linting (clippy), testing (the built-in test runner), and debugging (gdb and lldb integration).

It also has tools for code completion (Racer), build orchestration (Cargo again), benchmarking (the built-in benchmark runner), compiler version management (multirust), and automatic source code formatting (rustfmt, though this one's a WIP).
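The built-in test runner mentioned above works on plain functions marked #[test], discovered and run by `cargo test` with no external framework. A minimal sketch:

```rust
fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Picked up automatically by `cargo test`.
#[test]
fn add_works() {
    assert_eq!(add(2, 2), 4);
}

// A main() so the file also builds as a normal binary.
fn main() {
    assert_eq!(add(40, 2), 42);
}
```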


This is true. I tried to do something with Piston, but I couldn't figure out what the type of anything should be, because I couldn't inspect functions to see what their parameter and return types were. I could follow examples, but they tended to use inline functions, which don't need to specify types.

In Swift you can just Cmd+Click to do this (assuming SourceKit hasn't crashed) and it's very useful for that kind of exploration.


It took a long time for Java IDEs to not suck -- and complex GUI applications aren't exactly the open-source community's strong point.

The way forward on that front would require, IMO, some external actor to smell profit in a Rust IDE. That has been sloooowly becoming more likely as the stdlib inches toward stability.


There are new languages with quite nice IDE support e.g. Swift or Kotlin.


Both of those examples are languages designed by IDE vendors, and their real IDE support comes from those vendors (Apple in the case of Swift, JetBrains in the case of Kotlin). It takes a bit longer when you can't just task an existing IDE development team to build your language support :). That said, I think that Rust has made fantastic progress towards IDE integrations so far.


I'm not sure Java is a good benchmark to compare a young language against when it comes to IDE support...

That said, in my brief experience with rust I got the impression that their tooling is very high quality for something so new. Have you seen how pretty their compiler error messages are, for example?


So I don't know a whole lot about Rust. Can anyone give a summary of why/when I should use Rust?


Someone asked a similar question on /r/rust the other day, here's what I had to say:

Rust is a programming language. It's a general programming language, so it's good for a large variety of things. Different people have different reasons for using Rust. There are three large constituencies, as I see them: systems programmers, functional programmers, and scripting language programmers.

Before I get into details: all generalizations are false ;)

Systems programmers come to Rust because it's able to do the lowest levels of software: operating systems, device drivers, stuff like that. Where it improves on existing systems languages is "safety." The Rust compiler does a lot of compile-time checking to make sure you don't do things wrong. Existing languages force you, as the programmer, to double check your work. We have the compiler double-check your work, and force you to get it right.

Functional programmers see all that static checking, and it feels like home. But a home where they gain significant speed. Functional languages can be fast, but not always C-level fast. Rust is. It's easier to get that level of performance out of Rust.

Scripting language programmers come to Rust for two reasons: the first of which is to write extensions to their language for speed. You can write a Ruby gem in Rust, rather than in C. This is useful because this low-level stuff is not their expertise, and so the extra checks are really nice. Related: we have a lot of the nice tooling that scripting language folks have come to expect from their language. No need to write makefiles, we have Cargo. The second is sort of related, but these checks also mean it's easier to learn low-level stuff, so if they want to expand their universe, Rust is a nice way to do so. We're seeing a lot of people for whom Rust is their first systems language.

I didn't know where to put this one, but Rust focuses on a concept called "zero-cost abstractions." This means that a lot of our features that feel very high level are extremely efficient. You don't write manual for loops, you use iterators. You don't deal with pointers directly, but with references. Structures like Box and Vec handle memory management: you don't need a garbage collector, and you don't need to manage memory manually or explicitly call free().
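A small taste of what that paragraph describes: iterators and Vec/Box feel high-level, but compile down to tight loops and plain heap allocations, with no garbage collector involved.

```rust
fn main() {
    // Iterator chain instead of a manual for loop.
    let squares: Vec<i32> = (1..5).map(|x| x * x).collect();
    assert_eq!(squares, vec![1, 4, 9, 16]);

    // Heap allocation without malloc/free or a GC.
    let boxed = Box::new(42);
    assert_eq!(*boxed, 42);
} // `squares` and `boxed` are freed here automatically, at points known at compile time
```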

These are very broad strokes. Lots of other people have other reasons too. But that's some of them.


Thank you for your insight. I have but one question.

What does Cargo the rust package management tool have to do with make, the declarative build system? It would seem they are unrelated to an outsider such as myself.


Part of managing packages is how those packages are actually built. For example, let's say that I'm starting a project in C. I'd write some code, and write a makefile to make it easier to build, so that just a 'make' suffices. In Rust, you use Cargo instead of make: "cargo new foo" will give you a Cargo.toml pre-filled, and a src directory to put your code in. "cargo build" will then build your project.

Work goes on, and now I want to use, say, a library for a hashmap of some kind. In my C project, I would probably add a git submodule, and then modify my makefiles to build the dependency and configure how to link it in with my code. With Cargo, I add a line to my Cargo.toml and an "extern crate bar" to my source file, and upon the next "cargo build", Cargo will handle the downloading, compiling, and linking of that library, as well as any sub-dependencies it has.

Now, it's also true that make is broadly more general than Cargo, and so in some circumstances, you'll use them together. I have a hobby OS project, for example, and so I have a Makefile which calls nasm to compile the assembly, cargo to compile the Rust, and ld to link the two together. Eventually, I would hope that Cargo can do it all, and I could probably make it work, but it's not inherently bad to use make either. Stuff like operating systems are the only case where I've felt that need; Rust itself is currently built with Make, because it pre-dates Cargo, but we're in the process of cargo-ifying our build as well.

Does that make sense?


Yes thank you for the clarification. I have been mostly pleased with the things I've seen and heard from the rust community and I really should dive in.

I am pleased to hear that Cargo, rather than attempting to be a Rust-specific make replacement, is a tool that obviates the need for make or make-like tools by automating only the things you would have used them for.

Make and Cargo together sounds like just what I wanted to hear, and your hobby OS project sounds like fun.

Finally, I think it's great that Rust will be using Cargo itself soon, but I am curious if the reverse is possible -- if I want to put in the extra effort, can I build my own rust software in make alone WITHOUT Cargo?


You can absolutely use Rust without Cargo, you'd just call `rustc` yourself directly.


That's just what I wanted to hear!

Thanks again.


To elaborate on Steve's comment, Cargo fundamentally operates by calling rustc itself. You can pass the `--verbose` flag to Cargo to see the commands that it generates.


A lot of scripting language folks I've talked to like Rust not because it lets them write extensions or whatever, but because static typing is awesome (and their use case works perfectly well with Rust as much as it does Python). In the static typing world, C++ has its own set of problems (namely, having to worry about pointers), and there's not much love for Java amongst some communities. Both Rust and Go provide an interesting new alternative here. Some extra static typing and free speed; sign me up!


I feel like for many of those people something like OCaml/Haskell/F#/Scala (which have been around for many years) would be better options. It's very rare to actually need the very last ounce of performance you get by moving to a non-GC language compared to a fast typed GCed language, whereas there is a real cost to having to manage object lifetimes in Rust.


> there is a real cost to having to manage object lifetimes in Rust.

I disagree. For the most part, you don't have to worry much about lifetimes in Rust once you have programmed in the language for a bit. It's a learning curve, like any other (functional languages have their own learning curve). There are cases now and then when you do have to think about it, but mostly it's either quickly fixing mistakes caught by the compiler (like any other type error, and this reduces as time goes by), or writing the correct code from the get-go.
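
To make that concrete, here's a minimal sketch (the function name is made up) of the kind of lifetime annotation you end up writing: the compiler checks that the returned reference doesn't outlive either input, and most of the time you write it correctly on the first try:

```rust
// `longest` ties the returned reference to the lifetime 'a of both inputs;
// the borrow checker rejects any caller that would let the result dangle.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let greeting = String::from("hello");
    let result = longest(&greeting, "hi");
    println!("{}", result); // prints "hello"
}
```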

That doesn't make Rust a better choice than the functional languages, but it certainly doesn't make it worse.

(of course, there are other reasons as to why Haskell/etc might be a better option for these people)


This is a really excellent summary. I think it says a lot more than the previous summaries I've heard, which put a lot of emphasis on the safety bit. That is of course a very important aspect, but I feel like over-emphasising it sells Rust short. I'm most excited about it for parts 2 and 3; part 1 is just a bonus.


Thanks. It's easy to get lost in the safety bit, because from a PLT perspective, it's the most interesting. But there's a lot more going on.

Sean Griffin, the current maintainer of ActiveRecord, has been writing an ORM in Rust. He specifically has been saying quite a bit that the safety aspect is almost irrelevant to him, he sees Rust as a "practical Haskell".

It's also true that there's a lot of stuff we have that's _because_ of safety but isn't interesting because of safety. For example, memory safety is at the core of our concurrency story, but the end result is "compile-time errors for concurrency bugs", which is much more interesting than "safety".
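
For instance (a minimal sketch, not a claim about any particular API design): sharing a counter across threads forces you to reach for `Arc` and `Mutex`; dropping either one is a compile-time error rather than a runtime data race:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each bump a shared counter. Without Arc + Mutex this
// would not compile, because a plain integer can't be mutably shared across threads.
fn parallel_count(n: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", parallel_count(10)); // prints 10
}
```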

TL;DR: marketing is difficult. Messaging is tough.


Where do Java, Go and C# programmers fall? Perhaps somewhere closer to more complex, more heavily engineered scripting languages? Those three make up a good swath of programmers, but the categories of systems, functional and scripting languages don't map so well onto them.


Rust excels at writing fast and efficient libraries that are portable and caller-agnostic. Java, C# and Go all have GC-endowed runtimes, so they may not fare as well in that area.

On the other hand, GC allows for faster and easier programming without the need to reason about lifetimes, like you would have to do in non-GC-languages like C and Rust. Java, Go and C# are fast enough to do a lot better than scripting languages, plus they are statically typed and safe.

I think that what Rust provides for users of those languages has a more limited scope: mainly squeezing out the last drops of performance, and especially doing some realtime programming.

Edit: One additional pro of Rust is that it manages resources like files and sockets very robustly with its type system, which may help prevent bugs. That's a thing GC won't do for you.
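
A small sketch of what that looks like in practice (the file path and function name here are arbitrary): the file handle is closed automatically as soon as it goes out of scope, via the `Drop` trait, with no `close()` call to forget:

```rust
use std::fs::{self, File};
use std::io::{self, Read, Write};

// Write then re-read a file; the write handle is flushed and closed as soon
// as it leaves the inner scope, without any explicit close() call.
fn write_and_read(path: &str) -> io::Result<String> {
    {
        let mut file = File::create(path)?;
        file.write_all(b"hello")?;
    } // `file` is dropped here: closed automatically

    let mut contents = String::new();
    File::open(path)?.read_to_string(&mut contents)?;
    fs::remove_file(path)?; // clean up the temporary file
    Ok(contents)
}

fn main() -> io::Result<()> {
    println!("{}", write_and_read("demo.txt")?);
    Ok(())
}
```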



TL;DR: Java is awesome, Rust is awesome, let's code even more awesome stuff!


Thank you for the summary!


You're very welcome.


I'd say Rust is a generically sensible language, in terms of the modern state of the art of language design (that is to say, it has a powerful but lightweight type system and is designed in ways that are amenable to writing correct programs) - this sounds like a low bar but is surprisingly rare (at least for a language that qualifies as at all mainstream). Distinctive things are that it compiles to native code and has manual but safe memory management (i.e. you have to tell it what the lifetimes of your values are, unlike a garbage-collected language, but you'll get compilation errors if you do it wrong, rather than silent memory leaks or crashes).

It's a reasonable first choice for when you absolutely need to avoid GC, or need code without a runtime (e.g. for embedding into another language). IMO a lot of people overestimate their performance requirements though (or assume that all GC languages must be as slow as Python/Ruby, or as slow to start as Java), and would be more productive overall writing in a language that doesn't require manual memory management and then spending a little time profiling. So for general-purpose programming where the target is native code I would recommend starting with OCaml or Haskell (if VM languages are suitable then also consider F# or Scala), and only falling back to Rust if you've spent a bit of time profiling and optimizing your app (or a representative benchmark/example) and find you really can't get the performance you absolutely need.


I started using it for distributable command line executables. It compiles everything into a single binary and is easy to deal with. The upcoming sentry command line executable uses it, for instance: https://github.com/getsentry/sentry-cli


I've been slowly iterating toward building "safe" abstractions around the Erlang NIF interface in Rust, because a broken NIF can bring down the entire BEAM VM, and I very, very much want to have the kinds of compile-time safety checks that Rust provides me when writing NIF modules for Erlang software.

For me it's become, "Anywhere I would use or have to directly interface with C; evaluate using Rust instead."


I have been working on a project for writing safe NIFs in Rust. I feel like I have the rough details of the API sketched out, but there is still work to do. https://github.com/hansihe/Rustler

There are still some rough edges on it, and if you see any obvious improvements, I would be more than happy to take suggestions or pull requests :)


Holy shit I hadn't even thought about writing NIFs in Rust.

mind: blown.


I wrote a post recently that gives a summary of my experience with the language so far and attempts to answer why and when someone should use it: https://www.jimmycuadra.com/posts/the-highs-and-lows-of-rust...


Any ideas how rust's cross compilation compares with golang?

I am amazed at how easy it is to cross-compile to the different platforms supported by golang. Curious how Rust compares.


The current situation is "it works, but is not as easy as we'd like." We have plans underway to make it very trivial, but the needed infrastructure work isn't quite there yet.

A summary is "every Rust compiler is a cross-compiler, but you need a copy of the standard library for your target platform available." Getting said copy is what takes the work. In the future, we want it to be as easy as a call to the command-line, but for now, https://github.com/japaric/rust-cross is a good resource.

See also https://github.com/japaric/rust-everywhere , which is very interesting.


I'm maintaining a docker image to cross-compile at https://hub.docker.com/r/fabricedesre/rustpi2/. It's Raspberry Pi 2 specific since it relies on a tweaked sysroot to link against OpenSSL and a few other libraries.

The way I'm using it is:

    docker run --name rustpi2 -v `pwd`:/home/rustpi2/dev/source -v $HOME/.cargo:/home/rustpi2/.cargo fabricedesre/rustpi2 cargopi build

(cargopi is just equivalent to cargo --target=armv7-unknown-linux-gnueabihf)


Are there any plans to have a stable version and not introduce breaking changes after that? Something like 2.x for Python. I could not find any reference so far.


We already have a stable version. The only breaking changes we are seeing are soundness-related bugfixes, where crates stop compiling because they relied on buggy behaviour.


Not if there is a bug in the soundness.


When will it reach homebrew?


Looks like 15 hours ago.


250K for a hello world example... where is this world going to! :)

I am sure it can be trimmed, but we are getting a little oblivious to common sense. This is a systems language.


That 250K is additive, not multiplicative. Hello World, which might be 10K in C, will be 240K+10K=250K in Rust. A larger program that would be 2M in C would be 240K+2M in Rust; the significance of that 240K reduces when you write actual programs (instead of hello world).

The size is due to a statically linked stdlib (also jemalloc, which you can decide not to use) -- your C program is tiny because your system has a stdlib it can dynamically link to. Rust can do this too, it's just not the default since in most cases this additive extra binary size isn't a problem.

Being a systems language doesn't mean we should forgo sensible defaults.


In theory you could tell the compiler to identify and statically link only those functions needed to make printf work (or a conservative approximation thereof).



Yeah, that's just running LTO (`-C lto`). This brings the binary size down a bit, though you also need to opt out of jemalloc to get a good impact.



By default Rust statically links the standard library into every program. For binaries it also statically links to jemalloc by default to use as the allocator. This is a large constant factor that makes trivial programs significantly larger than C, but like all constant factors it becomes insignificant when n is large.

Binary size is not a major concern for most programs, even those written in a systems language. If it is a concern for your use case, you can use dynamic linking and the system allocator to get the size down to the same scale as C.


    $ cat hello.rs
    fn main() { println!("Hello, world!") }
    $ rustc hello.rs -C prefer-dynamic
    $ ./hello
    Hello, world!
    $ ls -al hello
    -rwxrwxr-x 1 lifthrasiir 9064 Mar  4 14:24 hello*
See other comments for the explanation.


The great wit, Stan Kelly-Bootle, once observed (about Turbo C): "It generated a 40KB executable when I wrote my first 'Hello, World!' program. After that, I dared not say 'Goodbye.'"


On the contrary, when Borland C++ Builder (i.e. "Delphi for C++") generated an executable for an app with a GUI, my first reaction was: it's so small, there's no way this can be a standalone executable, I can't give this to a client... :)


Finally someone understands what I wrote, thanks man


I read the exact opposite point.


You can go under 1k if you don't use the standard library: https://github.com/retep998/hello-rs


You can get down to 151 bytes if you are ready to do some linker shenanigans: http://mainisusuallyafunction.blogspot.rs/2015/01/151-byte-s...



