Rust's 2017 Roadmap (rust-lang.org)
419 points by steveklabnik on Feb 6, 2017 | 266 comments

I'm very excited about improvements to the maturity of the Rust library ecosystem. I'm happy with Rust's syntax, the borrow checker doesn't bother me that much, and the build tooling works well (though more speed would definitely help).

What it comes down to again and again with projects I think about using Rust for is "how much extra effort is it going to be compared to a language with good libraries for this?" Often I don't use Rust just because the cost/benefit of having to (re)write code that would otherwise be in a library doesn't make sense.

I'm still bullish on Rust. I know it takes time and it's an unfair competition against the likes of Node and Python (with a zillion library authors), or Go (with a huge company dedicated to a kickass standard library). But nonetheless that's the playing field, and I think a stronger library ecosystem is probably the most important thing for Rust adoption right now.

Well, that and not having to use nightly rust for serde, but you already fixed that.

> Often I don't use Rust just because the cost/benefit of having to (re)write code that would otherwise be in a library doesn't make sense.

On the other hand, if you're looking to fill out your resume or get some open source credentials, you can see this as another frontier opening up (depending on how popular you think Rust will be in a few years). Become the solution for that need in the Rust ecosystem, and there are probably massive reputational benefits to be reaped. In some ways, reputation is better than money, because when invested wisely it can have a much higher rate of return, and depending on location and circumstance, it can be leveraged into high-paid employment.

One person's problem is another person's opportunity.

FYI, serde derive macros already run on stable, since 1.15 :)

> Plans include a new book,

You should consider publishing an official, printed book. I would totally pay $30-40 for something like this. And once it's already written, the actual publishing shouldn't be too time consuming (but idk lol). I think that there's a lot of people who'd buy it just to support the project.

On one hand, I do have environmental concerns, but on the other hand, I feel like my retention rate with print is like 70% as opposed to like 40% with digital.

The new book will be published by No Starch Press!

Rust needs a Rust book not written by the Rust developers. "Rust for Dummies", if you will.

Then see O'Reilly's upcoming "Programming Rust" by Jim Blandy, who, despite working at Mozilla, has never been involved in Rust development, and has many years of experience developing SpiderMonkey, GDB, SVN, GNU Guile, and Emacs. It's still in early access, but 17 out of 20 chapters are available.


I've been getting early release PDFs of this book for a while now. I highly suggest it for moving from beginner to getting serious. I still think the Rust book is a better first read, but this is much more in depth and detailed.

One of the sections that was most useful to me was comparing how memory is laid out in a few different languages with a few different concrete code examples. These comparisons really helped me "get" borrowing.

Did you pre-order the print book? Does that get you early access PDFs? Or does O'Reilly charge separately for the PDF and print?

I didn't pre-order the print, I just ordered the PDF version. If I remember correctly the print order didn't come with the PDF, but you get a discount if you get both. I'd check the site for details since this was getting on a year ago and my memory is poor.

Annoyingly, O'Reilly doesn't do a bundle. In the end, I found a 40% discount code and bought both.

I'll second that recommendation.

Why couldn't Rust's developers write "Rust for Dummies"? I feel like you're connecting dots that aren't actually connected. Obviously having more books from different authors/perspectives is great for Rust, but I don't see, prima facie, why a non-Rust-developer author would do a better job of "Rust for Dummies" than a Rust developer.

For one, I would be surprised if a non-"expert" (which is, I presume, why you're so dismissive of Rust developers) could write a book that gives an accurate/useful mental model of Rust: teaching is the true test of one's knowledge about anything. Additionally, people paid to work on Rust and on Rust documentation/teaching materials (i.e. "Rust developers") are the people most regularly interacting with all sorts of beginners, and thus are likely to have great insight into, for instance, the common difficulties people hit, rather than just anecdata about their own personal experience/background (which is good too, but is only one point of reference for things that people find hard).

Rust developers can write beginners' books (indeed, the Rust book is designed to be this, and the second edition succeeds better than the first, I've heard) just as much as non-Rust-developers can write expert books.

You're usually best able to explain something just after you learned it yourself. After a while you forget what was difficult about the thing you learned.

A C programmer, a Python programmer and a Java programmer should write the book together, with expert insight from a Rust pro.

I'm not sure I agree with the anecdote in your first paragraph, and indeed I preemptively addressed it in my comment: one's own experience is just one way in which something can be hard to learn, whereas people who are regularly teaching and interacting with beginners have a broader view on what people find hard. My experience is, broadly speaking, good teachers are those that have had a lot of practice teaching their material.

It's probably a "forest for the trees" problem... the developers of a language will have their perspective colored by low-level implementation details.

As an example, I found the explanation of trait objects incredibly confusing when I read the book. They confused the subject by immediately talking about static vs dynamic dispatch of functions, which is really an implementation detail and performance consequence of using them and not a part of what they are logically. An outsider might have started with a simple explanation of what a trait object is (an instance of a trait where the compiler cannot determine the concrete type at compile time) and make that clear before diving into the consequences of choosing to use one in your code.
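(To illustrate the kind of outsider-first explanation being described, here's a minimal sketch with made-up Shape types, not taken from the book: the trait objects are the boxed values whose concrete type is only known at runtime, and that's what makes the calls dynamically dispatched.)

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }
    struct Square { s: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    impl Shape for Square {
        fn area(&self) -> f64 { self.s * self.s }
    }

    fn main() {
        // shapes holds trait objects: the concrete type behind each
        // Box<dyn Shape> is erased, so area() is resolved through a vtable
        // at runtime (dynamic dispatch).
        let shapes: Vec<Box<dyn Shape>> = vec![
            Box::new(Circle { r: 1.0 }),
            Box::new(Square { s: 2.0 }),
        ];
        for s in &shapes {
            println!("{}", s.area());
        }
    }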

The Rust team is lucky enough to have a dedicated person (Steve Klabnik) writing documentation and teaching, who, as previously mentioned, gets to see a lot of the ways in which people struggle to learn Rust and practice different approaches for teaching it. For the most part, the dark corners of the language implementation are filtered from the compiler hackers, through Steve and his experience, into the actual user-facing documentation.

I think the specific case you mention was a problem more caused by a desire to have some documentation for 1.0, even if it isn't perfect, rather than just leaving the feature undocumented. That page is originally[1] based on a blog post[2] that is targeting people who want to know how they work internally, and there wasn't much time before 1.0 for feedback, revision and iteration of it (there was one cycle, which improved it in the direction you want, but more is always better). The second edition[3] doesn't have the 1.0 crunch date, and benefits a lot from feedback on the existing teaching material (like yours here) and the extra experience the team has had teaching Rust, so, by all reports, does a much better job. (It doesn't seem to currently have anything about trait objects, but I'm sure it will soon.)

[1]: https://github.com/rust-lang/rust/commit/dbccd70a5736fd2e898...

[2]: http://huonw.github.io/blog/2015/01/peeking-inside-trait-obj...

[3]: http://rust-lang.github.io/book/

Yes, this is true.

(And the second edition will have trait objects in it, it's not there yet though.)

Just look at K&R. It's so... bland and boring. As a 13-year-old I couldn't keep reading it.

There are several external books. I started from this one (via Safari):


"This title has not yet been released."

You can get early access to it via various places, e.g. http://shop.oreilly.com/product/0636920040385.do

There are multiple already, including an in-progress one by O' Reilly. I've also heard rumors of another one getting started as well.

Oreilly has an early release available: http://shop.oreilly.com/product/0636920040385.do

I have purchased it and it's pretty good. Although it really seems to be geared more towards experienced systems programmers. The online book is much better for beginners, IMO.

The online book is undergoing a rewrite at http://rust-lang.github.io/book/ and will be published in print as well, it is unrelated to the book you linked, (but that book is also great).

I don't agree. The O'Reilly book is much gentler and more pedagogically sound. Many things in the online book are discussed before being introduced; not so in the O'Reilly book.

I find that I learn better and trust the resource more if earlier lessons have hooks for later topics. It helps my mind piece everything together. But to each their own.

In this case, it wasn't so much "deliberate hooks" as it is "I took a lot of care to not do this, but then when pull requests came in to add things that were left out, sometimes they introduced forward references that we didn't realize at the time."

Makes sense. I'll have to go through the rewritten parts again, as I'm having a heck of a time trying to pass the TcpStream object from Tokio into my actual service, as part of a personal learning exercise (https://github.com/mike-bourgeous/cliserver_rust/issues/6). I haven't broken down and asked online yet, partly out of sheer determination and partly because I'm in the middle of looking for a new place.

It will be printed by No Starch.

Lol, you guys are way ahead of me. And nostarch too.

Non-Rust-user here. I tried Rust about a year ago and gave up on it as not ready. Unfortunately, I don't think this roadmap addresses the problems that made me reject it the first time around.

The main problem I have with Rust is that it's not up to the task of dealing with C APIs, particularly POSIX. The issue I got stuck on was https://www.reddit.com/r/rust/comments/47a0s3/dealing_with_v... . There are two possible things Rust could do that would make me give it a second chance: either commit to maintaining bindings for all of POSIX and libc as part of Rust core, or fold rust-bindgen into core and get it into shape.

Creating higher-level libraries like Tokio is nice for some use cases, but right now Rust doesn't have a working safety valve for things C can do that Rust can't. This greatly magnifies problems like lack of bindings for select() (see http://esr.ibiblio.org/?p=7294&cpage=1 ; I too ran into that problem, and lost a few days to it.)

Did you try the nix crate? It looks like maybe not. It should have all of that stuff already wrapped for you. For example: https://docs.rs/nix/0.7.0/nix/sys/termios/fn.tcgetattr.html (select is in there too https://docs.rs/nix/0.7.0/nix/sys/select/fn.select.html )

rust-bindgen has been improving a ton, and it is in fact on that roadmap, under

> Integration with other languages, running the gamut from C to JavaScript.

Happy to hear bindgen is getting attention; I didn't actually notice that when I looked through the roadmap.

I didn't try the nix crate at the time, but looking at it just now - it doesn't solve the portability issue. It defines struct Termios in https://github.com/nix-rust/nix/blob/master/src/sys/termios.... , with something #ifdef-ish branching on operating system, but not on CPU architecture. On quick inspection, I think it's probably incorrect on x86-32, and this crate is definitely a major liability for portability.

Sounds like you should open an issue; it's pretty much the crate for safe bindings, but as I'm sure you know, there's a lot of tiny details to get right.

(Also, I'll stop here and only reply on our Reddit conversation, ha!)


(The first few comments are mirrored between here and Reddit.)

I once said that Rust would never become really, Java-level, popular. Mostly because I thought it focused too much on performance to the detriment of elegance and productivity. I'm not so sure anymore. This is a step in the right direction. That said, what makes me most nervous about Rust is pointers and mutability being mandatory for certain things, rather than the absence of books. Maybe that's just me, though. Anyway, in my opinion, pointers and mutability should be something I think about once I'm optimizing the program, not something I do while figuring out the logic. As long as copying everything every time fits in my compute budget I don't get why I should be forced by the stdlib to do otherwise.

Anyway, huge fan of what the Rust devs are doing. It's truly awesome.

> I once said that Rust would never become really, Java-level, popular.

I like Rust, but IMO no language will ever again be Java-level popular (not even Java!) in the same way that e.g. there will never again be a band as big as the Beatles. Technological contexts have changed such that niche languages can now successfully thrive and counteract the lower switching cost that comes with a monoculture; while certain languages "in the middle" will still dominate, the long tail will get longer and subsequently lower any dominant language's market cap.


References (lifetimes and borrow checking) and mutability are there for safety, not speed.

Copying everything every time is a trivial (and very slow) solution to the memory safety problem. It just means that everything is on the stack (or at least that only one stack frame has a reference to any given object), so it is simply deallocated along with the stack frame. That's it. There's nothing unsafe about it.

What do you mean by saying mutability is for safety? That's a very unusual opinion.

Explicitly tracking shared mutability is for safety - obviously not mutability in of itself.

Copying everything every time isn't a solution in a multithreaded world if you actually want your threads to share data.

"sharing data" is quite illusory. Even if two threads have a reference to the same object, the CPU deals with it internally by making several copies, asynchronous message passing and locking, and in many cases it can lead to abysmal performance. It is often much better for performance to design parallel algorithms around shared-nothing i.e. local mutability + explicit message passing right from the start.

> "sharing data" is quite illusory. Even if two threads have a reference to the same object, the CPU deals with it internally by making several copies, asynchronous message passing and locking,

No. Two threads on the same CPU core really do access the same data(1) without delay, and no locking happens unless the programmer wrote some locking code.

Synchronizing the different cores or processors is another topic, but it's also typically dependent on the software.


1) But there is also reordering https://en.wikipedia.org/wiki/Memory_barrier and out-of-order execution https://en.wikipedia.org/wiki/Out-of-order_execution

Most of the time the implemented techniques do significantly speed up the execution. And it's mostly software design that initiates the slowdowns, not the CPU.

Even on a single core, there are several copies in different cache layers and synchronizing them is done by sending asynchronous messages. Sure, in that one particular edge case when the threads are sharing a core you're right, but this is not a typical scenario for multi-threaded applications. Most of the time, for high multi-threaded performance you want exactly the opposite - one thread per core and pinning threads to cores. And if you don't do anything, you can never be sure if your threads run on the same core or not, and you should assume the worst.

> And it's mostly software design that initiates the slowdowns, not the CPU.

This is quite a vague statement and I'm not sure what you really meant here. Software written using a simplified abstraction model (e.g. flat memory with stuff shared between threads, ordered sequential execution) that is much different from how the CPU really works (hierarchical memory, out-of-order execution, implicit parallelism, etc.) is very likely to cause "magic" slowdowns. See e.g. false sharing.

Also, algorithms designed around the concept of shared mutability do not scale. Sure, you may hide some of the problems with reordering, out-of-order execution, etc. To some degree it will help, but not when you go to the scale of several thousand cores in a geographically distributed system.

> See e.g. false-sharing.

It's also an effect of badly written software, not something that is constantly present in the CPU's execution. You based the claim to which I replied on "sharing is illusory", "if two threads" and "the CPU deals with it", as if it necessarily happens in the CPU all the time, as soon as the threads exist and access the same data.

> when you go to scale of several thousands cores in a geographically distributed system.

There you are not describing "a CPU" (as in, the thing that's in the CPU slot of the motherboard) which is all I discussed, and I'm not interested in changing the topic.

Aren't they closely related? Single-ownership semantics benefit concurrency and optimization by eliminating the need for locks and synchronization except in those (rarer) cases where data explicitly needs to be shared between threads, in which case Rust forces you to go through mutexes. In other languages that don't offer this safety, you're likely to lock a lot more stuff as a purely defensive measure, because the only thing preventing unsafe access is the programmer.

Since the borrow checker happens at compile time, the whole mechanism is "zero cost". This results in more efficient code, because you can do things like safely hand out references (pointers) to privately held pieces of data without needing a heap allocation to track the pointer (e.g. shared_ptr in C++); the final code needs no lifetime checks, because the compiler did all the analysis for you.

I imagine the combination of ownership and immutability also lets the compiler reorder, eliminate and simplify generated code better than most other languages (Haskell being a possible exception here). Not sure if Haskell-style automatic parallelization is planned.
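(To make the mutex point concrete, here is a minimal sketch, not from the thread: sharing mutable state across threads means explicitly opting in with Arc and Mutex; the compiler rejects anything sloppier.)

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared, mutable state has to be wrapped explicitly; plain &mut
        // access from multiple threads won't compile.
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        assert_eq!(*counter.lock().unwrap(), 4);
    }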

Doesn't having less reliance on indirection help speed?

For my curiosity, what do you find in Java that you think is more elegant than rust?

Does the "lower learning curve" goal include the lowering of learning curve for people who already know how to program? Because right now, the Book sometimes seems like it's aimed for people who either didn't program a lot, or didn't program in a language with types.

What I actually would like is a few "Books" like "Rust for C++ people", "Rust for Go people", etc. Those would describe in many examples how things that are achieved in language X using A, B, and C can be done with D in Rust.

Personally, I would like a "Rust for Gophers" book that would describe things like how does Rust do composing (that is, how to do what Go calls embedding), interfaces in Rust (with dynamic vs static dispatch), HTTP in Rust (this may be waiting for Tokio?), how to model your application's types and not get into who-owns-what traps.

Also, what I really want is some kind of a list of Rust "warts" and their explanation. Like the fact that sometimes you can't do a.b().c(), but have to write tmp = a.b(); tmp.c().

You will probably like the O' Reilly book. It's a bit closer to "Rust for C++ people", IMHO.

I fully agree that more of this kind of thing would be great.

What about the warts part?

Also, as someone who's tried to get into Rust three times now, I've been thinking, have you or anyone from the rust documentation team ever had sessions where you just

- take a random C++/Go/Python developer

- ask them to solve a not-too-simple but not-too-hard task, something that'll generally take them less than an hour in their "native" language, in Rust

- look and take notes on their struggle with it

This might shed some light on why people (like me) have so much trouble getting into Rust.

> What about the warts part?

I'm not sure enough time has passed to tell what Rust's warts truly are. I always joke String should have been called StrBuf...

I do this, yes. This is one of the reasons I hang out in IRC so often; it's an effective way to collect these kinds of things.

More data is always better, and so is collecting it across multiple venues. I don't think IRC is inherently going to be a representative sample. It's one of the reasons you'll see me pushing for details in Rust threads here, for example.

So I'm only a Rust amateur (I mean I don't get paid to use it), but I'm on my third round trying to build something with it, and I feel like I'm finally starting to get it. A couple warts I've encountered:

- No support for default struct fields: https://github.com/rust-lang/rfcs/issues/1594

- Terrible signatures for functions that return iterators: https://www.reddit.com/r/rust/comments/2h26cj/functions_retu...

If the next push is going to be for ease-of-use, those would be two nice things to fix.

A feature designed as a fix for the second problem is already in nightly:


Who knows when it'll be stabilized, though.

"sooner rather than later"

I think if you don't mind the overhead you can box the iterator
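(For the record, a rough sketch of both options, not from the thread; the helper names are made up. Boxing works on stable at the cost of a heap allocation and dynamic dispatch, while impl Trait in return position (nightly at the time of this thread, stable since Rust 1.26) keeps the signature simple with no boxing.)

    // Works on stable: box the iterator, paying an allocation and dynamic
    // dispatch.
    fn evens_boxed(limit: u32) -> Box<dyn Iterator<Item = u32>> {
        Box::new((0..limit).filter(|n| n % 2 == 0))
    }

    // impl Trait avoids both the box and the unwritable concrete type in the
    // signature.
    fn evens(limit: u32) -> impl Iterator<Item = u32> {
        (0..limit).filter(|n| n % 2 == 0)
    }

    fn main() {
        assert_eq!(evens_boxed(10).count(), 5);
        assert_eq!(evens(10).count(), 5);
    }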

> I always joke String should have been called StrBuf

Agreed, and Vec should just be Buf! Let's fork the language. :)

And it should be white-space sensitive!

    1. Writing a tab character introduces the equivalent of {} for blocks, but not other uses of {}.
    2. You can still use {} for blocks if need be
    3. Everyone uses elastic tabstops in their editor
oh yeah, and parens are optional for function application

> parens are optional for function application

I think you've gotten confused somehow... Rust++ is a pure concatenative programming language, function application is wholly unnecessary.

Stop, I can only get so hard before an unsafe drop in blood pressure

Why not rename Vec to Buf everywhere with a Vec = Buf type alias for backwards compatibility?

Why not rename String to StrBuf everywhere with a String = StrBuf type alias for backwards compatibility, if this would clarify new users' understanding of String/str?

It's not clear that there's a ton of benefit, and not everyone agrees with me that StrBuf is a good name.

I see. I hope to see more better docs soon. Thank you for answering and for generally being open.

You're welcome, on both counts. I hope we can help you not get stuck if you give it a fourth try :)

Given the number of Rust developers reading this, it's probably going to be very helpful for them if you can describe what you actually got hung up on.

Another wart (in a separate post from my other wart so that replies are coherent):

I understand why &str and String are different, but why do they act like they've never heard of each other? Why do they implement such different sets of methods? Why can't they be compared for equality, so I don't always have to type "literal".to_string()?

Haskell has problems with too many string types as well (worse than Rust, because the type of their default string literals is best avoided entirely), but they fix much of the problem with the OverloadedStrings extension, which uses the type checker as something that helps you coerce string literals, instead of arguing with you.

> Why do they implement such different sets of methods?

String gets all relevant str methods from Deref; it has some additional methods, but it should largely be shared.

> Why can't they be compared for equality,

Hm? They both compare just fine with ==.

I mean, if I have a method foo() that returns a String, as many methods do, why can't I check if foo() == "bar"? Why do I have to check foo() == "bar".to_string()?

Works just fine here: https://play.rust-lang.org/?gist=5a3228a1d42d81690458337eb77...

Maybe you were running into something else?

String == &str works. Maybe you were encountering some other problem.
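(A minimal sketch of the comparison that does compile; foo is a hypothetical function standing in for the one described above.)

    fn foo() -> String {
        "bar".to_string()
    }

    fn main() {
        // String implements PartialEq<&str> (and vice versa), so no
        // .to_string() on the literal is needed.
        assert!(foo() == "bar");
        assert!("bar" == foo());
    }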

Oh, I guess the case of this I encountered most recently was actually Option<String> == Option<&str>.

I get why that's different, but it would be great if the type system could figure that out, so that the literal Some("foo") could be an Option<String> if necessary. Maybe I'm still being spoiled by Haskell's OverloadedStrings.

Nitpick: if you have a String and a &str, you want to convert the first to the second, not the other way around. Strings are owned, so &str to String does an allocation and copy of the bytes, while String to &str is a trivial operation that just throws away the capacity field.

But yeah, it would be nice if Rust had better ergonomics regarding coercing the insides of wrapper types. Though I'm not sure how exactly that would work.

You can do:

  fn foo() -> Option<String>

  foo().as_ref().map(String::as_str) == Some("str")

> Oh, I guess the case of this I encountered most recently was actually Option<String> == Option<&str>.

There are pretty rough type inference issues preventing this impl from existing. Too many people are doing `opt == None` & the compiler would no longer be able to infer the type of None. (You should not be one of them, just call `is_none()` instead)

I'd like to see this impl some day but it's not clear how to make it happen.
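(A small sketch of the workarounds, not from the thread; foo is hypothetical, and as_deref was stabilized later, in Rust 1.40.)

    fn foo() -> Option<String> {
        Some("bar".to_string())
    }

    fn main() {
        // Convert Option<String> to Option<&str> on one side before comparing.
        assert_eq!(foo().as_deref(), Some("bar"));

        // Prefer is_some()/is_none() over comparing against None, which is
        // the pattern that makes a cross-type impl hard to add.
        assert!(foo().is_some());
    }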

Here's a wart. Why is reading lines from a file so hard? It's one of the first things people are going to need to do in a programming language.

On top of handling errors (which I understand is necessary, and which the ? operator makes easier), it requires importing BufReader and BufRead, and wrapping a reference to a file handle in BufReader.

Nobody is going to know how to do this unless they come across it in Rust By Example or on Stack Overflow. Shouldn't a standard library be able to abstract over fiddly details like this? Why is it my job to tell it how to buffer?

I say this as someone who is enthusiastic about Rust, but still learning.
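(For anyone following along, a minimal sketch of the incantation being discussed, assuming a file named foo.txt: the buffering is the BufReader wrapper, and lines() comes from the BufRead trait.)

    use std::fs::File;
    use std::io::{BufRead, BufReader};

    fn main() -> std::io::Result<()> {
        let file = File::open("foo.txt")?;
        let reader = BufReader::new(file);

        for line in reader.lines() {
            // Each line is an io::Result<String>, hence the ?.
            println!("{}", line?);
        }
        Ok(())
    }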

> Why is reading lines from a file so hard?

It's not so much that it's hard as that there are lots of options. Can you afford to read it all into memory as a String? Do you need to only read it bit by bit?

> Why is it my job to tell it how to buffer?

Systems languages need to expose these kinds of details and levels of control.

> Nobody is going to know how to do this

I googled "rust open a file and read by lines" and got these results:




The first two show you directly how, though the first one is dealing with the error message. The last one shows reading it all in. I have some more work to do :)

It would be silly and wasteful to read the whole file into memory just so I can iterate its lines. I'm not sure why that's an option you'd need to accommodate.

However, many languages make a reasonable assumption that you can afford to fit each line in memory, and provide an obvious way to do this.

> I'm not sure why that's an option you'd need to accommodate.

Again, it's about control. Maybe you're only loading a small configuration file, and so fetching it all in one go is better than dealing with a buffer.

> an obvious way to do this.

If you search for 'line' or 'lines' in rustdoc, the correct thing is right near the top, which will show you how to use it with BufRead.

> Maybe you're only loading a small configuration file, and so fetching it all in one go is better than dealing with a buffer.

If it's small, it can't be that much worse to buffer. What's wrong with reasonable defaults?

Can you tell me what it means to search for 'line' or 'lines' in rustdoc? Is this a command-line tool? I'm aware of the 'rustdoc' that generates documentation, but not of anything by that name that searches documentation, and Googling isn't turning up much of anything except the Rust book here.

It would be great to have an offline way to search Rust documentation and examples, as things like this make it very hard to write Rust on a plane, for example.

Googling often gives answers that are wrong or pre-1.0. You seem to have better luck at searching for the right things, perhaps because you are an expert in the language and know exactly what to search for.

(Amusing related anecdote: I googled for "how to return self in rust" and got a suicide prevention website. Jeez, Google, it's not that bad.)

> Can you tell me what it means to search for 'line' or 'lines' in rustdoc?

Rustdoc has a search bar at the top. With 'line' it's the third result; 'lines' is the first: https://doc.rust-lang.org/stable/std/?search=lines

and the short description makes it clear that the first two results are irrelevant in this case.

> It would be great to have an offline way to search Rust documentation and examples,

It all works offline. These docs are pre-installed for you when you install Rust, and 'cargo doc' will generate them for your whole project.

> perhaps because you are an expert in the language and know exactly what to search for.

I copy-pasted my search term exactly in the previous post.

> Rustdoc has a search bar at the top.

Let me assure you that I mean only the best for this language and I appreciate your persistent willingness to help, but I want to help debug your use of internal terminology that doesn't help beginners.

"rustdoc" is not how you should be telling beginners to look for documentation. As far as I can tell, "rustdoc" is the tool for generating documentation, not searching it. I assume you use rustdoc a lot and that's why the name comes to mind.

Maybe the term is overloaded, but I cannot find anything called "rustdoc" that involves a search bar. Googling for "rustdoc" gives many forms of Rust documentation, none of which have a search bar. DuckDuckGo-ing for "rustdoc" (as some documentation suggests instead of Googling) gets me some of the same things, plus rustdoc.com, which is a big honking security warning on top of someone's broken personal blog.

I know that exact link you gave me includes a search bar, but that's an example of finding the answer because you already know it.

And if you know to go to doc.rust-lang.org and go to the Standard Library API Reference, you get a search bar, but that's not where I would think of going to answer the question "how do I iterate lines".

That's why I end up Googling and getting to Stack Overflow, full of wrong answers and someone named Shepmaster yelling at newbies.

As I said in the other thread, I truly appreciate it. :)

Sorry, as I mentioned elsewhere I was in a meeting; I should have just waited to reply to you. It's true that "rustdoc" is overloaded, people use it to mean "rustdoc's output", which would mean the standard library docs in this case. My bad!

o/ waves

Please make sure to leave a comment and/or downvote any answers that are wrong to help people coming after you. I also apologize for whichever specific way I harmed you.

> It would be great to have an offline way to search Rust documentation and examples,

>> It all works offline. These docs are pre-installed for you when you install Rust, and 'cargo doc' will generate them for your whole project.

Not by default, anymore[0], as I found out when I wanted to read the book offline.

To set up offline documentation, do:

  rustup component add rust-docs # One-time thing

Now to open the docs in a browser, do one of:

  rustup doc
  rustup doc --api
  rustup doc --book

The second and third forms open up the API documentation and the book, respectively. Bookmark for one-click access.

[0] https://users.rust-lang.org/t/psa-rust-documentation-is-now-...

Gah, I always forget. I want that switched back.

> I copy-pasted my search term exactly in the previous post.

I'm bringing up a problem with how the language presents itself to beginners, not asking you to solve a specific problem for me. If searching is the answer to everything, then there needs to be a concerted effort to take Google-juice away from bad or deprecated answers.

Incidentally, the search result you're describing for "lines" says "An iterator over the lines of an instance of BufRead.", which only appears to be the answer to "how do I iterate lines of a file?" if you already know what the answer is.

No no, I very much appreciate it!

I think we have two searches confused. When I said "my exact query", I meant when we were talking about Google, here: https://news.ycombinator.com/item?id=13585902

I find it interesting that we got different results.

I thought "an iterator over lines" would be enough; maybe not.

That's a good point. It's slightly reminiscent of Haskell, where you have to start trying to understand the IO monad and do notation to do the most trivial real-world examples.

In other languages, you might do File.ReadAllLines("foo.txt"), and bam, you're done.

Unless foo.txt is too big, in which case, bam, out of memory.

No, Python, Ruby, and such have a common interface for all lazy iterations.

In Python you can just loop on anything declaring the __iter__ method, and it will lazily yield results little by little.

IO related objects implement it plus an additional layer of interface so that you can do:

    with open('file', [mode, encoding]) as f:
To get an auto closing file handle and then choose:

- `for line in f` to lazily read it line by line. This calls __iter__.

- `f.read([byte_count])` to read it all or some bunch of bytes. `f.seek(index)` to move around, etc.

- `f.readlines()` to get a list of all lines in memory.

- `f.close()` if you wish to close the file manually instead of letting the `with` keyword do the work for you.

This interface works for most IO, including files, sockets, in memory buffers, etc.

So you can choose an automatic lazy loading, a manual loading, load everything in memory, etc. And still have a lot of control.

I think rust should get some traits to expose such a common high level interface on top of the current way it deals with files, to ease simple operations. Just make sure the documentation states what you can do to go lower level.

There is a similar trend of third-party libs doing that for many basic topics too: click makes creating command-line UIs very easy on top of the lower-level argparse; pendulum is higher level than datetime; requests is higher level than urllib; etc. Those all make very common operations super easy.

Now, Rust is not meant to be Python/Ruby/whatever; being very low level and checking safety at compile time implies very different requirements.

But those communities have some good concepts on API ergonomics and it would be a shame to not steal ideas from them.

The Rust team seems to not want to write trait impls where it would be inefficient to use those traits. If a method doesn't exist, it's often because there is a slightly better way. They also insist that memory usage, especially any heap allocations, be very clear and explicit: allocating and attaching a buffer is probably not OK within the internal logic of the library designs.

The interface for reading is split into multiple parts. Read is the lower level interface for any stream of bytes, and is what lets you read a specific number of bytes into a preallocated buffer. Seek is implemented for files, but not all streams, because in most cases it makes no sense. There are also a number of file-specific methods directly attached to File for handling permissions and sync and such.

BufRead is the trait that lets you read all the lines, but it is explicitly not implemented for File, since using it for a File would be slow, and generally the wrong way to do things. It does have an implementation for Stdin, because that is already buffered, but if you want to associate a buffer with a file, you have to do it yourself by wrapping it in a BufReader.

If I were to find a problem with the File documentation, it is that it does not mention BufReader.
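(A minimal sketch of the lower-level Read interface described above, not from the thread, assuming a file named foo.txt: you supply the buffer and decide how many bytes to pull.)

    use std::fs::File;
    use std::io::Read;

    fn main() -> std::io::Result<()> {
        let mut f = File::open("foo.txt")?;

        // Read fills a caller-supplied buffer and reports how many bytes
        // actually arrived; no hidden allocation, no hidden buffering.
        let mut buf = [0u8; 16];
        let n = f.read(&mut buf)?;
        println!("read {} bytes: {:?}", n, &buf[..n]);
        Ok(())
    }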

Makes sense. I tried to apply a solution to the wrong problem.

In C# (which the original example is from), these days - and for like 6 years now - what you do is File.ReadLines, and you get a lazy enumerator.

Iterators exist.

This was a few months ago, so I had to dig up some code: https://is.gd/kQJ7nv

Basically, I tried to create a very basic HTTP server the way I would in Go, but I kept getting all sorts of lifetime errors and kept adding stuff to get away from them. So the code has issues, lots of them. Arc<Mutex<>> was recommended by a guy at the job; I don't even know why I need an Arc here, for example.

The compiler says

    consider using an explicit lifetime parameter as shown:
    fn handle<'a, 'b>(&self, req: server::Request<'a, 'b>, mut res: server::Response<'a>)
But when I do that I get

    error: `user_index_handler` does not live long enough
And there I'm stuck. The user_index_handler is declared and initialised literally one line above. I don't understand why Rust thinks it can just delete it immediately after I've created it.

This issue is a little bit too much in the weeds (and I'm not sure what version of Hyper you were using, it's had some big releases lately) but if you're curious, I recently implemented a basic HTTP service with a router. I still have some cleaning up to do, but https://github.com/rust-lang-nursery/thanks is the repo, https://github.com/rust-lang-nursery/thanks/tree/master/http is the implementation of code similar to yours, and https://github.com/rust-lang-nursery/thanks/blob/master/src/... is the core of using it.

I _think_ the issue here is that Request and Response both have lifetimes. The compiler suggestion you're seeing was recently removed for having too high of a false positive rate, which I believe is what happened here. By doing that, you say that the handler must live as long as the Request and Response, which is not true, since it's created inside the functions but they're passed as arguments.

I've actually got that error with 1.15 as well. Or by "recently" you meant "on the nightly"? I updated hyper to 0.10.4, but the issue is still there.

I will check your example app out (thanks for that!) but the problem in my code still remains and I hate the fact that I can't understand what's going on. That might be my biggest issue with Rust compared to Go. Something doesn't match and I don't know how to find the issue without going to IRC or /r/rust; with Go I've never had an issue that wasn't resolved by carefully reading the language/library docs or StackOverflow.

> Or by "recently" you meant "on the nightly"?

Yeah, it was removed literally last week, so it hasn't made it into a release yet.

Without knowing what version of postgres you're using, I can't _totally_ get this to compile, it complains that SslMode isn't there. But, I did fix your issue:

  fn handle<'a, 'b>(&self, req: server::Request<'a, 'b>, mut res: server::Response<'a>)
This compiles for me.

The issue here is lifetime elision. http://rust-lang.github.io/book/ch10-03-lifetime-syntax.html...

Specifically, rule 3:

> If there are multiple input lifetime parameters, but one of them is &self or &mut self, then the lifetime of self is the lifetime assigned to all output lifetime parameters.

So here, Request and Response both have lifetime parameters. This means that your original signature is the same as

  fn handle<'a>(&'a self, req: server::Request<'a, 'a>, mut res: server::Response<'a>)
Which says "when I call handle, I borrow myself for as long as request and response are borrowed." That won't work; user_index_handler only lives for the duration of the call.

The fixed signature says

  fn handle<'a, 'b>(&self, req: server::Request<'a, 'b>, mut res: server::Response<'a>)
"When I call handle, the request and response share one lifetime, and request also has another lifetime."

You're no longer connecting &self to the request and response, and so things are just fine.

Honestly, this isn't the simplest signature; it's not surprising that you got stuck. This is also what people mean when they say "I fought with the borrow checker for a while, but then got over it", as it took me less time to fix this error than to write out this comment explaining how to! But until you've got that intuition and understanding, it can feel like hitting a brick wall.

Wow, thanks for the reply. I copied the signature directly from your post and... it didn't work. I still got `user_index_handler` does not live long enough. But! If I replace

    impl server::Handler for UserIndexHandler

with

    impl UserIndexHandler

it does compile. Removing the trait impl was another thing recommended by the Arc guy at work; he didn't know how or why it worked. Here is the code that doesn't compile with updated Postgres: https://is.gd/fjYDIG.

Which brings me to a question, how does implementing a trait prevent this code from compiling? Sorry for comparing apples to oranges again, but in Go implementing an interface is an invisible operation, that doesn't affect compilation. What's different with Rust?

I am about to go into two hours of meetings so this will have to be quick.

The issue is, because this is a trait, you must conform to its signature. Which is here: https://hyper.rs/hyper/v0.10.0/hyper/server/trait.Handler.ht... and specifically https://hyper.rs/hyper/v0.10.0/hyper/server/trait.Handler.ht...

So yeah, my idea won't work, because those types conflict.

> Removing the trait impl was another thing recommended by the Arc guy at work, he didn't know how or why it worked.

I'm surprised it works on first glance, as I was assuming that you were implementing the trait for a reason, not just because. If you don't need the trait, then yeah, it's much easier.

> What's different with Rust?

See my StackExchange link elsewhere in the thread for a summary.

I'll maybe poke at this after my meetings, or maybe tomorrow. We'll see how amped I am for coding after all that ;)

Okay, thank you again for your time!

Ah! I understand now.

Handlers aren't meant to be nested like this. That is, it's always going to break this way, because the handler needs to live as long as the request and response, so creating a sub-handler is not ever going to live long enough.

So yes, the solution is to not implement Handler for your UserIndexHandler; then you can just call it.

I bet there's something larger you could do as well, but I don't have enough experience with this interface to give good advice. Hyper's undergoing a huge docs drive for 0.11; I'm sure that'll help.

Yeah, I figured that out as well, but thank you for finding the time to analyse the code and write an answer. I appreciate it and may be on my fourth attempt at actually learning Rust :)

No problem. :)

Why not use Iron or Nickel?

They don't have support for async IO yet; this is using hyper master.

Getting a bit off topic now, but I'm curious as to whether async I/O has improved performance by much. Do you have any benchmark comparisons?

http://aturon.github.io/blog/2016/08/11/futures/ is old but has some benchmarks in it; a lot has changed since then, but the blazing speed is still there.

I haven't benchmarked this particular application because I haven't had the time, and it's never going to see particularly high load.


You might be interested in this

Why would someone trying to learn Rust be interested in a web framework? I think they ought to learn how Rust works first, or they'll just be more confused.

Different strokes for different folks

When I suggested doing this last year[1], it was very well-received. Not sure if the Rust folks ever did it. I'd wager no.

1. https://news.ycombinator.com/item?id=11156266

Interesting. Out of curiosity, what's a case where tmp = a.b(); tmp.c() works but a.b().c() doesn't? I'm still learning rust and the online book's Methods chapter doesn't say anything about such restrictions.

It happens if b() returns an owned value that c() returns a reference to. In that case,

  let x = a.b().c();

the value returned by b here is a temporary, and so is freed at the end of the statement, making the reference returned by c dangle.

  let temp = a.b();
  let x = temp.c();

Now that the return of b() is not temporary, it will last to the end of the scope and so c is fine.
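(A concrete sketch of the same wart, with made-up types; as_slice() plays the role of c(). The commented-out line is the one the compiler rejects with a "does not live long enough" / "temporary value dropped while borrowed" error.)

    struct Container {
        data: Vec<u8>,
    }

    impl Container {
        // Stands in for b(): returns an owned value.
        fn b(&self) -> Vec<u8> {
            self.data.clone()
        }
    }

    fn main() {
        let a = Container { data: vec![1, 2, 3] };

        // Rejected: the Vec returned by b() is a temporary dropped at the end
        // of the statement, so the slice returned by as_slice() would dangle.
        // let x = a.b().as_slice();

        // Accepted: binding the owned value first keeps it alive for the rest
        // of the scope.
        let temp = a.b();
        let x = temp.as_slice();
        println!("{:?}", x);
    }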

> fast, reliable, productive--pick three

I love it. Short and simple, describes what Rust is. Between that and "an anti-sloppy programming language", pretty sure the marketing is there. Well done community leads, this is exciting.

Man, if developing programming languages was this easy, we should have done it years ago!

We did. The ML family has been around for about 4 decades now. Not sure why they never caught on until Rust.

It's about the runtime. I think that's probably the most important reason, even more so than memory management or performance.

If you need to write a library to implement the latest protocol, or render the latest image format, or parse the latest serialization format, then Haskell seems great. Unfortunately, the resulting library will be useless except to other Haskell programmers. Nobody wants to link in libXYZ.a and get the entire Haskell runtime, starting threads and doing GC and sending and catching signals.

I tried implementing a handler in postgresql so that you could write user-defined functions in Haskell. I made little progress, even with help on IRC and elsewhere. Any non-trivial function would need to define its own types and use some libraries, but it was far from clear how to do that and the best advice I got was to dig into ghci and try to use some ideas from that. I started down that path, ran into runtime issues, and that was the last straw and I ran out of steam. And that was only to get the most basic functionality: call into haskell to do some computation and return.

Honestly for most programmers, no GC is a red herring.

That may be true, but for programmers that can use GC, there are already very good solutions out there.

New products succeed based on how much better they are than the other solutions in their market. Rust's genius is going after the market that can't use GC, which has seen few innovations in programming language design over the last 25 years. (C++11/14/17 has helped this situation immensely, but C++ is still beholden to backwards-compatibility, which makes it unable to adopt several of Rust's more interesting features.)

> Rust's genius is going after the market that can't use GC

But the problem with that is that the market that really really can't use GC is vanishingly tiny. Rust throws the baby out with the bathwater and tries to pressure others into thinking that they are in this market. Most developers on most projects aren't.

And then Rust implements reference-counted pointers in the library, which is the slowest possible way of collecting some, but not all, garbage.

Honestly, if Rust is to become popular, it needs (a) to get rid of this elitist "we are awesome systems programmers" mindset, and (b) a good, optional way to use a GC where it makes sense, with good support for migrating away from it (i.e., by giving you a list of "I cannot determine the lifetime of this object, so it will be allocated on the GC heap" diagnostics on request).

Good cases can be made for having a no-GC mode, but having it exclusively is just premature optimization.

> But the problem with that is that the market that really really can't use GC is vanishingly tiny.

It's huge, but specialized, so it may not be on your radar: small embedded devices. Think RAM size from 10s to 100s of kB, and Flash from 100s of kB to a few MBs as rough ranges.

The interest in IoT, and the need for battery-operated devices with lifetimes of 10+ years, make such platforms very important. And cost and process constraints (like embedded Flash) lead them to be implemented in conservative nodes: 55 to 40 nm typical, will go to 28nm in time but unlikely to go lower. So the amount of processing won't change much in this space, I believe. And any gain would not be used to get a bigger CPU/RAM, but to reduce cost and power (less power means smaller batteries, which means lower cost too). So this space will stay on the very small side for the foreseeable future.

These are devices that run on very small micro-controller cores and are too small even for a stripped-down embedded Linux distro. It's either an RTOS, or even simple run-to-completion preemptive schedulers, or even bare metal. No room for GC here, and Rust, with its focus on performance, leanness and safety, is very well suited as a language.

As a platform, the fact that there's only one LLVM-based compiler for Rust is a limitation in this space. In the deep embedded space there are a lot of architectures for which there's no LLVM support (Cortus, Andes, BA Semi, LM32, Nios, ...). But I hope this is temporary. GCC is the most common toolchain there, and hopefully at one point GCC will gain a Rust front end. Or maybe LLVM will become more popular in this area? We'll see.

> No room for GC here

True, if you're talking about tiny embedded systems where you would not use GC (nor any dynamic allocation), you shouldn't have much difficulty using Rust. Nor many of its benefits.

> But the problem with that is that the market that really really can't use GC is vanishingly tiny.

If you write a library in C#, you can use it from .NET languages. If you write a library in Java, you can use it from JVM-based languages. If you write a library in Python, you can use it from Python. If you write a library in Rust, you can use it from any language that can bind to C, which is virtually every language that matters.

That is not a "vanishingly tiny" market. It might not be your market, but that's okay: Rust doesn't have to be for everyone in order to be important.

> And then Rust implements reference-counted pointers in the library, which is the slowest possible way of collecting some, but not all, garbage.

That might be the case if you were to replace the JVM's GC with reference-counting, but Rust enables a data layout strategy in which a lot fewer allocations of larger individual objects take place (because not every object requires its own heap allocation). This way, RC in Rust is probably not slower (maybe even faster) than the GC in the JVM, while having a lower memory overhead at the same time.

> If you write a library in Python, you can use it from Python.

Or from C. Or from any language that can bind to C. Like Rust. I'm fairly sure the other languages you listed also allow calling from C and hence from Rust, as well as among each other.

> not every object requires its own heap allocation

Sure. That doesn't change if you add a GC to Rust: Objects won't magically become non-stack-allocable if they were stack-allocable before.

> Or from C. Or from any language that can bind to C. Like Rust. I'm fairly sure the other languages you listed also allow calling from C and hence from Rust, as well as among each other.

But then you have to include a foreign language runtime! Together with all the headaches and interoperability complications that entails. How many .NET applications do you know that include a JVM runtime, or how many Java libraries depend on the Python runtime? Now compare that to the number of applications or libraries (in any language) that directly or indirectly depend on a library written in C or C++. Rust will have the same advantage.

> That doesn't change if you add a GC to Rust: Objects won't magically become non-stack-allocable if they were stack-allocable before.

My point is that a GC probably won't have a positive effect on most programs written in idiomatic Rust, therefore there's no need for it (contrary to what you claim).

> Now compare that to the number of applications or libraries (in any language) that directly or indirectly depend on a library written in C or C++.

Every significant C++ library I've ever used had its own stupid, opaque memory use conventions and invariants and segfaulted on you if you unknowingly violated them. The only difference is that you call some libraries "libraries" and other libraries "runtimes". I don't think that divide is all that sharp.

> a GC probably won't have a positive effect on most programs written in idiomatic Rust, therefore there's no need

It would allow new idioms that many people would find useful, because not all code is code where performance is more important than clarity.

> (contrary to what you claim)

I didn't claim that Rust needs a GC to be Rust, I claimed that Rust needs a GC to be a more popular, more-widely-regarded-as-useful language than today's Rust.

"But the problem with that is that the market that really really can't use GC is vanishingly tiny."

Even if that's true (which I don't believe), tiny markets have a way of expanding when new offerings are available.

New people are getting involved in OS development in rust, for instance. We might see some really interesting stuff happen there. The same thing may happen with databases and rust.

And you didn't address my example, which was all of those "-devel" packages you need to install to get all of those libraries that so much software depends on: libssl, libjpeg, etc.

"Rust throws the baby out with the bathwater and tries to pressure others into thinking that they are in this market."

And this has what ill effect?

This will be my last post in this subthread because I think most of my points have now been made several times...

> New people are getting involved in OS development in rust, for instance.

Tiny market. Expanding it to be slightly less tiny doesn't make it non-tiny. More importantly, positioning Rust as an operating systems programming language will not get people to use it for more general application programming.

> The same thing may happen with databases and rust.

Also a tiny market. Also, not sure how wrestling with the Rust compiler will advance database theory or practice. Even more importantly, the implication (in the context of the thread) that it's impossible to write a database system in a GC language is false. To optimize the implementation, you probably want to avoid GC in the innermost hot parts. That would be possible with the hypothetical Rust-with-optional-GC I mentioned above.

> And you didn't address my example, which was all of those "-devel" packages you need to install to get all of those libraries that so much software depends on: libssl, libjpeg, etc.

I don't know what exactly you mean by that since you even need those packages for C programming, and you need them (or something equivalent to the headers they provide) for Rust, but yes, language interoperability is hard, and yes, some language runtimes make it harder than it should ideally be.

> And this has what ill effect?

Many people will read stuff about OS kernels and database systems and vaguely elitist but never concretized ramblings about "systems programming", coupled with "OMG, everything must always be super-fast at the expense of programmer productivity" and decide that they are not in this market. And then they will keep using Python. Or, just to rub it in, Go for "systems programming" like NTPsec. <shrug> I never said that that's a bad thing; I just said that this will not drive more general Rust adoption.

Well, even though I'm totally for using a modern GC when I can, implementing an efficient GC (low pause, high throughput) requires a lot of effort and special hardware. This is still not available for a number of platforms, particularly the tiny ones, where you may only dream of things like memory virtualization. Also, reference counting may not be as bad as you think when objects are never shared between threads and when you have a smart compiler that can elide most of the redundant increments/decrements.

I admit that I love C++. Of the time I spent learning programming, C++ took the lion's share. I think the language is making amazing progress since C++11. So, I started to make even more efforts to learn all the changes the language was getting.

Then, I saw Rust. I fell in love with it. It is indeed what I see as the future for anyone who isn't tied to using C++ or C by legacy, programmer-knowledge, or programmer-availability constraints. I decided that I will place my bets on Rust and am willingly giving up all the investment I made in C++. If I am their target audience, then they've definitely reached me!

Are the with-GC options really that much better? Rust or OCaml's advantages over C++ seem much the same as OCaml's advantages over, say, Java, no?

I don't quite follow the question and/or implication.

Nostrademons said "for programmers that can use GC, there are already very good solutions out there", which I understood to mean "the advantages of most ML-family languages over other languages that require GC are smaller than the advantages of Rust over other languages that do not require GC". Which I was questioning.

I wasn't thinking specifically of ML-family languages, though they could be included.

Rather, if you are in a problem domain like web-development or server-side microservices where a GC is fine, there are lots of decent options for programming languages. Python, Ruby, or PHP. Go. Any of the JVM languages - Java, Scala, Clojure, Kotlin. Swift or Objective-C. dot-NET. Many of these have had continuous attention over the last twenty years, they've got major corporate backers, and so a lot of the recent research in PL theory gets ported over to them.

If you are in a problem domain where you can't use GC - like computer graphics, games, databases, information retrieval, operating systems, or embedded - you have basically one option. C++. C++ has gotten a much-welcome facelift recently with C++11/14/17, but the core of the language is still 40 years old, and the language as a whole makes serious compromises (like memory safety) for backwards-compatibility. The excitement about Rust largely stems from its competition being C++; if you pit Rust against say Python or ES6 in the domains in which the latter are used, it's very much "Why would I use this?"

Interesting point.

I don't know if I'd pick Java as a comparison point because the ecosystem is unimaginably large, so it's hard to compare to Haskell/OCaml. A better one might be Golang or ruby.

I think ML languages just never made the case that other languages are bad enough for the particular thing someone is doing now, e.g. a web app. If you hit a NULL, you get an exception, track it down, probably not a huge issue. The best case could be made for ML on security grounds, but unfortunately nobody cares about security.

On the other hand, people were convinced that C/C++ were bad after decades of people saying so. But people didn't feel like they had an alternative until rust came along.

Try to get two programming languages that use different GCs/runtimes to cooperate. Not one combination out of the following works: C#, Java, Python, Ruby, Haskell. Because Rust is runtime-less, you can combine it with any of those programming languages.
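As a rough sketch of what that buys you (a toy example, not from any real project): a Rust function can be exported with the C ABI and loaded from any of those languages through their usual C FFI machinery, with no Rust runtime to initialize first.

    // lib.rs -- assuming crate-type = ["cdylib"] (or "dylib") in Cargo.toml,
    // so the output is a plain shared library with a C-compatible symbol.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }

From Python that's a ctypes.CDLL away; from Java it's JNI/JNA; from C# it's P/Invoke. The point is there's no second garbage collector or runtime to coordinate with.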

This is one of those points that is so obvious that it has never occurred to me.

Memory management? Every ML implementation I know about uses GC, which is frequently just too slow. Along with many other reasons (http://www.podval.org/~sds/ocaml-sucks.html)

In the context of this discussion, a lot of those reasons are... idiosyncratic, and/or apply to Rust too.

Pretty sure F# adoption dwarfs rust.

Right. Though I checked in Google Trends and found Rust-to-F# interest went from 1/40 to 2/3 in 5 years or so. I think Rust will become more popular in the next 2 years.

Purely out of my own curiosity, do you know how popular F# is on non-windows platforms?

Also: Ada.

This is the real comparison, the only other systems language besides Rust that provides both safety and manual memory management. Ada does it via Access Types:


Memory management generally makes them unviable as systems coding languages.

There is more to systems coding than writing OSes, compilers, and linkers, for example.

There, having a GC is perfectly fine.

They were all garbage collected, which is a dealbreaker for some applications. Rust aims to fit that niche (though if the Rust Evangelism Strike Force is to be believed, Rust fits every niche). Substructural type systems akin to Rust's affine types have been relegated to research for a while.

It reminds me of the SQLite motto : Small. Fast. Reliable. Choose any three.

I'm impressed with the direction they've laid out. It is great to see a deliberate effort is being made to make the language easier to learn and to strengthen the community's ability to leverage shared code.

Is there a good IDE for Windows? Something that will step through debug?

I've used VSCode; you have to pick the GNU ABI and install a couple of plugins, but it works reasonably well.

I think VS Code is what the community is rallying around to become the flagship cross-platform Rust IDE, with intellij-rust as the second choice.

I don't really think that's the case. RLS is what everyone is rallying around, and VS Code has good LSP support, so that's where everything will happen initially, but I don't think it's really the "central" IDE that we want everyone to use.

I stand corrected, however I have started promoting VS Code for Rust because of the LSP support and the promise of RLS, despite being an Emacs user.

I've found Atom to be better in terms of having the lint-as-you-go. RustyCode is pretty good otherwise, though :). I'm interested to see where the debugging goes with it as well.


I wish more people knew this, because it's probably the thing most people try to install as it has the most downloads. It's no longer maintained and doesn't work correctly with current stable. There's a fork that is being updated that's just called "Rust" in the VSCode addons.

Thanks! I was still using RustyCode...

Thanks for letting me know :)

Thanks for that. I was using RustyCode! Changed now :)

With the Rust Language Server under development, functionality like lint-as-you-go should work really nicely in any editor.

I hope not, I really dislike vscode.

What do you prefer? With RLS, it will probably be supported.


Why? Editors like vim and emacs will benefit from the work (Rust Language Server) that is making the VS Code editing environment good for Rust.

I was mainly using Atom until a few days ago. I wanted to try out the RLS, and that meant switching to VSCode, which has been really growing on me. I'm starting to consider using it for all my non-java development needs. It's a great experience with the Rust plugin, even without the RLS. I've used it on both macOS and Linux, and it's pretty seamless between those two environments.

I highly encourage people to drop any MS misgivings and give it a try (honestly that was the biggest thing for me to get over).

I tried IntelliJ and VS Code and found IntelliJ to be more polished. Can't remember about debugging, but I know the Visual Rust project (for VS proper) does nice debugging, although I still find it too alpha and too quickly changing to be usable for daily use.

I wish it had a good story for mobile development (iOS, mainly). I think a modern language without a good foothold in mobile is non-ideal.

Also, a nice story for native UI widgets would be a plus. This is asking a lot, I know.

Right now I have some projects where the least-bad option is .NET; Swift, if it were only iOS+Linux; Delphi is out because of cost (and Free Pascal is unfocused). Then obviously C++, but that is where I draw the line (i.e. for a C-like language, Rust is the only one that looks nice to me; I wish it were Pascal-like instead, or similar).

If you like Delphi/Pascal, have you seen Nim?

A year or so ago I looked at it. Seemed nice. Not clear if it's good (enough) for mobile/UI. Need to recheck...

Check out this link for an example of how to build iOS apps with it: http://www.thomasdenney.co.uk/blog/2015/1/27/nim-on-ios/

Because it compiles directly to C, I've found it reasonably easy to bring into different environments myself, though I've not attempted iOS yet :)

Edited to add: Also check out https://github.com/yglukhov/nimx

I'm happy to see the focus on productivity.

I saw the long awaited Non-Lexical Lifetimes tentatively mentioned in there, and I can only hope this will help move things forward, since it's pretty frustrating spending time fighting the borrow checker and refactoring correct code that really ought to run as-is.

Cheers and happy 2017 to the Rust project!

As far as fighting the borrow checker goes, how familiar are you with more functional style programming approaches, mainly immutability?

I'm extremely new to rust so I don't know if fighting the borrow checker is in my near future or something I will largely avoid.

Mutability makes it worse, but there are at least some issues with borrowing that have nothing to do with mutability. If your experience is with garbage collected languages, I suspect you'll encounter errors that surprise you.

Could you provide an example where the borrowing would cause issues for immutable data?

I'm still learning Rust, but I thought the fact that you could have as many immutable borrows as possible would limit what errors the borrow checker could throw at you.

This is embarrassing, but I'm now second-guessing whether what I wrote was accurate. Sorry.

In penance, here are some links: https://www.reddit.com/r/rust/comments/5ny09j/tips_to_not_fi... https://m-decoster.github.io//2017/01/16/fighting-borrowchk/
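For what it's worth, here is at least one minimal sketch (mine, not from those links) of a borrow-checker error that involves no mutation at all: moving a value while it is still borrowed is rejected.

    fn main() {
        let s = String::from("hello");
        let r = &s;   // shared borrow; nothing is mutated anywhere
        let t = s;    // error[E0505]: cannot move out of `s` because it is borrowed
        println!("{} {}", r, t);
    }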

Fantastic! Thanks a lot in any case.

I do find the image of "fighting the borrow checker" is quite harmful to how people view Rust. As those links state, an important aspect of teaching Rust is educating people about why the borrow checker complains, so that they might adjust their mental model.

I would love to see some improvement in the desktop GUI libs.

- GTK is painful to install.

- Conrod: I was unable to do a hello world application with it. It's just missing a tutorial.

- KISS-UI: some DLLs to install

- Qt: not totally free

- Neon to plug into Electron: I'm not sure if I can do callbacks from Rust to the GUI with this method.

I finally gave up and went with Electron in JavaScript.

> Qt: not totally free

Qt has been free for decades and is LGPL these days, about as free as it gets. Is there some Rust-specific issue, or is this just decades-old FUD from the GNOME project?

> GTK is painful to install.

Also pretty bad on Windows AFAIK?

My personal wish would be for an easy way to use Servo to render the UI.

That's actually a cool idea. IIRC Servo was even (going to?) comply with the Chromium Embedded Framework API/ABI so that it could be a drop-in replacement. If they're doing that, hopefully they'll go further in creating a rich API for embedding.

> GTK is painful to install.

Also no longer cross platform. LibUI (https://github.com/andlabs/libui) looks promising but I don't think the rust bindings are being worked on.

This plan addresses everyone's biggest complaints, especially the 1.0 crate issue. It's so nice to finally quiet all the people that try Rust for a couple of hours and then declare it useless.

This was ultimately my problem with Rust, and I am really glad this is being addressed directly.

At the end of the day, the primary thing I want from my PL is to boost my productivity. In the kinds of software that I write, I can tolerate bugs and GC. Does Rust actually make me more productive?

If your primary goal is productivity, I don't think rust will ever be the right language for you.

If you want to remain relatively productive while building things that are one or more of (large, fast, safe), it's an excellent choice.

Add maintainable to that list. A program where its developers have a fuzzy idea of ownership of data is not in any way maintainable.

Yep, that's what I meant by "large". If you have a large project with many developers, you're going to want types + generics to maintain it over a many-year life of hundreds of thousands to millions of SLOC.

Rust is hardly the only language to have a type system that supports generics.

I didn't say it was! I said it was an "excellent choice". Right?

Right. But, what I'm saying is, generics does not make Rust unique. There are many other good languages that I think may be a better choice.

This is one of those things that seem plausible, but I have no idea if its really true. I'm not disagreeing, I just can't see it as being obviously true. Can you give me an example of how a fuzzy idea of ownership causes, ideally, a real problem, or a less ideally a simplified example problem? I'm 100% genuinely interested in this.

It's pretty much the reason for segfaults and use-after-free issues (which manifest as either memory corruption or security vulns) in reasonably sized codebases - without a good understanding of ownership, it's non-obvious when a pointer is supposed to become invalid, and if that doesn't match up with when the data is actually freed, you have an issue that's hard to track down later.
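As a minimal sketch of the same mistake in Rust terms (a toy example, not from any real codebase), the borrow checker simply refuses to compile it:

    fn main() {
        let r;
        {
            let x = 42;
            r = &x;    // error[E0597]: `x` does not live long enough
        }              // x is dropped here while still borrowed
        println!("{}", r);
    }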

If you're using a language with garbage collection, it's obviously not going to result in segfaults, but you can still run into logic errors when part of your code assumes that it's done dealing with a piece of data and another part has a different idea. More generally, if your object is stored in multiple places in your code, you have to remember to clean up all references to it properly, in all the different states your system can be in, and your compiler can't verify you're doing that correctly without a borrow checker.

If you don't understand your object graph, then maintaining a large codebase is going to become very difficult. You'll get questions like 'who is responsible for updating this object' or 'who is responsible for notifying when X happens', or 'why does every object talk to every other object', or 'why does it work very slowly when there's a large number of objects'

Problems involving memory deallocation in the GC applications I've written seem to happen infrequently, certainly infrequently enough that it is one of the lowest items on my list of things I'd like to fix that cause problems in development.

> but you can still run into logic errors when part of your code assumes that it's done dealing with a piece of data and another part has a different idea.

Mostly I think this is sufficiently handled by not mutating shared state as a practice. However, since Rust has mutable borrows, it seems to me that you're still in danger of running into this.

Mutable borrows are not shared - you can't touch the data elsewhere while it's borrowed mutably, and you can't get a mutable borrow while there's immutable borrows. Shared state is unfortunately a common affliction on large codebases, and if the language supports and even encourages it, it can be difficult to prevent it from happening as the codebase grows and customers want features that don't fit into the architecture wonderfully.
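A rough sketch of what "not shared" means in compiler terms (toy example, rejected at compile time):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];  // shared (immutable) borrow of v's contents
        v.push(4);          // error[E0502]: cannot borrow `v` as mutable
                            // because it is also borrowed as immutable
        println!("{}", first);
    }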

In higher-level languages, object lifetime is more than just whether the memory is there or not; it's also about whether the usual invariants apply.

For example, in C#, you need to Dispose() objects that are logically no longer used, to have them properly cleaned up. Using an object after it's disposed typically results in an ObjectDisposedException. This isn't as bad as a segfault, but ultimately it's still a crashing bug in the app - and solving it requires figuring out who needs to call Dispose when - i.e. figuring out the lifetime.

One piece of software that handles a lot of traffic on phone networks has an interesting issue. A bunch of headers are added to a list on every transaction. Most of these header names are constants ("TransactionID", "SourceId", "CallingNumber", etc.). However, they can be dynamic ("x-dynamic-header-foobar"). The easiest way in C to deal with this is just to strdup all the header names, so the final consumer can safely free() them.

End result: This app spends about 30% CPU time on malloc/strdup/free.

In Rust, you simply wouldn't have this design in the first place because it's so easy to avoid. In C, it's super intrusive to fix once it's obvious this is a bottleneck.

Though this isn't necessarily a fuzzy concept of ownership, just more of a pain-to-deal-with thing.

How would rust help in this situation? What alternate design is rust enabling that would be different?

Use an `enum` like Cow[1] that allows passing around static strings along with dynamic strings, as well as interior pointers: the compiler can check that things don't get invalidated and the enum makes it trivial to deallocate things correctly (in fact, it's all done automatically by the compiler), so a more aggressive design can be used, no need to defensively copy the strings.

[1]: https://doc.rust-lang.org/std/borrow/enum.Cow.html
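To make that concrete, here's a rough sketch of the Cow idea applied to the header example above (the names are illustrative, not taken from the phone-network app):

    use std::borrow::Cow;

    // Header names are usually well-known constants, occasionally dynamic.
    // Cow<'static, str> can hold either without copying the constants.
    fn header_name(raw: &str) -> Cow<'static, str> {
        match raw {
            "TransactionID" => Cow::Borrowed("TransactionID"), // no allocation
            "SourceId"      => Cow::Borrowed("SourceId"),      // no allocation
            other           => Cow::Owned(other.to_string()),  // allocate only here
        }
    }

    fn main() {
        let headers = vec![header_name("TransactionID"), header_name("x-dynamic-foo")];
        for h in &headers {
            let name: &str = h; // deref-coerces; borrowed or owned, no copy
            println!("{}", name);
        }
    } // owned strings freed here automatically; the constants were never copied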

Oh, wow, thanks. I finally understand what the purpose of Cow is!

I can't speak from personal experience since I've only dabbled in Rust, but I have followed it for quite a while, and lately I have read more and more about how Rust forces you to structure your data and program in a way that is more maintainable.

I'm not sure. I'm tempted to say I'm about as productive (to within an order of magnitude) in any language that I'm very familiar with and that has a large enough "batteries included" component (yes, crates would count, as does npm). Even C++, as long as the extras from Boost are there.

I can type reasonably correct code much faster than I can reason about how the program should work.

I'm not saying unproductive. I'm saying, we need to be realistic. Compared to, say, Python, it's nowhere close.

Compared to C++, Java, or even Go, it's anywhere from competitive to significantly better.

You may be using a different definition of productive. I wouldn't consider python or any dynamic language productive on a large project.

> I can tolerate bugs

You might want to rewrite that statement. Or at least clarify: what kind of software do you write that can "tolerate bugs", and what kind of bugs are you talking about?

Obviously, I do things to minimize bugs, and not all bugs are "tolerable".

1. I write many kinds of software, but specifically at that moment I was talking about web applications.

2. "Tolerating bugs" is true of literally every software, except maybe things that are proven to be correct in the mathematical sense. Web applications often have parts that may cause inconvenience if there is a bug, but data is not lost, productivity is not lost, and certainly no lives are lost.

It's good to see incremental builds and async socket support (select/kevent/epoll) on the map. Those are essential to the kind of software I work on in C.

We just hit beta with incremental builds! https://internals.rust-lang.org/t/incremental-compilation-be... Give it a try :)

I'm really on board with this roadmap. I'm actually quite impressed to see a language publish such a clear direction. They have really good focus, and if they keep this focus year over year, Rust will become a really great language.

The Rust 2017 Roadmap sounds really nice. Congrats for the development so far.

However, what I still miss is bootstrapping from source. It would make porting to other platforms much easier. It confuses me that Rust is proclaimed a safe language while everyone is forced to install a binary with wget | sh.

I wish there were a GCC Rust compiler, but it's not planned for 2017. Hope this can change in 2018 :)

The closest to that is mrustc [1], a reimplementation which skips the borrow-checking part and can generate C code. Eventually you might be able to compile Rust code on more platforms, though you should still develop with the official rustc for the borrow checking.

1: https://github.com/thepowersgang/mrustc

I'm surprised cross-compilation is not on the roadmap. I've heard both that it is ready and that it's not quite ready.

Can I create a Windows executable for Windows on Linux?

It's not a high-level goal, but it is part of many of those high-level goals.

> Can I create a Windows executable for Windows on Linux?

Yes, but it's a little gross at the moment: https://github.com/rust-lang-nursery/rust-forge/blob/master/...

We do plan for all of this to be much, much easier in the future. Working on it.

> I've heard that it is ready and that it's not quite ready.

The foundations are strong, but the details are tough. So like, doing the cross-compilation is there, and is the simple bit, it's all of the stuff around it: linkers, other platform stuff, things like that. But the foundation to make all of this easy is in place. It just needs time and effort.

Rustup already does all the cross compilation stuff, yes.

Ambitious goals, but if they pull this off it will be amazing. Very happy that they want to focus on beginners and the dev experience.

Good to see plans for further improvement of code reusability.

Please start from point 2

That's a good list. Get the basics right.

Rust has been putting much effort into "l33t features" in template land. I fear Rust may be going down the C++/Boost template metaprogramming rathole.

> I fear Rust may be going down the C++/Boost template metaprogramming rathole.

From that C++ thread on HN right now:

https://news.ycombinator.com/item?id=13584167 https://news.ycombinator.com/item?id=13583935

Call me skeptical, but I do not believe ADDING stuff to C++ is going to make it more competitive against Python in ease of use. Especially as programmers will still always be free to use all features, and any large enough project is going to end up with all of them.

I disagree with the "l33t features" assertion. From your comments regarding Rust in the past, you seem to have distaste for the judicious use of closure-based APIs in Rust, and I would argue that closures make some of the borrow checker's warts easier to handle.

Which features are you thinking of, specifically? We actually just rejected an RFC for one of these features for being too complex, though it isn't a rejection of the feature entirely.

Which one is that?

Don't change the language to make it easier (unless that can be a free lunch), but get someone doing a "Today in rust" or "Doing X in rust" blog/vlog. Also please get some sort of RNG into the main language (not as a crate).

Rust's goals of high performance, safety, pay-for-what-you-use, and you-couldn't-hand-code-better won't be sacrificed, for sure. The point of the ergonomics improvements is to identify "free lunches" that aren't yet being taken advantage of.

That being said, with lifetimes and traits being both complex and unfamiliar features, there's always going to be quite a learning curve. But one can always reduce the number of papercuts. If nothing else, one can at least make the documentation better, the ecosystem libraries better, and the error messages even smarter.

Can you say that traits are like Go interfaces?

Yes and no; I've always really enjoyed http://softwareengineering.stackexchange.com/questions/24729... as a comparison between the two.

It's old enough that it's before Rust 1.0, so some of the syntax is a _teeny_ bit wrong (int isn't a type any more, you'd use isize) but the macro picture is the same.
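If it helps, here's a tiny illustrative sketch (mine, nothing official) of the difference in flavor: Rust traits are implemented explicitly rather than structurally, and can be used either through generics (static dispatch) or trait objects (dynamic dispatch, which is closer to how a Go interface value behaves).

    trait Greeter {
        fn greet(&self) -> String;
    }

    struct English;
    impl Greeter for English {                      // explicit impl, unlike Go
        fn greet(&self) -> String { "hello".to_string() }
    }

    fn static_greet<G: Greeter>(g: &G) -> String {  // monomorphized at compile time
        g.greet()
    }

    fn dynamic_greet(g: &Greeter) -> String {       // vtable dispatch, like a Go interface
        g.greet()
    }

    fn main() {
        println!("{} {}", static_greet(&English), dynamic_greet(&English));
    }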

Any particular reason an RNG must be in the stdlib? It's an "official" crate, maintained by the Rust developers. In general Rust tries to keep things out of its stdlib, opting for crates instead. That lets these evolve independently of the stdlib (not tied to rustc releases), and also lets them have their own versioning (none of that urllib2/urllib3 nonsense).

It's a battery people are pretty well used to seeing included in a language. POSIX even mandates one for C. For repeatable simulation work you might want something more robust, and hopefully everybody knows it's no use for crypto, but it's a handy thing to have around if you need to flip a digital coin.

It's not harder to use in Rust than it is to pull in a C header file, though.
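For scale, using the crate looks roughly like this against the 0.3-era rand API (assuming `rand = "0.3"` under [dependencies] in Cargo.toml):

    extern crate rand;

    use rand::Rng;

    fn main() {
        let mut rng = rand::thread_rng();
        let coin: bool = rng.gen();       // flip a digital coin
        let roll = rng.gen_range(1, 7);   // die roll; upper bound is exclusive
        println!("coin = {}, roll = {}", coin, roll);
    }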

I would say random numbers are a fairly core programming construct for a standard library. They're important, difficult to get right, and have huge implications if you get them wrong.

But given how easy/simple/common it is to depend on a crate, what practical problem would it solve for it to be in libstd? Maybe it's difficult for the community to identify the anointed RNG?

So's encryption, possibly moreso, and yet that probably shouldn't be in the standard library.

This doesn't answer my question, though. Most of the "fairly core programming constructs" are not in the stdlib for Rust (like regexes)

The Rust stdlib is mostly core abstractions and platform-dependent stuff.

Given that, why should they be in the stdlib? The rand crate is officially blessed and the one everyone uses.

You can't use rand in play.rust-lang.org so it's hard to get help from IRC since they can't get your error messages easily and fix them.

In that case, it sounds like "officially blessed crates" are a second-level stdlib with weaker guarantees about availability across time and across platforms. I'm fine with that idea, but from my outside perspective this is a confused message. Better to call the blessed crates "stdlib extensions" or something like that.

Why "weaker guarantees across platforms"? These crates have the same guarantees about platform support. Availability across time is also similar, really.

In most other programming languages functions in the standard library:

* Are guaranteed to work correctly on all platforms where the language works.

* Will always work with the latest stable release of the language.

* Will be available under one single permissive license (with a single copyright holder (body) for any purposes of required notices etc.)

* Will be supported (security patches, bug fixes, etc.) with backward compatibility, more or less indefinitely (with reasonable long time frames for deprecation windows and recommended transition plans etc.)

* Are under no danger of being abandoned simply due to lost interest of the author.

* Crucial: Will only depend on other code in the standard library. (So all these guarantees are transitive!)

And anything that's not in the standard library usually implies there are no such guarantees. This is immensely more helpful and important than the comparatively trivial ease-of-installation.

If the Rust team really does explicitly extend such guarantees to non-standard library crates, that should be clearly and widely communicated. So far the focus seems more on how small the standard library is. What's the advantage for the language user? I see advantages for the Rust team, and mainly I see disadvantages for the (professional) language user. Easy-to-use cargo is nice (especially for casual language users) but is nowhere close to really making up for that.

Most of these are true for rand. It's in the nursery so it might (probably not though) be deprecated in favor of a different library, but it would continue to work because Rust is backwards compatible.

https://github.com/rust-lang/rfcs/blob/master/text/1242-rust... has some of the motivation behind this.

Thank you for the link, that clarifies the Rust team's thinking! As an outsider considering Rust for a project at work, I'd recommend you put that somewhere front and center -- it's one of the more confusing things about the Rust world.

Thanks, but I still don't see where such guarantees are advertised. Also it's not really clear what "the nursery" is exactly. Where is it? Why is it called that? It sounds more like "not mature = ready for prime time = don't use this".

According to that link, regex is also in the nursery. According to crates.io, regex depends on other crates maintained elsewhere, by a "random" single developer, under different licenses, with zero obvious guarantees.

Crates.io lists the authors as "The Rust Project Developers", and has the Rust libs team as an owner. The dependencies are almost all maintained by Rust libs team members or are otherwise trusted crates.


(Regex moved out of the nursery a while back)

The nursery is more of a "These crates are officially blessed and we wish to make them part of rust-lang, unless the community comes up with something better".

The guarantees aren't advertised, they're known. We can do a better job of advertising this, some of the work this year is related to that.

Sorry, I meant the author and license of e.g. this dependency of regex: https://crates.io/crates/aho-corasick

I'm sure the Rust team knows and trusts these (or they even are Rust team members), but how should outsiders know this?

Researching trustworthiness of individual authors, getting approval for different licenses, tracking and updating required notices in documentation etc. for every single dependency transitively(!) seriously takes a lot of effort. Updating dependencies that switch sub-(sub-...)-dependencies freely can turn from a simple cargo command into a month of work. A big standard library (or an equivalent construct) can really save the day there.

> The guarantees aren't advertised, they're known.

It is known. -- Jhiqui

Glad to hear improving this is in the works anyway.

RFCs are frozen in time; regex has since moved under rust-lang, and has a pre-release for a 1.0.

That person isn't random; they're on the libs team and the primary author of regex. (there are a few crates by other people as well, but most of those crates were extracted from regex as they can be useful on their own.)

My point is researching all this puts a huge burden on (serious) users that would not be there for a big standard library. Currently the Rust approach seems quite problematic, but I think this can be solved. For example by an explicit guarantee pledged by the Rust team (or a related umbrella / organization / consortium / ...) for a certain set of libraries (platform / ecosystem / 2nd tier standard library extension / stdx / nursery / ...).

I think it would be important to include in this pledge:

* Transitive closure rule. (No external dependencies not covered by the pledge. Possibly with certain exceptions e.g. for bindings to big known obvious third party dependencies like SQL databases or Operating Systems.)

* Uniform licensing rule. (Everything must use the exact same license, with a single group name as copyright holders and for any purposes of required notices etc.)

* Maintenance, backward compatibility and platform compatibility rule. (100% compatibility forever everywhere is not even necessary, but an _explicit_ rule of thumb of what reasonable time frames for deprecation windows and recommended transition plans can be expected would be nice.)

Or maybe this already exists and I'm just uninformed?

It all exists, it's just not as advertised as it should be.

It speaks to my confusion about the role of these "blessed" crates that I assumed the reason for not including them in the standard library was that they were not supported on all platforms, or that you didn't want to commit to having them in stdlib forever. If neither of these is the reason, and you intend for these crates to persist indefinitely and be available on all supported platforms, surely they belong in the standard library?

For context, I'm a Python programmer by day and well used to the comforts of its expansive standard library. From that perspective, splitting your stdlib into a first-class and a second-class stdlib seems weird, and I'm trying to make sense of it.

The idea is that these can evolve separately from the language; they are not tied to language versions. Contrast that with Python where you have the urllib urllib2 urllib3 issue; with people going and using requests anyway.

There's no inherent reason to put them in the stdlib aside from "other languages do it".

And yeah, folks don't want to commit to sticking them in the stdlib forever. If a better library turns up people can switch to that; without being tied to the language version. These libraries will still be maintained, but may no longer be the recommended way to do things. Being outside the stdlib gives some liquidity to the crate.

The Python urllib/urllib2 issue is a feature. It means any code that uses the old (inferior) urllib continues to work without having to update it, even as urllib2 starts being used within the same function so that it can be migrated incrementally. That's what promising stability means.

This applies to your last paragraph as well -- I do want to be using code that folks are trusting to be the way to do things forever (well, at least for a few years). I'm not trying to learn yet another CADT language / ecosystem.

Also note that both crates mentioned in this subthread (regex and rand) are at version 0.x; to a not-really-a-rust-person like me, it signals "experimental API, avoid avoid avoid". I assume that is the reason to start a concerted effort to get everything to 1.0 levels.

(I'm mostly commenting because the roadmap under discussion seems to be heading towards the sort of promised stability that I want to see; so having somebody known to be rather close to rust development saying the exact opposite rather unsettles me. Sorry.)

(regex is only pre 1.0 because its release candidate is in testing, it looks like 0.2 is gonna become 1.0 with no changes)

> It means any code that uses the old (inferior) urllib continues to work,

Rust lets you have multiple versions of a transitive dependency in your project, so they would continue to work.

What advantage would putting an RNG into the language bring? Are there any languages where an RNG is a language construct?

It relieves crates of version-coupling stress. I effectively can't release a 1.0 crate if I have rand as a public dependency; if it gets a new major version, I need one too, to keep in sync and do versioning right.

Rand is a vocabulary crate (traits Rng and Rand are used for inter-crate exchange), so it's very sensitive to version coupling.

Interesting, thanks. That does seem valuable; I'm not sure that rand is gonna shoot for a 2.0 any time soon after its 1.0, but that is something being in-tree would prevent.

"vocabulary traits" like this are a good candidate for the stdlib for exactly this reason, maybe a good compromise would be putting _those_ in the stdlib, but leaving the implementation outside.

> "vocabulary traits" like this are a good candidate for the stdlib for exactly this reason, maybe a good compromise would be putting _those_ in the stdlib, but leaving the implementation outside.

How would this interact with the orphan rule? Wouldn't this mean the vocabulary traits would have no default or std-library-type implementations and would have to be wrapped in newtypes, defeating the purpose of having them for inter-crate exchange? I guess you could wire types explicitly with Into/From in user code. Are there any traits unimplemented in std now that these vocab traits would look like?

It helps with the orphan rule, as everyone would be importing the trait from std. From/Into are the same kind of traits. That is, the idea is that the rand crate would implement std::rand::Random, and so would some rand2 crate, and then I could write my own crate saying

    fn random<R: std::rand::Random>(r: R)
and it would work with either one. No need for newtypes.
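To flesh out that hypothetical a little (the `Random` trait below is invented for illustration; nothing like it actually exists in std):

    // Hypothetical vocabulary trait living in std.
    trait Random {
        fn next_u64(&mut self) -> u64;
    }

    // One crate's generator...
    struct XorShift(u64);
    impl Random for XorShift {
        fn next_u64(&mut self) -> u64 {
            self.0 ^= self.0 << 13;
            self.0 ^= self.0 >> 7;
            self.0 ^= self.0 << 17;
            self.0
        }
    }

    // ...and downstream code written only against the trait, so any
    // implementation from any crate plugs in without newtype wrappers.
    fn roll<R: Random>(rng: &mut R) -> u64 {
        rng.next_u64() % 6 + 1
    }

    fn main() {
        let mut rng = XorShift(0x2545F4914F6CDD1D);
        println!("rolled {}", roll(&mut rng));
    }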

I also might be missing something.

stdlib or in a crate for just the trait similar to log?

The parent is worried about 2.0, so that would imply in stdlib. No reason it couldn't be done as a crate just for the trait as well, but that at least leaves the possibility of a 2.0 open.

Not worried about 2.0.

Rand is at version 0.3 and stagnating. I'm worried about 0.3, 0.4, 0.5. Those that have rand as a public dependency cannot release 1.0 now; if they did, they would need to release 2.0 when rand goes to 0.4, and so on.

I think they mean stdlib. RNG is not in the language for any language I can think of.

It's in BASIC, as RND(). The usual Basic RNG is good enough for games, but not good enough for crypto.

It's part of the core language semantics (arguably the core language semantics) for Java2K (http://p-nand-q.com/programming/languages/java2k/).
