
I am utterly thankful for new experience reports on Rust, especially for ones this well-written. Generally speaking, inaccuracies in such things are our fault, not the writers', due to a lack of documentation and/or good examples.

With that being said, a few notes:

> It runs about five times slower than the equivalent program

I'd be interested in hearing more about how these were benchmarked. On my machine, they both run in roughly the same time, with a degree of variance that makes them roughly equivalent. Some runs, the iterator version is faster.

It's common to forget to turn on optimizations, which _seriously_ impacts Rust's runtimes; LLVM can do wonders here. Generally speaking, if iterators are slower than a loop, that's a bug.
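For reference, here's a minimal sketch of the two styles being compared (the names are mine, not the article's). With optimizations on (`rustc -O` or `cargo build --release`), LLVM typically compiles both down to equivalent machine code:

```rust
// Iterator style: the range is consumed lazily and summed.
fn sum_iter(n: u64) -> u64 {
    (0..n).sum()
}

// Explicit-loop style: same computation, written out by hand.
fn sum_loop(n: u64) -> u64 {
    let mut total = 0;
    let mut i = 0;
    while i < n {
        total += i;
        i += 1;
    }
    total
}

fn main() {
    // Both produce the same result; in release mode they should
    // also produce comparable (often identical) code.
    assert_eq!(sum_iter(1000), sum_loop(1000));
}
```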

> Rust does not have tail-call optimization, or any facilities for marking functions as pure, so the compiler can’t do the sort of functional optimization that Haskell programmers have come to expect out of Scotland.

LLVM will sometimes turn on TCO, but messing with stack frames in a systems language is generally a no-no. We've reserved the 'become' keyword for the purpose of explicitly opting into TCO in the future, but we haven't been able to implement it because historically, LLVM had issues on some platforms. In the time since, it's gotten better, and the feature really just needs design work now.

Purity isn't as big of a deal in Rust as it is in other languages. We used to have it, but it wasn't very useful.

> But assignment in Rust is not a totally trivial topic.

Move semantics can seem strange coming from a non-systems background, but they're surprisingly important. We used to do things differently here: we required two operators for move vs. copy, but that wasn't very good, and we used to infer Copy, but that led to surprising errors at a distance. Explicitly opting into copy semantics ended up being the best option.
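A small sketch of the difference (types are mine, for illustration): a type is moved by default, and deriving Copy is the explicit opt-in.

```rust
// Copy is opted into explicitly via derive.
#[derive(Clone, Copy, Debug)]
struct Point { x: i32, y: i32 }

#[derive(Debug)]
struct Buffer { data: Vec<u8> }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p;          // copied: `p` is still usable afterwards
    assert_eq!(p.x + q.y, 3);

    let b = Buffer { data: vec![1, 2, 3] };
    let c = b;          // moved: `b` can no longer be used
    // println!("{:?}", b); // error[E0382]: use of moved value: `b`
    assert_eq!(c.data.len(), 3);
}
```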

> how that could ever be more useful than returning the newly-assigned rvalue.

Returning the rvalue leads to a universe of tricky errors; not returning it ends up being nicer. Furthermore, given something like `let (x, y) = (1, 2)`, what would the new rvalue even be? It's not as clear.
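Concretely (a sketch of my own, not from the article): assignment in Rust is an expression that evaluates to the unit type `()`, and with destructuring there's no single value it could sensibly return.

```rust
fn main() {
    let mut a = 0;
    let result = a = 5;     // an assignment expression has type `()`
    assert_eq!(result, ());
    assert_eq!(a, 5);

    // With destructuring there is no obvious single rvalue to return:
    let (x, y) = (1, 2);
    assert_eq!(x + y, 3);
}
```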

> I’ve always thought it should be up to the caller to say which functions they’d like inlined,

This is, in fact, the default. You can use the attributes to inform the optimizer of your wishes, if you want more control.

> It’s a perfectly valid code,

In this case it is, but generally speaking, aliasing &muts leads to problems like iterator invalidation, even in a single-threaded context.
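A sketch of the iterator-invalidation case (the helper function is mine): pushing to a `Vec` while iterating over it could reallocate the buffer out from under the iterator, which is exactly what aliasing a `&mut` would permit.

```rust
// Appends each element, doubled, to the end of the vector.
fn double_and_append(v: &mut Vec<i32>) {
    // Rejected by the borrow checker (error[E0502]):
    // for x in v.iter() {
    //     v.push(*x * 2); // mutable borrow while `v` is already borrowed
    // }

    // The safe version keeps the borrows in non-overlapping scopes:
    let doubled: Vec<i32> = v.iter().map(|x| x * 2).collect();
    v.extend(doubled);
}

fn main() {
    let mut v = vec![1, 2, 3];
    double_and_append(&mut v);
    assert_eq!(v, [1, 2, 3, 2, 4, 6]);
}
```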

> but the online documentation only lists the specific types at their five-layers-deep locations.

We have a bug open for this. It turns out that returning relevant search results is a Hard Problem, in a sense, but it's also the kind of papercut you can clean up after the language has stable semantics. Lots of work to do in this area, of course.

> Rust won’t read C header files, so you have to manually declare each function you want

The bindgen tool can help here.
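For context, the manual declaration looks something like this (my example, binding a libc function); bindgen can generate these blocks from a C header instead of you writing them by hand:

```rust
// Manual FFI declaration, the kind of thing bindgen automates.
extern "C" {
    // Corresponds to `int abs(int);` from <stdlib.h>.
    fn abs(x: i32) -> i32;
}

fn main() {
    // Calling a foreign function is unsafe: the compiler can't
    // verify the declaration matches the C definition.
    let y = unsafe { abs(-7) };
    assert_eq!(y, 7);
}
```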

> My initial belief was that a function that does something unsafe must, itself, be unsafe

This is true for unsafe functions, but not unsafe blocks. If unsafe were truly infectious in this way, all Rust code would be unsafe, and so it wouldn't be a useful feature. Unsafe blocks are intended to be safe to use, you're just verifying the invariants manually, rather than letting the compiler do it.
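A sketch of the distinction (functions are mine, for illustration): a safe function can wrap an `unsafe` block, taking on the obligation itself, while an `unsafe fn` pushes that obligation onto every caller.

```rust
// A safe function: the author verifies the invariant (index in
// bounds) that the compiler can't check through `get_unchecked`.
fn first_byte(v: &[u8]) -> Option<u8> {
    if v.is_empty() {
        None
    } else {
        // Safe: we just checked the slice is non-empty.
        Some(unsafe { *v.get_unchecked(0) })
    }
}

// An `unsafe fn` instead makes every caller promise the invariant.
unsafe fn first_byte_unchecked(v: &[u8]) -> u8 {
    *v.get_unchecked(0)
}

fn main() {
    assert_eq!(first_byte(&[42, 1]), Some(42));
    assert_eq!(first_byte(&[]), None);
    assert_eq!(unsafe { first_byte_unchecked(&[9]) }, 9);
}
```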

> but until a few days ago, Cargo didn’t understand linker flags,

This is not actually true; see http://doc.crates.io/build-script.html for more.

> the designers got rid of it (@T) in the interest of simplifying the language

This is sort of true, and sort of not. @T and ~T were removed to simplify the language; we didn't want language support for these two types. @T's replacement type, Gc<T>, was deemed not actually useful in practice, and so was removed, as all non-useful features should be.

In the future, we may still end up with a garbage collected type, but Gc<T> was not it.

> Rust’s memory is essentially reference-counted at compile-time, rather than run-time, with a constraint that the refcount cannot exceed 1.

This is not strictly true, though it's a pretty decent starting point. Strictly speaking, at the language level, you may have either any number of shared (&T) references OR exactly one mutable (&mut T) reference at a given time. Library types which use `unsafe` internally can provide more complex structures that give you more complex options.
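For example (my sketch, not from the article), `Rc<RefCell<T>>` is one such library type: it uses `unsafe` internally to move the aliasing-XOR-mutability check from compile time to runtime.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Two handles to the same value: shared ownership via Rc,
    // dynamically-checked mutation via RefCell.
    let shared = Rc::new(RefCell::new(vec![1, 2]));
    let alias = shared.clone();

    alias.borrow_mut().push(3);              // mutate through one handle
    assert_eq!(*shared.borrow(), vec![1, 2, 3]); // observe through the other
}
```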

That's at least my initial thoughts. Once again, these kinds of reports are invaluable to us, as it helps us know how we can help people understand Rust better.




Another thing that should probably be clarified is that the author diagnosed the problem with the `let y = x; let z = x;` code incorrectly (assuming that Rust is creating 'bindings' to the same value), which (not having actively programmed Rust for a while) greatly alarmed me, because that would be a terrible idea. What is in fact happening is that `x`'s value is _moved_ into `y`, which is a lot easier to think about.


> but messing with stack frames in a systems language is generally a no-no.

Could you expand on this? Optimising away a stack frame that lies on the border of some security barrier would obviously be Bad News, but what other specific problems are there? Conversely, it seems there are some possible benefits to TCO in a systems language: I'm thinking of those secure-C coding standards which (apparently) tend to ban recursion for fear of stack overflow.


LLVM does do sibling call optimization, which allows for TCO in many common cases, including all cases of a function tail calling itself (but note that RAII makes the definition of tail position subtler than it may seem at first glance).
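To illustrate the RAII subtlety (my sketch): a call that looks like it's in tail position may not actually be, because destructors for locals must run after the call returns.

```rust
struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // Cleanup that must happen when the frame is torn down.
    }
}

fn countdown(n: u32) -> u32 {
    let _g = Guard;
    if n == 0 {
        n
    } else {
        // Looks like a tail call, but `_g`'s destructor must run
        // *after* `countdown(n - 1)` returns, so this frame can't
        // simply be reused.
        countdown(n - 1)
    }
}

fn main() {
    assert_eq!(countdown(3), 0);
}
```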


> (but note that RAII makes the definition of tail position subtler than it may seem at first glance)

Like http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuat... this?


> Purity isn't as big of a deal in Rust as it is in other languages. We used to have it, but it wasn't very useful.

Right, purity aka referential transparency is basically required when you have lazy evaluation by default (as in Haskell).

Since Rust is a strictly evaluated language, it's easy to reason about the order statements and expressions will be executed in and when side effects will happen, so purity is not generally necessary.


Purity is not just about laziness.

It helps with things like concurrency, parallelism, equational reasoning, refactoring, understanding APIs, and more.


w.r.t. concurrency:

As you know, the difficulty with concurrency is with shared mutable state. Purity simplifies concurrency by restricting mutability; rust simplifies concurrency by restricting sharing.

That makes purity less important for rust.
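A sketch of what "restricting sharing" looks like in practice (the function is mine): moving a value into a spawned thread transfers ownership, so the spawning thread can't race on it.

```rust
use std::thread;

fn sum_in_thread(data: Vec<i32>) -> i32 {
    // `data` moves into the closure, and then into the spawned
    // thread; the caller can no longer touch it, so there's
    // nothing left to race on.
    thread::spawn(move || data.iter().sum()).join().unwrap()
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(sum_in_thread(v), 6);
    // println!("{:?}", v); // error[E0382]: use of moved value: `v`
}
```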


Purity is also not the same as immutability. Clojure has immutable data and impure functions, and it also offers good, safe concurrency features. Purity helps with other things as the parent comment mentioned, but it's not necessarily tied to immutability. But I maintain that purity is necessary in the presence of laziness, because it's terribly confusing to reason about side effects when you're building up thunks everywhere.


Can you sprinkle parallelism annotations and have guarantees about semantics not changing while gaining parallel execution?

Also, note concurrency and parallelism are two of many interesting benefits of purity.

I'll also add unit tests and property testing which are much nicer with purity.


> > It runs about five times slower than the equivalent program

> I'd be interested in hearing more about how these were benchmarked. On my machine, they both run in roughly the same time, with a degree of variance that makes them roughly equivalent. Some runs, the iterator version is faster. It's common to forget to turn on optimizations, which _seriously_ impacts Rust's runtimes; LLVM can do wonders here. Generally speaking, if iterators are slower than a loop, that's a bug.

In my novice benchmark I found similar results as OP.

  running 6 tests
  test for_range_100   ... bench:        89 ns/iter (+/- 2)
  test for_range_1000  ... bench:       929 ns/iter (+/- 98)
  test for_range_10000 ... bench:      8815 ns/iter (+/- 414)
  test for_while_100   ... bench:        36 ns/iter (+/- 3)
  test for_while_1000  ... bench:       294 ns/iter (+/- 27)
  test for_while_10000 ... bench:      2768 ns/iter (+/- 268)

  test result: ok. 0 passed; 0 failed; 0 ignored; 6 measured

https://gist.github.com/simonz05/afd76c549d6c8afb8081


That is not the same test as the OP's. It's not even the same test between the two different algorithms, since you hide different information from LLVM under different circumstances. If you have to put `test::black_box` everywhere to get anything but zeroes, the only thing you can really conclude is that LLVM is better at optimizing than you are at writing microbenchmarks (I'll agree it can be frustrating at times).
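To make the pitfall concrete (my sketch, using `std::hint::black_box`, the stable descendant of the unstable `test::black_box`): if you don't hide the inputs and outputs from LLVM, the whole loop can be constant-folded away and the "benchmark" measures nothing.

```rust
use std::hint::black_box;

fn sum_to(n: u64) -> u64 {
    let mut total = 0;
    for i in 0..n {
        total += i;
    }
    total
}

fn main() {
    // Without black_box, LLVM may compute this entirely at compile
    // time, so the measured function body would be empty.
    let n = black_box(10_000u64);
    let result = black_box(sum_to(n));
    assert_eq!(result, 49_995_000);
}
```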


> This is, in fact, the default. You can use the attributes to inform the optimizer of your wishes, if you want more control.

Isn't the default that the optimiser will do whatever the hell it wants, and the attributes simply skew the optimiser's factors in one direction or another? I think what the author means here is that the caller function should be able to define whether the callee should be inlined or not.

> The bindgen tool can help here.

Would be really useful to have an implicit bindgen thing. Maybe a compiler plugin using e.g. Clang's C parser? That way there's no need to maintain the binding. I'd say I'd like a header generator more than a reader though.


Bindgen has a compiler plugin too: https://github.com/crabtw/rust-bindgen#macro-usage

Also, FYI, https://github.com/rust-lang/rust/issues/10530 covers taking a Rust lib and generating a C header file.


Maybe I misunderstood what the parent wants, but you're right that the optimizer can do as it pleases, and you can use annotations to help it make the right decision.

An 'implicit' tool may in fact be cool. Bindgen isn't perfect, and its output needs tweaking in many cases, so the current state is pretty good, but for easier cases and/or when you don't care, I can see such a thing being useful.


> Maybe I misunderstood what the parent wants, but you're right that the optimizer can do as it pleases, and you can use annotations to help it make the right decision.

I understand TFAA's request to be a callsite annotation, which currently does not exist, e.g.

    inline foo()
to force inlining or

    noinline bar()
to prevent it, probably with the first one erroring out if the call is not inlinable.


I believe #[inline(always)] and #[inline(never)] both work like this.


According to steveklabnik here[1], those are on the definition, not the callsite, which is the distinction here. Although from other info here, it sounds like having it on the definition is a prerequisite in some cases if you wanted to somehow specify it for the callsite, as it needs to be serialized in the crate metadata to be inline-able, and that's controlled somewhat by whether it was defined as inline capable.

1: https://news.ycombinator.com/item?id=9548248


But those annotations are for the function definition instead of the call site, or are there call site annotations as well? I think the author's idea is analogous to the user-defined (instead of type-defined) move/copy semantics.


They're on the function definition, yes. There aren't ones at the call site.
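For reference, a minimal sketch of what does exist today: the attributes annotate the definition, and every call site is then treated the same way.

```rust
// Definition-site hints/directives to the optimizer; there is no
// per-call-site equivalent.
#[inline(always)]
fn square(x: i32) -> i32 {
    x * x
}

#[inline(never)]
fn cube(x: i32) -> i32 {
    x * x * x
}

fn main() {
    // Both calls inherit the behavior chosen at the definition.
    assert_eq!(square(3), 9);
    assert_eq!(cube(3), 27);
}
```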


As someone who just looked this up to confirm it and reply to someone else here, I found the docs[1] fairly ambiguous in this respect. Since you're soliciting feedback on the docs, that's a good spot to clarify. :)

P.S. I considered submitting a PR, but I don't know enough about rust yet to accurately phrase it.

1: https://doc.rust-lang.org/reference.html#inline-attributes


Then I'm left wondering how you interpreted the sentence you quoted and replied to...?

> I’ve always thought it should be up to the caller to say which functions they’d like inlined,


I am ridiculously tired, after burning the candle from both ends for the last few weeks to get this release shipped. In the last four days, I've written almost 1700 lines of docs. I make mistakes sometimes :)


Bindgen uses libclang. It's also quite slow.



