
> unsafe and passing pointers everywhere?

Do you think this would impact Rust build times?

> Reimplementing vectors and linked lists.

Someone has to implement them.

Note that they're also implemented in the C++ code, so the comparison is fair.

> Also, makes use of a ton of macros.

Can you show an example where the Rust code uses "a ton of macros"? The only place I can think of is the tests, but I thought macros were what you're supposed to use in Rust for assertions.
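
For anyone unfamiliar: assert! and assert_eq! are the standard macros used in Rust tests. A minimal, generic example (not code from this project; runs under cargo test):

    #[cfg(test)]
    mod tests {
        #[test]
        fn addition_works() {
            // assert! and assert_eq! are the idiomatic test assertions.
            assert_eq!(2 + 2, 4);
            assert!("hello".starts_with("he"));
        }
    }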

If you're referring to proc macros, I thought those were very popular in Rust.




> Do you think this would impact Rust build times?

Perhaps. Because the result is so un-idiomatic, it's unlikely to benefit from tweaks conceived for idiomatic Rust code. The Rust team uses Crater runs (re-compiling all of the public crates) to catch regressions from compiler changes, so if the code you write is far enough off the beaten track, there's no reason the compiler would get optimised around it.

Making your own vector seems like it'd be fairly up there.

> Someone has to implement them. Note that they're also implemented in the C++ code, so the comparison is fair.

I'm not sure I buy this argument in code like yours which almost invariably should just use the standard library and definitely needs to measure before replacing standard library features with your own hand-rolled equivalents. As I understand it, your concern in this work was the development experience, and having a good standard library to lean on is a huge boon for development.

For example, while strolling around this code I ran into sorted_search. But isn't this just [&str]::binary_search() except written by hand?
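
For reference, the standard-library call being described looks like this (illustrative only, not this project's code):

    fn main() {
        // binary_search works on any sorted slice whose elements are Ord.
        let keys = ["alpha", "beta", "gamma"]; // must already be sorted
        assert_eq!(keys.binary_search(&"beta"), Ok(1));  // found at index 1
        assert!(keys.binary_search(&"delta").is_err());  // Err(insertion point)
    }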

Actually, the fact that there are "linked lists" and yet there's no concurrency sets off alarms in my head. In highly concurrent software there are a bunch of clever lock-free algorithms for linked lists, so if you need that, you need it. But if you don't, a linked list is usually going to be a mistake, because on modern hardware its performance is abysmal.

Usually this is associated with C programmers who don't know any better, but you've got other data structures, so why linked lists?


>> Someone has to implement them. Note that they're also implemented in the C++ code, so the comparison is fair.

> I'm not sure I buy this argument in code like yours which almost invariably should just use the standard library and definitely needs to measure before replacing standard library features with your own hand-rolled equivalents.

I did for the C++ code. And for a fair comparison, I ported my C++ vector to Rust.

If I made the C++ code use a custom vector and the Rust code use the standard vector, then people would complain that the Rust code was artificially shorter.

> For example, while strolling around this code I ran into sorted_search. But isn't this just [&str]::binary_search() except written by hand?

Yes. But the standard binary_search isn't `const`, and mine is. I need to run my `sorted_search` at compile time to map translatable strings to magic numbers. (The C++ code does this too.)
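
A minimal sketch of what a const-evaluable binary search over sorted string keys could look like on reasonably recent Rust (1.57+); the names and the helper comparison here are illustrative, not the actual sorted_search from the project:

    use core::cmp::Ordering;

    // &str comparison isn't const-callable, so compare bytes by hand.
    const fn cmp_str(a: &str, b: &str) -> Ordering {
        let a = a.as_bytes();
        let b = b.as_bytes();
        let mut i = 0;
        while i < a.len() && i < b.len() {
            if a[i] < b[i] { return Ordering::Less; }
            if a[i] > b[i] { return Ordering::Greater; }
            i += 1;
        }
        if a.len() < b.len() { Ordering::Less }
        else if a.len() > b.len() { Ordering::Greater }
        else { Ordering::Equal }
    }

    // Binary search that can run at compile time.
    const fn sorted_search(keys: &[&str], needle: &str) -> Option<usize> {
        let mut lo = 0;
        let mut hi = keys.len();
        while lo < hi {
            let mid = lo + (hi - lo) / 2;
            match cmp_str(keys[mid], needle) {
                Ordering::Less => lo = mid + 1,
                Ordering::Greater => hi = mid,
                Ordering::Equal => return Some(mid),
            }
        }
        None
    }

    const KEYS: &[&str] = &["alpha", "beta", "gamma"]; // must be sorted
    const BETA_ID: usize = match sorted_search(KEYS, "beta") {
        Some(index) => index,
        None => panic!("unknown translatable string"),
    };

    fn main() {
        assert_eq!(BETA_ID, 1); // computed entirely at compile time
    }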

> Actually, the fact that there are "linked lists" and yet there's no concurrency sets off alarms in my head. [...] why linked lists?

The linked lists are actually linked lists of arrays. They are not grade-school linked lists with one item per node. In the C++ code, the classes are linked_bump_allocator (a memory arena) and linked_vector (a deque-style container).
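
For readers who haven't seen the pattern, a "linked list of arrays" looks roughly like this. This is a sketch only, not the project's linked_vector, and the chunk size is invented for the example:

    // One allocation per chunk of items, rather than one allocation per item.
    const CHUNK_CAPACITY: usize = 64;

    struct Chunk<T> {
        items: Vec<T>,                 // holds up to CHUNK_CAPACITY items
        next: Option<Box<Chunk<T>>>,   // link to the previously filled chunk
    }

    struct ChunkedList<T> {
        head: Option<Box<Chunk<T>>>,
    }

    impl<T> ChunkedList<T> {
        fn new() -> Self {
            Self { head: None }
        }

        // Push into the current chunk, starting a fresh chunk when it fills up.
        // (A deque-style container would also keep a tail pointer so iteration
        // preserves insertion order; that is omitted here.)
        fn push(&mut self, value: T) {
            let full = match &self.head {
                Some(chunk) => chunk.items.len() >= CHUNK_CAPACITY,
                None => true,
            };
            if full {
                let mut items = Vec::with_capacity(CHUNK_CAPACITY);
                items.push(value);
                self.head = Some(Box::new(Chunk { items, next: self.head.take() }));
            } else {
                self.head.as_mut().unwrap().items.push(value);
            }
        }

        fn chunk_count(&self) -> usize {
            std::iter::successors(self.head.as_deref(), |c| c.next.as_deref()).count()
        }
    }

    fn main() {
        let mut list = ChunkedList::new();
        for i in 0..200 {
            list.push(i);
        }
        assert_eq!(list.chunk_count(), 4); // 200 items / 64 per chunk => 4 chunks
    }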

I wrote linked_bump_allocator so I could use an arena allocator for parser ASTs, temporary strings, etc. I originally used Boost's monotonic allocator, but I wanted a rewind feature, so I wrote my own implementation.
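
And a toy sketch of the mark/rewind idea on top of a bump arena; this is not the project's linked_bump_allocator, the API names are invented, and a real arena would hand out typed allocations and chain chunks rather than using one Vec<u8>:

    // mark() records the high-water mark and rewind() throws away everything
    // allocated after it, with no per-item frees.
    struct BumpArena {
        storage: Vec<u8>,
    }

    struct RewindMarker {
        len: usize,
    }

    impl BumpArena {
        fn with_capacity(capacity: usize) -> Self {
            Self { storage: Vec::with_capacity(capacity) }
        }

        // Copy bytes into the arena and return their offset.
        fn alloc_bytes(&mut self, bytes: &[u8]) -> usize {
            let offset = self.storage.len();
            self.storage.extend_from_slice(bytes);
            offset
        }

        fn mark(&self) -> RewindMarker {
            RewindMarker { len: self.storage.len() }
        }

        // Free all allocations made since `marker` in one step.
        fn rewind(&mut self, marker: RewindMarker) {
            self.storage.truncate(marker.len);
        }
    }

    fn main() {
        let mut arena = BumpArena::with_capacity(4096);
        let marker = arena.mark();
        arena.alloc_bytes(b"temporary string built during parsing");
        arena.rewind(marker); // the temporaries are gone
        assert_eq!(arena.storage.len(), 0);
    }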

I wrote linked_vector only because std::deque took too long to compile. (I'm not kidding.) So I could have easily used Rust's standard deque.


It really depends on what you’re trying to benchmark here.

If you’re benchmarking the two compilers on line-for-line equivalent code, you’ve done a really good job, and people shouldn’t complain about the length of the Rust code.

> then people would complain that the Rust code was artificially shorter.

If you’re trying to benchmark Rust development vs C++ development, then people cannot complain that the Rust code is “idiomatically shorter”.

Arguably, right now your benchmark has the Rust code “artificially longer”, which is fine if that is what you’re intentionally trying to measure.


> If I made the C++ code use a custom vector and the Rust code use the standard vector, then people would complain that the Rust code was artificially shorter.

People would also complain that even though you claim “my C++ code is longer than Rust but compiles just as fast”, you’re not factoring in the extra things Rust’s versions do.


> If you're referring to proc macros, I thought those were very popular in Rust.

AFAIK they are very popular but also not recommended unless necessary, as they increase build times considerably.

Macros tend to be abused in every language that has them, and the recommendation, whatever the language, is to use no more macros than necessary.


I gave up only a little beyond where they revealed that they ported line-by-line. I think this is an interesting experiment for those considering a Rust rewrite, but it has no utility for a greenfield project. The scales are tipped strongly in favour of C++ here, without a doubt. Software designed for C++ has essentially zero chance of becoming idiomatic Rust without a huge amount of work, and tooling tends to be designed for idiomatic use.


> I gave up only a little beyond where they revealed that they ported line-by-line.

The commenter you are so dismissively replying to is the author. He did not "give up only a little beyond something" like you did; he made a serious effort that probably took hundreds of hours, and documented it. I am not sure why you are posting a comment so full of disrespect for his work after reading a couple of paragraphs of his article.


I didn't mean to be dismissive. Like I said, I think it's useful for somebody. I just don't think it's useful to me.


But as another commenter pointed out, more idiomatic Rust == more borrow checking, and less unsafe == more checks.

Is there really reason to believe that would work in favour of Rust build times?

Ok, the proc macros, you say... but those were also commented on, and they're not unidiomatic either, are they?


Why would a project need to become "idiomatic Rust" for a Rust port to be useful?


To be useful as a comparison between implementation languages, it should be idiomatic. Otherwise it just tells you what it's like to transliterate one language into another, something which is usually only done for interop purposes - running on a platform which doesn't have a native compiler, or to enable native-level APIs when foreign function support is poor.

Code size after transliteration is a race the target language can seldom win; abstractions that also exist (or close enough) in the target language get translated 1:1, abstractions which don't get broken down 1:n, and abstractions which exist in the target language but don't have analogues in the source language generally don't get used at all. It's hard to end up with a shorter program following this approach. Only boilerplate with redundant repetition and ceremony - code which has little functional effect other than to satisfy declaration order rules, symbol visibility and so on - can get eliminated.


My wording may have been unclear. I have no doubt you had good technical justification for this port. I'm purely talking about the experimental results regarding builds and toolchains here. For people starting a new project rather than porting, I don't think this is a useful data point.


To avoid being hounded by the purity police, of course!



