
Everyone is going to have a different definition of "mature", and that's fine :) Obviously lots of respect for Kostya. I do think "maturity" is a good framing for these issues; that is, fundamentally, he's right. A lot of this stuff has to do with Rust being so young, and in the future it will be taken care of. I would argue that this is a significantly higher maturity bar than most people actually need, and that Rust is more mature in other areas and so may be ready for other people, but that's a different thing.

My take on the state of these issues:

> Rust does not have a formal language specification... I understand that adding new features is more important than documenting them but this is lame.

Most languages do not. It also really depends on what you mean by "formal."

It's not about being more important, it's that we value stability very strongly, and don't have the ability to document things with the guarantees we'd prefer. You might call it... not mature enough yet :)

There's been a bunch of movement here, I'm excited to see it continue to develop!

> Function/method calling convention. ... I’m told that newer versions of the compiler handle it just fine but the question still stands

The objection here doesn't actually have to do with calling conventions; it's about "two-phase borrowing," described in a series of blog posts ending here: http://smallcultfollowing.com/babysteps/blog/2017/03/01/nest...

I believe this will get even better with polonius https://nikomatsakis.github.io/rust-belt-rust-2019/

Regarding argument evaluation order: technically it is not yet documented https://github.com/rust-lang/reference/issues/248 but it has been left-to-right for basically forever https://internals.rust-lang.org/t/rust-expression-order-of-e... and I actually thought that it was documented as such. I would expect this to shake out the exact same way as struct field drop order, that is, something that's been one way for so long that we wouldn't change it even if it might be a good idea to.

> Traits... the problem is not me being stupid but rather the lack of formal description on how it’s done and why what I want is so hard. Then I’d probably at least be able to realize how I should change my code to work around the limitations.

Upcasting/downcasting is rarely used, and so has less love, generally, it's true.

> First of all, bootstrapping process is laughably bad.

So, Kostya acknowledges

> Of course it’s a huge waste of resources for rather theoretical problem but it may prove beneficial for compiler design itself.

Which is I think the way most feel about it. However, there is some desire to improve this, for other, related reasons. https://matklad.github.io/2020/09/12/rust-in-2021.html talks about some of them.

> Then there’s LLVM dependency.

While this stuff is all true, we wouldn't be where we are without it. Everything has upsides and downsides.

> And finally there’s a thing related to the previous problem. Rust has poor support for assembly.

He mentions asm; we're almost there! It took some time because it is not a simple problem. At the end of this, we'll have better support than C or C++ by his metrics; their inline assembly isn't part of the language standard either, so given his previous comments about maturity, I find this one a little weird, but it is what it is. :)

> There’s also one problem with Rust std library that I should mention too. It’s useless for interfacing OS.

Yes, the intention of the standard library is to be portable, so that's not really a goal.

> But the proper solution would be to support at least OS-specific syscall() in std crate

This may in fact be a good idea! I'm not sure how much use it would actually get.




> In C it’s undefined because it depends on how arguments are passed on the current platform (consider yourself lucky if you don’t remember __pascal or __stdcall). In Rust it’s undefined because there’s no format specification to tell you even that much.

>> Regarding argument evaluation order, technically it is not yet documented https://github.com/rust-lang/reference/issues/248 but has been left-to-right for basically forever https://internals.rust-lang.org/t/rust-expression-order-of-e... and I actually thought that it was documented as such.

I am aware the word "undefined" references the order of evaluation. However, I just want to clear up possible confusion on the matter. Code that depends on evaluation order doesn't produce undefined behaviour; it produces unspecified behaviour. (This is not a direct reply to Steve, who I am sure knows more about this than I ever will.)


And there is no guarantee that the order your compiler decides on has anything to do with the calling convention.


I agree in general, but I do feel like one counterargument needs to be brought up:

> Most languages do not (have a formal specification)

Most languages also aren't trying to replace languages that do. C and C++ are both languages that Rust, afaik rather officially, aims to replace in some areas. It can be a great language like so many others, but if it wants to replace these old giants, it needs a proper spec, maybe in the form of an ISO standard. Of course that will come in time, but that's a good indicator of when a language can compete as an answer to the question "what tech will we use for our next big, important and highly specialized project?".

This is the perfect indicator of it not being as mature as what it aims to replace.


I'd argue that when Rust is considered as an alternative in one of the domains that C and C++ are prevalent today, the fact that it does not have a formalized spec does actually hurt it today. For example: NVidia evaluated various languages to adopt for their "Safe Autonomous Driving" project. Rust was considered but didn't win. One of the reasons was literally:

"Does not have a formalized spec"

See [1] page 35.

[1] https://www.slideshare.net/AdaCore/securing-the-future-of-sa...


The nvidia thing is part of why I wrote my comment, too, yeah. Was really eye opening to the requirements that some big players have


Sure, though you can make an argument about the relative merits; it is possible that the heavyweight ISO process would have strangled Rust had we started there too early. I do agree that this is why "maturity" is a decent framing for this criticism; after all, C did not have a spec at this point in its life.

And also, about the invocation of "formal" there...


The difficulty in something like an ISO standard comes from conflicts between stake-holders, not really anything intrinsic to drafting a standard. In C's case, the problem is that many compilers are developed for C at cross-purposes to one-another; GCC and Clang want different things from C than embedded-cross-compiler toolchain authors do; than JIT authors do; than creators of "child" languages like Objective-C or OpenCL C do; etc. The "work" of C standardization is in getting these people to compromise.

Rust doesn't have that problem; there aren't yet any alternative Rust compilers that have any other purpose than to run as a batch-scheduled crate-at-a-time compile step at the command-line.

In such a case, where there's only one real stake-holder, "standardization" becomes less about declaring what should happen; and more about specifying what does happen, in exacting detail, such that someone could build an alternative conforming implementation from the spec without looking at the source of your reference implementation.

I don't feel like the existence of such a descriptive specification would have "strangled Rust" at any point. At most, this would have roughly doubled the work of any fix: writing the code, and then writing the change in the spec. But it wouldn't have actually been double the overall labor overhead, since the increased clarity-of-purpose of modifying the spec to declare a change in intention, would likely have mooted a lot of requesting-clarification and debating at code-review time.

But besides, software-engineering as a discipline now has tools like Behavior-Driven Development to minimize the costs of maintaining a parallel descriptive spec for a project. BDD tests are just regular tests that embed a lot of descriptive strings in them—those strings being words you are already mostly thinking at the time of writing the test. So they're only a little more costly than writing ordinary tests (which the Rust compiler already has), yet can also be compiled out into a descriptive spec. (And then you can diff the generated spec, between versions, and turn that diff into the spec errata for the "minor specification addendum" of that minor release.)


> The difficulty in something like an ISO standard comes from conflicts between stake-holders, not really anything intrinsic to drafting a standard.

Sort of, ISO has some rules that are antithetical to Rust's ethos, like requiring that conversations not be recorded. Rust's development chooses when to be public and when to be private where it makes sense.

I don't actually know if the "meet in person" aspect is a formal ISO rule or a peculiarity of the C and C++ committees, but that would be another vast difference that matters a lot. Especially at this historical moment.

> Rust doesn't have that problem;

We do have this problem, it's just not driven by compiler authors, but by the relevant stakeholders directly. The language team and the compiler team, while sharing some people, are separate.

> less about declaring what should happen; and more about specifying what does happen

This is not how the process plays out in Rust, though you're right that it could, if the compiler team wanted to act in bad faith.


C was almost 20 years old by the time it got a standard. It's clearly not that critical.


Is an OS-specific `syscall` at all useful if your OS isn't called "Linux"?


Depends on the OS. We also do include some specific things, see https://doc.rust-lang.org/stable/std/os/index.html and https://doc.rust-lang.org/stable/core/arch/index.html


Sure, platform-specific things can be useful. But that's exactly what `syscall()` is. As far as I'm aware, other platforms don't have any real equivalent. On other platforms syscalls must be made via libraries (as Go rather famously found out the hard way).


Yes, you're right that most platforms don't have something stable here. I was thinking of the problem more abstractly.


Don't most platforms have a reasonably stable interface for functions that are supposed to be reached from userspace? When does it actually matter whether that border coincides with where execution privileges are raised?


They do: the interface is (usually) libc (or some equivalent), not the actual details of how libc makes said call.

For an example of how this can play out, the parent is referring to things like https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/


On windows, it can talk to ntdll (which is the blessed interface to the kernel from userspace code).

Ditto libsystem on macos.

Freebsd also provides a stable syscall interface, like linux.


`syscall()` is a very thin wrapper around calling the kernel directly by number. ntdll and libsystem aren't equivalent; they are more similar to `libc`. Neither Windows nor macOS has a stable system call interface, so you have to dynamically link to the library and call through the stable library interface instead.

I can't find anything that guarantees FreeBSD's system call ABI. Do you have a source? It would be very interesting if, for example, FreeBSD 10 applications that use syscalls can run on modern FreeBSD without a compatibility layer. However, if the FreeBSD project does not provide guarantees it would be folly to rely on this behaviour in the future. If it does provide such a guarantee then I stand corrected.

EDIT: After some more research it seems the FreeBSD kernel needs the `COMPAT_FREEBSD10` option enabled for my hypothetical example to work. The default options for amd64 include compatibility options back to FreeBSD 4. Defaults for other platforms seem to differ (perhaps depending on when the platform was first supported).

I can't find good documentation on whether these provide full compatibility or whether any `COMPAT_` options could be dropped in future versions.


> ntdll and libsystem aren't equivalent, they are more similar to `libc`

I disagree.

System calls on linux comprise the interface provided to applications to talk to the kernel.

On windows, ntdll serves the same function--it is itself a 'very thin wrapper around calling the kernel directly by number'. (Especially important since a libc may be hard to come by on windows.)

(Libsystem it seems I was mistaken about; it looks like that's just a bundle of libc, libm, libpthread, etc. Though possibly libsystem_kernel is nearer the mark? Difficult to find information on the subject, and I don't have a mac.)


Well, I feel like one should focus on the more stable POSIX stuff and then also implement a wrapper for Windows.

That will work for most OSes out there, and Windows.


It's Linux-specific, not POSIX-specific.


Yeah, I get why e.g. GCC would want to bootstrap itself, but why would you want that in general? Also, writing the compiler in its own language would seem to me to just make debugging the compiler much harder...


You want to be able to have the trust chain go all the way back to a well-known "good" version. As rustc uses nightly features, the only (reliable) way to get the current version without breaking the chain is to compile every version with the prior version. This can be thought of as an "academic" problem, but some OS vendors do insist on doing this, and it is annoying and time-consuming.

Writing a compiler in its own language has a bunch of benefits.

Early on, it lets both evolve in tandem, even before you know what the language itself might be. Having real-world experience in a complex enough codebase for the language will inform some design decisions. Things that are hard to do might get ergonomic work poured into them, sharp edges filed down. Things that are too hard to implement or that might cause exponential evaluation might be redesigned to allow for a linear algorithm.

Later, having the compiler written in its own language is beneficial for contributors: people who use the language can jump in and help with the development of the compiler. This has the caveats of any large codebase, but it certainly was my case. I would go as far as saying that I really learned Rust through my rustc contributions. (BTW, doing that has the nice benefit of fixing your mental model of the language to actually match reality, instead of some approximation based on the documentation and observed behavior.)

Finally, setting up debugging scaffolding will be made a priority in order to debug the compiler, so even early users of the language will benefit from some tooling in that area, however crude it might be at the start.


Self-hosting is a kind of rite of passage for programming languages. Another reason to aim for self-hosting is that it means it's now viable to use only that language, for instance when targeting a new hardware platform: cross-compile initially, then self-host after. If you don't have a self-hosting language, you either always cross-compile or you port two languages.

That is: CRust (a hypothetical C-based Rust compiler) can be made to target XX99 hardware, but in order to run CRust on that hardware you also have to make the C compiler support it. Achieving self-hosting is rather important, especially for a language like Rust that targets low-level capabilities.



