Hacker News

> there is no reason to replace C with Rust.

If you have some reason to choose C today, I'm not sure Rust is a contender to stand in as a replacement. They are vastly different languages. That isn't a denial that C's memory model is unsafe; it just means there's no recognized "better C" to choose from.




IMHO Rust could be a good replacement for C when (if) it gets an actual spec, or at least when (if) there are multiple real implementations of the compiler (meaning, with borrow checker, compiler errors, and all).

Because otherwise, you could write Rust 2018 edition code (for example), and nothing prevents a situation where your currently valid Rust-2018-edition program stops compiling (or behaves differently) on a future version of rustc, even when compiling for that specific edition, just because some behavior is later deemed a compiler bug or something. And such breakage would be perfectly valid, because there's no definition of "what is Rust" other than "what this specific compiler, in this specific version, accepts".
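To make the edition mechanics concrete, here is a sketch (the file name and function name are invented): `async` is an ordinary identifier in the 2015 edition but a reserved keyword from 2018 onward, and rustc pins that behavior to whatever edition a crate declares.

```rust
// `async` is a plain identifier in the 2015 edition but a keyword from
// 2018 onward, so `let async = 1;` compiles only under the older edition:
//
//   rustc --edition 2015 demo.rs   # accepted
//   rustc --edition 2018 demo.rs   # error: `async` is a reserved keyword
//
// Raw identifiers work in every edition and sidestep the clash:
fn edition_demo() -> u32 {
    let r#async = 1; // `r#` prefix treats the keyword as a name
    r#async
}

fn main() {
    assert_eq!(edition_demo(), 1);
}
```

The worry above is exactly that nothing outside rustc itself defines where the line between "edition guarantee" and "fixable compiler bug" falls.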

I'm not talking about whether such a backwards-incompatible change in a Rust edition would be good or bad. Just that, with the current monoculture around a single implementation, there aren't many guarantees.

A specification, or multiple implementations, would at least add some (good) friction.

I'm not denying its current value as a safe language; just that, as it is now, I don't consider it as a good choice for programs that must stay alive for decades, like `curl` or an operating system.


> need multiple implementations

This thinking is understandable, but it's basically pattern matching: there were two successful languages that followed this model, so a language that seeks to replace them should adopt the same model of design by committee followed by multiple independent implementations.

But there's no reason to think that a language needs multiple implementations to succeed. Python has had one implementation used by all its users, and that hasn't prevented it from becoming a top-3 language by usage. Ditto for other very popular languages like Go, Ruby, TypeScript, and C#.

I don’t understand the concern that your code will stop compiling with a future version of the compiler. Firstly, Rust hasn’t broken backwards compatibility in 8 years and doesn’t plan to in future. Secondly, why do you need to upgrade the compiler? If it works on the current version you can continue to use that in perpetuity.

Honestly, having multiple implementations would be a curse. I see nothing but negatives. Even the smallest proposals (like embedding data in a binary) take a decade to get buy in and 3-4 years to be implemented. Convincing everyone of the benefit of a feature is just too difficult so a lot of good ideas are never put forward, let alone implemented.

And then there's the difficulty of getting code working across compilers and operating systems. How hard is it to write C++20 code that will compile with all major compilers on all major OSs? In Rust it's trivial. To put it simply: it's trivial to answer the question "is this Rust code valid?" If rustc compiles it, it's valid today and will always be valid (modulo bugs).

You want alternate implementations? There's a WIP: gcc-rs. But thankfully they haven't forked the semantics of the language. It should compile exactly the code that rustc compiles and reject exactly what rustc rejects; anything else is a bug in gcc-rs, and anything else would make the lives of Rust users painful.

In summary, multiple implementations are a bad idea for many reasons. If you can express a benefit beyond "that's the way C and C++ do it", I'm eager to listen. But please also explain how Python and Go managed to succeed with one implementation.


The trouble with a single, unstandardised implementation is that it's only available on the OSs and CPUs that the lone implementation developers choose to target.

By creating a standard, and allowing multiple implementations, someone creating a new OS or a new CPU architecture can create their own implementation of the language without having to wait for anyone else to deign to create the port.

Maybe a new implementation won't compile very fast, or produce great code, or it might not even catch all the incorrect constructs that it ought to. But if it can make all the existing code written in that language suddenly available for the new platform, that platform instantly becomes orders of magnitude more useful and powerful.

> How hard is it to write C++20 code that will compile with all major compilers on all major OSs?

That depends on the code. You want to compile Firefox? You might be out of luck.

You want to compile curl? Based on how portable it already is, chances are it might work out of the box once you've got a libc implementation working.

You want to compile Gnu bash and coreutils, to get a basic Posix shell environment up and running? It might be a bit of work, but easier than having to reimplement a whole userland from scratch.

> But please explain how Python and Go managed to succeed with one impl.

It's not that the language can't succeed - at least, not on the popular platforms that the devs are interested in targeting. It's that those devs get to dictate which platforms are even capable of making use of software written in those languages.

And that just rubs me the wrong way.


>By creating a standard, and allowing multiple implementations, someone creating a new OS or a new CPU architecture can create their own implementation of the language without having to wait for anyone else to deign to create the port.

Why does that require multiple implementations? There's nothing stopping said party from adding support for their OS/arch to rustc directly today. There are multiple examples of this already: Fuchsia, Sony PSP, Nintendo Switch, etc. [1]. Now, if you want support in rustc itself out of the box, this does require LLVM to support your target as well. But even then, nothing is stopping you from adding LLVM support too. The avr-rust project, for instance, maintained an LLVM/rustc fork for a while before those patches were upstreamed.

[1] See the wide variety of targets with varying support today: https://doc.rust-lang.org/nightly/rustc/platform-support.htm...


Off the top of my head? Maybe you want to target a small system with a native compiler, so users of those systems don't have to cross-compile, but the standard implementation is just too big to run on those systems? (e.g. tcc)

Or maybe you don't want your work to be redistributed under the original project's license, and would prefer to have a copyleft/permissive/proprietary licensed version instead, which can still call itself a "real" implementation. (e.g. mono or gcj/openjdk. Or gcc, originally.)


> It's that those devs get to dictate which platforms are even capable of making use of the software written in those languages.

Adding other tiers is welcomed. Take a look at the supported platforms (https://doc.rust-lang.org/nightly/rustc/platform-support.htm...) and see if there's a platform you'd like to target that isn't a tier 3 target at least. There's HaikuOS, PlayStation1 etc. I'm on a Tier 2 platform myself and I'm really happy.

If you'd like even more platforms, there's an ongoing project to add a GCC backend to rustc (https://blog.antoyo.xyz/rustc_codegen_gcc-progress-report-20).

> if it can make all the existing code written in that language suddenly available for the new platform, that platform instantly becomes orders of magnitude more useful and powerful.

I agree with this, but creating a frontend that can actually compile all existing code correctly is a massive task. If the compiler is modular enough, you should be able to contribute just a backend rather than reimplementing the frontend. I'm strongly against a frontend-only reimplementation, because then it's no longer clear what "valid" Rust code is.

In summary - I completely agree with the importance of targeting niche platforms. I think supporting multiple backends (LLVM now, libgccjit and Cranelift in future) gets us there without fracturing the ecosystem.

> but easier than having to reimplement a whole userland from scratch.

Probably. But once it's reimplemented (https://github.com/uutils/coreutils), I think it's really cool that you can get binaries for Linux, macOS, and Windows with one build command.


Taking your message in good faith:

I won't deny it's pattern matching, but it has less to do with what C did, and more with what history has been. I've come to distrust too much centralization on important things.

Take Node.js as an example. It started controlled by a single company, then thanks to the community a Node.js Foundation was formed with an open governance model. But now that Node.js Foundation no longer exists, instead it's controlled by the OpenJS foundation, which is basically controlled by big corp.

I don't want something similar to happen to Rust.

That said, I don't mind Rust as it is today for user applications. And I happily use languages that have a BDFL and that are even less sound than Rust, so the problem is not Rust's success (that can't be denied).

But I honestly think that if Rust wants to replace C (or even SPARK) in critical systems, and make universities eager to teach it, it would be easier to play by the industry's rules than to offer a "black box" ("whatever this compiler accepts"), for lack of a better term.

Also, as I pointed out in a different message (you might have missed it), I'm aware of gccrs and I'm looking forward to it reaching feature parity with rustc; I also appreciate the effort being put in by the Ferrocene[1] people, because I think it will help Rust get wider adoption in the industry.

[1]: https://github.com/ferrocene


C# has an ECMA standard and several implementations.


I was surprised when I found out that Ruby is ISO/IEC 30170:2012 [1].

[1]: https://www.iso.org/standard/59579.html


Rust is not a C replacement, it is a C++ replacement.

Many of the people writing code in C do not want to use C++.

And many of the embedded use cases for C already involve very careful memory usage. If you've banned malloc and don't have threads, you've removed a huge swath of common bug sources right off the bat.

For someone writing code to set up registers on a DMA controller, Rust has nearly zero benefit.
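That claim can be sketched concretely. Below is a hedged illustration with an entirely hypothetical two-register DMA controller: on real hardware the registers would be fixed MMIO addresses, but here they point into a local array so the example runs anywhere. The register layout, names, and "start" bit are all invented.

```rust
use core::ptr::{read_volatile, write_volatile};

// Hypothetical DMA controller: regs[0] = source address, regs[1] = control.
// Register setup is raw volatile pointer writes in Rust just as in C;
// Rust additionally requires an `unsafe` block, but the safety argument
// the programmer must make is essentially the same in both languages.
fn start_dma(regs: &mut [u32; 2], src: u32) -> u32 {
    let src_reg = &mut regs[0] as *mut u32;
    let ctrl_reg = &mut regs[1] as *mut u32;
    unsafe {
        write_volatile(src_reg, src); // hypothetical source-address register
        write_volatile(ctrl_reg, 1);  // hypothetical "start transfer" bit
        read_volatile(ctrl_reg)       // read back the control register
    }
}

fn main() {
    let mut fake_mmio = [0u32; 2]; // stand-in for a memory-mapped region
    assert_eq!(start_dma(&mut fake_mmio, 0x2000_0000), 1);
    assert_eq!(fake_mmio[0], 0x2000_0000);
}
```

Since the whole body sits inside `unsafe`, the borrow checker has little to add here, which is the commenter's point.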

(That said, no one should be writing new command line utilities in C, parsing arbitrary data in C is dangerous!)
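The parenthetical can also be made concrete with a sketch (the length-prefixed format and the function name are invented): in C, an attacker-controlled length field can walk past the end of a buffer, whereas Rust's checked slicing returns `None` instead of reading out of bounds.

```rust
// Parse a record whose first byte declares the payload length.
// `get` performs a bounds-checked slice: if the claimed length exceeds
// the available data, we get None rather than an out-of-bounds read.
fn parse_record(input: &[u8]) -> Option<&[u8]> {
    let len = *input.first()? as usize; // first byte = payload length
    input.get(1..1 + len)               // checked slice of the payload
}

fn main() {
    assert_eq!(parse_record(&[3, b'a', b'b', b'c']), Some(&b"abc"[..]));
    assert_eq!(parse_record(&[9, b'a']), None); // length lies about the data
}
```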


Not everyone agrees with this characterization, including some well known (now former) C developers: http://dtrace.org/blogs/bmc/2018/09/18/falling-in-love-with-...


Excellent post, thanks for linking. The arguments made for replacing C with Rust seem sound.


>just because some behavior is (in the future) considered a compiler bug or something

Unsound behavior is considered a compiler bug, and is liable to change if you accidentally rely on it. I would hope that having multiple compilers won't change that. The motivation to actually fix soundness holes is one of the primary differentiators of Rust.

That said, these are very rare.


I would even expect that having multiple implementations would help catch small things that might have been missed in the reference implementation.

I'm looking forward to Ferrocene's specification and the gccrs implementation.




