Rust things I miss in C (gnome.org)
335 points by heinrich5991 on Feb 19, 2018 | 162 comments



Tangential, but if you're looking for a project to do as you learn Rust, there's an ongoing Operating Systems class being taught at Stanford in Rust right now[1]. The "Ferris Wheel" portion of assignment #1 was particularly useful for language knowledge; 25 source files which you must make compile/not compile/pass tests (along the lines of the Ruby/Kotlin Koans), along with a test harness you can run locally.

[1]: https://web.stanford.edu/class/cs140e/syllabus/


I'm definitely interested in giving it a try. Do you know if the Pi kit they are using is sold as a complete package somewhere? I found a list of components in the Assignment 0 description, but that's it.


Bought a Pi 3 starter kit from a local place near me. It came with:

- Raspberry Pi 3

- 1⁄2-sized breadboard

- 4GiB microSD card

- microSD card USB adapter

- CP2102 USB TTL adapter w/4 jumper cables

- 10 female-male DuPont jumper cables

And I had to buy:

- 10 multicolored LEDs

- 4 × 100 ohm resistors

- 4 × 1k ohm resistors

- 10 male-male DuPont jumper cables

Specifically this kit: http://tinkersphere.com/raspberry-pi-orange-pi-boards/1538-r...


One thing jumped out for me. No integer overflow. In my embedded work we use that all the time. We represent angles as an int16_t or uint16_t where you can add and subtract angles and never have to worry about a discontinuity at 2pi, because 2pi=65536.

Is this a hack that should not be allowed? I don't think so. Every processor made (that I'm aware of) in the last 30+ years has used 2's complement arithmetic. I don't think overflow is a bad thing; I think C made a mistake in calling it "undefined". They had to, because when the language came along there were other options fresh in people's minds.

I've also worked on systems in asm that had saturating arithmetic, and that has its own nice use cases. My preference is still to have the rollover. People can implement bounds checking when they need to.


> One thing jumped out for me. No integer overflow.

FWIW that's not quite true: Rust will panic on overflow in debug builds but can (and currently does on all platforms, I think) allow it in release. It's not UB though; overflow is defined as 2's complement wrapping.

> In my embedded work we use that all the time. We represent angles as an int16_t or uint16_t where you can add and subtract angles and never have to worry about a discontinuity at 2pi, because 2pi=65536.

> Is this a hack that should not be allowed? I don't think so.

Rust provides APIs to make these behaviours explicit decisions rather than implicit hacks: https://doc.rust-lang.org/core/num/struct.Wrapping.html, https://doc.rust-lang.org/std/?search=wrapping_, https://doc.rust-lang.org/std/?search=saturating_ (and https://doc.rust-lang.org/std/?search=checked_).

> People can implement bounds checking when they need to.

Being unsafe by default always ends up in tears. Checking Mitre, just this year there have already been 11 public CVEs for integer overflows.


https://youtu.be/gp_D8r-2hwk?t=50

A space error: 370.000.000 $ for an integer overflow https://www.viva64.com/en/b/0426/


After reading the responses and reflecting a bit, I guess Rust got it mostly right. Undefined behavior is bad, so Rust defined it. Overflow is apparently a problem worth checking for. I don't understand how people are overflowing 32- and 64-bit integers, but OK, it's common enough. Having the ability to declare overflow/wraparound as OK is a good thing so long as it compiles down to the same add or subtract instruction and doesn't add overhead. Rust claims to have zero-overhead abstractions, so I'll have to try this some day.

TL;DR the decisions made in the design of Rust seem reasonable.


https://play.rust-lang.org/?gist=5b5df368283645fbec18c4fbaeb...

Click "release" and then "asm", you'll see that they are compiled to the exact same thing.


> I don't understand how people are overflowing 32 and 64 bit integers

Some tame examples:

- When retrofitting >4GB file support into an existing program.

- Counting in milliseconds (e.g. GetTickCount overflows every ~50 days - if that causes you a crash, it will be a very nasty rare bug, to the point where Windows checked builds actually advance that time by ~49 days to make it repro faster: https://msdn.microsoft.com/en-us/library/windows/desktop/ms7... .)

A less tame example:

- Any deserialization that involves untrusted sources. (So, all deserialization - there's a reason your IT department doesn't want you opening random PDFs!)

Say you have an array, so you allocate "sizeof(Element) * elements" bytes. The attacker picks "elements" such that the multiplication overflows to a small/near-zero value - which successfully allocates - does a similarly small amount of IO, which also succeeds - then tries to loop over "elements" array elements (the count itself never overflowed, only the multiply did), and is now reading uninitialized memory.
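
To make that concrete, here's a minimal Rust sketch of how checked arithmetic closes exactly this hole (the function and names are illustrative, not from any real codebase):

    // Hypothetical deserializer: `element_count` comes from untrusted input.
    fn alloc_len(element_count: usize, element_size: usize) -> Option<usize> {
        // checked_mul returns None on overflow instead of silently wrapping,
        // so an attacker can't force a tiny allocation via a huge count.
        element_count.checked_mul(element_size)
    }

    fn main() {
        assert_eq!(alloc_len(4, 8), Some(32));
        assert_eq!(alloc_len(usize::max_value(), 8), None); // overflow caught
    }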

At minimum this is a potential denial of service attack (just read until you hit an unmapped page.)

But it's easy to see how this could be (ab)used for data exfiltration as well in a typical C codebase. Say this is a simple echo or chat server - it's going to write whatever it reads to someone, and won't stop until it crashes (if it crashes). That uninitialized memory might be private messages to other users... or it might be database credentials, or worse.

And this chat server probably copies that data it read around - so now you've got potential buffer overflows writing data. In the old days, you could use this to stomp x86 instructions with your own - pwned! Nowadays you have to use slightly more complicated techniques like ROP chains and figuring out ways to defeat ASLR - but the fundamental potential is still there.

It's worth noting that panicking doesn't actually fix the potential denial of service attack. But it probably makes it easier to figure out what's going on when fuzzing for such problems, makes unintentional overflows clearly bugs to be fixed rather than "eh, whatever"s to be ignored, and helps prevent the more serious variations on that attack.


You can overflow 32-bit integers fairly easily if you're doing math with Unix epoch microseconds.


Using 2's complement is not the entire story: some CPUs can trap on overflow (MIPS, for instance). x86 doesn't, but it does set the carry/overflow flags on unsigned/signed overflow to make it easy to test for when necessary. Leaving the implementation the freedom to catch signed overflows, like it can catch divisions by zero, might actually be a feature. Admittedly it's not very consistent to make unsigned overflow defined.

I like Rust's approach: you can explicitly decide to use overflowing arithmetic if you want; otherwise it triggers a runtime error in debug mode. That, along with the checked_ variants, makes it very easy to deal with potentially overflowing code, whereas in C it's a minefield - it's not difficult to trigger UB while trying to defend against another UB. Ask a beginner to write a "checked_add" function in C (no Stack Overflow allowed) and see how many bugs and instances of UB you can spot.
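
For comparison, a minimal sketch of the Rust side; `checked_add` hands you an `Option`, so the overflow case can't be silently ignored:

    fn main() {
        let a: i32 = 2_000_000_000;
        // checked_add returns None instead of overflowing past i32::MAX.
        match a.checked_add(a) {
            Some(sum) => println!("sum = {}", sum),
            None => println!("overflow detected"),
        }
    }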

On the other hand advocating for implicit wrapping overflow all the time sounds risky to me. In my experience there's only a very tiny subset of operations where an overflow is desired and wouldn't lead to a bug.


To clarify, only signed integer overflow is undefined behavior in C. Unsigned integers have wrap-around semantics. (But beware that unsigned integers smaller than int could be automatically promoted to (signed) int!)


This should be higher up since it's a tiny but important distinction.


> (But beware that unsigned integers smaller than int could be automatically promoted to (signed) int!)

This caveat sounds an awful lot like undefined behavior ;p


When things are promoted is well defined, just subtle behavior.


You just have to intentionally use a method on integers called wrapping_add() to explicitly allow overflows. That said, I believe overflows are only checked when you compile the debug target; the release target doesn't check them.

I personally think it's a good thing to only allow overflows when explicitly allowed.


The rules are:

1. overflow is a "program error"

2. If `debug_assertions` are enabled, then overflow must panic

3. If overflow does not panic, then it must two's complement wrap

This leaves the door open to always requiring a panic someday in the future, if performance gets there.


> overflow is a "program error"

Can you expand on what that means to someone who hasn't used Rust yet? Does it mean a third thing distinct from (2) and (3)? Or is it that (2) and (3) are the things that might happen as a result of (1)?


Imagine if dereferencing a null pointer in C was defined to segfault. Currently, it's UB, which means the optimizer assumes it's not null, and when you break that assumption, weird things happen, the least scary of which is a segfault.

This is similar. "program error" means "You're not supposed to do this. But the compiler will not assume that you've not done this; it has your back. If you do this your program will exhibit defined behavior. Perhaps undesired behavior (panicking, or segfaulting), but the behavior will be defined."
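
A small sketch of what that means in practice (assuming default build settings):

    fn bump(x: u8) -> u8 {
        // Debug build: panics with "attempt to add with overflow".
        // Release build: wraps to 0. Both are defined behavior; the
        // optimizer never gets to assume the overflow can't happen.
        x + 1
    }

    fn main() {
        println!("{}", bump(255));
    }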


How entirely sane and helpful! Thanks for the clarification.


It's cool, this is the only place where this kind of language is used. So even if you were familiar with Rust, you might not be familiar with this particular corner.

Mostly, it means "this is wrong but it is not undefined behavior".

> Or is it that (2) and (3) are the things that might happen as a result of (1)?

Yes, this.


> If `debug_assertions` are enabled, then overflow must panic

It is very risky to have different behavior on such things in test and production.


For what it's worth, you can still compile your code to panic on overflow in release builds today if that's worth it for your use case.


Yes, that's exactly the path I would choose. Not that I'm the target audience in my present day job but nothing is more irritating than having a production environment that subtly differs from the test environment at the language level.


I think it's more frustrating when production fails more often than the test environment than the other way around, but I see your point. The main thing to consider, obviously, is that enabling the checks in release code will degrade performance. Ultimately, overflow is a difficult problem to solve; I don't think there's a universal win that doesn't trade something off.


It's fairly common for a compiler to produce different builds for release vs debug environments.


Rust cannot fix all bugs. This isn't a memory safety issue. Sometimes, you have to make tough calls.

Maybe Rust++ will fix this, someday :p

(Or, once people have the tolerance for the performance degradation, we can turn it on in rustc; that's exactly why we specified it the way we did.)

(Or, you can tweak it to include this behavior in release mode; always an option)


This is not about 'fixing all bugs', it is about re-visiting a specific class of bug that we already know about and have already seen many bad instances of in the wild.

Few things are harder to debug than things that pass the tests but fail in production and anything that inserts behavior like that should be avoided like the proverbial plague.

If you're already of a mindset to have a Rust++ then you are missing the point. Rust quite possibly has a window of opportunity to displace C, but for that to work at a level where it succeeds rather than as an 'also-ran', you will need to religiously avoid repeating the past.


> you will need to religiously avoid repeating the past.

Rust is not a religious language, regardless of what some people may think. It's even in the name, which evokes something practical, well used, and a bit worn.

> If you're already of a mindset to have a Rust++

Language design is about trade-offs. Do I think Rust is generally an excellent language? Yes. Does that mean that I believe we have solved programming languages, that there will never be a language better than Rust? No. Someday, Rust will be the old incumbent, and a new language will overtake it. That's how progress works.

In the end, as I said, this was a very tough call. We decided to be hardline[1] about one thing, and one thing only: memory safety. Does Rust care about program correctness? Absolutely! Does it care about it as much as memory safety? It does not. There are several PLT features that could help improve program correctness that are not in Rust. They're not in it because it's a balance; including them would harm several of our other objectives for the language.

This RFC was one of the most discussed at its time. 160 comments! https://github.com/rust-lang/rfcs/pull/560

We would have loved to say that it's always on, but that's just life. Nothing is ever perfect. Rust certainly is not.

1: note that I said "hardline" and not "religious" here even; even Rust's most sacred principle, memory safety, has a keyword built into the language that lets you subvert it!


Sorry, but that just doesn't wash.

The reason C has the bad reputation it does is that it makes performance-over-correctness trade-offs that we have come to realize are not just far from ideal, they are fundamentally wrong.

And now Rust, the supposed replacement for C, is going to make different trade-offs, some of which are rooted in exactly the same performance-over-correctness decisions that gave C its bad name.

I completely get why that RFC had as much input as it did, it's akin to the Python 'whitespace' decision, it's a fundamental thing and to get it wrong will turn off a lot of people from what you are building.

On another note, integer overflow has been the cause of the same kind of issues that unsafe use of memory is associated with:

http://www.kb.cert.org/vuls/id/945216

That makes it a problem in the same class and frankly I'm quite surprised that Rust would take performance over safety in this matter, in my opinion good slow code is always better than faster but possibly incorrect code.


The main reason integer overflow often turns into a vulnerability is because the overflowed result is either used to index into allocated memory, or as the size of a memory allocation which will later be indexed into. In both of these cases, the vulnerability can be prevented by bounds checking every access into memory, as Rust does (except on a few methods which can only be used within an "unsafe" block).

Another reason integer overflow can turn into a vulnerability is because it's Undefined Behavior, and when encountering Undefined Behavior the compiler can do anything, including eliding bounds checks. Rust (and C with -fno-strict-overflow) prevents that by making integer overflow have a defined behavior.
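
A tiny illustration of that first point (the values are made up):

    fn main() {
        let buf = vec![0u8; 16];
        let bad_index = 1_000_000usize; // pretend this came from an overflow
        // buf[bad_index] would panic rather than read out of bounds;
        // the non-panicking accessor makes the bounds check explicit:
        assert_eq!(buf.get(bad_index), None);
    }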


> On another note, integer overflow has been the cause of the same kind of issues that unsafe use of memory is associated with:

> http://www.kb.cert.org/vuls/id/945216

That bug, like nearly all other security bugs relating to integer overflow, relies on the lack of bounds checking in C. In a language with bounds checks, that bug would not have been dangerous.


> will turn off a lot of people from what you are building.

This is what I mean by tradeoffs: if Rust had significantly worse integer performance, it would also turn off a lot of people from what we are building.

> On another note, integer overflow has been the cause of the same kind of issues that unsafe use of memory is associated with:

From a quick read of this CVE, it requires memory unsafety too.

If you could manage to produce a situation where overflow causes a memory safety issue using only safe code, then we'd switch.


Let me give you one example of how this could lead to exactly such a scenario:

An integer that has wrapped gets passed into a piece of unsafe Rust code that was otherwise bulletproof, exposing a vulnerability where otherwise the program would have abended much earlier, when the overflow happened.

The very best spot to trap an error is where it is first initiated; any cycles after that point are run in what is essentially an undefined state, which will sooner or later - hopefully sooner, but sometimes much later - result in incorrect behavior, a security issue, or, in the most benign cases, a crash. To willfully postpone the discovery of the error introduces the risk that it will never be caught at all: the program will continue to run and will produce bogus output, spill out your state secrets, or worse.

First make it work correctly, then make it fast. If you're going to worry about speed before you have it working you are falling headlong into the premature optimization trap, a trap that C programmers the world over unfortunately have extensive experience with and that I thought - perhaps mistakenly so - the Rust crowd was trying to address.

Btw, Swift seems to get this right, I wonder what their secret sauce is.


> An integer that has wrapped gets passed into a piece of unsafe Rust code that was otherwise bullet proof, exposing a vulnerability where otherwise the program would have abended much earlier when the overflow happened.

In that case, that piece of unsafe code would have a bug, which would be a bug regardless of whether overflow happened. The contract of unsafe code is that it must not expose undefined behavior.

For example, vector indexing is implemented with unsafe code, but the unsafe code performs bounds checks, so it doesn't matter whether an overflowed integer was passed in as the index.

> Btw, Swift seems to get this right, I wonder what their secret sauce is.

Their "secret sauce" is not having the same performance goals (which is not a criticism of Swift).


> An integer that has wrapped gets passed into a piece of unsafe Rust code

Yes. That still requires unsafe code. All bets are off there. You should be validating everything with regards to unsafe. There's tons of ways unsafe can go wrong; this scenario is a drop in the bucket. The bug is fundamentally in that unsafe code, not in the overflowed integer, as unsafe code is not supposed to expose memory unsafety; you could have passed a zero or a -128 or whatever manually, and it would still have caused this.

> The very best spot to trap an error is where it is first initiated,

I agree completely!

> I thought - perhaps mistakenly so - the Rust crowd was trying to address.

If you believed that Rust was about program correctness above all else, then yes, you were mistaken. As I said above, our priorities are memory safety above all else. Correctness is certainly up there, but when the rubber hits the road, hard choices have to be made.


Ok, in that case thank you for the correction, it helps to place Rust a little bit more accurate on my mental map of programming languages.

Btw, and on the same note, I always felt that it should be possible to generate a fault on an unexpected carry so I see this as much as a CPU issue as a programming language issue.


No worries! That discrepancy might explain why we've occasionally butted heads in the past with regards to the language :)


My mental model of Rust's current overflow behaviour is that "integer overflow is wrapping overflow, but during development it also panics to push me to use explicit wrapping in the code." I'm failing to see how that can qualify as something that "pass[es] the tests but fail[s] in production".


A pretty reasonable test would be that some function panics on invalid input, in this case by way of integer overflow. This test would pass in a debug build, but could cause unintended, possibly even insecure, behavior in a release build.

Ideally you would run your tests in both debug and release modes, though. :)


I'm also working in embedded, and you're picking one of the few examples where you want wrapping behavior. So it's better IMHO to do it the Rust way and only allow wrapping with the Wrapping<T> type. That way you can see where wrapping is part of the program design.


In Rust, you can make your intent explicit by calling `wrapping_add` instead of using `+`.


Or if this type is used a lot, you can wrap the 16-bit integer inside its own type and overload addition and/or other operators.

    use std::ops::Add;

    struct Angle(i16);
    impl Add for Angle {
        type Output = Angle;
        fn add(self, rhs: Angle) -> Angle {
            Angle(self.0.wrapping_add(rhs.0))
        }
    }


There’s already a `Wrapping<T>` struct in std for you. I don’t think you need to roll your own.

Of course, it might make sense in the embedded space to do so.


Wrapping<T> actually comes from core and is just re-exported by std, so embedded targets that lack std support can still use it by importing directly from core.

https://doc.rust-lang.org/core/num/struct.Wrapping.html
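
A quick sketch of it in use (my own example):

    use std::num::Wrapping; // re-exported from core::num for no_std targets

    fn main() {
        let a = Wrapping(65_000u16);
        let b = Wrapping(1_000u16);
        // `+` on Wrapping<u16> wraps modulo 2^16 in debug and release alike.
        assert_eq!((a + b).0, 464); // 66_000 mod 65_536
    }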


Fantastic!


`Wrapping<T>` is a great solution. When in doubt, encode the behavior in the type system.



> Is this a hack that should not be allowed?

Yep. This is a hack that should not be allowed.

If you want to use it, you should create your own type with explicit modular arithmetic, not try to ride architecture-dependent assumptions.


Fwiw, Rust provides checked_op, saturating_op, and wrapping_op versions of arithmetic, as well as a Wrapping … wrapper

https://doc.rust-lang.org/std/num/struct.Wrapping.html


> Every processor made (that I'm aware of) in the last 30+ years has used 2's complement arithmetic

DSPs often use saturating overflow IIRC


Ideally your ISA would provide adds that wrapped, adds that saturated, and adds that threw an exception on overflow.


>> DSPs often use saturating overflow IIRC

I've used them, and used that feature. It was an option that could be turned on and off. The basic arithmetic was the same as any other CPU unless that was enabled.


Similar to above comments about wrapping_add, Rust has saturating_add as well.


It should be "allowed", but not by default in normal arithmetic. This has cost Etherum developers and their victims a lot of money e.g.: https://github.com/ethereum/solidity/issues/796

Ideally you'd be able to specify which mechanism you wanted for arithmetic as a property of a type, so you could have int_saturating and int_wraparound and int_nooverflow in the same code with the compiler preventing you from combining them by mistake.

C made it undefined because, at the time, two's complement had not quite comprehensively won; there were still a few ones' complement systems in the 80s. https://en.wikipedia.org/wiki/UNIVAC_1100/2200_series


> We represent angles as an int16_t or uint16_t where you can add and subtract angles and never have to worry about a discontinuity at 2pi, because 2pi=65536.

Not sure I get your trick; 2pi=6.2832, so I don't see how you avoid accumulating errors this way.


You don’t use the value directly. Instead, divide the circle into 65536 steps over [0, 2*pi) and use those units instead of degrees or radians. This makes integer overflow and wrapping around the circle correspond: going a quarter of the way around the circle five times adds up to 90 degrees, rather than 450.
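
A rough sketch of the same trick in Rust, using wrapping arithmetic (the constant name is mine):

    // 65_536 steps per revolution, so u16 wraparound *is* crossing 0/2pi.
    const QUARTER_TURN: u16 = 16_384; // 90 degrees

    fn main() {
        let mut angle: u16 = 0;
        for _ in 0..5 {
            angle = angle.wrapping_add(QUARTER_TURN);
        }
        // Five quarter turns come out at 90 degrees, not 450.
        assert_eq!(angle, QUARTER_TURN);
    }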


This is a pretty awesome little trick, thanks!


It's basically fixed-point arithmetic[0] applied to base tau. So any integer value x that your computer is actually storing is conceptually the value 2 * pi * x / 65536.

[0]: https://en.wikipedia.org/wiki/Fixed-point_arithmetic


Rust has all these arithmetic modes on its integer primitives.

The operators, such as +, panic on overflow in debug builds and wrap as two's complement in release builds.

As methods you get:

1) Two's complement wrapping (even in debug builds)

2) Saturation

3) Overflow checking (these return Option<T>)
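
A quick sketch of all three side by side (u8 for brevity):

    fn main() {
        let x: u8 = 250;
        assert_eq!(x.wrapping_add(10), 4);      // 1) wraps, even in debug builds
        assert_eq!(x.saturating_add(10), 255);  // 2) clamps at u8::MAX
        assert_eq!(x.checked_add(10), None);    // 3) reports overflow as None
        assert_eq!(x.checked_add(5), Some(255));
    }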


> 1) Two's complement wrapping (even in debug builds)

As it's a relatively common need, there's also a wrapper struct for this one: https://doc.rust-lang.org/core/num/struct.Wrapping.html


I agree. Overflow is useful and it reflects how the hardware works, so it's not something you want to emulate with modulo.

The mistake of the C standard was not making it "platform dependent" rather than "undefined". The difference is a big deal: since overflow is undefined, the optimizer is allowed to assume it never happens and can therefore optimize things away, even though the behavior is defined on all CPUs known to man.


Unsigned overflow is well defined in C.


> Is this a hack that should not be allowed?

not by default, no


For more detail into the porting effort, I recommend taking a look at these slides, taken from a talk the author gave last year:

https://people.gnome.org/~federico/blog/docs/fmq-porting-c-t...

"Rust revived my interest in maintainership. It is very empowering to finally have a good language that I can use at the level of the stack where I work."


> Rust generates documentation from comments in Markdown syntax. Code in the docs gets run as tests. You can illustrate how a function is used and test it at the same time:

  /// Multiplies the specified number by two
  ///
  /// ```
  /// assert_eq!(multiply_by_two(5), 10);
  /// ```
  fn multiply_by_two(x: i32) -> i32 {
      x * 2
  }

Okay, that's pretty cool.



Perl has that as well: http://p3rl.org/Test::Inline


Godoc does this too


I'm working on a library to bring some Rust features back into C, like semi-automatic memory management (defer), Vec, Slice, String, Str, Option, built-in test cases, and generics. But I lost interest because I'm allowed to use Rust in an embedded project now. :-)

https://github.com/vlisivka/crust


Thank God!


It might be off topic, but I just had this thought and would like to write it down…

If you treat your program (and your problem) as math, then it is necessary to explicitly define all its boundaries. However, many (most) problems in real life are not strictly math problems, due to their inherent ambiguity. The ambiguity has its physical source in unknown boundaries. For example, you may think you only need a 32-bit integer, and you may be sure of it today, but your assessment may change in the future. Similarly, you may think today you should use arbitrary precision and that the constant boundary checking doesn't matter, but it may matter some day. In the effort to eliminate undefined behavior, we are projecting our ideas from the real world onto the math realm. That projection may be quite lossy.

I am aware that the actual compiled program is always well defined and can be a perfect mathematical object, but its source, as a language, does have ambiguity - e.g. compiler implementations and undefined behavior. An ambiguous language can sometimes express ideas better than a mathematical one - if we treat the legacy of our code as residing more in its source than in its binary.


TIL: How about this instead, with pattern matching on function arguments.

I didn’t know you could do that. I might need to go back and see what I can simplify with that trick.


I just installed rustc to test this example; it turns out you cannot do pattern matching on function arguments like you can in Erlang.

note: `my_func` must be defined only once in the value namespace of this module


It might be more accurate to call what Rust can do to function arguments "destructuring" rather than "pattern matching"


Destructuring is a feature of pattern matching, so yeah, this is slightly more specific. I don't think "pattern matching" is incorrect, though.


Correct. One signature, one body. You can destructure args, but Erlang-style matching or overloading is not a thing.

Which sometimes I miss


You can get multiple dispatch using traits: http://smallcultfollowing.com/babysteps/blog/2014/09/30/mult...
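
A minimal sketch of what that trait-based dispatch looks like (the `Describe` trait is my own illustration, not from the linked post):

    // One method name, different argument types, resolved at compile time.
    trait Describe {
        fn describe(&self) -> String;
    }

    impl Describe for i32 {
        fn describe(&self) -> String {
            format!("the number {}", self)
        }
    }

    impl<'a> Describe for &'a str {
        fn describe(&self) -> String {
            format!("the string {:?}", self)
        }
    }

    fn print_it<T: Describe>(x: T) {
        println!("{}", x.describe());
    }

    fn main() {
        print_it(42);       // dispatches via the i32 impl
        print_it("hello");  // dispatches via the &str impl
    }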

However, that's a fair amount of boilerplate for the feature, so I highly doubt it would pay off to use that pattern to e.g. convert Erlang code that makes heavy use of function-declaration pattern matching.

Additionally, a lot of the true power of Erlang's dispatch-plus-pattern-matching is because of its runtime pattern matching (e.g. http://puredanger.github.io/tech.puredanger.com/2008/12/30/p...). You can't do that in function signatures in Rust; you can only match on things known at compile time.

So if you were converting Erlang that made heavy use of this, you'd have to pull apart the compile-time-checkable parts of function signatures (e.g. a function can be called with a string or a number), convert them into trait-based polymorphic dispatch locations, keep track of the runtime-checkable parts, and then convert those into 'match' statements or conditionals and make sure the right match code gets run in the right trait-based dispatch locations. I suspect, past a pretty low number of functions making use of this, Rust code would either get very duplicated (if you have a separate trait method for each (function name, argument types) tuple), or very tricky to deduplicate if you wanted to use the same trait methods for functions with the same argument-type signatures but only run certain match code depending on the specific function/pattern that was called.

That would work, but if you did it naively/formulaically, it would make for less-than-readable code, I think.

TL;DR you can robotically implement the Erlang pattern in Rust if you want. You can hit a square peg really hard into a round hole and shave the corners off if you want, too.


I’ve used param structures with destructuring where things got too messy. I wish they’d add optional named parameters as first class one of these days.


Optional parameters are tricky because of type inference. See this tracking issue: https://github.com/rust-lang/rfcs/issues/323


I would just like to point out that many of these features are also features of C++ :) (RAII, templates, etc.).


True, but Rust feels like C with the good bits of C++, but none of the weird ones.

It doesn't have Rule of Three (or Five), it doesn't even have constructors. No SFINAE surprises ("templates" work like Concepts which are promised, but not in C++ yet). No headers. Pointers are "smart" by default and moves work without explicit std::move. "Panic safety" is less taxing and needed less often than exception safety. You can pass objects by value without accidentally truncating them, etc., etc.

C++ can do a lot of nice stuff, and even a lot of it quite safely, but I feel like it's going to backstab me every time I forget about some obscure rule, and it's going to be my fault for not knowing it. In Rust I can be confident that if it compiles, it's good.


C++ does have a lot of the features that Rust has. You won't get any of the memory safety guarantees or compile time checks though. I will say that to me a lot of the C++ equivalents to Rust features seem very awkward to use (maybe not once you're used to C++ I guess).

This guy does a pretty good presentation on Polymorphism in Rust vs C++: https://www.youtube.com/watch?v=VSlBhAOLtFA


The major issue with C++ is what also helped the language gain market adoption: copy-paste compatibility with most C89 code.

So security conscious C++ developers tend to use enum class, string, vector and array classes, RAII, iostreams, wrapping data access in classes with invariants,...

Developers with a more C-oriented mindset tend to just code away in a "C with a C++ compiler" style.

Hence the need, discussed at the last three CppCons, to stop using C-style programming in modern C++.

CppCon 2015: Kate Gregory “Stop Teaching C" - https://www.youtube.com/watch?v=YnWhqhNdYyk

CppCon 2015: Bjarne Stroustrup “Writing Good C++14” - https://www.youtube.com/watch?v=1OEu9C51K2A

CppCon 2017: Bjarne Stroustrup “Learning and Teaching Modern C++” - https://www.youtube.com/watch?v=fX2W3nNjJIo&t=2s

Of course, preventing copy-paste compatibility with C, already solves many of those issues from the get go.


> Developers with more C oriented mindset, tend to just code away like "C with C++ compiler" programming style.

The way I've seen it described is 'C with more convenient comments'.


You mean C99? What I really miss in C, along with some of the things in this article, is constexpr, compile-time checked enums, ...


The problem is it has all the other ones too, the wildly unsafe ones, the impossible to understand ones, and as a result the good ones can’t make anywhere close to the same kinds of guarantees Rust can. C++ even pretends to implement Rust features in some cases like std::move which doesn’t ... move things. Or pattern matching which doesn’t ... match patterns due to lack of algebraic data types.

What makes Rust a joy to program in is as much what it won’t let you do as what it will.


C++ lacks cargo, and usually requires CMake or other abomination :(


My favorite thing in Rust is the error handling and propagation.

Using JS for comparison.

For example, this code in JS:

    function decode(path) {
        try {
            return parse(path);
        } catch (err) {
            throw err; // rethrow
        }
    }

could be compacted into this using the try! macro (or "?"):

    pub fn decode(path: &File) -> Result<Image> {
        Ok(try!(parse(path)))
    }

For error propagation, I can easily consume the error of a third-party lib and convert it into my own using the From trait:

    /// Convert std::io::Error to RasterError::Io
    impl From<IoError> for RasterError {
        fn from(err: IoError) -> RasterError {
            RasterError::Io(err)
        }
    }

Whereas in JS, I would have to write a lot of if/else if I wanted to convert a third-party package's error into my own:

    if (error.name === "IoError") {
        let err = new Error(error);
        err.name = "MyError";
        return err;
    } else if (...) {
        ...
    }


Don't forget that try! turned into ? so that's now

  parse(path)?

!


Yes, but I still prefer try! as it's easier to see (personal preference).


What’s the point of catching and rethrowing?


When you're inside a module and you want the app using your module to catch the exception.


> Some good experiences with C

> Reading the POV-Ray code source code for the first time and learning how to do object orientation and inheritance in C.

But https://github.com/POV-Ray/povray is all C++... It even says "Written in C++" on the Wikipedia page. Was he talking about some other source, maybe some older version?

Edit: [1] Here is a discussion about this topic from 2012. It seems that POV-Ray was ported to C++ for version 3.7.

[1] http://news.povray.org/povray.programming/thread/%3Cweb.4f01...


Concerning the "Pattern matching" section, I've never, in all my time programming Rust, seen that specific pattern of destructuring the input values of the function in the fn declaration part.

Now I feel the need to use that everywhere...
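
For anyone else who hadn't seen it, a small sketch (my own example, not from the book):

    struct Point { x: f64, y: f64 }

    // The parameter list accepts any irrefutable pattern, so you can
    // destructure the argument right in the fn declaration.
    fn magnitude(Point { x, y }: Point) -> f64 {
        (x * x + y * y).sqrt()
    }

    fn main() {
        println!("{}", magnitude(Point { x: 3.0, y: 4.0 })); // 5
    }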


A lot of people didn't know! I made sure to put it in the book explicitly: https://doc.rust-lang.org/book/second-edition/ch18-01-all-th...


Huh, was that also in the first edition?

I learned using that and if this pattern matching pattern (heh) is indeed in there, I must have overread it...


I’m not sure, to be honest, it’s been a long time!


I tried to do a small project using Rust and SDL2 and gave up due to having to deal with lifetimes on textures.

> you just implement the [Drop] trait and that's basically it.

Is that what I was missing? could I have done that and not had to deal with lifetimes?


In the transition from 0.29 to 0.30, the sdl2 library added that lifetime parameter to textures. Personally I found that it made the library unusable, and I switched to an alternative.

I don't think I was the only one, because in 0.31 they added the option to remove that lifetime as long as you are happy disposing of the textures yourself. If you do the following in your Cargo.toml they go away.

    [dependencies.sdl2]
    version = "0.31"
    default-features = false
    features = ["unsafe_textures"]

I think the argument for lifetimes on textures and surfaces was to increase the safety of the library. This is probably true, but in practice I found it unusable; I think a system of handles, along with error messages when you try to use a stale one, would have solved the problem in a workable manner with minimal performance overhead.


So it wasn't just me! One thing that kept bugging me is that in my case the lifetime of every texture was "forever" and it seemed like there ought to be an easy way to indicate that, without lifetime annotations propagating throughout all the code.

Maybe there is!


That's `'static`!


Lifetimes can be a challenge to learn, but once you do, not having to worry about whether something is actually getting destroyed is a great feeling. The opposite example: I just wrote some GL in Swift and I have no idea if my GL textures are leaking. I'm confident that with lifetimes I'd know ^_^ My email is in my profile; happy to help out if you're still interested in debugging.


You always have to "deal" with lifetimes. The rules are not that complicated once you understand them. It boils down to:

* An object can only have one owner.

* When giving a reference away from an object, the reference has a lifetime that makes sure the reference doesn't outlive the owner.

* You can have either one mutable reference >>or<< arbitrarily many read-only references.

* An owned object must be denoted mutable to be mutable, otherwise it is read-only.
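
A minimal sketch of those rules in action:

    fn main() {
        let mut s = String::from("hello");
        {
            // Arbitrarily many read-only references may coexist...
            let r1 = &s;
            let r2 = &s;
            println!("{} {}", r1, r2);
            // let m = &mut s; // ...but adding a mutable one here won't compile.
        }
        // Once the shared borrows end, a single mutable reference is fine.
        let m = &mut s;
        m.push_str(" world");
        println!("{}", m);
    }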


Some comments, with the caveat that I have used Rust only for a short learning exercise a while ago and I don't fully understand programming in Rust (and found it tough going). I think the problem is not about being able to understand the ownership model and lifetimes (which seem fairly straightforward conceptually).

The problem is in being able to put together a program that the compiler does not complain about. It always seems like you have to "feel" your way around by trying different things, like a jigsaw puzzle, rather than having a "canonical" way to do things that fits the lifetime/ownership model. Maybe there are more canonical ways based on people's experience nowadays, and maybe this problem comes from approaching things with a C (or other language) mindset. But it seems like learning these ways to avoid the Rust compiler's lifetime errors in any complex program takes longer than in other languages.


Rust is for sure not an easy language (but compared with Haskell it's also not that hard/time-consuming to learn). The major stumbling block, it seems to me, is that the syntax looks kind of familiar, which tricks you into believing that knowledge of other languages applies, when in reality Rust is a completely different beast.

For me there was a gap between reading the Rust book and applying the knowledge. While reading I thought "that's easy", but later on, "oh shit, the compiler complains all the time". If you make it past that point - which takes most non-novice programmers around three weeks - then Rust becomes a very powerful tool.


If you have any thought or insight on how to get over that, please let me know. The docs team wants to work on "what comes after the book" next, so any insights into what made it work for you would be very helpful!


> If you have any thought or insight on how to get over that...

It depends on which concepts someone is already familiar with. To me the only really new concepts were lifetimes and ownership; all the others just have a different syntax, plus here and there some extra goodies. As an example, when I saw the match keyword I thought "cool, Rust has a keyword for guards".

I learned Rust shortly after 1.0 was released. The real obstacle for me to overcome was to build a mental concept of what Rust is. Something that wasn't really communicated while reading the first pre-release Rust book.

You have to read a lot to get an idea of Rust; it should be the other way around - a common mistake made by almost everyone teaching something. Ideally the first chapter would be a buzzword-free, Rust-code-free chapter about how Rust works superficially and why there is a need for yet another language. It should describe what kinds of tools and concepts exist in Rust and which problems they solve. This is better than introducing somewhat complicated concepts/tools and trying to explain how they fit into the big picture at the same time.

Ideally there would be an introduction like this:

A program needs structure, variables, control flow, tools... blablabla... this leads to some common problems [some easy examples] and [these] are some concepts Rust invented/uses to solve them.

In later chapters the abstract ideas would be "converted into Rust". Also, don't go too much into detail: I don't think, for example, that a diagram of the memory layout of a vector helps much. At first I just need to know there is a thing I can put objects of the same type into; how it is implemented is not that important for a beginner. It's better, IMHO, to know how to do something right than why it is right. This reduces the mental burden at the beginning of the learning phase. The why is something you learn over time - or maybe you're not that interested and skip it forever, also a valid thing to do.

Overall the Rust community does a stellar job providing so much help in so many different ways - /r/rust on Reddit is really awesome - thanks for the book and your open mind in asking for feedback.


Thank you! Very helpful.


Yeah, most people who make it over the hump describe it as "once you learn the rules and develop intuition for them, you start designing your code in a way that works well first rather than fighting with the borrow checker all the time."

> It seems like learning these ways to avoid the Rust compilers errors related to lifetimes in any complex program takes longer than in other languages.

Opinion is split here. Some people prefer that C and C++ let you "get it slightly wrong" for a while, so you learn more incrementally. Others prefer that the Rust compiler tells you exactly what you've gotten wrong, so it can help you figure out what to learn about. I suspect which works better for you depends on you.


It has been a while but I was moving along ok until I wanted to put some of them in a map and then I ran out of IQ!


Best practices are "use owned types for the keys and values until you have a reason not to." It makes things much simpler. Of course, this isn't always possible, but most of the time, it is!


I found the first point on unit-testing of C odd; I always treat static functions as implementation details and test only the public interface. CUnit is nice enough (if a bit long in the tooth).


Rust is ambivalent about it. You can put tests in external files and force yourself to test only the public interface if you want.

But you can also throw a `#[test]` function alongside the implementation. It's sometimes super convenient to test a leaf helper function directly instead of mocking a whole program around it. And because the test lives next to the implementation, it's easy to change or delete it when the impl changes.
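
A sketch of what that looks like in practice (hypothetical function):

    pub fn double(x: i32) -> i32 {
        x * 2
    }

    // Lives in the same file as the implementation; only compiled for tests,
    // and run by `cargo test`.
    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn doubles_small_numbers() {
            assert_eq!(double(21), 42);
        }
    }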

There's another nice side-effect of having one standard test framework - `cargo test` can test any Rust program. The Rust team can automatically test new compiler releases against all known Rust code: https://github.com/rust-lang-nursery/crater


Generally speaking, yes, but it's nice to have the option. I've often encountered situations where writing tests is a good part of either developing a complex system, or debugging it when it goes wrong.

As a random example, say you're making a data structure of some sort, and you need to write some internal mechanism to mutate it (balancing a binary tree, resizing a hash table, whatever). If that internal mechanism is somewhat complicated, or gets called many times in a public call (or even recursively), it's nice to be able to write tests for it specifically, to make sure you got this critical piece of the puzzle right.

Writing a test for just the public interface would probably catch that SOMETHING is wrong, but not what part of the code. It's basically a version of sprinkling your code with assertions to make sure you've got your invariants covered.


It's fairly trivial to have every listing contain a main() gated on a testing ifdef which may call the local static functions. In essence every .c listing becomes a test program.


There's a culture of testers who believe 100% code coverage is important enough to include "private" implementations, and will happily break encapsulation rules for sake of hitting 100% coverage.


Eh, I think both extremes are sub-optimal.

Some small public APIs are backed by large enough implementations that it pays off to be able to test implementation details. Sure, it might be "poorly factored" code that should have a bigger API and smaller guts, but that's not always something you can change. Also, writing tests for internal behavior before refactoring can give you a good blueprint for how the refactored code should behave - being able to read the tests to pin down unclear behavior is, while far from enjoyable in some cases, better than nothing.

You're right that there are some pretty silly test suites that break encapsulation for a coverage number without actually testing anything useful, though.

I don't think an absolute "all tests must behave thus" rule (e.g. coverage requirements, "only test public functionality", "refactor the instant something isn't easily testable") is useful. Explain the benefits of each path, and make sure the decision of what compromise to make--and in any project more than a one-developer hobby, you will have to compromise here eventually--is in the hands of people with the experience and common sense to make the right one.


If things are really gnarly in the implementation, one can always break it up into submodules to expose that functionality as public (and that's usually good architecturally IMHO).


I think that C would have lasted us much longer if it just had C++-style templates. No classes, exceptions or other C++ features needed.


Can't macros already do everything templates can? I mean, it would be nicer to have templates, but I don't see why that would be a bottleneck for anything. Someone could probably write a templating macro and you could use that to the same effect.


I've seen that many times, written a few of them, and it's pretty horrible. Real templates are needed for such basics as generic data structures.


I think the other critical feature you need is destructors, before you can build really helpful containers.


Enjoyed this article so much.


> You don't have to fight dependencies. It just works when you cargo build.

Unless you're using caret requirements, or tilde requirements, or wildcard requirements, or inequality requirements, or multiple version requirements, or dependency overrides.

By default, Rust, much like every other language I know of, does not require explicit versions or checksums for dependencies. This means it's up to the developer to work really hard to make sure their code will actually work as expected outside of their own build environment. This could be mostly avoided if more people used explicit version requirements in their code, but nobody seems to want to do this (which is strange, because your code is what needs specific deps, so why not keep the version requirements right there?)

edit: For all the duplicate comments mentioning cargo.lock: The official FAQ details how libraries do not impose requirements at build time (https://doc.rust-lang.org/cargo/faq.html#why-do-binaries-hav...): "Users dependent on the library will not inspect the library’s Cargo.lock (even if it exists)". The user cannot be certain their app will always work, because they can't be certain that their dependent library has been tested on each other library dependency. Basically, there's no certainty if you use libraries.

The other thing people are missing is how being able to call duplicate conflicting versions of the same library where it is needed in your code would allow you to work around complicated bugs in large projects automatically, rather than having to find a magic combination of conflicting dep versions that work for your app and its dependencies. Apparently nobody here has ever tried to manage more than one OpenStack tools tree for use by the same application.


Caret, tilde, and other loose requirements are fine in Cargo.toml because cargo generates Cargo.lock on your behalf with the exact versions required for your build. This is how pretty much all modern dependency managers work (Bundler, Composer, NPM, CocoaPods, SPM, etc.). Saying that this doesn't work just isn't true.


All you have to do is keep your Cargo.lock file in version control, and you will have consistent dependency versions in any environment. This will work regardless of how generally you specify the version requirements of your dependencies.


Semver package maintainer here. Cargo assumes a caret requirement if you don't write one.

Of course, Cargo.lock will keep your dependencies locked exactly until you decide you want to upgrade.


> call duplicate conflicting versions of the same library where it is needed in your code

Cargo already does this automatically if two of your dependencies depend on conflicting versions of the same library.

And for what it's worth, the libraries on crates.io seem to follow semver closely enough that the problem you describe doesn't really come up.


> > call duplicate conflicting versions of the same library where it is needed in your code

> Cargo already does this automatically if two of your dependencies depend on conflicting versions of the same library.

Assuming this is the case, your code doesn't specify which version of which function to use. So how could Cargo know which to use?


To be clear, Cargo will attempt to unify versions as much as it can. If it can't, you'll get multiple copies.

For example, let's say my project depends on two libraries; A and B. Both depend on library C.

If A depends on C version ^ 1.0.0, and B depends on C version ^ 1.1.0, and 1.2.0 is the latest version of C, then both A and B will end up with C 1.2.0.

If A depends on C = 1.0.0, and B depends on C version = 1.1.0, then you'll get two copies of C.


And when A calls something in B? For this example, let's bump the major rev.

B is expecting C = 2.0.0. A is expecting C = 1.0.0. A makes a call to B, which it passes to C. Does A know that B now expects to be called differently, since C is also significantly different now? According to the above requirements, no version of the deps exist that could allow the app to execute correctly, even though it will technically build fine.

The way to resolve this would be to say A requires C < 2.0.0, and simply fail to build until someone fixes A. But you probably won't even know about this conflict until someone tests the app with a specific feature of A that conflicts with C >= 2.0.0, the build fails, someone figures out the conflict, and updates the requirements.

But you can only know this, and add this requirement, once you have found the conflict. Thus code in the present, even with dependency requirements, may be indeterminate in the future. (Unless you walk back through all calls in the code to find calls between multiple dependencies that eventually land on conflicts... but I don't think that is possible in Turing-complete languages)

If you pinned the version in each function call in each part of the code, you could have the compiler or interpreter walk back through dependent code, identify mismatches, retrieve a compatible dependency, and continue execution. Or at the very least throw warnings all over the place when code is running using dependencies it wasn't written for.


You get a type error, which was actually funny, given that the message used to say something like "expecting type T, got type T".

Funny enough, what you describe is theoretically possible in JavaScript, but strangely enough, it seems to work... it's something I've long been trying to come up with a POC exploit for.


It seems to me that he first talks about all the wonderful code he has seen written in C, then complains about C when he runs into terribly written C code and wants to blame it on the language. I have a problem with that.

He then concludes that he likes things about Rust, but also states that Rust is a lot harder to learn and that you become a better programmer because you have to work harder at it. Isn't that what he complains about with C?


Uhh, did you read the same article I did? 90% of the article was features missing in C: generics, slices, dependency management, tests, useful macros, pattern matching, etc.

Those things have nothing to do with good or bad code and have everything to do with the language.


Did you read the first section of the same article I did? You seem to have skipped that part.


He mentioned some bad C experiences he had for a whole seven sentences, and that's your takeaway from the article?

You're totally missing the point dude. The point is that C doesn't protect developers from some of the most basic mistakes and offers very little tooling compared to modern languages. It turns out that maintaining huge software projects in C is actually really hard.


> He then concludes that he likes things about Rust but then states Rust is a lot harder to learn but you become a better programmer cause you have to work harder on it. But isn't that what he complains about with C?

The difference is that when you haven't learned/aren't good enough in C, you'll make a codebase that sort-of works for now but is buggy and unmaintainable - code that reads uninitialized values or never free()s or works as an application but could never be turned into a library - whereas in Rust your code won't compile until you get good enough to write it properly.


Not only that, rustc is tremendously helpful in guiding you toward the correct solution. If you accidentally tell a C compiler that you want to shoot yourself in the foot it happily complies. rustc says "It looks like you're accidentally trying to shoot yourself in the foot. Here, let me show how to aim at the target instead."


Some of this could be improved with better C compilers, right? Yes, there's less rich language metadata for the compiler to use than in Rust. And yes, both Clang and GCC have come a long way in warning diagnostics in the last five years as part of renewed competition. But there's still room to grow.


The fundamental problem here is that C compilers just don't have enough information about your code. An "unsafe pattern" that might warrant a warning in one place may be crucial in another.

Rust's innovation here is that it can separate these cases based on lifetime annotations on functions and types, where the program text itself disambiguates its intentions.

So to get much closer than current -Wall -Wextra, you need to either impose some default assumptions (the C++ core guidelines do this) or start adding annotations somehow.


That is pretty challenging, actually. The C language spec allows about anything with the semi-colons in the right place. Without more strictures in the language spec there is nothing to raise errors about.

Now you can ask for lots of warnings (-Wall), but I suspect many people turn that off after the first ASCII tsunami.


Another thing worth noting is that many projects need to support multiple platforms, which means multiple different compilers. Good luck getting a large project to build on clang or gcc and msvc without warnings. I've run into cases at work where it seems like I'm going to get a warning from at least one of the compilers no matter what I do (barring complicated #ifdef/#pragma salad).


-Wall is a pretty low bar nowadays. There's some resistance to adding more to it, so you also have to enable -Wextra. I do at least those for all new C code.

And even those two don't get you everything. Clang's got -Weverything, but the developers suggest it's an internal feature and shouldn't be used. Beyond that you have to manually identify and add -Wfoo options.


I agree. I'd like to see C compilers that were even pickier about warnings than current compilers with -Wall. Compilers should be able to pick out more unsafe patterns and practices than they do, and warn about them, even as they accept the code as valid C.

When I write my own code I turn on as many compiler warnings as I can, and make sure it compiles clean. If there were more warnings I'd turn them on.

Problem is usually third party libraries. Plenty of open source projects don't give a single shit about compiler warnings and unsafe practices. So if I include a header from libSomething package, here come 10k warnings. I remember one project (don't remember which one) specifically said in its documentation something like "don't submit patches to fix compiler warnings, it's impossible to keep up with every complaining compiler out there." With programmers like that out there, what hope do we have?


It's usually possible to selectively disable warnings for 3rd party code. A last-ditch approach is to use the following around an include:

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wfoo"
    #include <my_3rd_party_header.h>
    #pragma GCC diagnostic pop
E.g., https://stackoverflow.com/questions/3378560/how-to-disable-g...

(Despite the "GCC" moniker, Clang supports the same approach.)


> Some of this could be improved with better C compilers, right?

Not significantly, not as long as they are C compilers and thus accept all valid C code.


Depends on what you mean by "accept," doesn't it?

Yes, without -Werror, C compilers won't reject code over diagnostic warnings. But they can and do warn about "valid C code" that happens to set off alarm bells.

And with -Werror, they reject some valid C code.


That is a fault of the programmer, not C. Sure, having those things in Rust is nice, but my point about the above line is that he's talking out of both sides of his mouth (with all due respect).

I don't want to come across as dissing Rust. I've only tinkered with it and liked the parts I saw, but I feel some of his comments are misplaced.

EDIT: Obviously no one gets my point after I've explained it twice: he's complaining about badly written code by bad programmers as his reason for dissing C. So I'll refrain from commenting anymore.


>> Obviously no one gets my point after I've explained it twice

Actually we do. We're just not indulging your point. It's the same point that has plagued every new tool and technique that has ever emerged: "we don't need this, the existing tools are fine, just be a better programmer." "See? If you just programmed better you wouldn't need this pointless such-and-such." I suspect this mentality is found in every field of endeavour.

The world you envision, where world-class programmers write and maintain flawless code and everyone else stays home, is fictional. It never has been and never will be. Thus we pursue tools to improve the state of the art despite the lack of exclusively world-class programmers.

Maybe Rust is one of the right answers. Maybe not. I don't know. I just know that most of us are straight-up done putting up with people who insist the pursuit of better tools is somehow misguided. Figure that out and you'll do better.


> That is a fault of the programmer, not C. Sure, having those things in Rust is nice, but my point about the above line is that he's talking out of both sides of his mouth (with all due respect).

The problem is that at least 99% of C programmers aren't good enough to use C. Seriously.

If you had a vehicle model that causes 99% of drivers to crash, wouldn't you say there's a problem with the vehicle rather than with the drivers?

We now have 40 years' worth of proof that C is simply too difficult a language to master in the general case. Isn't that enough already?

I use C daily and have about 20 years of experience, but I still don't consider myself good enough to use it. There was a time when I thought I had mastered it, but I was a fool back then.

This is not to say my code doesn't work; it does. It just sometimes has weird edge cases that compilers and static analysis didn't catch.

This is very bad in the era of network connected computers, because those edge cases are and will be actively exploited.


> That is a fault of the programmer, not C.

It's common to hear this line from the C fandom, and very rare to hear a serious proposal for what to do with all this Bad Code written by Bad Programmers. The status quo is to rewrite the former from scratch, and totally ignore the latter in the hopes that they'll be magically pulled down to hell and have their hands burned off by daemons. As far as I know, that part doesn't actually happen. Instead, they just keep writing more Bad Code, and meanwhile the rewrites are just as liable to cause problems as whatever they replaced, because often as not, the people doing the rewrites also turn out to be Bad Programmers.

If pressed for actionable solutions, C advocates will sometimes mention tooling such as Valgrind and ASAN, as if merely having it exist guarantees that all C programmers are educated in its use and make it a part of their workflow. This clearly isn't the case. New programmers are still learning C from material written before such tooling existed. Old programmers are still recommending it. There's no path for Bad Programmers to become Good Programmers, except to be admonished in public to Stop Writing Bad Code.

Ultimately, I agree with you: The issue here isn't with the C language. It's with the people who use it. Their priorities do not include concerns like memory safety, and their normal response to questions of code quality is to make it somebody else's problem.

If the culture of the community were to change, I expect the language would follow.


Yes, and even more so for C++ than C.


The point is, the same sort of terrible Rust code cannot exist as the terrible C code he complains about, because the compiler won't let you write it. With C you can write code you don't understand is bad and broken, because the compiler happily obliges and the program may even seem to work fine. With Rust you can't, because you're immediately told the code is problematic (when it comes to memory safety, that is; there are of course other ways to write bad, unmaintainable Rust code).


There kinda can in the sense that you could just make every function `unsafe`, but that's a bit of a giant blinking red flag.


And even then it will warn in many situations. Unsafe does not suddenly make everything okay; it only allows a handful of specific operations (such as dereferencing raw pointers and calling unsafe functions) that are otherwise not allowed.


Yes, but through those operations it does let you break the language by creating invalid values (and thus triggering wide-ranging UB).
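
For instance (a deliberately broken sketch): unsafe code can manufacture a value the type system considers impossible, which is undefined behavior even though it compiles.

    use std::mem;

    fn main() {
        // UB: a bool must be 0 or 1 at the bit level; transmuting 2u8
        // violates that invariant even though no compile error is raised.
        let b: bool = unsafe { mem::transmute(2u8) };
        println!("{}", b);
    }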


Apparently it has been a known issue since 1979, yet it keeps being ignored.

> To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions.

-- Dennis M. Ritchie

https://www.bell-labs.com/usr/dmr/www/chist.html

Any feature that is outsourced to an external tool has a very high probability of being ignored, so yes, it is a failure of the language's design.


Yes, that's the damn problem with C. You can be the best programmer on the entire planet, and then suddenly your skills become completely worthless because you have to work on broken code that you didn't write.

Of course people will write broken code in Java, safe Rust, and JavaScript. Fortunately, the consequences of broken code in those languages are restricted to the application containing it. Broken C code threatens your entire computer.


> Broken C code threatens your entire computer.

As does broken hardware.


Yes, but hardware is easier to replace...


It seems to me that if you already know how to write correct C code, then the 'difficult' parts of Rust should be easy for you, and you'll find the compiler mostly just points out potentially dangerous things that you might have missed every once in a while.



