> But you have to prove that your value proposition doesn't lead to horrible failure down the line.
No, I do not. My software is done; it has run for decades on the Internet, making me money. Whatever you could mean by "horrible failure", what I am doing either doesn't lead to it, or it's clearly not that bad.
> ... is not a valid argument
We're not having an argument. If you want to ask a question, I can give you an answer, but there's nothing here to argue:
I'm using a "memory unsafe" language because I can write secure programs that run quickly with it, and I don't live in a Harrison Bergeron world.
This is about why you can't yet, and really doesn't have anything to do with me.
No, but it does require covering fixes, free of charge, in certain security-critical infrastructure.
The software company bears the cost of producing faulty software.
When that starts touching the bank account, many companies will start considering alternatives.
In fact, this is what finally drove Microsoft et al. to start embracing other stacks: the amount of money burned fixing exploits for free in OS updates.
> I'm using a "memory unsafe" language because I can write secure programs that run quickly with it, and I don't live in a Harrison Bergeron world.
Did you prove it mathematically?
Did you wrap it in a Rust API and use Miri to inspect for UB? (A sketch of what that could look like follows these questions.)
Did you run it through ASAN, MSAN, STACK, etc.?
Has all UB and platform-specific behavior been accounted for?
Did you run fuzzers for days or weeks? What issues did they uncover?
Does your test coverage account for all cases and most input variations?
Or is this a case of "I think I can write secure programs"? There is a light-year-wide trench between thinking and proving you can do something. And "it hasn't broken yet" is not a guarantee.
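To make the Miri question concrete, here is a minimal sketch, not the poster's code: `raw_copy` and `copy_into` are hypothetical stand-ins for whatever is under test. The idea is to confine the unsafe part behind a safe API and then run the tests under Miri (`cargo +nightly miri test`), which flags UB such as out-of-bounds or overlapping copies.

    // Hypothetical sketch: `raw_copy` stands in for the unsafe code under test.
    // Check the test for UB with: cargo +nightly miri test

    /// Unsafe primitive: caller must guarantee valid, non-overlapping
    /// pointers with at least `n` readable/writable bytes.
    unsafe fn raw_copy(dst: *mut u8, src: *const u8, n: usize) {
        std::ptr::copy_nonoverlapping(src, dst, n);
    }

    /// Safe wrapper: the length clamp plus &mut uniqueness make the call sound.
    pub fn copy_into(dst: &mut [u8], src: &[u8]) {
        let n = src.len().min(dst.len());
        // SAFETY: `n` fits in both slices, and the regions cannot overlap
        // because `dst` is a unique mutable borrow.
        unsafe { raw_copy(dst.as_mut_ptr(), src.as_ptr(), n) }
    }

    #[cfg(test)]
    mod tests {
        #[test]
        fn clamps_to_destination_length() {
            let mut dst = [0u8; 2];
            super::copy_into(&mut dst, &[1, 2, 3]);
            assert_eq!(dst, [1, 2]); // extra source bytes are dropped
        }
    }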
No, it's not. That's why I am asking for clarification. By testing, proving programs, running fuzzers, running sanitizers, and various other tooling, you reduce the chance of problems to an acceptable level. You can't be perfectly safe, but doing nothing isn't acceptable behavior in the face of preventable risk.
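As a concrete illustration of the fuzzing step, a cargo-fuzz target is only a few lines. This is a sketch under stated assumptions: `my_crate::parse` is a hypothetical stand-in for the code under test, and cargo-fuzz builds the target with ASan enabled by default.

    // fuzz/fuzz_targets/parse.rs -- hypothetical cargo-fuzz target.
    // Run with: cargo +nightly fuzz run parse
    #![no_main]
    use libfuzzer_sys::fuzz_target;

    fuzz_target!(|data: &[u8]| {
        // Throw arbitrary bytes at the parser; any panic or
        // sanitizer-detected memory error counts as a finding.
        let _ = my_crate::parse(data);
    });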
Sure, writing unsafe code is hard. And from what I see, most of these issues relate to old versions of Rust having unsoundness issues that were fixed at a later point:
> In the standard library in Rust before 1.52.0,
> In the standard library in Rust before 1.51.0,
> In the standard library in Rust before 1.19.0,
...
Basically, by upgrading your Rust version, your code becomes less and less buggy over time, which is not something C can boast about (modern C++ is safer, but still a far cry from what is achievable in, say, Java).
> No, it's not. By testing, proving programs, running fuzzers, running sanitizers, and various other tooling, you reduce the chance of problems to an acceptable level
Yes, it is. If the outcome is acceptable to you, it's because you think so.
The Rust authors thought wrongly too: all this testing, proving, fuzzing, sanitizing, and various other tooling "accepted" the code with bugs in it, but it wasn't good enough, so they fixed it. It clearly wasn't "acceptable" to them.
Meanwhile, I'm wondering what the heck kind of crazy crackhead you gotta be to think that the "bugs" nobody has hit in ten-year-old code that paid for my house are somehow worse than these bugs that passed testing, proving, fuzzing, sanitizing, and various other tooling.
> but doing nothing isn't acceptable behavior in the face of preventable risk
Going 200kph the wrong way is absolutely worse than doing nothing.
> Basically, by upgrading your Rust version, your code becomes less and less buggy over time, which is not something C can boast about
No, your code doesn't become less buggy; you just stop using other people's clearly buggy but nonetheless proven, tested, fuzzed, and sanitized code.
Your application may or may not become less buggy: if a user can't hit a bug, it isn't a bug. But if they have, you're going to be hoping those other people at least made small diffs so you have a chance of finding it.
> Going 200kph the wrong way is absolutely worse than doing nothing.
What's the alternative, have twice as many critical vulnerabilities? Using Rust in Android, in Linux, and for drivers has already been proven to work, and work rather well (Linus's snark aside). See recent postings about Rust code in Android.
> The Rust authors thought wrongly too: all this testing, proving, fuzzing, sanitizing, and various other tooling "accepted" the code with bugs in it
Just because fuzzing, testing, and proving still let bugs exist doesn't mean they're pointless, let alone when doing it in a memory-unsafe language.
> Meanwhile, I'm wondering what the heck kind of crazy crackhead you gotta be to think that the "bugs" nobody has hit in ten-year-old code
The kind of crackhead that had to pick up the pieces after 15-year-old code that people thought was "working" but had massive oversights. I know what passes for working, and honestly it scares me. From segfaults when comments are removed, to XML parsers that don't understand namespaces, to bugs in unexercised code that fucked over entire ecosystems. Would Rust solve all of them? Probably not, but most likely the first one wouldn't have happened.
That said, I'm not judging your code; it's possible to write C code without UB, but it's kinda like winning the lottery. libfyaml is one such library.
> What's the alternative, have twice as many critical vulnerabilities?
Oh, don't be silly: the software with the best security track record is written in C (e.g. qmail), so there are obviously many alternatives. You could sit and think for a bit, for example.
> Just because fuzzing, testing, and proving still let bugs exist doesn't mean they're pointless, let alone when doing it in a memory-unsafe language.
I never said they were pointless, just that you were wrong about what they do.
> That said, I'm not judging your code
Really sounds like you are. I talk about code that's finished, and you talk about code that isn't finished.
It totally makes sense to me how someone who wants to never get finished would use Rust, but friend, that isn't me.
Yes. But you have to prove that your value proposition doesn't lead to horrible failure down the line.
I.e., "we need to make software fast, security be damned" is not a valid argument.