The correct procedure when they fuck up and close the report is to ask for the report to be made public. Had he done this, this would have been a non-issue.
The reason people don't do this is that they think they have something that can be modified into another bug. Which is exactly what happened here.
It is a consequence of (a) Go's implicit relationship between a concrete type and the interfaces it implements and (b) Go's "heterogeneous" translation of generics (like C++, unlike Java). Together, this means you can't know which methods you need a priori. All proposed solutions to date essentially compromise on (a) by limiting the generic "implements" relation to things known at build time, or on (b) by making generic methods "homogeneous" (aka boxed) and thus slow (see https://github.com/golang/go/issues/49085#issuecomment-23163... for an elaboration of this approach); or, they involve dynamic code generation.
It's broadly similar to the reason why Rust won't allow generic methods on traits used to construct trait objects. It seems superficially like a reasonable thing to want, but actually isn't when you consider the details. (The sister comments link to the specific reasons why in the case of Go.)
I recall reading some details around this on https://blog.merovius.de. Maybe it was this article: https://blog.merovius.de/posts/2018-06-03-why-doesnt-go-have.... Compilation speed is a big factor in deciding which features get added to Golang, and I think they'd have a hard time maintaining the current compilation speed if they removed this limitation.
Equally serious answer: yes, if it doesn't break backwards compatibility between the updates.
This is what the Rust project means by stable. You can update and your code will continue building. (There's a bunch of documented caveats though.) Rust has been stable in this sense since 1.0, almost ten years ago.
Of course, you might have different semantics for "stable". Some seem to mean "rarely updating" or "each update is small" by that. In the latter sense, too, Rust has become stabler over the last few years.
In the "rarely updating" sense, Rust is not going to change course. Frequent, time-based releases have demonstrably made progress smoother and, in a sense, "stabler", as in, more predictable and bug-free.
The fact that they could go from zero to proof of concept in a year and then into production just 2-3 years later is impressive. Given these results, I feel the industry will slowly move to Rust once a certified toolchain for safety-critical systems exists.
At the same time, let's not forget that this is a highly competent team with tons of experience. It's not guaranteed that other developers can have the same success.
Ferrocene [0] exists today! I'm mainly interested in the space from the point of view of a theoretical computer scientist, so I'm not sure if there are additional boxes that need to be checked legally, but it's looking pretty good to me.
In the sense that there are a variety of requirements that need to be checked, there are. Each industry is different. Ferrocene has mostly been driven by automotive so far because that’s where the customers are, but they’ll get to all of them eventually.
> In the sense that there are a variety of requirements that need to be checked
Does "requirement" in this context refer to the same thing as a particular ISO/EN/... standard? Or do you mean that there are a multitude of standards, each of which make various demands and some of those might not yet be fulfilled?
My wording was much more ambiguous than I intended. What I meant to convey was that I don't know what hurdles there are beyond conforming to the relevant certifications. I.e. in the automotive context, Ferrocene is ISO 26262 certified, but is that sufficient to be used in a safety-critical automotive context, or are there additional steps that need to be taken before a supplier could use Ferrocene to create a qualified binary?
No worries! And once again, just to be clear, I don't work directly in these industries, so this is my current understanding of all of this, but some details may be slightly off. But the big picture should be correct.
It means a bunch of things: there are a multitude of standards, so just ISO 26262 isn't enough for some work, yes. But also, safety critical standards are different than say, the C standard. With a programming language standard, you implement it, and then you're done. Choosing to use a specific C compiler is something an organization does of their own volition, and maybe they don't care a ton about standardization, but being close enough to the standard is good enough, or extensions are fine. For example, the Linux project chose to use gcc specific extensions for C, and hasn't ever been able to work with just standard C. Clang wasn't possible until they implemented those gcc extensions. This is all fine and normal in our world.
But safety critical standards are more like a standardized process for managing risk. So there's more wiggle room, in some sense. It's less "here is the grammar for a language" and more "here is the way that you quantify various risks in the development process." What this means is, so like, the government has a requirement that a car follows ISO 26262. How do you demonstrate that your car does this? Well, there are auditing organizations. The government says "hey, we trust TÜV SÜD to certify that your organization is following ISO 26262." And so, if you want to sell a car, you get in touch with TÜV SÜD or an equivalent organization, and get accredited. To put it in C terms, imagine if there was a body that you had to explain your C compiler's implementation-defined behavior to, and they'd go "yeah that makes sense" or "no, that's not a legitimate implementation." (by the way, I am choosing TÜV SÜD because that is the organization that certified Ferrocene.)
Okay, so, I want to sell a car. I need to write some software. I have to convince TÜV SÜD that I am compliant with ISO 26262. How do I do that? Well, I have to show them how I manage various risks. One of those risks is how my software is produced. One way to do that is to outsource part of my risk management by purchasing a license for a compiler that also implements ISO 26262. If I was willing to go to the work of certifying my own compiler, I could use whatever I want. But I'm in the car business, not the compiler business, so it makes more sense to purchase a compiler like that. But that's fundamentally what it is, outsourcing one aspect of demonstrating I have a handle on risk management. Just because you have a certified compiler doesn't mean that any code produced by it is totally fine. It exists as one component of the larger project of demonstrating compliance. For example, all of the code I write may be bad. So while I don't have to demonstrate anything about the compiler other than that it is compliant, I'm gonna need to demonstrate that my code follows those guidelines. Ferrocene has not yet in my understanding qualified the Rust core or standard libraries, only the compiler, and so if I want to use those, that counts as similar to my own code. But this is what I'm getting at, there's just a lot more work to be done as part of the overall effort than "I purchased a compiler and now I'm good to go."
That was a very in-depth reply and is very appreciated. One point I did not expect was that core and alloc are not qualified yet. In any case, you motivated me to do some research of my own to fill in the gaps in my own understanding. What follows is my attempt to summarize all of this in the hope that you and anyone else reading it may also find it helpful.
I want to take a step back: why does the automotive industry care about certain qualifications? Because legislation mandates that they follow them so that cars are "safe". In Germany the industry is required to follow whatever the "state of the art" is. This is not necessarily ISO 26262, but it might be. It might also be one of the many DIN norms, or even a combination thereof.
ISO 26262 concerns itself with mitigating risks and hazards introduced by safety-critical systems and poses a list of technical and non-technical requirements that need to be fulfilled. These concern both the final binaries and to some degree the development process. As you pointed out, the manufacturer needs to ultimately prove to some body that their binaries adhere to the standard. Use of a qualified compiler does not appear to be strictly necessary to achieve that. However, proving properties of a binary that is the result of a compilation process is prohibitively difficult. We'd rather prove properties of our source code.
However, proving properties of source code is only sufficient to show properties of the binary if the compilation process does not change the behavior of the program. This is where having a qualified compiler seems to come in. If my compiler is qualified, I may assume that it is sufficiently free of faults. Personally, I'd rather have a formally verified compiler, but that's obviously a much larger undertaking. (For C, CompCert [0] exists.)
Now, as you point out, none of this helps if my own code is bad. I still need to certify my own code and Ferrocene can be a part of that. However, to circle back to my prior question of additional boxes that need to be checked: Yes, any Rust code written (and any parts of core, alloc, and std that are used) needs to be certified, but Ferrocene's rustc is ready to be used in software aiming for ISO 26262 compliance today. No additional boxes pertaining to rustc need checking; although, qualified core and alloc would certainly be helpful.
Given this is already rolling off the production line, the toolchain they use must be ISO 26262 certified. More than actual engineering (though there still is some), the hard part is getting that certification at all, or you can't put it in a car.
> Fuzzing is often a special case of genetic algorithms
Yes, that was sort of why I thought RL-guided fuzzing could work, and possibly better. Also, for things like XSS fuzzing (which I have a little experience in), it is possible for an experienced attacker to intelligently guide the fuzzing toward a payload, which theoretically could be mimicked through RL.
There wasn't really anything novel in the proposal, it was just for a graduate cyber-security course, and one of the deliverables was a project proposal for something related. There were already some existing works at the time (around 2-3 years ago) where people tried combining RL with fuzzing, and I just mish-mashed some ideas together so I could hand in something.
My main concern at the time, however, was that with fuzzing the positive signal would be so rare compared to the negative signal, since most randomly fuzzed inputs would just return the same negative feedback. I wasn't sure that would be enough signal to train an RL system. I'm not quite sure what new progress has been made in the field since then.