I can’t help but feel CHERI is a hardware hack for a software problem. Hardware has no business patching software’s errors. Given the history of vulnerabilities in hardware security features themselves, I’m also sceptical of it being a long-term, robust solution. For instance, I recall a paper busting ARM’s Memory Tagging Extension.
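To make the "software problem" concrete, here's a minimal sketch of the bug class at issue (plain C, nothing CHERI-specific in the source): on conventional hardware the out-of-bounds store silently corrupts the heap, whereas on a CHERI system compiled in pure-capability mode the pointer returned by malloc is a capability carrying hardware-enforced bounds, so the store faults instead.

```c
#include <stdlib.h>

int main(void) {
    /* On a CHERI purecap build, this pointer is a capability
       bounded to the 16-byte allocation. */
    char *buf = malloc(16);
    if (!buf) return 1;

    /* Classic heap overflow: conventional hardware executes this
       silently (undefined behavior); CHERI hardware raises a
       capability bounds fault here instead. */
    buf[20] = 'x';

    free(buf);
    return 0;
}
```

That's the whole pitch in a nutshell, and also exactly why it feels like hardware being asked to clean up after software.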
It's not clear to me if ARM (the company) is actually interested in pursuing CHERI, since all the work seems to have been DARPA or UK government funded. I'm more interested in CHERI's RISC-V implementation (FPGA only for now, I think). Some companies might be able to run with RISC-V and give us real hardware.
Again, it's just better if you say what you're trying to say, instead of posting a bunch of links and leaving us to figure it out.
I don't know how any of your links really helps us weigh ARM's commitment to CHERI going forward, which is the point of discussion. Indeed, none of those three links has anything to do with ARM at all, other than the first mentioning Morello.
The comment you responded to is this:
> It's not clear to me if ARM (the company) is actually interested in pursuing CHERI
We're discussing whether this is a likely future direction for ARM, and whether ARM will continue to invest in it or push it as a key priority. That is what you replied to. You've shown nothing other than that ARM was interested in taking some research money in the past. And, frankly, you've wasted my time with links while I tried to figure out what you were saying (which turned out to be irrelevant).
It probably doesn't make sense for FreeBSD to maintain a separate distro build for an experimental architecture that only supports a single demo board and requires lots of patches.
It is a super interesting project though. It would be great if the hardware performance overheads were reduced so much that it became a mainstream ARM target.
Sadly, seL4's design approach and formal verification process still haven't caught on. Until they do, we're just rebuilding castles with slightly different grades of sand in the ocean surf and expecting a different result.
I’m not sure about the “sadly” part here. Whilst it proved that something could be done, it also taught us a lot about what doing it cost (a lot), how scalable it was (not very), and whether it would be broadly applicable (no).
Any strategy for improving software resilience has to account for the scale and pace of software development, as well as accommodate the skill gradient and the surrounding social factors. Formal verification as currently practised fails at all of this.
Doing the same thing and expecting a different result is either madness or stupidity. Rigor and correctness are lacking in most, if not all, software engineering, and that's still not okay.
Those kinds of solutions only work if you get buy-in from the human side as well.
For many safety concepts in software development, the problem isn't technical.
And many devs are like St. Thomas: only if you build X with Y to prove Z, while still achieving webscale delivery, will they "maybe" adopt the technology.
Thus the money never follows, and with such a sceptical attitude, many good technologies fail to gain adoption.
Apple's M1 certainly reset people's expectations for battery life in a laptop.
A Morello-like Apple Silicon implementation with software support in macOS might not give end users anything as blindingly obvious as that battery-life jump, but it could serve as a compelling proof of concept to competing vendors, demonstrating the practicality of hardware memory-safety features for mass-market devices.