Extra comment to add something I left off. There are at least two types of static analysis and solver tools: unsound and sound. The sound ones, especially RV-Match and Astree Analyzer, use a formal semantics of the code, a formal statement of the property, and automatic analysis to determine whether the property holds. Relatedly, SPARK Ada and Frama-C turn their formal specs and code into verification conditions that check the code's conformance to the specs. The VCs go through Why3, which dispatches them to multiple automated solvers for logical checking. These automated methods are far easier to scale and get adopted than manual proofs.
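As a rough illustration of that spec-to-VC pipeline (not the actual SPARK/Why3 toolchain), here's a toy version in Python: the contract is stated as a pre/postcondition, and instead of dispatching the resulting verification condition to SMT solvers, we discharge it by exhaustive checking over a small bounded domain. All names here are invented for the sketch.

```python
# Toy illustration of: spec -> verification condition -> check.
# Real tools (SPARK, Frama-C) emit VCs for Why3/SMT solvers; here we
# discharge the VC by exhaustive bounded checking instead.

def saturating_add(x: int, y: int, limit: int = 255) -> int:
    """Implementation under verification."""
    return min(x + y, limit)

def verification_condition(x: int, y: int, limit: int = 255) -> bool:
    """Contract as a VC: result never exceeds the limit, and it equals
    the exact sum whenever that sum is within the limit."""
    r = saturating_add(x, y, limit)
    return r <= limit and (x + y > limit or r == x + y)

def discharge_vc(bound: int = 256) -> bool:
    """Stand-in 'solver': check the VC for every input in a box."""
    return all(verification_condition(x, y)
               for x in range(bound) for y in range(bound))
```

A real solver proves the VC for all inputs symbolically; the exhaustive loop is just to make the "VC holds / VC fails" distinction concrete.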

The main drawback is potential errors in the implementations of the analyzers or solvers, which would invalidate what they prove. Designs for certifying solvers exist: they are either verified themselves or produce a checkable certificate as they go. Examples include verSAT and Verasco. The technology to assure the solvers is there. Personally, I'm guessing it hasn't been applied to industrial solvers due to academic incentives: funding authorities push researchers toward quantity of papers over quality, and toward new stuff over re-using good old stuff. As with infrastructure code, everyone is probably hoping someone else does the tedious, boring work of improving the non-novel code everyone depends on.
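The certifying-solver idea is easy to sketch: the (possibly buggy) solver emits a certificate alongside its answer, and a much smaller, independently trusted checker validates it. Here's a toy Python version for SAT with clauses in CNF; the function names are invented, and real certifying solvers also emit proofs for UNSAT answers (e.g., DRAT format), which this sketch omits.

```python
from itertools import product

# A CNF clause is a list of nonzero ints: positive literal = variable
# is true, negative literal = variable is false.

def brute_force_sat(clauses, num_vars):
    """Untrusted 'solver': returns a satisfying assignment (the
    certificate) or None. Bugs here can't silently break soundness,
    because the checker below re-validates the answer."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c)
               for c in clauses):
            return assignment
    return None

def check_certificate(clauses, assignment):
    """Trusted checker: a few lines that verify every clause has a
    true literal, independent of how the solver found the model."""
    return all(any(assignment[abs(l)] == (l > 0) for l in c)
               for c in clauses)
```

The point is the asymmetry: the solver can be large and clever, but only the few-line checker has to be trusted (or formally verified).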

Also, given my background in high-assurance research, I want each of these tools and methods, mathematical or not, proven over many benchmarks of synthetic and real-world examples to assess effectiveness; LAVA is one example. I want them proven in theory and in practice. The techniques that prevent or catch the most bugs get the most trust.


"Well that's impossible (see also the halting problem) so that's pretty clearly not good security practice."

No, it's not impossible. It's been done many times. The halting problem applies to a more general question than the constrained proofs you need for specific computer programs. If you were right, tools like RV-Match and Astree Analyzer wouldn't be finding piles of vulnerabilities with mathematical analyses, and SPARK Ada code would be as buggy as comparable C. Clearly, the analyses work as intended despite not being perfect.
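One standard way around undecidability is to over-approximate: a sound analyzer (Astree works this way, via abstract interpretation) answers "definitely safe" or "don't know" rather than deciding every program exactly, so the halting problem never bites. A minimal interval-analysis sketch in Python, with all names invented:

```python
# Minimal interval abstract domain: track [lo, hi] bounds for a value
# instead of running the program on every concrete input. Sound but
# incomplete: it may say "don't know" about safe code, but it never
# says "safe" about unsafe code.

class Interval:
    def __init__(self, lo: int, hi: int):
        self.lo, self.hi = lo, hi

    def add(self, other: "Interval") -> "Interval":
        # Abstract addition: bounds add component-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def within(self, lo: int, hi: int) -> bool:
        """True only if EVERY concrete value is inside [lo, hi]."""
        return lo <= self.lo and self.hi <= hi

# Analyze `x + y` where x is in [0, 100] and y is in [0, 27]:
x, y = Interval(0, 100), Interval(0, 27)
result = x.add(y)

# Prove absence of 8-bit overflow for ALL inputs in those ranges:
no_overflow = result.within(0, 255)
```

The analysis terminates on every input program because it manipulates finite summaries, not executions; the price is that it can return "don't know", which is exactly the trade these tools make.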

"Security is about identifying and mitigating specific risks. "

Computer security, when it was invented in the 1970s, was about proving that a system followed a specific security policy (the security goals) in all circumstances or failed safe. The policy was usually isolation; there are others, such as guaranteed ordering or forms of type safety. High-assurance security's basic approach was turned into certification criteria applied to production systems as early as 1985, with SCOMP being the first system certified. The NSA spent five years analyzing and trying to hack that thing; most products get about two years, with minimal problems found. I describe some of the prescribed activities in my own framework from way back when:

https://pastebin.com/y3PufJ0V
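A security policy in this sense is a property enforced on every access, not a list of blocked attacks. Isolation-style multilevel policies like Bell-LaPadula, which underpinned systems of that era, can be stated in a few lines; this Python sketch uses invented names and is only meant to show the shape of a policy:

```python
# Sketch of a Bell-LaPadula-style multilevel policy: a reference
# monitor permits an access only if it preserves the policy, for
# EVERY request, rather than blocking known-bad requests.

LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

def allowed(subject_level: str, object_level: str, op: str) -> bool:
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if op == "read":
        return s >= o        # simple security: no read up
    if op == "write":
        return s <= o        # *-property: no write down
    return False             # default-deny everything else
```

Because the monitor is a total function over requests, you can state and prove "information never flows from high to low" as a theorem about all executions, which is the kind of claim the old certification criteria demanded.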

I eventually made a summary of all the assurance techniques I learned from studying these commercial/government products and academic projects:

https://pastebin.com/uyNfvqcp

Note that projects in the 1960s were hitting lower defect rates than projects achieve today. For higher cost-benefit, I identified this combination: Design-by-Contract, Cleanroom (optional), multiple rounds of static analysis by tools with low false-positive rates, test generators (especially ones that consider the contracts), and fuzzing with the contracts compiled in as runtime checks (think asserts). That, with a memory-safe language, should knock out most major problems with minimal effort on developers' part (some annotations). Most of it would run in the background or on build servers.

https://www.win.tue.nl/~wstomv/edu/2ip30/references/design-b...

https://web.archive.org/web/20190428052851/http://infohost.n...
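The contracts-as-runtime-checks idea combines naturally with fuzzing: the fuzzer supplies the inputs, and the contracts turn any internal violation into an immediate, diagnosable failure instead of silent corruption. A minimal Python sketch (the decorator and function names are invented for illustration):

```python
import random

def contract(pre, post):
    """Tiny Design-by-Contract decorator: check the precondition on
    entry and the postcondition on exit, as runtime asserts."""
    def wrap(fn):
        def inner(*args):
            assert pre(*args), f"precondition failed: {args}"
            result = fn(*args)
            assert post(result, *args), f"postcondition failed: {result}"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    smallest = xs[0]
    for x in xs[1:]:
        if x < smallest:
            smallest = x
    return smallest

def fuzz(runs: int = 1000) -> None:
    """Dumb fuzzer: random inputs, with the contracts as the oracle.
    Any violation raises AssertionError at the exact faulty call."""
    for _ in range(runs):
        xs = [random.randint(-100, 100)
              for _ in range(random.randint(1, 20))]
        minimum(xs)
```

SPARK Ada and Eiffel give you this natively, and languages like Python or Rust can approximate it with asserts, which is part of why the combination is cheap for developers: write the annotations once, then let the build servers hammer them.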

Meanwhile, the state of development for a major OS leads to about 10,000 bugs that even a fuzzer can find:

https://events.linuxfoundation.org/wp-content/uploads/2017/1...

Modern OSes, routers, basic apps, etc., aren't as secure as software designed in the 1960s-1980s. People define "secure" as mitigating specific things hackers are currently doing (they'll just do something else) instead of properties the system must maintain in all executions on all inputs. We have tools and development methods to do this, but they're just not applied in general. Some projects still do, like INTEGRITY-178B and the Muen Separation Kernel; heck, even IRONSIDES DNS and TrustDNS, done in SPARK Ada and Rust respectively. Many tools for achieving higher quality and security are free. Don't pretend it's only genius mathematicians or Fortune 25 companies that can, say, run a fuzzer after developing in a disciplined way with Ada or Rust.

