Wow, I was going to comment "IoT, the 'S' stands for 'security'", but this is about VxWorks, a battle-proven (literally) RTOS.

This illustrates a broader point: even now, in 2019, there is essentially no OS designed for security. Security was never a real goal. Even software written specifically to address security requirements can have gaping holes (cf. Heartbleed)...




I tilted at the windmill of making and selling a secure RTOS for 10 years. It is a fool's errand, and everybody who likes money should stay away.

What I painfully learned is that hardly anybody, save occasionally the US government, is willing to compromise on features or pay one cent more for a secure RTOS.

Look at it another way. The market for cybersecurity is supposed to hit $300 billion by 2024, according to a recent report. If we only released secure software, that $300 billion market wouldn't exist. In that sense, it is vastly more economically viable to release insecure software than to spend any effort on securing it.


> sometimes the US government

Yeah, but that's a huge opportunity, because they have tons of money and constantly overpay massively for things.

> If we only released secure software, that $300 billion market wouldn't exist

This is the broken window fallacy [1]. In fact, what you are saying is that there is $300 billion of value to be created by making secure software.

1: https://www.investopedia.com/ask/answers/08/broken-window-fa...


The broken window fallacy doesn't apply to software security, for a couple of reasons. The first is that it isn't apparent that software is insecure until well after the sale, and depending on how it's used (e.g. deployed on a secured, isolated network), it might actually be fit for purpose. In other words, the measurement of window brokenness changes over time and with the situation.

The other important difference is that the software vendor, unlike the glazier, doesn't directly benefit from folks breaking its software.

A better analogy is the introduction of the iPhone creating the iPhone accessories market. For lots of reasons, Apple decided not to make the iPhone crush proof, allowing Otter and others to sell protective cases for it.


To say that it's "vastly more economically viable" to spend $300 billion fixing insecure software rather than simply making it secure in the first place is fallacious.

If you made the software secure in the first place, you would have secure software and $300 billion to spend on something else (guns, butter, or what have you).

That seems like the very definition of the broken window fallacy to me, but hey, if not, it's still a fallacy.

Of course we haven't factored in the extra cost of making the software secure in the first place. If that costs vastly more than $300 billion, it is vastly more economically viable to just make broken software, but I don't think that was the intention of the statement.


Sadly that actually sounds like it could be a really interesting industry to work in tbh.


I have no idea of the details, but this:

> The URGENT/11 vulnerabilities affect the noted VxWorks versions since version 6.5, but not the versions of the product designed for safety certification – VxWorks 653 and VxWorks Cert Edition, which are used by selected critical infrastructure industries such as transportation.

This reads to me as saying that a "certified", "safe" version exists but that a lot of companies used the "normal" edition (most probably to save money), and, indirectly, that the differences between the "normal" and "certified" editions were known, at least to the developers/company actually making VxWorks.

It would be "queer" that the "certified" editions have "different" mechanisms implemented (for completely different reasons) and only coincidentally they are more secure.


A lot of "Safety-critical" certified versions of operating systems just don't include things like TCP/IP stacks or userspace applications. That's probably what WindRiver is referring to here. Otherwise they might actually have to do rigorous design verification and testing on their network stack which would cost a great deal.

For example, you can get a "medical grade" QNX, but the certificate only covers the kernel, so you have to write and verify the entire userspace yourself.


Sure, but then I would have expected a "reason" provided by Armis Labs, i.e. something like:

>... which are used by selected critical infrastructure industries such as transportation ...

... as they do not contain the vulnerable TCP/IP stack.


No, that still fits with my experience. In the safety-critical realm, anything that casts doubt on your claims of robustness, reliability, or safety, such as a TCP/IP stack vulnerability, opens you up to a lawsuit.

What will happen is you'll purchase a certificate for the RTOS kernel plus a few critical components. Then you can choose to use any other off-the-shelf components that the vendor or third parties provide. Those parts don't have to be safety-critical, but if a defect is found in uncertified software it's not VxWorks's problem.

VxWorks is very clearly and concisely stating that the safety-critical certified components are not affected. But they're not going to make statements about the systems their safety-critical clients built. That's not their responsibility. And Armis is almost certainly reprinting a statement from VxWorks. Both Armis and VxWorks are leaving it up to each VxWorks customer to determine whether their particular configuration of Safety-Critical VxWorks uses a vulnerable stack as an add-on.


Well, it's possible that Armis Labs is reprinting a statement from VxWorks, but the statement is on Armis's page and reads as if they wrote it themselves, since it is an announcement of vulnerabilities discovered by Armis. As I see it, there are two possibilities:

1) Armis tested those certified environments and couldn't replicate the bugs

2) Armis did not test those certified environments and reprinted the VxWorks statement

If #2, they should instead have written "we could not test the vulnerabilities on the "certified" versions, but we trust VxWorks' assurance that they are not vulnerable (because ... )".


Or maybe it is less widely deployed, so targeting it is not as attractive...


I don't understand how that is meaningful.

If an entire "family" of versions of the OS is vulnerable to these 11 (eleven) bugs whilst the "certified" versions are vulnerable to none (of these specific 11 ones, not necessarily not vulnerable to other 11, maybe 12, other ones), it means that the certified versions are different.

Small volume, and thus being a less attractive target, might explain why no one has found the hypothetical "other 12" vulnerabilities in the "certified" versions yet: nobody has spent the time looking for them.


And, as usual, some of them are caused by memory corruption:

> Stack overflow in the parsing of IPv4 options (CVE-2019-12256)

> Four memory corruption vulnerabilities stemming from erroneous handling of TCP’s Urgent Pointer field (CVE-2019-12255, CVE-2019-12260, CVE-2019-12261, CVE-2019-12263)

> Heap overflow in DHCP Offer/ACK parsing in ipdhcpc (CVE-2019-12257)

> DoS via NULL dereference in IGMP parsing (CVE-2019-12259)

While a safer language wouldn't make the remaining logic bugs disappear, that would be 7 fewer vulnerabilities.
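
To make that concrete, here is a rough sketch (not VxWorks code, and only a loose approximation of option parsing) of how a bounds-checked language such as Rust turns a hostile length field into an error instead of silent memory corruption:

    // Minimal sketch: parse (type, length, data) options from a packet buffer.
    // In C, trusting `len` while copying into a fixed-size stack buffer can
    // smash the stack; here an out-of-range length just yields an Err.
    fn parse_options(buf: &[u8]) -> Result<Vec<(u8, &[u8])>, &'static str> {
        let mut options = Vec::new();
        let mut pos = 0;
        while pos < buf.len() {
            let opt_type = buf[pos];
            if opt_type == 0 {
                break; // end-of-options marker
            }
            // The length byte covers type + length + data, so it must be >= 2.
            let len = *buf.get(pos + 1).ok_or("truncated option header")? as usize;
            if len < 2 {
                return Err("invalid option length");
            }
            // Bounds-checked slicing: a hostile length cannot read or write
            // past the end of the packet buffer.
            let data = buf.get(pos + 2..pos + len).ok_or("option overruns packet")?;
            options.push((opt_type, data));
            pos += len;
        }
        Ok(options)
    }

    fn main() {
        // A malformed packet claiming a 200-byte option in a 4-byte buffer.
        let packet = [7u8, 200, 1, 2];
        println!("{:?}", parse_options(&packet)); // Err("option overruns packet")
    }

The logic bugs (say, accepting an option you shouldn't) survive either way; it's the memory corruption class that goes away.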


Battle-proven in what context? I think in terms of being real-time, but not security. OpenBSD is designed for security.


as in military control software.


I wonder if there is a company that's really interested in developing a secure (by design) operating system. Apart from you-know-who?


There have been efforts to do this. Back during the first crypto wars I had some code in Java that would give it capabilities similar to the way the language Joule did it (basically, the class loader would elide any methods from the loaded class based on capabilities, so they weren't even available). I got a couple of patents out of it, but sadly the politics inside of Sun kept it from going anywhere useful.

Highly constrained OSes and languages that minimize attack surface tend to be challenging to work in. As a result they constrain productivity, which increases time to market, and the folks who got something out, even insecure, would "win" the market. It was a sad thing to see happen.


I found that working on Green Hills' Integrity was quite a bit easier than working in userspace on a typical UNIX. The primary reason is that the API for Integrity was designed in the late '90s, whereas the POSIX API was inherited from the first thing folks thought of in the late '70s.

The other aspect was that IPC via message passing is a very natural way to program.
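
For what it's worth, here's a toy illustration of that style (using Rust's standard channels purely as a stand-in; it has nothing to do with the actual Integrity API): each task owns its own state, and the only way to touch it is to send a message and wait for the reply, which keeps interfaces narrow and easy to reason about.

    use std::collections::HashMap;
    use std::sync::mpsc;
    use std::thread;

    // Toy request messages; real RTOS IPC carries typed payloads over
    // kernel-managed connections, but the shape is the same.
    enum Request {
        Set(String, u32),
        Get(String, mpsc::Sender<Option<u32>>),
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Request>();

        // "Server" task: owns the table exclusively; no shared mutable state.
        let server = thread::spawn(move || {
            let mut table = HashMap::new();
            for req in rx {
                match req {
                    Request::Set(key, value) => {
                        table.insert(key, value);
                    }
                    Request::Get(key, reply) => {
                        let _ = reply.send(table.get(&key).copied());
                    }
                }
            }
        });

        // "Client" task: interacts only by passing messages.
        tx.send(Request::Set("sensor.rpm".into(), 4200)).unwrap();
        let (reply_tx, reply_rx) = mpsc::channel();
        tx.send(Request::Get("sensor.rpm".into(), reply_tx)).unwrap();
        println!("rpm = {:?}", reply_rx.recv().unwrap()); // rpm = Some(4200)

        drop(tx); // closing the channel lets the server loop finish
        server.join().unwrap();
    }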


Take a look at seL4 [1].

That it has never taken off is more evidence that there's no money in securing software, just cleaning up the mess insecure software leaves behind.

1. https://sel4.systems/


Is it true that seL4 has never "taken off"? And might it be too early to tell?

I am under the impression that the people behind seL4 managed to successfully commercialize other, earlier versions of L4 before seL4 was created.

Anyway, even if we grant the premise that seL4 has not taken off, that does not seem to justify saying that there is no money in securing software.


seL4 just celebrated its 10th anniversary. seL4 isn't widespread in COTS systems but rather in high assurance government systems as explained in this blog post: https://microkerneldude.wordpress.com/2019/08/06/10-years-se...


seL4 is a small microkernel, not a complete operating system. It is very, very cool, and deserves more adoption, but a customer would need a load of stuff on top of it for it to be a viable option.


I don't know who.


GHS Integrity?



Wow, that's a blast from the past. That's well over 12-year-old (maybe 14?) software, third-party to Integrity.


They have a cool T-rex skull in their office.

https://twitter.com/phil_torres/status/700115845540765697


OpenBSD?


https://www.cvedetails.com/vulnerability-list/vendor_id-97/p...

I might venture to say that (in my opinion) no OS written in a memory-unsafe language is secure by design.


Tock might fit the bill then (Rust): https://www.tockos.org/documentation/design


THALES has put Linux into battle mode.

EDIT: regarding SYSGO GmbH.


"Security" is a stupid goal to have: if your specifications (and their implementation) is correct then the software will be secure.

Correctness is a goal of many operating systems.


You can obviously correctly implement a wrong specification, one that doesn't provide security.

Common Criteria separates security into security functions and assurance (the effort spent verifying the implementation).


This is a very surprising point of view. I hope you don't work on avionics, or nuclear power plant control software, or anything else that has the potential to inflict a lot of harm.

You may not always foresee the requirements for 'correctness'.



