It's a shame there's no comparable open source OS :(
I actually remember using this demo floppy though, back in those days (the other impressive Unix-like OS I used back then was BeOS). It was the first time I stumbled upon Towers of Hanoi. Sadly the main developer of QNX passed away.
- VxWorks (https://en.wikipedia.org/w/index.php?title=VxWorks&oldid=857...)
- Nucleus (https://en.wikipedia.org/w/index.php?title=Nucleus_RTOS&oldi...)
- perhaps ThreadX (the "spiritual successor of Nucleus") (https://en.wikipedia.org/w/index.php?title=ThreadX&oldid=852...)
are on this list.
"The 64-bit Linux infrastructure is the de facto standard that is being used across the industry today...So it gives us more development tools, more tool chains and also more access into the third party development ecosystem."
XR itself provides abstractions on top of QNX and Linux, so devs working on platform independent code are not really affected.
All data plane activity is done on custom in-house ASICs called network processors (NPs). The CPU only handles control plane traffic and general administration.
It did a few really cool things for its time, considering that TCP/IP networking was an optional add-on when we originally deployed.
When the code is not enough, I go to http://www.qnx.com/developers/docs/7.0.0/#com.qnx.doc.qnxsdp...
There are plenty of changes to the userspace runtime, but the question was about the kernel.
Oh, also, 7.0 has an ISO 26262 certified variant. Not a technical difference, but an important one.
QNX was open for a little while, until the license change under BlackBerry. My question: did this license change only apply to future QNX versions, or did it also retroactively apply to the "open"-sourced code as well?
My primary current interest in QNX stems from being fascinated with old and/or unusual operating systems. I'd love to be able to go fishing for 6.4 et al, maybe even compile what I find from source (or maybe not), and basically just play with the system to see how it works. If it's fine for me to go and find 6.4 and poke at it for noncommercial purposes - well, that'd be awesome to know. Obviously such a usage model would not incorporate any official agreement or warranty, and I understand that.
In a somewhat related vein, at some point I may find it useful to observe how QNX handles certain technical minutiae as part of my own (hopeful) OS development work. Obviously sourcing the latest versions of the QNX source for this purpose would offer support options, not to mention a more relevant codebase; but it would be great to know that I'd be able to safely make do with the older releases as long as I don't seek/expect any form of support.
I was unfortunately out of the loop with the QNX scene during the period it was open so I never got a chance to grab any of the repos. (And a quick search turned up what appears to be some QNX 6.4 bits and pieces on GitHub (as the previous comment hinted at) but it doesn't look very official, so I don't want to sift through it in case I waste my time.) Of course official repo access has since been closed, so I can't check that. So: I figure why not ask, what can it hurt.
When the repo was opened, did it include full commit history, or a large portion of it?
QNX 4.x is really cool. I managed to get an old copy working in a VM some time ago (took a bit of thinking; there's so little documentation out there). 6.x is nicer, but 4.x feels faster (which kind of makes sense).
I've been fascinated with QNX for years, and to be honest I want to say I find the "closed > yay open!! > closed" timeline incredibly frustrating; but besides wistfulness, this has also generated a fair bit of confusion regarding the current status quo.
They deprecated some POSIX calls like posix_spawn_file_actions_addopen (at least the QNX 6.6 documentation mentions it's not fully implemented, and it is indeed broken).
They made the QNX 7.0 kernel instrumented-only (there are no longer non-instrumented versions).
They changed lots of low level stuff that is outside the kernel, like throwing out photon and replacing it with screen library, completely replacing the PCI server, making changes to the console, security patches, and so on. But since it's a microkernel, this is mostly in userland.
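For context on the deprecated call mentioned above: posix_spawn_file_actions_addopen registers a file to be opened on a given descriptor in the child before it execs. Python's os.posix_spawn exposes the same POSIX mechanism, so a minimal sketch of the intended behaviour (on a Linux/POSIX host, not QNX) looks like:

```python
import os
import tempfile

# posix_spawn_file_actions_addopen asks that a file be opened on a
# specific descriptor in the child before it execs.  os.posix_spawn
# exposes the same file-actions mechanism via tuples.
out = tempfile.NamedTemporaryFile(delete=False)

pid = os.posix_spawn(
    "/bin/echo",
    ["echo", "hello from child"],
    os.environ,
    file_actions=[
        # Equivalent to posix_spawn_file_actions_addopen(&fa, 1, path, ...):
        # redirect the child's stdout (fd 1) into our temp file.
        (os.POSIX_SPAWN_OPEN, 1, out.name,
         os.O_WRONLY | os.O_TRUNC, 0o644),
    ],
)
os.waitpid(pid, 0)

captured = open(out.name).read().strip()
print(captured)
```

On a platform where the file action is not fully implemented, the redirect above would silently fail or error out, which is the kind of breakage being described.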
What is the most mainstream or perhaps "least niche" of the real time embedded OSes?
BlackBerry nowadays is... well let's say the mobile world is basically divided into 2 sides these days: Android and iOS :p
There are also more mainstream real time Linux distros e.g. Yocto Linux
Is anybody else encountering this problem?
Variables and places are predefined. ASLR is a problem there, not a solution.
Your operating system executes different program images for every successive execution of your program, picked in an unpredictable manner.
How do you prove that every possibility passes the safety tests? How do you measure the risk of this random selection? How do you know when you have done enough simulation?
How do you match up software randomization with the ISO 26262 concept that all software faults are systematic and not random as (some) hardware faults are?
How do you prove that memory allocation and execution always meet performance goals? How do you construct and perform reproducible performance tests? How do you demonstrate that your measurements are meaningful?
Software engineering in this case involves thinking about all of these questions and more besides.
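To make the non-determinism concrete: under ASLR the same binary gets a different memory layout on every run. A quick illustration (assuming a Linux host with ASLR enabled, and CPython, where id() happens to be an object's address):

```python
import subprocess
import sys

# Launch two fresh interpreter processes and print the heap address of a
# newly allocated object in each.  With ASLR enabled, the two addresses
# usually differ even though the program is byte-for-byte identical.
cmd = [sys.executable, "-c", "print(hex(id(object())))"]
addr1 = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
addr2 = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
print(addr1, addr2)  # typically two different addresses
```

This is exactly what a safety case has to grapple with: every execution runs in a layout that was never individually tested.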
It appears (to me, at least) that the current state of the literature on ASLR is that it is treated as a succession of theoretical arms races (which new defence militates against which new attack), with almost no attention paid to the concerns of actually deploying it in a larger system; and the current state of the literature on functional safety is simply "we will assume that there are no randomization processes in the software" (from an actual paper presented at ESREL 2016).
Thanks for your explanation. To give a slightly different perspective on the quoted paragraph: mitigations such as ASLR etc. do not protect against security bugs, they just make them more "inconvenient" to exploit. So the average script kiddie will probably not be able to write an exploit for them. On the other hand, for well-funded agencies (think 3-letter agencies), these are no serious hurdles. In this sense, mitigations do not improve security in the sense of "fewer security holes". Instead their (probably unintended, though not undesired) consequence is that mostly well-funded agencies are able to exploit security holes. Whether this new situation is good or bad for software security is up to the reader to think about.
Keep in mind that before ASLR came, there was (and still is) DEP and its claims that lots of classes of attack were now impossible. The end of this story was that ROP was invented and hardly anything has changed, except that ROP code is much more tedious to write (i.e. no problem for well-funded attackers).
Now we have ASLR and you are probably right that now ROP exploits lead to process crashes instead. But attackers have already invented new techniques for circumventing ASLR, such as return-to-plt, GOT overwrite or GOT dereferencing. Again making it more inconvenient for script kiddies to write exploits, but again no problem for an attacker who can throw lots of money and people at the problem.
Helmets and bulletproof vests are no match for powerful rifles.
I'm a bit tired of this reasoning here on HN: If it isn't perfect it is worthless.
I think I can see reasons why a vendor might want to avoid ASLR in safety critical systems.
But we shouldn't talk down decent protection techniques that will often save us.
This argument (that "If it isn't perfect it is worthless" does not hold) is suitable for many topics in life, but in my opinion not for IT security. I can conceive that this might be one reason why so many people (explicitly including politicians) make such bad decisions about IT security.
I might be somewhat paranoid regarding this topic (which is not a bad trait if you want to work in this area), but let me give my arguments:
First: the fight for secure systems is deeply asymmetric. The attacker side just needs one working exploit, while the defender side has to ensure that no security hole exists. This strong asymmetry makes it necessary for security to be as close to perfect as possible.
Second: if the device is connected to the internet, everyone/every device that exists in the world can be an attacker. So what you are fighting against is the whole world. Or in other words: the security of the system that you use has to withstand the smartness of some of the smartest people in the world.
Let it be stated clearly that this fight is not as hopeless as it looks based on these arguments: for designing the security of your system, you can resort to the knowledge of many really, really smart people, too: this is what the various standards (e.g. for cryptography) are about. What you cannot afford is to tolerate the slightest bit of imperfection in the security architecture of the system.
TLDR: In security, "if it isn't at least nearly perfect, it is worthless" does indeed hold.
For the accepted standards, even the smartest people working in this area have not yet found a method to find the private key sufficiently fast (at least such a method has not been published). So to the best of our current knowledge, those methods are at least very near to the perfection that is possible with our current technology.
Cryptography is a branch of mathematics, and cryptographic systems can be formally proved to have certain properties, such as being unable to derive the private key from the content of the encrypted message. That the private key can be guessed is a trivial observation, and a bad argument for dismissing formal proofs. ASLR is a hack on a hack that does not tell you anything about the formal properties of the system.
A small correction: All those proofs (if they exist) are relative to complexity-theoretical conjectures that are (ideally) widely believed to be true, but open. The only system that I am aware of where an "absolute" security proof exists is OTP, but this is hardly suitable to use in practice.
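For completeness, the OTP (one-time pad) mentioned above is simple to sketch; its perfect secrecy is information-theoretic rather than resting on a complexity conjecture. A toy illustration, not production crypto:

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a truly random key of the same
    # length that is never reused.  Security does not depend on any
    # unproven complexity-theoretic assumption.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # must be truly random, used once
ct = otp_xor(msg, key)
recovered = otp_xor(ct, key)          # XOR is its own inverse
print(recovered)
```

The impracticality is visible right in the sketch: the key must be as long as the message, truly random, and never reused, which is why OTP is rarely deployed.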
Why? This only sounds like a bug to me if it is intended to be position-independent code (PIC).
A reason why ASLR is avoided in safety-critical code is that it introduces another source of non-determinism and potential bugs, which you want to avoid.
UPDATE: So you really want to keep the system as simple and small as possible and avoid adding anything to it that can introduce new bugs.