It's good to see classic ISAs moving away from memory-protection 'rings' towards arbitrary 'zones', even if retrofitting it (e.g. SMEP/SMAP) yields horrendous APIs and a nightmare to keep checked and balanced! ;)
The Mill comes at this from the other direction, starting with 'zones' (termed "turfs" in Mill jargon) and emulating 'rings' (if your kernel wants that) with overlapping access rights between turfs.
On the Mill you can have lots of turfs that may or may not have disjoint memory access, and you move between turfs synchronously with a special kind of indirect function call termed a "portal". There are provisions for passing across specific transient rights to memory in these calls, so you can pass a pointer to a buffer and other aspects that facilitate the 'usercopy()' mentioned in the article but with full hardware rather than software protection.
We have tightened the portal/turf concept extensively since the Security talk (http://millcomputing.com/docs/#security), but it still gives a gentle high-level intro to turfs and portals.
These days we have facilities for passing buffers without exposing memory pointers, and other niceties that make it easy to write correct yet efficient code. They can now all be made public, but there's oh so little time; I'm hoping to get a white paper out about it by the end of this month. Watch this space ;)
Happy to elaborate if anyone has Mill or general questions :)
PS: an example of 'zoning' is http://elfbac.org/ , which is not getting enough attention. It's another way to facilitate memory separation, albeit by abusing the classic MMU and with inherent runtime cost. ELFbac is userspace, but the hardware could be abused to protect kernels on classic CPUs too. Well worth everyone's time to read :)
From that security presentation, I came away with the idea that you wouldn't want a Linux kernel on the Mill. You'd more likely want something smaller, with a Linux virtualization layer for device drivers. That's because the security layer is extremely flexible, making it possible to push a lot of kernel-space code into some less powerful context while keeping performance the same.
So are you working on a Linux port for it? (Maybe breaking it into pieces in the process?) Or do you intend to start with something else? (Maybe building up from a microkernel?)
(I can't/won't watch videos, so will have missed anything that was only in videos)
So... how long is it going to take, specifically?
There's something like 15 hours worth of talks on the Mill. If you don't want to watch any of them then you're missing out on most of the design of the Mill.
Even if the Mill never gets to see the light of day, the talks were quite interesting IMHO.
I've been hearing Mill stuff since 2012. They had a weird non-LLVM/GCC sorta-kinda compiler port. Today, in 2016, the toolchain support for way-out-of-the-box stuff is great. They could do an LLVM port and a Chisel simulation and have Linux on top of that. But instead we get…
The team is 90% industry vets with very long careers
If you go this route, definitely consider grsec as well.
Reasonably tuning your kernel can also offer improvements in speed (e.g. via more specific CPU targeting), size, and, critically for embedded environments, startup time.
This is so true. Back in the day, when I was involved in embedded Linux development, the quickest boot-up time was about 40 seconds. That was booting a v2.6 kernel in a minimal configuration on an ARM7 system over SPI NAND flash. Hopefully boot-up times are sub-second by now. Are we getting to those speeds yet?
You can do that with a minimal system. Actually, it's not that difficult to get boot time down to a few seconds on most embedded systems. But in the real world, boot time depends heavily on the modules and drivers that must be loaded to bring the system up and running.
I could possibly get it lower but it meets spec and any additional shaving off would probably require plenty of work while sacrificing debug-ability etc. so for now I'm fine where it is :)
The older NAND file systems were very slow. I've had a similar case where repartitioning the huge NAND and only using a small partition reduced the boot time to 10 seconds without any other change. Modern UBIFS systems apparently don't have this problem.
As for subsecond boot times: no chance with Linux.
Oh, wouldn't things be better if we had all that candy upstream?
This stuff has been there in grsecurity patchsets for more than 10 (ten) years already.
How do you reconcile that with suggesting people run this patch? If it were good, Linus would merge it. For me, the fact that it has existed for 10 years and _not_ been merged does not speak highly of its quality.
I feel that any non-kernel dev applying a patch to their kernel is the opposite of a good security recommendation. I'm nowhere near as qualified to judge the trade-offs between performance, security, and code quality as Linus and the kernel team. That's why I delegate the decision about what code goes into my kernel to them.
Linus has never been security-minded; in fact, half of the article is Linus complaining to Kees with things like "it will be slow to compile, it's a PITA to maintain, I don't understand it therefore it's crazy and nobody needs this". So if you value security above everything else, Linus isn't the best person to rely on for advice on the topic.
> For me, the fact that it has existed for 10 years and _not_ been merged does not speak highly to it's quality
Parts of the grsec patch have been implemented over the years, but not the whole thing, mostly because Linus doesn't see the need for most of the features, not for quality reasons.
> I feel that any non-kernel dev applying a patch to their kernel is the opposite of a good security recommendation. I'm nowhere near as qualified to judge the trade-offs between performance, security, and code quality as Linus and the kernel team. That's why I delegate the decision about what code goes into my kernel to them
The fact that you don't understand why you need it is the very reason why _you_ shouldn't use it. Leave that decision to someone on your team with experience handling incidents, not to Linus et al.
Arch Linux has linux-grsec as a package; it's enough to pacman -S linux-grsec linux-grsec-headers and boot into it.
I would recommend taking a look at NixOS as well; they have it integrated, and it can be as easy as adding an option to your system configuration. If you further add any customization, you will get a unique kernel build for your system, which is said to be ideal security-wise. You can read the details in their manual:
At least they're trying to reduce the attack surface. But the kernel is just too big.
E.g. use pseudorandom numbers and store the seed somewhere. In case of a bug, extract that seed, pass it on to the dev, and they can run their kernel with that seed to reproduce it.
Crash reports from production systems come in, and you try to classify them. If the crashes look similar, you have something to look for. (Microsoft has a classifier to do this for Windows crashes, and that's one reason the Windows Blue Screen of Death is rare today.) Then you try to reproduce the crash situation. ASLR adds noise to that data. It's harder to match up similar bug reports.
Here, have 3: https://twitter.com/R00tkitSMM/status/796617449823236096
(no, I didn't get confused, KASLR is not helping regardless of the OS)
ssh -x -a firstname.lastname@example.org
Password = tomoyo1
SMAP/SMEP: Intel/x86-specific security features. See http://j00ru.vexillium.org/?p=783 for an early (2011!) take on SMEP. (j00ru is great reading, in any case.) See https://lwn.net/Articles/517475/ for SMAP from LWN.
PAN: Privileged Access Never, basically ARM's equivalent of SMAP: https://community.arm.com/groups/processors/blog/2014/12/02/...
EDIT: See also: https://en.wikipedia.org/wiki/Curse_of_knowledge
What is the argument here? Is there something about this randomization that distinguishes it from classic security through obscurity?
Here are some excellent slides on exploit mitigation in general: https://events.yandex.com/events/ruBSD/2013/talks/103/
Off the top of my head, there are four approaches to stopping memory vulnerabilities:
1) have no bugs, e.g. formal verification etc
2) use a memory-safe language
3) accept that there can be vulnerabilities, and use exploit mitigation to harden it
4) use capability-based addressing as a mitigation (it doesn't solve use-after-free, for example; it relies on software to catch that)
Of these, (3) is the one you can retrofit to existing C/C++ codebases... a route you are usually forced to travel.
(There may still be other kinds of bugs, e.g. the obvious sql injections etc; I am talking above about memory bugs specifically)
(Good old ZoneAlarm type GUI / workflow would be very nice.)
* Report and block if someone is able to run any kind of privilege-escalation exploit.
* Report and block if any non-whitelisted app attempts to make a network connection to the internet or any external IP.
* Report and block if any non-whitelisted app or script tries to run and execute any program.
SELinux seems to claim it can do most of these, but the barrier to entry for setting it up and using it effectively is high (at least for a noob like me).