
Linux kernel lockdown, integrity, and confidentiality - JNRowe
https://mjg59.dreamwidth.org/55105.html
======
david_draco
Linux security has been criticised[1] for not being enabled by default, and
for allowing users to disable its security measures. This leads to users
avoiding minimal-rights policies and leaving the doors open. When security is
optional, policy issues in distributions also aren't fixed quickly.

To limit an application's rights to a minimum, SELinux, firewalls, and
(systemd) sandboxes are all tools that could be used, but aren't in most
installs. However, I think we are still lacking user-friendly interfaces
(OpenSnitch is a port of macOS's Little Snitch).

One idea could be to let the desktop environment restrict one more capability
of an application on every run, inform the user before running it, and ask the
user afterwards whether the software worked correctly. That would gradually
converge on a minimal set of rights for each piece of software.

[1]
[https://www.youtube.com/watch?v=OXS8ljif9b8](https://www.youtube.com/watch?v=OXS8ljif9b8)
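The gradual-restriction idea could be prototyped today with systemd's
sandboxing properties; a hypothetical wrapper might launch the program with
one more restriction each run and record whether it still worked (the
restriction list and `/usr/bin/some-app` below are illustrative, not an
existing tool):

```shell
# Run a program as a transient, sandboxed systemd service (root is needed
# for most isolation properties). A wrapper could add one property from a
# list per run and ask the user afterwards whether the app still worked.
sudo systemd-run --pty --wait \
    -p ProtectHome=read-only \
    -p PrivateTmp=yes \
    -p PrivateNetwork=yes \
    -- /usr/bin/some-app
```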

~~~
Hendrikto
> One idea could be to let the desktop environment restrict one more
> capability of an application on every run, inform the user before running
> it, and ask the user afterwards whether the software worked correctly.

That would only work reliably if one used every single feature of a program,
every time one opened it.

I have tried to enable sandboxing for my services, but it is not easy to know
what permissions can safely be restricted, without negative effects.
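systemd can at least report which hardening options a service is missing:
`systemd-analyze security` scores each unit's exposure, which helps find
candidate restrictions to try (exact output varies by systemd version):

```shell
# Overview: exposure score for every loaded service (lower is better)
systemd-analyze security

# Detail: per-setting breakdown for one service, e.g. nginx
systemd-analyze security nginx.service
```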

~~~
EE84M3i
An interactive sandbox UX I like a lot is an app giving me a dialog "Allow",
"Deny" with an additional "Remember" button when it does something new
(perhaps modulo some time window, as long as it's clear). In certain
situations I'd like to be more granular than that, but it would be much better
than what we have today. A good way to be more granular is an "Advanced"
button that lets you drill down more specifically depending on the resource
the app is attempting to access.

Today if I want to send someone a photo I took on <insert newfangled
application that can send messages here> I need to grant it permissions to
read all of my storage. That's dumb. The OS should delegate access in such a
way that I can give the app access to a single file's contents AND be
performant/not get in the way for accessing all files if that's what I prefer.

What should NOT be possible is an app asking "hey can I read '/*'?". It should
attempt to access specific resources in the namespace, and the OS should be
responsible for saying "should this app bug you again about reading your
<namespace>?" This gives an only-slightly-more complex UX to users that don't
know/care, and gives a lot of flexibility to those that do.

I'd even be fine if it was one-time opt-in like Android "developer mode" which
is basically impossible to enable if you're not looking for it.
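Parts of this already exist on Linux desktops: a Flatpak app can be denied
blanket filesystem access and instead receive individual files through the
XDG desktop portal file chooser, so it only ever sees the one file the user
picked (the app ID below is an example):

```shell
# Deny an app standing access to the home directory; file access then goes
# through the portal's file picker on a per-file basis
flatpak override --user --nofilesystem=home com.example.Messenger

# Inspect the resulting permission set
flatpak info --show-permissions com.example.Messenger
```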

~~~
AshamedCaptain
But how does this interact with Unix command line tools? Any "sandboxing"
system that either A) makes command line usage inconvenient or B) completely
ignores command line usage is going to create a rift.

Most of these "interactive sandbox UX" approaches basically create a
"developer sandbox" where the command line tools can all play together but
cannot access external data. And this is where things go downhill. Developers
(or even users) DO want from time to time to write a script that accesses
their contacts, gets the current GPS position and then does some munching with
Perl for whatever obscure reason. Developers DO want from time to time to read
whatever stuff Netflix program is storing on their private storage (oh noes!),
or what the PDF reading program wants to send to the net.

And then you hit either A or B from above. If A, developer is annoyed and
disables your sandboxing, and you are back to stage 1. If B, you are already
at stage 1 and developer is annoyed seeing that random Perl scripts can
apparently read your contacts list.

I find that any sandboxing approach that fails to actually think of command
line usage is just falling in the trap of the "Android/iOS-centric world
view". "Apps" may be glorified websites which are trivial to sandbox, but the
more generic concept of "programs" is not. This is not only about command line
scripts. Command line scripts interact with pipes. Programs, however, interact
between themselves in ways we cannot even think of right now.

Which is why year after year you still see completely unsandboxed PCs being
used for "productivity" despite tablets and anything else with the Android/iOS
model.

~~~
XorNot
The problem with sandboxing is it only works for server processes with very
narrow behaviors - it's completely unable to express broad ideas.

My file browser should be able to see my whole system - that's what I want it
to do. But I really don't want it to scoop up a list of files on my system,
and send it wholesale to a network address I didn't type in specifically,
after some specific actions.

AFAIK no security mechanism anyone currently proposes properly captures this
sort of intent: there isn't a firewall which defines what can be done with the
actual bytes of data an application has picked up in those terms - when
they're in memory.

Of course this is a huge challenge: proving that my file browser doesn't have
a way to, without gating through a user system, transform my file list into
any code paths which can send it via network traffic.

But it's what we desperately need.

------
freedomben
This lockdown made it a huge PITA to run Wireguard in Fedora[1]. It broke
existing installs when the kernel upgraded and then became very difficult to
insmod or modprobe the wireguard kernel modules unless you blacklisted all new
kernels (not safe). I hope this matures in a way that doesn't destroy some of
the reasons I love Linux (that I can hack on my system and do cool stuff).

[1]
[https://unix.stackexchange.com/q/543576/34855](https://unix.stackexchange.com/q/543576/34855)

~~~
htfy96
WireGuard is merged into mainline and should appear in Fedora 32 with kernel
5.6. Fedora 32 will be released a few weeks later.

The underlying problem is that many distros just use a throwaway key, which
makes it impossible for users to sign their own modules. Maybe a better
security model is needed.

~~~
freedomben
Yeah, for this reason I've been super excited for kernel 5.6 (I don't usually
get excited for new kernel releases because it means work for me, but this one
is different :party: )

------
keyme
Why is this considered a good thing?

The only "security" this improves, is of devices where the manufacturer has
decided to lock you out!

~~~
userbinator
Your second line is the answer to the first.

Look at who works on the Linux kernel, and their affiliation, and you will see
why things are the way they are.

~~~
mwcampbell
We are drowning in cynicism these days. Let's make an effort not to add more.

------
sandGorgon
I think Fedora Silverblue takes a very different approach here -
[https://docs.fedoraproject.org/en-US/fedora-silverblue/](https://docs.fedoraproject.org/en-US/fedora-silverblue/)

It basically doesn't try to fix the building blocks with all their
legacy...instead it simply makes the entire operating system immutable.

I wonder what the pros and cons of each approach are. Or are they
complementary?

~~~
jchw
Complementary. Silverblue’s immutability may be aimed purely at stability and
therefore may not have strong guarantees, but if it _did_ then the next attack
vector would be from root to kernel (in memory), and that’s where lockdown
comes in; it’s designed to prevent attacking a running kernel as root.

------
AshamedCaptain
I find all the attempts at "curtailing down the powers of root" to be the
modern equivalent of [https://xkcd.com/1200/](https://xkcd.com/1200/) .

At least with the "user vs admin" distinction, I could argue that it is
useful because my backups could be stored in a separate account with
restricted access, so ransomware running at the "user level" account would
not be able to touch them.

While if my "admin" account gets compromised then all bets are off as the
backups could be wiped out (even if they are offline, they could be wiped out
the next time I connect the storage device).

However, I have a much harder time finding a user justification for a
separate "root" vs "manufacturer root", unless you happen to be a
manufacturer with questionable motives. If root is compromised, the attack
surface becomes absolutely huge, and as a user it doesn't seem very useful to
know that at least the kernel and bootloader will not be compromised when
everything else is dubious.

Sure, now I can trust that the builtin "restore to factory" functionality on
the device works (and even that may not be true). But unless I use it
frequently (and who does?), the malware with root access would still be able
to destroy all of my files anyway, compromise my backups, etc.

~~~
JackRabbitSlim
This is pretty much it. Let's frame the proposition differently to get people
to consider it from a new perspective.

I can verify the OS install media against trusted publishers using signing
keys and PKI. I can't do a god damn thing about the cheap, back-doored PCI
controller from China.

What protection does secure boot really offer the end user at that point? The
PCI controller is in place to just pass the right signatures to secure boot,
or to just wait until after the secure boot checks, so it's not helping with
bad hardware. I already verified the OS media at install, so it's not super
useful there either. Did my boot code change? How would I know? Did the bad
PCI controller fake it? Do I have _any_ additional trust in my system? I
can't go probing the system to try to find out.

A black box with zero control told you you were safe and there is no way to
look at or modify the system now so you can trust it. Your hardware was never
on a TAO workbench. Who doesn't feel safer?

~~~
mjg59
> I can't do a god damn thing about the cheap, back-doored PCI controller from
> China.

Of course you can - that's what IOMMUs are for.
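For the curious, whether the IOMMU is actually constraining a device's DMA
can be checked from sysfs; every PCI device should sit in an IOMMU group
(enabling it may require `intel_iommu=on` or `amd_iommu=on` on the kernel
command line, depending on the platform):

```shell
# List devices by IOMMU group; an empty result means no IOMMU is active
find /sys/kernel/iommu_groups/ -type l | sort

# Confirm the IOMMU was initialised at boot
dmesg | grep -i -e DMAR -e IOMMU
```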

------
staticassertion
Surprised no one has yet commented on eBPF, and the impact this has. eBPF
already gives users the ability to run code in the kernel, and this is _a
security win_ in many cases - you can audit your system better.

On a server, I think I'd rather assume the attacker has root (or even kernel!)
but have good auditing, and do specific service sandboxing, than assume that I
can separate root from kernel.

My understanding is that confidentiality breaks ebpf. I'm unsure about
integrity - if integrity works with it, great. Then I'd wonder where you'd
want confidentiality - maybe a box that's handling CC info/ payment
processing?

I have mixed feelings overall.

~~~
mjg59
Integrity doesn't restrict ebpf. Confidentiality is for cases where you're
doing stuff like using EVM to prevent offline attacks, which involves the
kernel holding a key and using it to sign all files. This can be circumvented
if you're able to just scrape the secret out of the kernel.
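As a rough sketch of what that signing looks like in practice with the
IMA/EVM tooling (`evmctl` from ima-evm-utils; the key path and binary are
placeholders):

```shell
# Sign a file's contents (IMA) and its metadata (EVM) with a private key.
# /etc/keys/privkey.pem is a placeholder for your signing key.
evmctl ima_sign --key /etc/keys/privkey.pem /usr/bin/some-binary
evmctl sign --key /etc/keys/privkey.pem /usr/bin/some-binary

# Show the resulting security.ima / security.evm extended attributes
getfattr -m security -d /usr/bin/some-binary
```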

~~~
staticassertion
Cool, that sounds reasonable then. Good stuff.

------
bogomipz
>Various interfaces make it straightforward for root to modify kernel code
(such as loading modules or using /dev/mem), while others make it less
straightforward (being able to load new ACPI tables that can cause the ACPI
interpreter to overwrite the kernel, for instance).

What exactly is the "ACPI interpreter" referenced here? I am familiar with
ACPI but this is the first time I have heard this term.

~~~
detaro
ACPI tables contain byte code (AML), which needs to be interpreted by the
kernel.

------
rrss
Since this is controlled via a kernel parameter, can root just unset the
kernel parameter and reboot? Or are there systems where kernel parameters are
also 'locked down' by some other mechanism?

(these are genuine questions - I don't know much about linux security and am
seeking to understand).

~~~
mjg59
Most distributions carry a patch to automatically enable this if you have a
verified boot process. Otherwise, it's up to the admin to ensure that their
verified boot process applies the appropriate policy (e.g., by ensuring that
your bootloader appends the argument regardless of configuration).
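On a GRUB system, "ensuring the bootloader appends the argument" amounts to
pinning the lockdown mode on the kernel command line, for example:

```shell
# /etc/default/grub: force lockdown regardless of Secure Boot state
# GRUB_CMDLINE_LINUX="... lockdown=integrity"

# Regenerate the GRUB configuration (path differs per distribution)
sudo update-grub                                  # Debian/Ubuntu
sudo grub2-mkconfig -o /boot/grub2/grub.cfg       # Fedora/RHEL
```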

~~~
rrss
Thanks!

------
devit
This is laughably useless.

It's extremely unlikely they have patched all existing ways for root to change
the kernel, and anyway since the kernel is written in C it almost surely has
plenty of memory safety exploits.

Also, requiring a hypervisor is much simpler than doing this work and
actually has a reasonable chance of achieving the objective of not allowing
arbitrary ring 0 code.

At any rate, all this work is mostly pointless: if you let people run
arbitrary user space code, they can do almost anything with the hardware
anyway (like erasing all disks). And if you lock that down, you have to force
them to use a particular user space, in which case there is no need to also
lock down the kernel, since without arbitrary user space code you can't
interact with the kernel anyway.

~~~
mjg59
How is requiring a hypervisor much simpler than this? I've seen no
implementations that make strong integrity guarantees.

------
lwb
Possibly a dumb question -- how can one perform kernel/firmware updates if
root is "locked out"?

~~~
Ao7bei3s
The integrity mode applies to the running kernel in RAM. You generally don't
actually upgrade that, instead you replace the kernel image on disk (vmlinuz),
which is still possible. On next boot, the new image will be loaded. But if
you have trusted boot enabled, the new image will only be booted if it is
appropriately signed.

Live patching still works if the updates are signed. The kernel can still do
whatever - with integrity enabled it just refuses to do certain things, such
as loading kernel modules or updates that aren't signed.
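On kernels with the lockdown LSM built in, the current mode is visible
through securityfs, and can be tightened (but never loosened) at runtime:

```shell
# The bracketed entry is the active mode
cat /sys/kernel/security/lockdown
# e.g.: none [integrity] confidentiality

# One-way escalation: tighten from integrity to confidentiality
echo confidentiality | sudo tee /sys/kernel/security/lockdown
```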

~~~
lwb
Great explanation, thanks!

------
jasonhansel
Another option might be to allow a "rootless" mode in which direct access to
the root account itself is impossible (but where there are still ways for non-
root users to gain a subset of root capabilities).
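Pieces of that already exist: file capabilities let a non-root binary hold
one specific root power instead of all of them, e.g. binding to low ports
(the binary path here is illustrative):

```shell
# Grant only CAP_NET_BIND_SERVICE to a server binary, so an unprivileged
# user can bind port 80 without the rest of root's powers
sudo setcap cap_net_bind_service=+ep /usr/local/bin/myserver

# Verify the granted capability
getcap /usr/local/bin/myserver
```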

------
unixhero
This is a serious improvement.

------
stefan_
Do people realize what they are doing here? I'm happy not touching my Windows
kernel, stuff generally just works. Stuff _never_ just works on Linux. And now
the last time I tried to load a kernel module I had built for my Ubuntu
system, it refused because, duh, it wasn't signed by some UEFI key I had never
even seen. So now of course, _the Ubuntu is fucking gone_. Because it makes it
impossible to do things that on Linux are still a monthly requirement.

Not to mention the impact this stuff has on kernel and driver development.
Even Windows you can just boot into a development mode and it might scream at
you on the desktop, but it will allow you _to modify your system as you wish_.
I wasted a few hours when that Ubuntu thing happened to figure out where that
switch was and didn't find it.

Right now, this should be strictly the domain of Google and Amazon that can
actually have a trusted chain from bootloader to userland. Not enabled on any
vanilla Ubuntu because hey, we detected UEFI!

~~~
cyphar
If you're using a modern machine, then you probably are using Secure Boot. For
better or worse, Secure Boot requires Microsoft[+] to sign your binaries and
they have certain requirements which you cannot break or they will revoke the
signature. One of these requirements is that you cannot have a signed binary
load unsigned code into ring-0.

So (ignoring whether these features are useful or good) in order to be able to
run Linux on modern hardware these types of features are necessary. And I'm
sure you'd be just as annoyed if you couldn't run Ubuntu on a machine that was
less than 6-8 years old.

> I wasted a few hours when that Ubuntu thing happened to figure out where
> that switch was and didn't find it.

It's a shame you didn't manage to find it, because it's pretty trivial to
create your own signing keys and enroll them in the MOK. You can then use
those to sign your kernels. You just need the sbsign package.
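For reference, the flow being described is roughly the following (key file
names are illustrative; the sign-file path is where Ubuntu ships it):

```shell
# 1. Create a signing key pair (DER certificate for MOK, PEM private key)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=Local module signing/" \
    -keyout MOK.priv -outform DER -out MOK.der

# 2. Sign a module with the kernel's sign-file helper
sudo /usr/src/linux-headers-$(uname -r)/scripts/sign-file \
    sha256 MOK.priv MOK.der mymodule.ko

# 3. Queue the certificate for enrollment, then confirm in the MOK manager
#    that appears on the next reboot
sudo mokutil --import MOK.der
```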

If you want to just turn it all off, it's even more trivial -- go into your
BIOS and disable Secure Boot.

I would hope someone who wishes to do some kernel development would be able to
overcome this fairly minor hurdle.

[+] Technically it's whoever owns the keys that the manufacturer has approved
to run software on the machine. On basically all modern machines that list is
just "Microsoft" but you can enroll your own keys or remove Microsoft's keys
if you want.

~~~
quirmian
I see arguments (like one of the replies to your comment here) that you can
turn off Secure Boot - so simple, much wow.

Not all firmware allows you to turn off Secure Boot or enroll your own keys.
You'll see plenty of this on bios-mods.com if you want to know what that
looks like. It also really throws into sharp relief questions around things
like device ownership.

Let me tell you about my experience with an Amazon DeepLens device (x86-64)
that I’ve been trying to get stock Ubuntu installed on. The only keys on this
device are Amazon ones. This means I cannot install any OS other than the one
they supply (a modified Ubuntu 16.04 install). If I own the device, shouldn’t
I be free to install my own OS? If I own the device, and have physical control
of it, I should be able to bypass secureboot, period - but not always the case
today.

~~~
cyphar
> Not all firmware allow you to turn off secureboot or enroll your own keys.

Being able to disable Secure Boot and install your own keys is a requirement
of the Windows 8 and 10 advertising requirements, so manufacturers almost
always allow it so they can get some money from Microsoft for advertising.

That doesn't mean it's always possible, but I would humbly suggest that we
shouldn't purchase such devices so that companies who make those devices learn
to stop doing that. The fact that Microsoft managed to pull this shit with
Windows RT is disgraceful.

> It also really throws into sharp relief questions around things like device
> ownership.

I don't disagree at all, and I do think that it's something we need to be very
mindful of. But Secure Boot does solve real security problems.

> If I own the device, shouldn’t I be free to install my own OS? If I own the
> device, and have physical control of it, I should be able to bypass
> secureboot, period - but not always the case today.

I completely agree. Amazon shouldn't be allowed to sell such devices. But that
doesn't invalidate Secure Boot as a concept, nor is it the fault of Ubuntu or
anyone other than Amazon.

~~~
quirmian
I can see how Secure boot solves real security problems. And I am definitely
not blaming Ubuntu here.

However, it's unfortunate that Secure Boot (or maybe this is a licensing
thing) does not make such prescriptions by default, and that we're reliant on
the device manufacturer's good will to see it implemented correctly.

~~~
shawnz
How could a technology itself make prescriptions about the ways that the
manufacturer lets you configure it?

~~~
Ao7bei3s
Through licensing and/or certification requirements. Large companies take
compliance seriously.

