The Linux Security Circus: On GUI Isolation (2011) (theinvisiblethings.blogspot.com)
121 points by dandelion_lover on March 5, 2016 | 84 comments



Am I wrong in thinking that this is sort of like worrying about leaving the kettle on while you're falling from a skyscraper?

Sure, if you have a rogue application on your computer, it can access the GUI of a program whose security is critical (e.g. a password manager), but even if all the GUI were isolated, you've still got a rogue program on your computer.

All the security posts I've ever read have pretty much told me that once you're running a malicious program, then it's game over, your system is owned by your adversary and the only good way to move on is to wipe the drives, and start fresh. Why then worry about GUI isolation? If the GUI is isolated, a rogue program can presumably uninstall your secure GUI server and replace it with another!

This blog mentions Qubes, which is cool and, from what I understand, does mitigate the rogue-program problem, but anything short of isolating whole applications from each other at the hypervisor level is fussing about the wrong thing IMHO.

Then again, I'm not a security professional, so what I've said is likely all wrong...


That's not really true. Tremendous effort has gone into making sandboxed programs. Otherwise any website that has JavaScript could take over your computer. The browser is basically a functioning sandboxed operating system.

There's no reason that can't be true of actual OSes. All programs could be sandboxed by default. But they were just designed under the assumption that all app makers would be good and no one would ever download and run malicious programs. Yeah, right.


One such mechanism is seccomp, which in its strict mode removes all but four system calls:

read, write, sigreturn and exit

This (usually in its more flexible seccomp-bpf filter form) is used by many browsers and apps to reduce the impact of security flaws and untrusted code (such as JS from a website).
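(For readers who haven't seen it, here is a minimal sketch - not from the original comment - of what strict-mode seccomp looks like from a program's point of view; the file name is arbitrary.)

    /* Minimal sketch of seccomp strict mode on Linux.
     * Build with: cc seccomp_demo.c                                     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        printf("about to enter seccomp strict mode\n");
        fflush(stdout);              /* flush while arbitrary syscalls still work */

        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl(PR_SET_SECCOMP)");
            return 1;
        }

        /* From here on, only read(), write(), sigreturn() and exit() are
         * permitted; any other syscall kills the process with SIGKILL.  */
        const char msg[] = "write() still works\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);

        /* glibc's _exit() uses exit_group(), which strict mode rejects,
         * so invoke the plain exit syscall directly.                    */
        syscall(SYS_exit, 0);
    }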


It depends on the nature of the attack and the attacker. Most attacks are rehashes of what black hats have made available, and those are easy to counter with patches, a good configuration, or even sandboxing. Targeted attacks hit specific weaknesses in applications. Aside from detection, you can try to use tech to remove classes of weakness or (simpler) attempt to isolate each program, giving it just enough privileges to do the job. Programs accepting input from untrusted, isolated programs check that input for sanity/security. They raise an alarm if something is wrong.

QubesOS is in that category. It's technically what's called a Multiple Single Levels (MSL) security system. The isolation tries to keep problems contained in each VM.


X-isolation obviously wouldn't be the only isolation mechanism you'd use to run untrusted code. For other things you would use namespaces, seccomp and chroot.

But X is the one thing that can't be handled well by those, because granting access to the unix socket that X needs means leaving the barn door wide open.
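(As an aside, a rough sketch of the namespace side of that, assuming unprivileged user namespaces are enabled on the kernel; illustrative only, not from the comment.)

    /* Give one process tree fresh user, mount and network namespaces.
     * Build with: cc ns_demo.c                                          */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (unshare(CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWNET) != 0) {
            perror("unshare");
            return 1;
        }

        /* The process now has an empty network stack and a private view
         * of mounts; combine with chroot/pivot_root and a seccomp filter
         * for a reasonable sandbox.  But if you bind the X11 socket
         * (/tmp/.X11-unix/X0) into it, the "sandboxed" program can still
         * log the keystrokes of every other X client.                   */
        execlp("ip", "ip", "link", "show", (char *)NULL);  /* lists only "lo" */
        perror("execlp");
        return 1;
    }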


>> once you're running a malicious program, then it's game over

I don't think it's that simple, all or nothing. Security is not a boolean. Consider defense in depth [0].

edit: Security is really just the price an adversary has to pay to hack you. You can raise that price by putting additional obstacles in their way.

[0] https://en.wikipedia.org/wiki/Defense_in_depth_%28computing%...


>you've still got a rogue program on your comp.

That's the default case, because any program that has bugs and takes foreign data has to be considered untrusted.

>All the security posts I've ever read have pretty much told me that once you're running a malicious program, then it's game over

Well, isn't that because of gaping holes like in the X11 design besides any bugs?

> anything short of isolating whole applications from each other at the hypervisor-level, is fussing about the wrong thing IMHO.

Unless it has bugs, too. (And if it doesn't, because it is simple enough, it will evolve to send mail and eventually get bugs, or else the applications are just monoliths that can't be usefully hypervised.)


You are correct. Linux is a single user OS. Any rogue program can almost certainly escalate privileges using any number of strategies across a vast and complex surface area.

It's been almost two decades since unix systems were routinely used in true multi-tenancy situations with untrusted tenants. Since then, nobody has paid any attention to this aspect of security.


Well, I think it's very useful to counter that core "once you run something malicious you're screwed" issue and have a way to run any untrusted software in solid isolation, i.e. without fear that it could compromise the whole system.

Is there anything wrong about this?


JavaScript in the browser does just that.


It does (although not without issues; we have CORS, NoScript and Firejail for a reason), but that's completely irrelevant to the X11 problems mentioned in the article.


You have one way to run software in solid isolation, but it is not enough for you?

No, X11 won't provide it for you, but the browser will.


Note that this piece was written in 2011 by Joanna Rutkowska, the mother of Qubes OS.


Do you disable JavaScript in your browser? Or maybe you only go to websites that you are sure about?


When you ssh -X into a machine and run a compromised program there, GUI isolation is what stops that program from taking over your local computer.

It is one part of the puzzle of sandboxing software on a machine. But is certainly not the entire solution.


This is true of Linux, non-App-Store Windows apps, and non-App-Store OS X apps.

But with the App Store walled-garden craze came a much better idea: let's end this "all programs having full run of the filesystem" nonsense and sandbox them, such that it is very hard for a user to compromise his entire machine by installing one bad program. iOS has always been this way, App Store OS X apps are increasingly this way (they already have filesystem sandboxing), I believe the Windows App Store is pushing in this direction, Android has this to an extent, etc.


Speaking very generally (there's always room for improvement), this is a feature. X programs are able to interact in interesting ways. Isolation can be a good thing, and I would love to have more options for sandboxing programs (with or without X), but stuff like keylogging and sending events to windows is very useful. Removing the ability to do things like query the state of the mouse would break useful tools I use. The ability to log my keyboard or mouse is a potential risk, but it's also a necessary feature. As DHH explained[1], nerfing useful tools just causes people to find other ways to do the same thing, sometimes with worse consequences.

The "window composition only" approach of Wayland where it is simply assumed that this can be handled in e.g. the GUI toolkit is problematic if you have programs that don't use that toolkit. The windowing server is really the wrong layer implement this isolation - it makes useful things much harder while not actually providing a lot of security; if a malicious program is already running as the user, there are already endless ways it can harm the user without touching the UI. Real sandboxing would be, for example, a way to open a Wayland-style framebuffer that can be handed to a FreeBSD-style jail (or as "hardware" in a full VM).

[1] https://vimeo.com/17420638#t=27m27s
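(To make the "query the state of the mouse" point concrete, a tiny standalone sketch - any X client can do this, with no special permission.)

    /* Ask the X server where the pointer is and which buttons/modifiers
     * are held.  Build with: cc query_pointer.c -lX11                   */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        Window root = DefaultRootWindow(dpy), r, c;
        int rx, ry, wx, wy;
        unsigned int mask;

        XQueryPointer(dpy, root, &r, &c, &rx, &ry, &wx, &wy, &mask);
        printf("pointer at %d,%d  button/modifier mask: 0x%x\n", rx, ry, mask);

        XCloseDisplay(dpy);
        return 0;
    }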


Looking at this from the other end, the problem is that, as long as clients of the window server aren't isolated from each other, any other kind of sandboxing (such as restricting file system access) is pointless: a compromised application can just synthesize input events for some other application to give it what it wants.
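(For concreteness, a hedged sketch of how trivially such events can be synthesized via the XTEST extension - a standalone example, not part of the original comment.)

    /* Inject a keystroke into whatever window currently has focus.
     * Build with: cc inject.c -lX11 -lXtst                              */
    #include <X11/Xlib.h>
    #include <X11/keysym.h>
    #include <X11/extensions/XTest.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        /* Type the letter 'a' -- it could just as easily be a whole
         * shell command typed into your terminal emulator.             */
        KeyCode kc = XKeysymToKeycode(dpy, XK_a);
        XTestFakeKeyEvent(dpy, kc, True,  0);   /* press   */
        XTestFakeKeyEvent(dpy, kc, False, 0);   /* release */
        XFlush(dpy);

        XCloseDisplay(dpy);
        return 0;
    }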

The next step towards real sandboxing is running applications with cgroups and namespaces (quite similar to FreeBSD jails), as in xdg-app.

https://wiki.gnome.org/Projects/SandboxedApps/Sandbox


> any other kind of sandboxing (such as restricting file system access) is pointless

That's my point - I don't want most of my "desktop" applications sandboxed from each other. That would be unusable. That kind of sandboxing should be done by nesting the entire display server, creating a separation point.

In the future, with all clients sandboxed from each other, how can I write a tool that does the following (which was easy under X):

* monitors the keyboard input of another process

* sends new (synthetic) key events to that process when it sees a certain key press (keyboard macro)

* does not depend on any support from the process being monitored

* works with any existing program, including proprietary software that cannot be recompiled because the source has been lost

Interaction is necessary at the GUI layer, because that's the intended goal of some software. As I initially said, there is certainly room for improvement. However, simply pretending these use-cases don't exist is foolish. As Joel Spolsky so famously explained[1], all that "cruft" that is cut out during a big rewrite is real-world fixes and necessary features that were added over time. Unfortunately, the GNOME team has been engaging in that kind of narcissism for many years, which is why more and more people are ignoring them.

> real sandboxing

> cgroups and namespaces

Good luck with that; cgroups and namespaces are not designed for security. They are only superficially like jails.

[1] http://www.joelonsoftware.com/articles/fog0000000069.html


It isn't addressed in the post, and the author gets kind of defensive about it in the comments, but this is one of the things that Wayland is supposed to improve on. Each application can only communicate with the compositor about its own windows, and only has access to a buffer with its own content.

I'm honestly not sure where we are on the adoption path for Wayland. Pretty much all of my Linux usage is command line.


Wayland was planned as default for Fedora 24 but has been postponed[1]. I don't think it is on by default in any flagship distributions yet.

1: https://blogs.gnome.org/mclasen/2016/03/04/why-wayland-anywa...


It will also almost certainly be in Fedora 25 (in under a year). From there it will be in everything else. Good to see this problem about to go away.


It already works fine under GNOME; every major GNOME distro should be switching in either late 2016 or early 2017. I cannot imagine Ubuntu GNOME not switching in 16.10.

KDE is a little further behind, but I'd expect well-working Wayland somewhere around Plasma 5.8 - we are going to see 5.6 come out in a few months, and 5.8 is due at the start of next year. KDE themselves have had to propose tons of extensions to Wayland itself to support functionality they use - server-side decorations, screenshotting, key grabbing, etc.


So you can't have screen recording software in Wayland?


You can, but the recording needs to be done by the compositor (e.g. kwin)


Which means you will have one choice, which has a 50/50 chance of being crap. We really need a single way for all compositors to delegate this functionality to a trusted application.


Application-specific permissions like on smart phones maybe?


This was one of the features that commercial & academic products had which I mentioned in my original critique of QubesOS. I was glad to see they added a trusted GUI. Anyone interested in prior work on trusted GUIs, and how it might be applied today, can check the links below:

A prototype B3 Trusted X Window System (1991)

https://www.acsac.org/secshelf/papers/Epstein.pdf

High-performance hardware-based high-assurance trusted windowing system (1996)

http://csrc.nist.gov/nissc/1996/papers/NISSC96/paper041/tx-h...

Design of the EROS Trusted Window System (2004)

https://www.usenix.org/legacy/event/sec04/tech/full_papers/s...

A Nitpicker's Guide to Minimal-complexity, Secure GUI (2005)

https://www.acsac.org/2005/papers/54.pdf


Nitpicker is part of Genode, right? What's the status of Genode - is it usable as a daily OS?


Nitpicker is from TU Dresden's group that was doing all kinds of work on L4, secure virtualization, etc. Originally part of their TUD:OS demo here:

http://demo.tudos.org/eng_features.html

Feske, a Nitpicker author, later developed the Genode Framework for constructing systems as part of European research into Dresden-style stuff. He wisely integrated Nitpicker, NOVA, and other best-of-breed tech coming out of CompSci where he could, plus stuff from OSS that's important for a usable system. The result is what his people call GenodeOS, although many of the pieces, even the architecture, are reusable outside of it.

GenodeOS itself is at what I'd call an alpha stage, where it's usable enough that people are running desktops on it, but more for testing and error reports. ;) They've come quite a way but need more testing and contributions. It's far from production unless you're backing up your data.

Even so, alpha software can cost you productivity through reinstalls, so it's best to use it alongside another machine with the same data that can keep you going during any restores.

GenodeOS Release 16.02 is the latest: http://genode.org/documentation/release-notes/16.02

Note: They support RISC-V now. That's pretty cool. The combo of RISC-V with separation kernels like Muen and seL4 (both supported) will be important for secure, embedded solutions later.


I really think the linux desktop world is making a huge mistake by not worrying about stuff like this.

We really, really need to figure out secure booting and sandboxed applications in a way that leaves the user in control. Desktop Linux has been fairly safe because it's not an important target and because most users get most of their software through distributions.

However, I don't think in 2016 we should just take it for granted that one malicious program should be able to effortlessly own the user agent and probably the entire system (on a computer with one user who uses "sudo").

The problem is that these technologies have been associated, especially in android/ios, with restricting what the user can do. However, this does not need to be the case.

Until recently, most people's data on their computer wasn't too valuable, but malware has gotten more sophisticated at targeting things like bank information. If we wait for linux desktop security to become a serious issue, it will be too late.

The worst thing is that we already have things like SELinux and AppArmor, but they aren't being used. The insecurity of X is no joke, either. We may be getting close to a point where we have no choice but to throw out X just to get security, even if the other reasons for switching aren't that compelling.


Another huge issue is that user-level programs can access everything in the user's home directory, which is where all the valuable data resides. There is no reason why a Tetris game should get access to all your financial documents, but that's how it works today. A single malicious user-level program can destroy everything of value on your computer, and no elevated permissions are required. Contrast this with iOS, where all app data is siloed off, exactly as it should be.


"In contrast to iOS where all app data is siloed off, exactly as it should be."

Which is also why trying to do anything productive on a smartphone is such an uphill battle. Arbitrary programs not being able to exchange arbitrary files is totally crippling for any use case that goes off the rails of what one single app wants to let you do.


Sharing data and interoperability between applications is a hard problem, but Apple is on the right track. In any case, starting out with something with sound fundamentals and then improving usability is a much better approach than starting with something that is fundamentally broken and adding kludges to make it more secure.


That may be a roadblock, but the primary reason why smartphones are relatively unproductive is that a touch-only interface has the disadvantage that you can only interact with things that are visible on the screen. When your screen is less than 7" and 30% of that space is wasted by a virtual keyboard, the only way to design a UI for a smartphone is to have as few interactions as possible, otherwise we end up having to constantly scroll up and down. The end result is that a lot of apps hide core functionality behind the hamburger icon.


Copy/paste already works wonders for sharing pictures and the like between multiple programs, on iOS at least. One could easily extend the pattern in a desktop environment and even add more entry points, as long as they are user-initiated - kind of like how you can access the clipboard in a browser only from the stack of a user-initiated event.


How do I type

    for src in $(find . -type f -name '*.png')
    do
        sdir="$(dirname "${src}")"
        name="$(basename "${src}")"
        dstdir="/mnt/www/images/${sdir}"
        mkdir -p "${dstdir}"

        optipng -out "${dstdir}/${name}" "${src}"
        convert -scale 120x120\> "${src}" "${dstdir}/small_${name}"
    done
into a GUI's "copy paste" feature?


You don't. There are plenty of better ways to list and filter files; check out how BeOS did it.

You could do literally the same thing, and you didn't even need to drop to the console.


You long-press the 'f' in the terminal window to start highlighting text, drag to the 'e' in 'done', and then choose copy.


This is my biggest pet peeve with iOS. That and the way background services act. Why can't I have an app that syncs my photos to my server run at night without the location hack? Ugh.


There ought to be a system-level API that a program may call to read or write a user-supplied file.

Opening any random file should be reserved to just a few core applications.


It's called Mandatory Access Control. It has existed for over a decade in the mainline kernel. Ubuntu, Fedora, SUSE - everyone but Arch, pretty much - supports one of its implementations.

The problem is more that I do not believe any distro is shipping a default-deny policy. Unknown programs are simply given the kitchen sink rather than, say, user-friendly prompts saying "this program has no access control and wants to open X or use Y - allow?"

But we definitely have the technology, and it does not take any heavy overhead isolation to do it.


> everyone but Arch

Arch has a grsec kernel in the repos if you want one...


And nobody maintains prebaked PaX profiles for Arch, which makes the MAC part much less effective as a security feature.

There are two ways to do MAC. Either applications (or the distro) provide the profiles, which restrict execution to just what the program will use, so that if it ever asks for anything else it should be suspected of exploitation. Or you put your programs in learning mode, which only hardens you against future exploits, not current ones, and you also lose the protection against untrusted software, because by default everything is untrusted and you must trust everything on a case-by-case basis.


It's not that simple, and it's easy to misdesign such an API.

Say, the VLC team argued that a video player may need to open a supplementary subtitle file. If an application can only be granted access to a single video file, that's impossible. I think there were other examples, but I don't remember them offhand.


Filesystems are trees. VOX could have read access to a particular tree (~/Videos) and no access to anything else, including but not limited to ~/Documents/Finance.


But what if I want to watch ~/Downloads/big_buck_bunny.mkv?


Explicitly mark it as available by moving / hardlinking it to ~/Videos.


It's certainly not simple. This is a big UX problem, but one worth solving.


This is actually how it is implemented in OS X. If a sandboxed app shows a 'file open' panel, then it automatically obtains the rights to read and write any file selected (the open panel, or 'powerbox' as it is called, is implemented in a separate process so it cannot be manipulated by the app). An app can also request a secure 'bookmark' to this file so it can persist file references.


You might be interested in the SubUser[1] project, which does exactly this application siloing for Linux.

[1] http://subuser.org


I like what they're doing, but applications have to be written from the ground up knowing that they live in a sandbox, so they can ask for the appropriate permissions in the appropriate places explaining why the extra permissions are needed. Otherwise the user experience is going to be pretty bad. Security doesn't work as an add-on.


Nah, there's a lot of room for easy adoption.

For example, for a text editor like vim, here's a completely reasonable lockdown:

- see none of my filesystem... except the `cwd` I launched you in

- you get a homedir that's persistent for this application

- (incidentally, whatever libraries you need are there; network namespaces dropped, clean env vars, clean pid space, etc etc etc.)

The things on this list involve essentially Nothing Special and no changes to the program, and at the same time manage to radically reduce the surface area a malicious program can mess with on my computer. Even if, say, a malicious version of vim somehow made it into $distro's package tree and was signed, this still keeps me... well, pretty safe. The scope of damage is confined to exactly what I just laid out in those two bullet points.

And subuser's configuration does an excellent job of exposing exactly those kinds of behaviors to you with a very minimal amount of fuss.

(A json file with `{"access-working-directory": true, "stateful-home": true}` is all you have to say to subuser to set up an application to get... well, hey, if these aren't self-documenting names, I don't know what is.)


xdg-app is currently being developed by GNOME, and it does sandboxing like not giving Tetris access to your file system:

https://wiki.gnome.org/Projects/SandboxedApps


This has existed for over a decade in the form of mandatory access control (MAC). Any distro worth talking about has been shipping a MAC solution (AppArmor or SELinux mainly, even though I'd argue grsec/PaX is way better) for years. The problem is more that I do not believe any of them use default-deny policies, because there is no mechanism to ask the user for permission for unknown programs or for programs trying to act outside their allocated access controls.

But for anything you install from the main repos, it will almost always have an AppArmor/SELinux profile installed along with it to prevent this kind of arbitrary filesystem access.


I think Subgraph has some application-level sandboxing, and of course there's Qubes (from the author of this article), which can be used for "risky" apps, too.


As I posted in a previous discussion of this two years ago [1]:

The very first comment below the article (correctly) contradicts the author's claims about the SELinux sandbox. The author acknowledges the comment and criticizes the SELinux implementation, but does not dispute the fact that the SELinux sandbox ("sandbox -X xterm" in RHEL/CentOS/Fedora/SL) does in fact defeat the keystroke-logger attack described in the article.

[1] https://news.ycombinator.com/item?id=7607082


So what is the current status of GUI isolation in Linux (say Ubuntu), Windows and MacOS?


That is a very old article; I wonder if there are any new approaches to this issue.


I've been researching this recently and I think the Qubes approach is still the best. Some people use xpra, which is fundamentally very similar to Qubes (a compositor running inside a dummy X server), but xpra has been designed to run over the network and hence is not as efficient as Qubes at transferring buffers. Consequently, I find xpra too slow to be usable. xpra has also gained a lot of additional features lately, which I worry has increased the attack surface.

Coincidentally, I was planning to spend some time today porting Qubes' GUI isolation to run outside of Qubes (for use between containers or other OS-level sandboxes). If I'm successful, expect to see a Show HN.


> porting Qubes' GUI isolation to run outside of Qubes (for use between containers or other OS-level sandboxes)

Want level >9000.


Isn't Wayland supposed to address this kind of issue (X applications being able to read/control what others do)?


Yes, it is.

For more details about the issues and possible solutions read:

http://mupuf.org/blog/2014/02/19/wayland-compositors-why-and...


Previous discussion (~5 years ago): https://news.ycombinator.com/item?id=2477667


It is of course entirely possible to run a less-trusted application in a nested X server like Xephyr in any distribution and avoid this.


Yes, but that's not "rootless." The Qubes approach provides much better usability because the windows from the untrusted application appear individually and are managed by the host X server.


I don't dispute this.


Qubes OS is super cool conceptually - I'm torrenting it now, so when I've got time to spare I'll see if I can practically get it running. I would LOVE to replace the current messy and insecure Linux/OSX/Windows systems with a new secure and stable OS that includes sandboxing by default for everything.


At one point I disabled the XTEST extension when I realized it could be used maliciously (sending clicks or keystrokes to applications, keylogging, etc.). XTEST is already disabled by default on some distros.

After reading this I see that "xinput test Keyboard0" can also be used as a keylogger on my machine. Anybody know of a simple way to disable that one?

[PS: please no trollish answers like "use a different windowing system" :P]


XInput is an extension, so it should be possible to disable it via xorg.conf.


XInput is a very old, very widely used extension, and it handles much more than just input sniffing. Disabling it will remove a lot of really important core functionality, like setting keyboard mappings and mouse acceleration. It's also likely to break certain applications which legitimately need to capture events, like screen savers.


Yes, it's sadly ironic that things are modular enough to make XInput an extension of the protocol, but not modular enough to separate the very useful-looking functionality in the xinput(1) manpage from the keylogger.


One man's keylogger is another man's debugger.

Note that the command is "xinput test id", meaning it is a way to monitor what the X input layer sees when a key or button is pressed.

I have personally used xev for similar purposes: mapping my keyboard's multimedia keys to various actions.
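(Roughly, this is what "xinput test" relies on at the Xlib level - a hedged sketch assuming an XInput2-capable server - which also shows why it doubles as an unprivileged, system-wide keylogger.)

    /* Select raw key-press events from all devices on the root window;
     * they are delivered regardless of which window has focus.
     * Build with: cc snoop.c -lX11 -lXi                                 */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        int xi_opcode, ev_base, err_base;
        if (!XQueryExtension(dpy, "XInputExtension", &xi_opcode, &ev_base, &err_base))
            return 1;

        unsigned char mask[XIMaskLen(XI_LASTEVENT)] = {0};
        XIEventMask em = { .deviceid = XIAllDevices,
                           .mask_len = sizeof mask,
                           .mask     = mask };
        XISetMask(mask, XI_RawKeyPress);
        XISelectEvents(dpy, DefaultRootWindow(dpy), &em, 1);
        XFlush(dpy);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.xcookie.type == GenericEvent &&
                ev.xcookie.extension == xi_opcode &&
                XGetEventData(dpy, &ev.xcookie)) {
                if (ev.xcookie.evtype == XI_RawKeyPress) {
                    XIRawEvent *re = (XIRawEvent *)ev.xcookie.data;
                    printf("keycode %d pressed\n", re->detail);
                }
                XFreeEventData(dpy, &ev.xcookie);
            }
        }
    }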


It should be possible (and default in the X server really) to isolate X clients by only giving them access to the contents of, and events on, the drawables created by that particular client.

Combined with a way to authenticate privileged apps (which have permission to sniff for any input event and scrape any drawable), you can have your... security cake and eat your... screen grabbers and Guake hotkeys... too?


As it turns out, there are X11 extensions (SECURITY, XACE) which mitigate this problem somewhat, and Xorg has an extension (XSELinux) which allows fine-grained access control to X11 objects via SELinux policy. It's just that the distro vendors don't actually choose to enable it. (Red Hat seems to prefer sandboxing with Xephyr, or else force-migrating everyone to Wayland.)


Just pull the trigger and upgrade to OpenBSD.

  Xenocara (least-privilege x11)  
  libc  
  LibreSSL


I'd love to but it runs horribly on my thinkpad. My choice is between 10-second system freezes when opening or closing or switching browser tabs (i.e. with APM on), or else a 2-hour battery life (with APM off).

On a Sandy Bridge i7.

OpenBSD is pretty good for servers (I run one), but my experience has been that it's just total shit on the laptop.


OpenBSD solves exactly none of the problems discussed in the article.

Wayland, on the other hand, does.


Elsewhere I read about using VNC locally to run something as a different user.

That seems to block this, though not for the reason I expected (I get an error about a missing extension when I try to run xinput list).


A bit unnerving. Why wouldn't some ad-tracking mechanism that's part of some apps simply collect everything it can and radio it back home?

BTW, can this be considered a huge security plus for web apps? They can break out, but at least they have to put in some effort.


>why wouldn't some ad-tracking mechanism that's part of some apps simply collect everything it can and radio it back home?

Because the vast majority of GUI software installed on Linux comes from curated app stores (or 'repos' as they used to be called). It's also mostly written without any expectation of making money.

Web apps being sandboxed is a plus for users in the sense that it increases security (on the local machine - the web-app vendor is still free to do whatever they want with your data). It's also a negative, since it means any code you want to execute on your own hardware has to be approved by the browser vendors (web standards don't really provide any user-controlled way to break out of the sandbox[1]). The standards generally operate in a lowest-common-denominator way, such that the vendor of a browser you don't even use can prevent you from utilising your own hardware by stalling web standards indefinitely.

[1] the old plugin interfaces like NPAPI do exist, but are being phased out and aren't supported at all on web-only platforms like ChromeOS and FirefoxOS, as far as I'm aware


xorg developers seem to think the X server currently does cover this: https://bugs.freedesktop.org/show_bug.cgi?id=38517


A non-problem in practice as far as I can tell. It's also a very convenient feature.


I personally would appreciate a security popup from the OS asking me if I want to give full access to an input to any app. Otherwise, to each only his own, plus perhaps the possibility of applying for global shortcuts...



