Desktop Linux is insecure (bjornpagen.com)
72 points by bjornpagen on June 10, 2023 | 102 comments



Linux absolutely has a plethora of software one can use to improve security - AppArmor, Flatpak, Snap, and whatnot; the "all or nothing" approach the author mentioned is also not inherent to Linux, just a common setup people use. You can absolutely limit user access to a subset of superuser commands.

The problem is nobody cares. Linux users don't care, and even the users of sandboxed OSes don't care. I'm more irritated than comforted by macOS constantly popping up with "iTerm2 wants to access your Documents folder". Some distros implemented e.g. AppArmor to generally positive reviews, but Ubuntu's transition to Snaps has been received very negatively.

Wayland "features" some protections too - apps can't access the global clipboard, or move themselves, screen-record, whatever. And nobody likes it! Because the inconvenience far outweighs the benefits.

Overwhelmingly I think the "problem" is that Linux users don't really want, or need, this protection macOS/ChromeOS offers.

An aside: Windows has sandboxing? Where?


In practice I would argue that AppArmor is worth nothing, because everyone configures it to allow all file access by default.

Heck, glibc by default still allows LD_PRELOAD pretty much everywhere, and most distros ship SUID binaries you can hijack, so the sandboxing is useless anyway.

Setting aside the absurd number of self-invented config file formats (which have so many pitfalls for end users), I think firejail is the only viable alternative as a seccomp sandbox.

Flatpak's sandbox approach is as useless as AppArmor's; nobody can use it because the people who design it seemingly never use it in production apart from their www-user test cases. If you try to sandbox a complicated program like Firefox with its hundreds of rendering processes, well, good luck... we'll talk two days later about how much progress you've made just creating the config files.

I got so pissed by all of it that I started to learn eBPF to build a better sandbox, which I am trying to combine with a smart firewalling approach. But honestly, it's a ton of work, and the lack of profiles for everything makes it really hard to get to production.

You literally need to implement something like a "learning mode" that test-runs a program, just to figure out what you want to allow or deny. This approach kinda worked for me, but I am now literally writing a DNS filter, because resolv.conf as a concept is pretty much useless as well.
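A crude version of that learning mode can be faked with strace before reaching for eBPF (a sketch only; "someapp" is a placeholder):

    $ # -f follows forks, -c prints a syscall count summary
    $ strace -f -c -o /tmp/syscalls.txt ./someapp
    $ cat /tmp/syscalls.txt   # the summary doubles as a first-cut seccomp allowlist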

Once you start building something like this, you find yourself two years later implementing all kinds of network services which are attack surfaces in themselves... so you start to question whether it is really worth it. Even hooking something as "simple" as DNS resolver API calls becomes a shitload of work before it's complete.


> Heck, glibc by default still allows LD_PRELOAD pretty much everywhere, and most distros have available SUID binaries you can hijack, so it's useless sandboxing anyways.

Common misconception, but no, glibc doesn't allow LD_PRELOAD on suid binaries.
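One way to see glibc's secure-execution mode in action (the .so path is deliberately bogus; the silent drop is the point, and the exact error text may vary):

    $ LD_PRELOAD=/tmp/bogus.so ls >/dev/null
    ERROR: ld.so: object '/tmp/bogus.so' from LD_PRELOAD cannot be preloaded ... ignored.
    $ LD_PRELOAD=/tmp/bogus.so sudo -V >/dev/null
    (no warning: ld.so silently drops preload entries containing '/' for setuid binaries)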

Root is the one thing that's well protected on Linux: you cannot get root without the aid of a user giving it to you. Similarly, separation between users is also rather secure, but no desktop user is going to run each application under its own user.


> glibc doesn't allow LD_PRELOAD on suid binaries.

I'll just leave this here then. [1]

[1] https://gtfobins.github.io/


I don't get this site. Yes, emacs can be used to read/write files, and yes indeed if you set the SUID bit or run it via sudo it'll have root privileges... And? Is this a common configuration on some system I'm not aware of?

What does any of this have to do with LD_PRELOAD?


The first sentence of this says:

> bypass local security restrictions in misconfigured systems.

"misconfigured" is a key here. Sure, people do occasionally make mistakes or bad decisions and introduce security vulnerabilities.. but that happens less often than you think.

gtfobins is pretty useless on regular server/desktop installs, at least for user privilege escalation. If a user-to-root escalation is discovered with common config settings, it gets assigned a CVE and fixed, not listed on random websites.


I'm pretty sure this happens more often than you think. I'm literally scraping all Linux security trackers (both OVAL data if available and the trackers themselves) and correlating them. [1]

In my personal opinion, Debian/Ubuntu are the worst when it comes to security and maintenance. There are thousands of issues and CVEs marked as Fixed or Ignored on their security tracker which were not actually fixed but carried a tag meaning something like "code diverged too much from upstream".

A lot of PoCs that you can find on ExploitDB still work today, 5+ years later, on an updated system. Especially the disputed CVEs, where companies like SAP don't give a damn about security issues (they have a dispute rate of exactly 100%; how can that be?).

Blaming misconfiguration on the end user is not what I would do here, because it's the distribution that labels itself as secure/stable/whatever. If users use the default packages with no changes to configurations, I expect the distribution to provide sane and secure defaults.

If you argue that a "misconfiguration" is what makes a system vulnerable, then good luck finding a system that is not vulnerable - because there will be none.

Coming back to glibc and its mess of environment variables, parsing, and linking issues: that was one of the reasons Alpine adopted musl.

I mean, glibc is so ridiculous that you could get root privilege escalation via the "ping" command up until around three years ago, when distributions finally started to remove the SUID flag from the binary and use network capability flags instead.
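Easy to check on a distro that took the capability route (output varies per distro; some now use neither suid nor caps, relying on ICMP datagram sockets instead):

    $ ls -l /usr/bin/ping     # note: no 's' in the mode bits anymore
    $ getcap /usr/bin/ping
    /usr/bin/ping cap_net_raw=ep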

[1] https://github.com/tholian-network/vulnerabilities


This is pretty much the opposite of my experience. While there are occasional CVEs which can cause problems, most of them are not exploitable in any way. And the automated security trackers are even worse, giving totally useless results.

The latest example which comes to mind is CVE-2023-22809, the sudoedit bug. Our infosec department made a whole bunch of noise about it, forcing upgrades all over the place. But exploiting it requires (1) an interactive user (because sudoedit makes no sense otherwise) with no root sudo access, and (2) restricted sudoedit access. I am pretty sure our whole org had no machines with such a config. First of all, in the era of VMs and cloud machines, the interactive users are admins; they are very likely to have full sudo. And if there is a task an unprivileged user must do, it would be a web app or a CI runner or a chat automation, not a sudo-less ssh session. Second, if such a user did exist (unlikely) and had a need to edit a non-owned file (even more unlikely), who would grant them direct "sudoedit" access? It would be a script which takes an input file, validates it, logs changes, and only then installs the new version.

I'd agree with you on one point: proprietary Linux software is often of horrible quality and I would not trust it. I never worked with SAP, but we have a proprietary remote desktop access app, and it's horribly designed, with dozens of suid binaries and insane authentication methods. I would not be surprised if it has a vulnerability or five.

As for the rest of your arguments, I believe you are simply wrong.

"If users use the default packages with no changes to configurations", the latest Ubuntu with all the patches will be secure. It is not hard to find a system which is not vulnerable - just regular Ubuntu LTS system, say default install + browser + emacs + some compilers, will be secure today (except for unpatched vulnerabilities, of which there are none at this moment; and unknown zero-days which no one can do anything about). glibc had its share or vulnerabilities in the past, but none in the last 10 years. The system defaults are generally secure (but not always sane :) )

If you disagree, please mention specific exploits/CVEs. As a part of my work, I spend lots of effort on keeping stuff secure, but the whole CVE/security scanner things are pretty much a huge disappointment.


> As for the rest of your arguments, I believe you are simply wrong.

Just coming across this comment, I would disagree with your assessment.

Most distros use systemd; systemd has a dependency on D-Bus, and to do many of the features their userbase expects, it relies on D-Bus activation, which in most cases reparents processes under the systemd user via the dbus helper, which is suid. This also breaks common admin utilities like top and ps (the processes don't show up except under very specific views, like tree view). Importantly, these distros often do not have MAC configured with a safe baseline, if at all.

To put it mildly, auditing D-Bus and activations of this sort is ridiculously obtuse from an administrative perspective. Then there's all the dated software which no one touches or reviews, such as gsd-*, that causes a fail-whale when disabled.
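For what it's worth, the closest thing to built-in introspection is busctl, and it only goes so far:

    $ busctl list --acquired               # who currently owns which bus names
    $ busctl tree org.freedesktop.login1   # object tree a service exposes
    $ sudo busctl monitor                  # watch live bus traffic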

The way exploits work most of the time is by chaining: you chain one piece to the next, to the next, until you get to a bug that gets you what you want, and the attacker hits paydirt. This is basic cyber-sec 101; I don't see why you would discount exploits just because you don't see how an attacker gets into position to use them. We work with a porous attack surface every day.

Most distros have system defaults which don't even include a basic stateful endpoint firewall. I would hardly call these distros secure (which is most of them). End users are not expected to have specialties in cybersecurity, information technology, or systems engineering just to set up an out-of-the-box system that's secure by default; this is the responsibility of the distro publisher.
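For scale: a minimal stateful inbound policy is only a few lines of nft, which makes the omission harder to excuse (illustrative, not a complete ruleset):

    nft add table inet filter
    nft add chain inet filter input '{ type filter hook input priority 0 ; policy drop ; }'
    nft add rule inet filter input ct state established,related accept
    nft add rule inet filter input iif lo accept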


> Sure, people do occasionally make mistakes or bad decisions and introduce security vulnerabilities.. but that happens less often than you think.

It's not exactly expected that a local GIMP desktop install would provide reverse shell capability, yet here we are (https://gtfobins.github.io/gtfobins/gimp/)


We get it, code execution leads to code execution.

https://devblogs.microsoft.com/oldnewthing/20230103-00/?p=10...


The GIMP desktop install provides Python scripting. The reverse shell capability comes from a reverse shell specified on the command line.

GIMP is no more dangerous than any other scriptable app, like emacs, VS Code, a web browser, etc.


Something like RSBAC/SELinux is a good fit. Not sandboxing in the traditional sense, but pretty bulletproof if set up correctly. The problem is the time it takes to do that.


> Windows has sandboxing? Where?

Since Windows 8, WinRT applications are sandboxed.

It also has group policies and Win32 APIs that are used by the likes of Chrome for process sandboxing.

Deployments via MSIX packages also have a kind of lightweight sandboxing, as it builds upon WinRT, but on the classical Win32 side.

Now Microsoft is looking into bringing that everywhere. The current Windows 11 preview already ships the first steps of full Win32 sandboxing applied to classical applications, and application signing, with revocation of developer certificates, is already announced on the roadmap.


In what way are applications sandboxed? The average multiplayer Steam game basically installs a rootkit on your system. Last time I checked, a Unity game-jam game could read whatever files it wants on your system (I barely tested this; it may have only been true of my own computer?).


If you get apps from the windows store, they're sandboxed.

If you're a normal person, and get apps from your browser or any other store, yeah, no sandboxes there.


Only if classical Win32.

You can also install WinRT/UWP applications via MSIX, and they keep the sandboxing no matter what.


Those are classical Win32 applications, not WinRT/UWP.

Their time will come; that is exactly what the Win32 sandboxing currently in preview on Windows 11 is about.

On the long-term roadmap, the UWP sandboxing model will be enforced on Win32 as well.


> The average multiplayer steam game basically installs a root kit on your system.

When you click the "Do you want to install a rootkit" button, yes it does.


By default the Windows user is an administrator, thanks to UAC. By default you install Steam as Administrator, and there is a Steam system service.

But you can create a separate user and leave the default one as Administrator. In system policies you can activate a rule that doesn't allow users to become Administrator through UAC. And you can install Steam as a user, without the service.


That's interesting. I want to know more. What do I google? UAC, Windows users, Windows administrator, Steam?


> Wayland "features" some protections too - apps can't access the global clipboard, or move themselves, screen-record, whatever. And nobody likes it! Because the inconvenience far outweighs the benefits.

The problem is that apps can't request these permissions either, while they are absolutely critical for a select few apps, so that a desktop environment can be used for actual work. It's just "use wlroots and figure it out yourself, or tough luck".


I can screen-record to websites via Firefox, with all involved components using Wayland. This has been doable on KDE/GNOME for over a year now, I think.

Open Broadcaster Software works fine too.


Yes, that's because the window managers implemented this themselves where Wayland didn't. So most window managers use wlroots to provide the functionality, and a few (KDE/GNOME) do their own thing. This also means that if you want to interoperate with them, it's different for each window manager (that doesn't use wlroots).

Things are somewhat usable now thanks to this, but far from ideal.


> This also means that if you want to interoperate with them it's different for each window manager

...which is abstracted behind desktop portals, so applications like OBS or Firefox don't need to care.


Users do want it at the point things go wrong. They want it in hindsight, I guess. I don't mind the Mac OS X way (I am not sure how good it is, but it feels good). I don't see why every app, on Linux or elsewhere, isn't run in a container, with directories mounted and network access permitted by asking you. When I was forced to use Windows (a long time ago), I installed something that alerted me of network access and blocked it by default; just switching on the laptop and logging in would give me ten "allow this?" prompts, and not only to Microsoft-owned domains. This was a while ago, so I cannot even imagine what happens when opening up Windows 11.

Users do get annoyed, but on the other hand: I am willing to bet, if you check statistics, that 99% of users of browsers (and most other apps) only ever access the Pictures and Downloads folders (and maybe Documents). As for URLs, every tab must be another sandbox-in-a-sandbox, and by far most of them never need access to Pictures or Downloads or anything else. So you can surely ask without annoyance. Sites using assets from another domain are mostly serving ads, so just block it all and allow manual unblocking per domain.


I tried to use an application firewall on Windows, and on Linux too. Fantastic software. However, the reality of modern life is that the worst offender is your web browser. I wasn't about to verify each destination IP, so I ended up blanket-allowing the binary. Fine. Then there's also a bunch of system processes on both operating systems that are overloaded: they do many different things through tiny system- or kernel-level binaries. In the end I disabled the software on both OSes.


Windows has UAC, so you run things like root all the time, it's great


It's equivalent to polkit.


Polkit is a very badly-designed, poorly-executed imitation of UAC.

UAC is an operating-system-level system that heuristically detects when an executable is acting as an installer, and prompts the user with a Yes/No prompt (you can see screenshots online). This even works if you're running a terminal command. Terminal command needs administrator access? You get a blocking, modal dialog from the system that forces you to click Yes or No. That's very powerful.

Polkit is a component for controlling system-wide privileges in Unix-like operating systems, the manpage is less than 2 pages.


I thought you were wrong about how UAC worked, and then I went looking through Windows Vista docs on UAC and you’re absolutely right. And that’s batshit insane!


Terminal commands don't work that way. Open a command prompt, type 'sc stop spooler', and observe the 'Access Denied' message and the lack of a UAC prompt.


Read the parent: that's not an installer. You are, like me up until this very moment, assuming UAC is sensible.


It's not heuristics, it's executable metadata


The official UAC docs disagree


> You can absolutely limit user access to a subset of superuser commands.

Wasn't that limit basically just a string match?

I.e. when you're allowed to do "sudo abinary", the whitelist will accept whatever is behind that string? So from what I recall, simply adding a new directory to the start of your PATH would let you execute your own script instead, giving you blanket root permission. Same with chroot on absolute paths, right?


A collection of hashes for each specific binary that now needs to be maintained and updated any time one of them changes seems like it'd be more secure, but also a huge pain. I think if an attacker already has access to my system, can install their own binaries, and then run them by specifying paths at the command line, the security of my linux desktop is already a lost cause


Nope, sudo uses a hardcoded PATH, not the user-supplied one. Doing otherwise would be a huge security hole and would get fixed quickly. It also clears related variables like PYTHONPATH (the exact list depends on the distro).

In general, sudo is pretty good with single-command grants, as long as you use them on scripts made for the purpose (like shell scripts which take no arguments or a single fixed argument). But granting permission to run a random system binary with arbitrary arguments is a bad idea; something like --log-output might become an escalation.

(And unfortunately the default PATH means that if you want to sudo-exec binaries from your user's bin, you need the full path. I hit this all the time.)
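For reference, the relevant sudoers knobs look roughly like this (edit via visudo; the user and script are made up):

    Defaults env_reset
    Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    # single-command grant: a fixed-purpose script, no arbitrary arguments
    alice ALL=(root) NOPASSWD: /usr/local/sbin/rotate-logs.sh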


Windows Sandbox and Windows MDAG.


Tough to keep reading after seeing this kind of misunderstanding

  > There are a whole bible of strategies that ChromeOS implements to keep Chrome in it’s own little world. Among the strategies involve cgroups, namespacing, seccomp, etc… This technologies basically do what Docker does, but without the overhead of a virtualized Linux kernel
Docker doesn’t have a virtualized Linux kernel, it uses the same technologies mentioned directly on the existing kernel.

  > Also, Linux desktop software has a very all-or-nothing approach to permissions. Root, or user.
I wonder why more systems don't make use of Linux capabilities [1]. Technically root hasn't been necessary for years; it's just a convenient shorthand for "all capabilities". Lots of things that needlessly run as root could instead be granted a much more limited set of capability xattrs by the package manager, consented to by the user. Using root in 2023 is lazy.

[1] https://man7.org/linux/man-pages/man7/capabilities.7.html
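Granting a capability via file xattrs instead of suid root is a one-liner (binary name hypothetical; getcap's output format varies with libcap version):

    $ sudo setcap cap_net_bind_service=+ep ./myserver   # bind ports <1024 without root
    $ getcap ./myserver
    ./myserver cap_net_bind_service=ep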


fixed #1, thanks


Honestly I feel less secure with sandboxed Android apps than I do on desktop Linux.

On Linux I use my distro's curated repository (e.g. to install Cozy, an ebook reader, or yt-dlp). I don't care that it has access to the rest of my system; the bonus is that I have access to its source code. On Android, the first app I see has "In-app purchases" and will probably spy on me. So yes... I agree, I guess: on Android (and therefore ChromeOS) app stores I definitely want sandboxing, because the relationship between users and apps is very adversarial.

Unfortunately that means you need to get into sandboxes, throw stuff in VMs, and all the other things that make performance and efficiency suck.

I miss the time when my computer ran only software I controlled, and it wasn't a free-for-all where everyone wants to run random code (JavaScript from websites is a big one here). All the Spectre/Rowhammer stuff only matters because you're sometimes running untrusted code. Why do people need to be running untrusted code all the time? /me goes back in the cave


Just stop with this nonsense. This is such a foolish opinion, bandied about from an "I know exactly what apps I run, sandboxing is for those noobs" mindset. Did you compile them yourself? Did you compile the compiler?

Let's just accept the fact that Linux comes from a place of "if you run a program, you are responsible for what it does". This is contrary to modern users' expectation that "just because I run a program, it shouldn't be able to siphon all my data", largely driven by mobile apps and their ecosystem.

If I autocomplete an ls command on my Mac's terminal it will warn me that iTerm is trying to access the specified folder. There is nothing like that on popular distros. The whole security landscape needs to be re-thought.

Maybe someone who knows more than me can talk about bringing features from more restricted Linux variants to a broader audience.


He's just making a valid point. Despite the lack of isolation between pieces of software, I too feel more confident about what's running on my Linux distros. That's not to say that "sandboxing is for noobs", but it's an interesting point. It's like how people in very rural areas can feel perfectly safe not locking doors, sleeping outside, etc., while that would be extremely foolish in the middle of certain cities, in certain neighborhoods. Different environments, different dangers, and I think it's fine to enjoy the benefits of a safer environment.

Safety in this case is provided not by physical distance between homes but by curation of software by nonprofit groups, and by the selection of almost exclusively open source software, with easy building of packages and tracking of changes in the source.

> Did you compile them yourselves? Did you compile the compiler?

There's no such thing as perfect security, and I think you know that. If you think your compiler may be compromised, there's stuff you could do, you just have to evaluate where the paranoia starts and stop before then.


Do you trust your OS, the compiler it was compiled with, and all the hardware it's running on (which is probably even more insidious and capable of hiding stuff)?

Let's not pretend that just because there's a sandbox at a higher level of the stack and some kind of user ability to accept/deny operations that things are secure.


Eh, defense in depth, if we make a huge mess of overlapping detection systems, good luck inserting a compiler quine to bypass them all.


> Did you compile the compiler?

Do you trust trust? (And no, diverse double-compiling isn't an actual solution.)


System security doesn’t care about your feelings — any rogue program, or even an npm install can encrypt your whole drive, or install a keylogger.

This is not true of Android and iOS.


> the bonus point is that I can have access to its source code

Are you actually reading all the code before you run it? Are you re-reading it for each update? If not, then what's the point of bragging about having access to the source?

The point of sandboxing is that it's impractical to reliably audit, on a continuous basis, the massive volume of software the average person runs. It's more economical to apply the principle of least privilege and only give apps access to the things they need to function.


> Are you actually reading all the code before you run it? Are you re-reading it for each update? If not, then what's the point of bragging about having access to the source?

Not every user needs to read everything. We can read pieces of what we use and trust others to also read pieces of what they use. We can also place some amount of trust in the fact that a body of people have read the code before we started using it, and that it's only the new changes that need more review. People can also use reputation to make safety in review more economical.

Sandboxing is not bad, but it's not the only way that security can be achieved. Having a good social infrastructure also helps.


Not every user has to read every line of code, but I do sometimes wonder how many open source products have never been read by anyone outside of the people who wrote/maintain them, and for those projects where anyone has reviewed the code, how many of them were really qualified to understand what they were seeing?

I still believe that having the code available for review is important, but I don't think it's a reliable means of saving people from insecure or malicious software.


> I still believe that having the code available for review is important, but I don't think it's a reliable means of saving people from insecure or malicious software.

Just having the code available, in and of itself, is probably not. However, the presence of the source is not the only thing you have to rely on. For Arch Linux, for example, different package repos have different requirements and provide different levels of safety. You can put more trust in packages in core than in extra, and more trust in extra than in the AUR. Anyone can push packages to the AUR, and likewise to other package repos like those of different languages (rubygems, hackage, etc.). Different languages have different communities, and you can get a feel for how trustworthy they are as a whole, based on their requirements, etc. This is like the difference in safety between different cities.

You can check the author and get some idea of how much reputation they hold. You can check the package maintainer the same way. You can check how many other people trust that software, and whether any of them are particularly notable. You can see how well established and widely adopted the development process is from the homepage/GitHub/etc. You can also review the source yourself, and even if you're not a security expert, that doesn't mean your review is absolutely worthless. Each of these is a source of trust with a score. Put a score on every source of trust, add them up, and check against your risk tolerance.

You don't need to do everything. If I decide to walk on a street, I'm not checking the crime statistics there, the internal state of the nearby police department, etc. I'm mostly deciding based on the city/neighborhood I'm in, how populated the street is, the state of the people there at a glance, and that's generally more than enough for most people.

> but I do sometimes wonder how many open source products have never been read by anyone outside of the people who wrote/maintain them, and for those projects where anyone has reviewed the code, how many of them were really qualified to understand what they were seeing?

In case my point was lost in my ramble: you don't have to base your decision to trust a particular piece of open source software on how much you trust the whole body of open source software in existence. You can decide to e.g. trust the official repos of a distro based on how that curation works, so trust the packages in it and not the software outside it (e.g. the AUR or random GitHub repos), and you can decide to trust based on other signs of your choice like that, too.


As has been demonstrated multiple times, “more eyes” is a fallacy.

There has been plenty of malware in perfectly open source and often-reviewed repositories.


The overarching assumption of this article is that OS sandboxing is a 'must requirement'.

I would challenge the author to explain why the extra layer of sandboxing is a must requirement. Do we know the size of the problem? As in: how many known websites run malicious scripts with real intent to do harm, and how many sites achieve that goal? Do we know how many users got hacked because of the lack of sandboxing in Linux while doing typical web browsing?

My view is that the article blows the problem out of proportion. Security relies to a great extent on user behaviour.

User A visits known domains only, and opens emails from known senders only. They only share card and personal details with known and trusted parties. They only install the strictly necessary apps and from known devs only.

User B visits any site regardless of domain. He shares email, phone number, and card data at the earliest chance (i.e. when trying to buy illegal drugs from a shady clearnet store). He installs dozens of apps, often from unknown parties.

Regardless of OS, browser, and other context, user A is going to be safe most of the time, and it's very rare that they get in serious trouble. User B is an accident waiting to happen...

If the above is true, maybe we don't need a technical solution like OS sandboxing. It would be more effective to create a standard for all browsers where the first screen you see is a big message saying: "Welcome to your 3 second online tutorial: Visit known websites where possible. Don't share your real personal details unless strictly necessary. Share details with known and trusted parties only. In case of doubt, don't trust the other party."


> Security relies to a great extent on user behaviour.

User behavior is unreliable. We're moving from passwords to passkeys because of how poorly we pick and use passwords.

> User A visits known domains only, and opens emails from known senders only. They only share card and personal details with known and trusted parties. They only install the strictly necessary apps and from known devs only.

It's not enough these days. Known senders can be compromised and send spoofed messages. Supposedly trusted parties have data breaches. Supply chain attacks target known devs, like SolarWinds.

> user A is going to be safe most of the time and it's very rare that they get in serious trouble.

Security is about adapting to emerging threats. Threats that were rare before become widespread, especially if they work. Based on what I've seen it's already happening.

Sandboxing and capabilities seem like a step in the right direction. In any case, on a machine and an OS that you own and can modify, you could disable it if you desire, just like you could disable a firewall.

> It would be more effective to create a standard for all browsers where the first screen you see is a big message saying: "Welcome to your 3 second online tutorial [...]"

If all that it took to train users about online safety was a brief pop-up, we wouldn't have security problems. Yet here we are.

It would have the same effectiveness as cookie banners, consent pop-ups, terms-of-service checkboxes, and are-you-over-18 buttons. None for the users, but someone will keep their hands clean.


Being careful sort of works, until the New York Times serves malvertising from an ad network: https://news.ycombinator.com/item?id=11296252


"being careful" in my view includes blocking ads, reducing unnecessary remote connections to servers and not running unnecessary code. You can't avoid everything, but I think you can get by pretty well being careful. That does mean being more careful about doing things most people don't consider very risky though.


Most people barely have any idea that they run any code, or what code even means.

Seeing questions like "I was traveling and got a huge bill for internet traffic; I wasn't downloading any movies, just watching them online, so why are you charging me?" gives me a good sense that users can't be relied on and need guidance accordingly.


The OS may or may not be involved, but I view systematic ad blocking as a layer of sandboxing, because the user isn't in the loop.


Author is totally correct to promote ChromeOS as a modern, sandboxed, low-attack-surface alternative.

Unfortunately, Achilles heels of ChromeOS are in the usual places: browser extensions and integrations that can access any web page you view, or do anything they want with your Google account, which can wreak havoc for people with a lot of stuff in there.

Now, what I will say for consumer Linux distributions is that it's eminently possible to reduce your attack surface in a few ways: judicious configuration, and stripping out all unnecessary software packages! I was a sysadmin for a long time before I realized that I was a glutton for apps: I had a tendency to consume and hoard them without regard for what I really needed to run day to day. I would install software, try it once, find it boring, and abandon it, but it was all still installed.

So I began perusing my list of installed software, and I'd just remove anything that was obviously unnecessary. I'd even remove stuff whose purpose I didn't know, just to see if something broke, but things rarely broke! My position was validated!

If you remove unneeded software, you will automatically reduce the number of network services running too. That thing you installed probably started up a server, started listening on local and global addresses, and may have even poked holes in the firewall for itself. Do you need that? Can someone make a lateral move from your pwned router into your desktop machine through your neglected services? Get rid of them.
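Two commands give a quick inventory of what's actually listening and running, a decent starting point for the cull:

    $ ss -tulpn                                             # TCP/UDP listeners and their owning processes
    $ systemctl list-units --type=service --state=running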

Linux affords much more latitude to a security-minded administrator than Windows. Windows will not let you turn stuff off without causing breakage or it will just gratuitously come back in an update and be turned back on. The Linux philosophy is beyond that nonsense.

Yes, Linux out of the box, or Linux in the hands of an ordinary end user, is insecure. But it has more potential than Windows.

For a while, I pondered whether I could outsource my personal, home IT work to an MSP. But MSPs don't do personal networks, they do B2B. I think Geek Squad is possibly missing a bet here. Not that I would let Geek Squad into my Linux homelab, but it's a thought.


I imagine handing all your network traffic to literally any third party that cares only about profit is going to be a security nightmare.


It is a well-known fact that a lot of Linux software will break under proper LinuxSE enforcement, just like Windows software not designed to be tamed by group policies.


*SELinux


Browser extensions have whitelists and a click-to-activate mode.


I have been thinking about this recently in fact. I hate the way Android collects telemetry, but any process and likely most websites I use on Linux can access my clipboard, filesystem, list of programs, OS monitoring tools, even log my keypresses if they want to.

The question is what can I do about this?


Can't you just lock your browser down to prevent that? Don't allow JavaScript to run in your browser (or at the very least don't allow it by default) and suddenly 99% of the tricks websites use to gather that kind of data about your system stop working.


> and likely most websites I use on Linux can access my clipboard

That is totally false. Any good web browser asks for permission first.


This is scary news to me. I usually assume Firefox and Chromium on linux do a reasonable job of balancing security and usability. Is the situation really that bad?


It is not that bad.

A browser sandbox escape would be red-alert news and fixed within hours.


An npm install is much more dangerous than the browser, imo. But the fact remains: there is basically no usable security on Linux desktops, and it is a shame (especially when very secure systems are built on top of the exact same core).


Unless there is an exploit that lets a webpage escape the sandbox, they will have access to none of those things listed without permission


All processes, yes, but not websites.


Qubes OS is considered pretty secure.

https://www.qubes-os.org/


Could & should things be better by default? Sure, but in practice, all of the proprietary software I don't trust but need to run for work has a web app that is sandboxed in the browser, which is generally good enough for my threat models.

Hilarious, though, that the author thinks Nix is "incomprehensible". You can even do things like nix shell nixpkgs#proprietary-crap and ditch the application after using it to do one thing.


I mean, it's about your threat model isn't it? The way I think of it, there's basically three ways for someone to break into my computer: remote code execution with no external preparation (as might be caused by ssh exposed to the internet with a common username and password), installing malicious programs, or opening malicious files that exploit buffer overflows or whatever. The first way might be protected by a sandbox or might not: if it is ssh that's pwned, presumably in a sandboxed environment you'd have given ssh all the permissions anyway, right? The second way is far less common a vulnerability on a desktop Linux system than on other OSes, including Android, because 99% of packages people are going to want to use on their desktop are going through the distro maintainers, which isn't a perfect process but seems to be pretty good. Contrast that to Android where malware gets through the Play Store all the time. iPhone seems to be about as good as Linux in this regard though. I'm not totally sure that sandboxing would help in this regard either. If someone exploited my PDF reader it'd probably have access to all my other PDFs wouldn't they? However if someone exploited my video player they wouldn't, and in either case they probably wouldn't have access to my browser, so yeah sandboxing would help there.

But also included in the threat model is what you want to protect. Obviously bank passwords, tax information, etc, but I also want to protect my privacy in general, and right now there's no OS that's as privacy preserving as Linux. Furthermore, I don't trust sandboxing implementations in proprietary OSes from protecting my privacy either: they might say they protect my data against malicious apps trying to steal my stuff, but then if the OS itself tracks my location, search history, files, etc, sandboxing doesn't do much good, does it?


The misconception here is that most of the valuable stuff actually lives inside the browser. That's where users access their email, banking information, password managers, etc. Breaking out of the sandbox is a relatively low-priority goal for hackers looking to compromise browsers, which is why this is much less of an issue than the author assumes. Browsers have to be safe regardless of whether they are sandboxed, which is why browsers use sandboxing technology inside the browser. Sandboxing your browser is relatively low value in terms of security benefits.

This is less true on mobile because people use lots of apps there. And if you have some game implemented by a twelve year old and somebody hacks it and then installs whatever on the phone, that is a big issue. So Google and Apple spent a lot of time fixing that on mobile and then applied the same level of standards to their desktop operating systems. MS has of course over the years had their fair share of issues like that so they've gotten a lot better at this as well.

Server-side Linux is pretty secure out of the box. Pretty much the whole internet runs on Linux at this point. Mostly, servers run only one process that matters, which is whatever they are serving. Or a bunch of dockerized processes in some cases, which are indeed sandboxed.

Desktop Linux is server side linux with some extra stuff. Not a very homogeneous target to hack because there are lots of different software packages that people use in different combinations and versions. And the baseline security of Linux is of course not horrible like it used to be with Microsoft.

And of course things like Snap and Flatpak actually do use some sandboxing. So use that if it makes you feel good. That sandboxing isn't perfect, as I'm sure people will be itching to point out, but the problem it addresses is pretty minor to begin with. And there is of course a certain amount of resistance against using either of those among some people.

So, not much of a problem and it's been kind of solved anyway.


I feel like there’s a significant cultural shift that has gotten us here:

For the most part we no longer share computers.

We used to be constantly confronted with poor security practices: someone else accessing a file they shouldn’t, or installing some piece of software that took over a system function as a prank, or just miseducation leading to a critical config problem.

I don’t know who set the wheels in motion first (though I suspect it was game makers) but we now have complete sovereignty over our devices and therefore only really think about security against internet intruders. Imo this leads to a much more abstracted and relaxed security mindset overall (including in the minds of the devs).


Yeah, we know: don't let the normies in, or we will have to fix it. And I don't want to buy a Mac.


We use the hell out of systemd sandboxing for running _everything_. Why fire up a whole Docker container when you can just create a container on the fly?
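An ad-hoc sandbox with systemd-run looks roughly like this (the properties are standard systemd.exec ones; the command itself is just an example):

    $ systemd-run --pty -p DynamicUser=yes -p PrivateTmp=yes \
          -p ProtectSystem=strict -p ProtectHome=yes \
          -p NoNewPrivileges=yes some-untrusted-tool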


As a long time Linux enthusiast and engineer this needs to be said and it's been overlooked for way too long. We need to figure out how to move past some of the traditional UNIX ways of doing things like apps keeping plaintext passwords in 'hidden' dotfiles.

Even simple things would go a long way, like making it straightforward to install packages from distro repositories in userspace. Every time you install a desktop app via the package system, you're rolling the dice by giving the package root access. Userspace installation should in fact be the default behavior for the vast majority of Linux apps: if an app doesn't legitimately need root access for anything, we shouldn't grant it to begin with.

Don't get me wrong, Linux distros do a great job at maintaining security overall, but these projects are massive now, and in the age of supply-chain attacks and hostile nation-state actors, all it will take is one package slipping through to cause a major security incident. The SolarWinds hack was the proverbial canary in the coal mine. If it can happen to them, it can happen to anyone. It's not a matter of if now, but when.


tl;dr: author wants sandboxing everywhere


Flatpak: but it does stop someone installing a keylogger via a Firefox exploit.

Systemd:

    $ # I use silverblue btw
    $ THINGS='Protect|Restrict|Capability|Lock|SystemCallFilter|Private|MemoryDeny|ReadWritePaths'
    $ AT='/usr/lib/systemd/system'
    $ grep -ilE "^($THINGS)" "$AT"/*.service | wc -l
        64
    $ grep -ilE '^Exec' "$AT"/*.service | wc -l
        292
It's a bit more than "No Linux distribution actually seriously uses them very much", and it provides a standard solution to "Meanwhile, developers don't know the contexts in which their software is being run, and need to release portable programs. Developers therefore can't assume what permissions their program will need with the rest of the system."
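A unit opting in looks something like this (the directives are real systemd.exec options; the values and daemon are illustrative):

    [Service]
    ExecStart=/usr/bin/mydaemon
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
    SystemCallFilter=@system-service
    MemoryDenyWriteExecute=yes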

"and it certainly isn’t as comprehensive as ChromeOS’s minijail." Kinda disagree.

- - -

Lot of words, no citations for those words.

- - -

Worth noting that, even among secure options there are points where there is slack: Android uses a uid/gid per app [auid], but chromeos doesn't [cuid]

[auid]: https://android.googlesource.com/platform/frameworks/base/+/...

[cuid]: https://chromium.googlesource.com/chromiumos/overlays/eclass...


Android is another example of an OS that gets sandboxing right, I think (though not a desktop OS).

Anyway, that's also something I've been thinking about lately, after distro hopping for about six years... My average Linux install isn't particularly more secure than Windows. I'm mindlessly sudoing my way through stuff all the time. Clearly there is a Linux malware market to be explored.


Where is that market?


LD_PRELOAD is not a keylogger; LD_PRELOAD is the equivalent of a DLL hijack. To be safe from this, mount /home and /tmp as noexec to prevent these attacks. Libraries that contain executable code (I haven't found one that doesn't have at least a byte of it) require the executable bit, which noexec disables.
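The mount options in question, as fstab entries (devices illustrative):

    /dev/mapper/home  /home  ext4   defaults,nosuid,nodev,noexec  0  2
    tmpfs             /tmp   tmpfs  defaults,nosuid,nodev,noexec  0  0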


And yet it's the one I trust the most. Weird, isn't it?


little snitch seems to be the correct approach. almost every compromise that isn't targeted will attempt odd-looking network connections.

we need more little-snitch-like things, including for filesystem access.

we also need the ability to run little snitch in paranoid mode, where the approvals happen on a separate device, each message is signed with a key not on the primary device, and the validation is baked deeply and irrevocably into the kernel. a smartphone face up on the desk, left of the keyboard, would work well as a second device.

linux lsm seems to work[1], and building the kernel is easy locally[2] or on cloud[3].

hopefully we see more and better use of lsm and custom kernels. we all should want our most trusted public key baked irrevocably into the kernel.

personalized linux is the next frontier!

1. https://github.com/nathants/mighty-snitch

2. https://github.com/nathants/mighty-snitch/blob/master/kernel...

3. https://github.com/nathants/mighty-snitch/blob/master/kernel...


I'm tired of seeing low-effort posts about why Linux is insecure.


There's Security-Enhanced Linux (SELinux), originally developed by the NSA, AppArmor, and others. If security is what you want, you can have it.


Ah, security theatre.


Hardly.


> It’s interesting to consider a Linux distribution that fixes these problems. I’ve thought a little bit about a secure linux desktop. But alas, what a waste of time that would be, right?

It's called QubesOS.

https://www.qubes-os.org/

It's, if not an operating system directly, then a meta-operating-system: a set of tools that lets you easily operate a range of siloed VMs on top of a stripped-down Xen hypervisor. You can easily create, destroy, and use VMs, which can be fully standalone, fully disposable (no disk state persists past reboot), or, in the usual configuration, have a persistent home directory and a template-based ephemeral root directory. A perk of that: you can update the template VM (via some reasonably thought-out methods) and thereby update the software in all the derived VMs as well.

I've been using it as my primary desktop OS now for... oh, two years and change, perhaps? It's a bit like a virus, in a way - you install it on some random test machine, get a feel for it, and then install it on more machines, and it just kind of spreads around your network until you realize you've been working in Qubes for some while now and it's fine.

The security model is basically, "Anything in a VM can be assumed to access anything else in that VM." So you silo things off, and if you're doing anything with unknown data (random PDFs, perhaps), you open it in a disposable VM (and there are some neat "render safe" capabilities that will then provide, back in your trusted VM, an image based version of a PDF file minus any bonus capabilities it might have come with). So you silo things. For instance, I do most of my "casual web browsing" in a disposable VM and I reboot that regularly. A full compromise of that browser stack gets you... the other tabs I have open, and perhaps a random file or two I downloaded. It doesn't get you anything of interest, as I also don't log into core accounts in that browser (I have another VM for core web activities that doesn't visit random sites).

The main downside is no GPU acceleration of anything (framebuffer only), but it's somewhat less limiting than I'd assumed, and most of my machines maintain a dual boot of Ubuntu for anything GPU intensive, though I honestly use it a lot less than I'd assumed.

And while it sounds like it would be resource intensive, and it can be, it's worth trying if you have less RAM than you might think. My "daily driver laptop" is a 2C/4T 5th Gen i7, with 16GB RAM. Gutless wonder, and I didn't expect much out of Qubes on it, but it actually works just fine (though I admit I'm not nearly as hard on computers as I used to be in terms of what I expect out of them).

Clipboards are mentioned, and Qubes neatly solves that too, because you have separate keyboard shortcuts to move clipboard contents around - it's not hard to copy/paste between VMs (or move files between them), but "root in a VM" only gets you things pasted into that VM (unless you pop Xen and migrate to dom0, at which point it's rather game over, but... theoretically, that's a harder exploit chain than getting out of a browser).

Also, as far as browsers go, disable your Javascript JIT engine. No, I did not say "disable Javascript." I said, "disable your Javascript JIT engine." There are ways to do it in most browsers, and in exchange for slightly reduced daily performance and slaughtered Javascript benchmark scores, you remove an awful lot of complex attack surface in the browsers.
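Concretely (the Chromium flag and Firefox prefs are real but may change between releases, so verify against your browser's docs):

    $ chromium --js-flags=--jitless    # V8 interpreter-only mode
    # Firefox: in about:config set
    #   javascript.options.ion = false
    #   javascript.options.baselinejit = false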

My problem with containers and sandboxes is that they rely an awful lot (entirely?) on the Linux kernel not being able to be exploited, and my general assumption for years now is that a local root exploit is worth about $0, because they're reasonably common and easy to find. It's not exactly true, but it's closer to true than not, so I assume any arbitrary code running on my machines, if it so desires, can compromise the kernel. Sandboxing and containers are convenient to prevent well behaved application deployments from interfering with each other, but they're not sufficient against potentially malicious applications. It's a pessimistic view, but the last decade of computer security is pretty clear that the pessimists a decade ago were far, far too optimistic.

Anyway, long post, short summary, "We probably shouldn't have put computers in everything."


> The main downside is no GPU acceleration of anything (framebuffer only), but it's somewhat less limiting than I'd assumed, and most of my machines maintain a dual boot of Ubuntu for anything GPU intensive, though I honestly use it a lot less than I'd assumed.

You are effectively making Qubes useless. The presence of any software that has access to the EFI partition outside Qubes renders it a highly vulnerable entry point for any form of malware, as it gives direct access to Xen.


Oh please. Yes, it's a potential issue. But if you're just running stuff provided by Ubuntu and not random crap from wherever, this so-called threat vector will never materialize.

Let not the perfect be the enemy of the good.


Sounds like OP wants to run qubes. How cute. I wish them luck, because it also has "escape" vulnerabilities.


Qubes offers this level of protection, but it’s not very popular and the virtualization imposes quite a lot of overhead.


I'd like to talk, this is something we want.



Email me at contactatbjornpagen.com!


[flagged]


C was created over 50 years ago. I wonder if its creator ever imagined that it would become and remain the language used to develop the kernel that powers the majority of the world's servers and phones half a century later (though Rust is just starting to be used in the Linux kernel).

Even C++ is 38 years old.

And these are just the languages. The systems our current OSes are inspired by were designed without much security in mind: understandably so, considering Unix was also created over half a century ago, long before the Internet. GNU was created nearly 40 years ago, and even Linux was released over 30 years ago, only two years after the World Wide Web was invented, when cyber attacks were not very common.

At least with iOS and Android we've been able to have a better permission model than most desktop OSs (e.g. applications have to ask permission to use the camera), as well as better sandboxing (one application can't just read files from other applications: users explicitly have to share things from one app to another, and can revoke permissions at any time.)

But I agree with you that they're ultimately built with primitive languages and questionable design decisions. That's not a criticism of anyone who designed those systems and languages: the fact that they're widely used half a century later shows how incredibly good they were for the time! But a lot has changed in 50+ years (and especially the last 20-30 years with the Internet), and we've learned a lot.

We do get new languages, though I feel like we still have much to improve: the fact that Electron is so widely used for desktop applications (even ones that aren't web apps) shows how far we have to go for languages and their frameworks. But at least there are people trying to make better programming languages and frameworks.

Unfortunately the outlook on OSs seems grimmer. There are some hobbyist OSs that are extremely impressive, but I'm not sure any of them look likely to replace the OSs we use for servers, desktops, phones or other devices. With all we know today, we could design and build an OS that is far more secure than what we have today. But the costs of doing so would be enormous, so I understand why no organization has taken on this task.


what do you suggest then?


> This technologies basically do what Docker does, but without the overhead of a virtualized Linux kernel

> There’s really no reason why a system upgrade requires root privileges,




