Flatpak – A Security Nightmare (2018) (flatkill.org)
218 points by fao_ 29 days ago | 120 comments

> A high severity CVE-2017-9780 (CVSS Score 7.2) has indeed been assigned to this vulnerability. Flatpak developers consider this a minor security issue.

The actual wording in the linked release is "This is a minor security update, matching the behaviour on master where we avoid ever creating setuid files or world-writable directories." The wording is perhaps a little ambiguous, but my interpretation is "this is a minor update that fixes a security issue," not "this is an update that fixes a minor security issue." The author also implies that simply including a setuid binary would allow the app itself to get elevated privileges. This seems at odds with the way flatpak uses containers to sandbox the app, and in fact when I looked at the linked CVE, the actual vulnerability is that a separate attack could use a setuid binary installed as part of a flatpak to get privilege escalation. Don't get me wrong, it's still a severe vulnerability. But it is much more difficult to exploit than the author first led me to believe.

Add to that the first paragraph, which is literally a complaint that Flatpak does not make things better compared to RPM.

Indeed the one about security updates is valid and my biggest grievance with such an approach. Note, however, that proprietary software like Sublime could easily use statically linked or embedded libraries anyway, as it does not have to adhere to any packaging guidelines. So this applies only to free software that has been packaged with the distro.

> Indeed the one about security updates is valid and my biggest grievance with such an approach.

And that stems from misunderstanding.

Flatpak has a concept of runtimes. If you have some basic library that would be updated by your distribution, there is a good chance that it will be part of the runtime and updated by runtime maintainers, not by the application authors.

I.e. you would get your security updates, subject to the maintenance period of the runtime (for example, the org.freedesktop.Platform has 2-year support).
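As a concrete sketch (the app ID and file name here are hypothetical), a flatpak-builder manifest pins the runtime branch; anything the runtime provides is patched by the runtime maintainers, not the app author:

```shell
# Hypothetical minimal flatpak-builder manifest. The app pins a runtime
# branch ("18.08"); libraries shipped in that runtime are updated when
# the runtime maintainers publish updates, independently of the app.
cat > org.example.App.json <<'EOF'
{
  "app-id": "org.example.App",
  "runtime": "org.freedesktop.Platform",
  "runtime-version": "18.08",
  "sdk": "org.freedesktop.Sdk",
  "command": "example-app",
  "modules": []
}
EOF
grep '"runtime-version"' org.example.App.json
```

Only libraries the app bundles itself (entries under "modules") stay at whatever version the packager last built.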

> If you have some basic library that would be updated by your distribution, there is a good chance that it will be part of the runtime and updated by runtime maintainers, not by the application authors.

Unfortunately, this is not true. Many of the Flathub Flatpaks compile custom versions of a lot of widely used libraries. E.g., the most recently updated Flathub package of an open source project (as of writing) compiles all of these itself [1]:

Boost 1.73, Eigen 3.3.7, OpenCV 4.3.0, exiv2 0.27.2, marble 20.04.1, lensfun 0.3.2, ImageMagick 7.0.10-13, libgphoto2 2.5.25, libusb 1.0.23, jasper 2.0.16, libksane 20.04.1, sane-backends 1.0.27, libqlr master, qtav 1.13.0, x265 3.3

Most of these are standard dependencies that are in most Linux distributions.

[1] https://github.com/flathub/org.kde.digikam/blob/2d0b5b369382...

Widely used does not mean they are good candidates for a platform library. Many of them do not have a stable API, so you either keep everything on an ancient version (hopefully still compilable against), or stuff breaks.

E.g. Boost: Fedora 32, released this April, ships version 1.69 (released in Dec 2018), while the current one is 1.73. Similarly, ImageMagick is updated every few days. In these cases, it is better to let the applications decide which versions they want and keep up with updates.

This is, btw, the current main problem with the CVE system - something having a CVE doesn't actually mean there's acute risk in it. It requires a lot of technical prowess to actually confirm whether a specific CVE actually is exploitable or not. Then you have to figure out how vulnerable your context of use actually is. Then whether somebody is actually actively using this vulnerability.

It's a much longer chain of risk decisions than you would first be led to believe.

CVSS Scores aren’t very accurate, to be honest. Actually, assigning any kind of score to a vulnerability is fairly difficult even if you know what you’re doing.

Unless this has been updated since 2018 (it doesn't have a date, but it still looks like it's from 2018), it should be marked [2018].

Also: Here is previous discussion from around the time this website was created: https://news.ycombinator.com/item?id=18180017

> Unless this has been updated since 2018

I think this might also apply to Flatpak.

Has Flatpak addressed any of these concerns? If so, how?

That's not how this works. Old content is marked with its year, irrespective of its relevance.

This page is really old, but I disagree with "the users are misled to believe the apps run sandboxed".

I think most people use flatpak (and/or snap, appimage, etc.) as a way of installing applications in a way that does not contaminate the host OS, and a way of removing them again without leaving any traces behind.

For example, it's an extremely convenient way of installing Steam without having to worry about its obscure, 32-bit, dependencies.

That they are sandboxed, or not, is very much secondary, for most users, I believe.

As a user and someone who knows nothing about flatpak other than "it can install apps" what you said is true. I use flatpak and appimage because they allow me to use software without cluttering my /bin and other root subdirectories. And also because I can install with them software that my distro does not include in its repositories. All resides in my home and it's easy to remove without leaving any traces. I didn't know flatpak apps were supposed to be sandboxed until I read this article when it was posted a few years ago. Whenever I want to sandbox something I use firejail.

Someone might want to fix the Wikipedia article, then: “Flatpak is a utility for software deployment and package management for Linux. It is advertised as offering a sandbox environment in which users can run application software in isolation from the rest of the system.”


What I know about Flatpak came from some Linux YouTubers who led me to believe it was a sandbox. For that matter, some HN users posted similar things in the last month when talking about snaps, IIRC.

Flatpak has a sandbox, but for some apps it has to be more open, otherwise they would not work as intended. All that's open is visible as a flatpak permission; if you want, you can tighten it up and see what breaks.
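For the curious, that inspect-and-tighten loop looks roughly like this (the app ID is a placeholder; the commands are standard flatpak CLI, guarded so they no-op where flatpak isn't installed):

```shell
APP=org.example.App   # placeholder app ID
if command -v flatpak >/dev/null 2>&1; then
  # Show everything the sandbox currently allows for this app:
  flatpak info --show-permissions "$APP" || true
  # Tighten: drop home-directory access for this app only (per-user override):
  flatpak override --user --nofilesystem=home "$APP" || true
  # ...then run the app and see what breaks. Undo the experiment with:
  flatpak override --user --reset "$APP" || true
fi
```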

On the other side of the spectrum are people who are disappointed that they cannot use SDKs installed at random places in their host system with editors packaged with Flatpak. Well, duh...

> That they are sandboxed, or not, is very much secondary, for most users, I believe.

It's been very much not secondary to the PR effort behind it. A few years ago, when the first usable releases started popping up, sandboxing was touted as Flatpak's biggest advantage and basically the reason why it was the future of Linux software distribution on the desktop. That's why that page gives sandboxing so much importance.

> It's been very much not secondary to the PR effort behind it.

I disagree; I have not seen containers mentioned in PR materials for flatpak even once. I actually was not sure they were using that for sandboxing; I just knew they did some manner of sandboxing, as it was making things break in some cases.

> I disagree; I have not seen containers mentioned in PR materials for flatpak even once.

Until one or two years ago hardly a month went by without one of these hitting one of the various Linux communities. I'm not sure how you missed them. Just one example that I literally got to by googling "flatpak is the future":


And sure enough, right in the middle of that piece, is this bit:

"Applications delivered by either run in a virtual sandbox. This makes them safer to use. They can also run on any desktop distro. Neither users nor ISVs have to worry about the underlying distro or its version because all the needed components for the application come already bundled."

A great talk on Flatpak sandboxing was released in March [1]. The original post is from 2018 (Flatpak 1.0). Not sure if these issues are completely fixed yet; there are a lot of release notes up to version 1.6.3 (stable) [2]. It seems that the maintainers and community are working on improving security continuously. My main hope is that they optimize for privacy, security and user control of the packages. Overall I'm impressed by Flatpak with its cross-distribution packaging.

[1] https://www.youtube.com/watch?v=3rCIEzfZw1I

[2] https://github.com/flatpak/flatpak/releases

Did you see it live? Not a great experience on Youtube, audio problems unfortunately.

I found it through search on Youtube, had been looking for which way to package applications on Linux. Packaging really takes a bit of time, reading documentation and figuring out how to do it. After packaging a flatpak you need to add it to a local repository on your device before you can test install it, which isn't as intuitive as clicking on a .appimage file, but brings some benefits. The documentation is quite readable: https://docs.flatpak.org/en/latest/first-build.html.
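The build-and-test loop described in that documentation is roughly this (the manifest name is hypothetical; guarded so the commands no-op without flatpak-builder installed):

```shell
MANIFEST=org.example.App.json   # hypothetical manifest file
if command -v flatpak-builder >/dev/null 2>&1; then
  # Build the app from its manifest and export the result into a local repo:
  flatpak-builder --repo=myrepo builddir "$MANIFEST" || true
  # Add that local repo as a user remote, then install from it to test:
  flatpak remote-add --user --no-gpg-verify myrepo myrepo || true
  flatpak install --user -y myrepo org.example.App || true
fi
```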

Agreed about the audio quality. It's hard to listen to with only the left channel working. It might be worth reuploading it with balanced stereo.

Am I the only person who actually doesn't really mind that flatpaks and snaps have less user-control over behavior and/or security implications?

I really enjoy them. Install it, forget about it, let it update automatically.

Theoretically could something unwanted happen, yeah probably, but I enjoy the convenience they provide and typically you're installing fairly well-known software (Microk8s, VS Code, and Chromium come to mind).

My one gripe with snaps is that AppArmor can occasionally cause headaches with Microk8s permissions.

> I really enjoy them. Install it, forget about it, let it update automatically.

This is the same thing that happens when installing from the distribution's package manager? I install it, forget about it, and run the update command whenever I remember, once every few weeks.

The amount of available software depends on your distribution, but on Arch practically everything is available in the official repositories or the user repositories.

And when security issues are fixed, updating my system fixes them for all applications instead of just the ones which update their flatpaks (and from the article, it seems like many don't).
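To make the contrast concrete, here is a small guarded sketch: the distro gives one update channel that covers every shared library, while flatpak updates land per app/runtime at each packager's pace (tool names follow the Arch example above; each check no-ops where the tool is absent):

```shell
paths=""
# Which update paths exist on this machine?
if command -v pacman  >/dev/null 2>&1; then paths="$paths pacman";  fi
if command -v flatpak >/dev/null 2>&1; then paths="$paths flatpak"; fi
echo "available update paths:${paths:- none}"
# Distro path: one channel patches everything, shared libs included, e.g.
#   pacman -Qu                    # list pending updates for the whole system
# Flatpak path: per app and per runtime, e.g.
#   flatpak remote-ls --updates   # list apps/runtimes with pending updates
```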

> And when security issues are fixed, updating my system fixes them for all applications instead of just the ones which update their flatpaks (and from the article, it seems like many don't).

That is an ideal, the reality is somewhat different. It takes constant effort to ensure this. See https://wiki.debian.org/EmbeddedCopies for details.

Not sure why you'd cite that wiki page, as it describes how Debian maintainers should be removing embedded copies of dependencies and (if necessary) patching the software to use the system versions.

Sure, it's not perfect, and some packages end up having embedded dependency copies, but that's a far cry from flatpak, where everything is an embedded copy, by design.

Flatpak has that same problem?

I guess with Flatpak it is the responsibility of the producer of the package.

With apt/yum/etc it is the responsibility of the distro maintainers to package it.

And which one of them has a better track record pushing out security fixes, especially for older software?

Containers are a security nightmare because developers tend to have a release-and-forget mindset. Packagers are the ones doing the thankless work of constantly tracking issues, backporting fixes, dealing with dependency issues, etc. Take the packagers out of the equation and the result will be entirely predictable--much more stale software sitting on the hard drives of oblivious users, just as we see with containers.

> backporting fixes

IMHO this should be thankless. If you want updated software then update it.

By backporting fixes, you create an untested and unsupportable version. They pull patches that may or may not have dependencies on other changes to the software, which they avoid because they don't want to update, and generate a whole bunch of noise for application developers.

No software exists in a vacuum. It's the job of the packagers to create a platform that works as a whole. That's why distributions exist in the first place.

If the upstream releases a new version and the packagers just throw it into the distro, they've created an untested and unsupportable system.

> No software exists in a vacuum

This is true. So stop trying to pull apart tens or hundreds of other people's work hours on some attempt to create some frankenware.

The rest of what you wrote doesn't make sense. Of course it would get tested whether they backport or not. If it doesn't, then they aren't testing what they are doing now either.

The current state is that we don't get much cross pollination as the version being pushed by RedHat isn't the same as the version being pushed by Debian, so bugs are introduced and resolution doesn't help each other.

Your distro doesn't live in a vacuum. Stop acting like it does.

I'm not quite sure what's rubbed you up the wrong way here, but I'll address these points in order:

> This is true. So stop trying to pull apart tens or hundreds of other people's work hours on some attempt to create some frankenware.

I'm not involved in this, so I'm not entirely sure what you want me personally to stop doing, but I definitely appreciate the efforts of distributions to create systems that work together as a whole, and don't spontaneously fail because an upstream author has put out a new release and is now forcing me to choose between carrying a vulnerability and breaking compatibility.

> The rest of what you wrote doesn't make sense.

I beg to differ. Perhaps I should simplify my language.

> Of course it would get tested whether they backport or not.

By whom? There's a reason Debian Stable has a painfully long release cycle, and it's partially because they're taking that time to make sure the complete set of versions you get when you do an `apt-get install <whatever>` works together, as a coherent system. If you change `libfoo-1.0.0` to `libfoo-2.0.0` and just push that out to everyone, the combination of `libfoo` with any of its dependencies or dependents is now untested.

You can make a separate argument for rolling release distros if you like, but the problem doesn't go away.

Again, this is why the concept of "distribution" came into being in the first place: because throwing together arbitrary versions and expecting the result to work is madness. Or, if you prefer, Debian Unstable.

> The current state is that we don't get much cross pollination as the version being pushed by RedHat isn't the same as the version being pushed by Debian, so bugs are introduced and resolution doesn't help each other.

Correct. And if upstream authors took responsibility for their previous releases and backported patches themselves, they wouldn't have divergence in the first place. But that's not necessarily workable, so we've got the next best thing: other people keeping the lights on for them.

> Your distro doesn't live in a vacuum. Stop acting like it does.

I'm not claiming anything of the sort.

> And which one of them has a better track record pushing out security fixes, especially for older software?

Given the amount of security issues that are being fixed silently, sometimes even inadvertently, in newer releases of any software, I really believe that doing this instead of forcing people to update just creates a less secure world.

Which distribution provides security updates for all their packages for let's say three or five years? Debian Stable has exceptions, so does Ubuntu and it also makes a huge distinction for thousands of packages in the Universe repository, which for the most part don't get any support at all. Fedora releases are AFAIK only supported for a year or two.

And then you also get the issue of bugs that are actually security issues, but weren't labeled/identified as such, and therefore never get fixed in those stable distributions.

RHEL and CentOS have 5 years of updates for their core packages.

According to their git repository[1] the last time they updated the WebKitGTK library was half a year ago. In the meantime there have been multiple upstream releases, fixing multiple security vulnerabilities[2-6]. Or does this git mirror not reflect the current state of the version they're shipping?

[1] https://git.centos.org/rpms/webkit2gtk3/commits/c8 [2] https://webkitgtk.org/security/WSA-2020-0001.html [3] https://webkitgtk.org/security/WSA-2020-0002.html [4] https://webkitgtk.org/security/WSA-2020-0003.html [5] https://webkitgtk.org/security/WSA-2020-0004.html [6] https://webkitgtk.org/security/WSA-2020-0005.html

Looks like that package is part of the AppStream collection and therefore does not have the same guarantees as the core packages. That's at least what some quick googling told me.

RHEL and CentOS have pretty good backporting support for packages that they support, but most installs of them that I have seen use/include packages from other collections that are not supported, which is of course the wrong way to do it.

> And which one of them has a better track record pushing out security fixes, especially for older software?

Why the question? I was only trying to state what I think is fact, not sell one approach or another.

Flatpak splits a package into three parts: the program itself, a runtime, and an SDK. The runtime includes common libraries a program usually requires, and the SDK includes their development counterparts.

A Flatpak program is pinned to a runtime version, e.g. org.freedesktop.Platform//18.08 (with 18.08 being a major version), which receives security updates periodically. When a user installs a package, Flatpak installs the program and its runtime into the Flatpak directory.

If a program only depends on libraries in a runtime (and doesn't require any bundled dependencies), then the packager won't need to worry about upgrading those libraries at all, as updating the runtime will update those dependencies.
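That pinning is visible from the standard flatpak CLI (guarded so the commands no-op without flatpak; the GIMP app ID is just a common example and may not be installed):

```shell
EXAMPLE_APP=org.gimp.GIMP   # example app ID, may not be installed
if command -v flatpak >/dev/null 2>&1; then
  # The "Runtime:" field shows which runtime branch the app is pinned to:
  flatpak info "$EXAMPLE_APP" || true
  # Runtimes installed on the system, updated independently of apps:
  flatpak list --runtime || true
fi
```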

> and run the update command whenever I remember, once every few weeks.

Yeah definitely, I just like not having to think about running the apt update commands. It's not a huge difference.

You can definitely set up automatic updates. I don't know the method off-hand but I've done it (on fedora) in the past.

unattended-upgrades :)
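On Debian/Ubuntu that boils down to two APT settings (written to a scratch file here; the real location is /etc/apt/apt.conf.d/20auto-upgrades, and `dpkg-reconfigure unattended-upgrades` can write it for you):

```shell
# Scratch copy of the two lines that enable daily list refresh and
# automatic security upgrades via the unattended-upgrades package.
cat > 20auto-upgrades <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
cat 20auto-upgrades
```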

I just wish they separated out the functionality of app portability, app sandboxing, and app installation. It’s much less debuggable and comprehensible packaged together. Kind of reminds me of systemd, actually.

I don't think you can separate sandboxing and installation. Any nontrivial and useful sandboxing relies heavily on how / where the app lives and how it communicates with the world. This includes things like dbus paths too, not just file paths.

Well, it’s a problem the floss community will have to deal with sooner or later—it feels pretty awkwardly papered over with snap.

The other way is to rely on labels rather than names. But I don't see every distro switching to selinux with per-flatpak/snap policies, so it's going to be have to be joined installation and sandboxing.

Err, I'm still having issues grasping the problem—why not just enforce portable installation and locally writable files? There's no reason user-facing apps need to be installed to anything other than a subdirectory of home, there's no reason to locate the app resources anywhere but as a subdirectory of the app installation, and the XDG filesystem standards for writing seem pretty solid at this point. You could then restrict all access by default and just prompt when it attempts to use a resource.

I was originally very hopeful for snaps, but my experiences with them have been consistently painful. Whether that's due to a fundamental issue, or just a new ecosystem that hasn't matured yet is beyond me.

If you don't care about user control, why would you be using Linux at all? When you hit a bug in a flatpak or snap, what are you going to do? E.g. on Linux you now just can't input Japanese text in Slack and there's no way to fix this.

> If you don't care about user control, why would you be using Linux at all?

I mean, that isn't what I said necessarily.

I use Linux because Windows is bloated, slow garbage and OSx is crippled, locked-down Linux with a UI I'm none too fond of.

I'm not a power-user. I run Ubuntu with i3, that's about the depths of my linux-wizardry.

> When you hit a bug in a flatpak or snap, what are you going to do?

Install the apt repo or build from source I suppose?

Could you elaborate on the Slack-Linux-Japanese thing please?

I have no problems with it, and was wondering if maybe an update broke it?

See e.g. https://forum.snapcraft.io/t/cant-use-input-method-in-snap-a... . Certainly it was broken for years, and if it's been fixed then that's only very recently.

> E.g. on Linux you now just can't input Japanese text in Slack and there's no way to fix this.

Say what

fcitx, ibus...

I still love flatpak, and much of what this article states is hostile and outdated. The "XDG portal" specification makes broad filesystem access less common than before. Also, "fakepak"? Really?

I don't understand why people favor Snap or Flatpak to Appimage, which looks to be the simplest of the three. Would anyone mind sharing their experiences?

My experience is that AppImage solves problems of Flatpak and Snap, but doesn't introduce any problems from these two.

I don't see why AppImage wouldn't be able to be updated if support for this would be added to the app itself.

When trying to download software that doesn't exist in my distro's repo, I always look for AppImage first.

Because AppImage doesn't solve the problems.

- No sandboxing

- Doesn't solve the "run everywhere" problem: AppImages are usually compiled against Ubuntu LTS releases, which aren't 100% binary compatible with other distros, e.g. Fedora

- It's a game of bingo whether the AppImage has a given library bundled or not, and whether the app will work with your host's version or not.

- Performance: AppImages are a compressed filesystem that gets mounted by FUSE each time the app is run, which is slow. Subsequent launches of the app aren't even faster, because we get fresh inodes each launch.

- Bad system integration; appimaged is an abomination.

- AppImages promote a bad security practice of marking binaries from the web as executable, let's not replicate the main way malware is distributed on Windows.

Appimages don't update automatically.

And they aren't integrated in to the OS like snap or flatpak. I can install/update a flatpak and it shows up in the program menu. For appimage I have to open my filemanager and click on the program like a prehistoric windows user.

Also last time I read the OP article the tldr is "Flatpak has security options but many packages turn them off making them exactly as secure as appimage/binaries from a package manager"

The appimaged daemon automatically generates desktop entries for AppImages placed into a designated directory:


The main problem with AppImages is that there is no easy way to update them in bulk.

I think this is something that can eventually be fixed. Indeed, I'm on Linux Mint and the integration is actually pretty good. When you first run an appimage it asks if you'd like to add the program to the start menu. Icons are correct and everything.

I can also imagine some possible ways to allow appimages to auto update, though I suspect there'd be a lot of resistance to that.

I'm not really aware of a single way AppImage would be better than Flatpak. To get integration and updating working you would need to install a tool, and not having to install something is the only benefit I see AppImage having over Flatpak. It makes it easier to quickly download and run something, but it's less convenient in the long run.

Also in the future I can see apps being built to actually use the permissions system in flatpak so it doesn't have to be disabled. Flatpak is a step forwards for linux where as appimage is restoring bad practices from windows.

I've been thinking about this, and please correct me if I'm wrong, but I think it's a good choice for certain classes of closed source apps. For example, I'm building a video game, and I'm going to distribute it through Steam. The Linux version is going to be an appimage to guarantee maximum compatibility. Flatpak and Snap make no sense in this context. Auto updates are handled through Steam, as is integration with your system like creating icons and what not.

Also, from the research I have done (which I'll admit is inadequate), the process of building an appimage seemed more straight forward than either flatpak or snap.

I'm not really sure how a flatpak in this case would be any better than just a binary of the game with all its libraries included. I'd like to see Steam distributed as a flatpak with access cut off from the rest of the computer, so malicious game DRM can't scan through your files.

I've found appimages to be a bit of a pain in the past because they're portable, so you need to have a place to put them

As opposed to what? To how I ran GIMP in the past? Certainly not.

> Look ma, I've discovered a program can do nefarious things

What else is new?

I mean, there's an argument that this is worse. If a security bug is found in libc, in the "old world," your distro would have an update out in a day, and you'd be completely safe. With flatpak, you also depend on whoever's maintaining the GIMP flatpak (and all your other apps), who may not be motivated to do updates any faster than GIMP's normal release schedule, potentially leaving you vulnerable for much longer.

> your distro would have an update out in a day, and you'd be completely safe

If you do not restart your services after updating the package, your services continue to run against the exact libs they were started with, even though the library is updated on disk and anything new that starts will use the fixed version. For something like the libc, that is pretty much every service bringing it in and still running the unpatched version of the library.

Updating distro packages is of course usually a good thing, but alone it does not make you "completely safe".
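One way to check for that "patched on disk, stale in memory" case: lsof marks mappings of deleted (i.e. replaced) files with DEL, and on Debian-family systems `needrestart` automates the restart decision. A guarded sketch:

```shell
STALE_PATTERN='DEL.*lib'   # deleted-but-still-mapped library entries
if command -v lsof >/dev/null 2>&1; then
  # Processes still mapping a library that was replaced on disk:
  lsof +c0 -n 2>/dev/null | grep -E "$STALE_PATTERN" || true
fi
# On Debian-family systems, `sudo needrestart` lists services to restart.
```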

Also they have a patched version "in a day" because they embargoed public release of the vuln details usually for months, creating a gap where a (growing...) set of people know a secret that can compromise your system and you don't. When I get that update I don't feel "completely safe".

libc is part of the Freedesktop runtime, so the GIMP Flatpak doesn't need to update.

And for a lot of smaller apps, they're unable to realistically support being distributed in many different distributions. Flathub is cross-distro and easier for the app author/developer/maintainer to support.

Is it perfect, no... is it actually much more secure, not usually. All of that said, is it available in pretty much every linux distribution and often a newer version than what would be in the repositories, absolutely.

This is also the case for server containers, and I do worry about the day that some critical low level library that is in thousands of containers has a security bug. Yes, many shops are using CI to push out new containers regularly. Others are deploying containers from third-parties, without any regular updates or path to fixing security problems.

> > Look ma, I've discovered a program can do nefarious things

> What else is new?

I think that this dismissal (which presents as a quote something that doesn't seem to appear in the original article) is too hasty. Surely exactly the same could be said of any submission involving, say, privilege escalation? Of course every program and means of distribution has security risks, and their existence isn't news; but documenting the specific risks is surely worth it.

Well, as every direct reply said, it is true that these app store models suffer from possibly outdated libraries with unpatched security bugs. But it's nothing new. Everybody knows that. It's the case with Docker and all other fat bundle formats.

> Sadly, it's obvious Red Hat developers working on flatpak do not care about security

Presenting it as a novel privilege escalation with such wording IMO deserves hasty dismissal. These kinds of "hype-ridden disclosures" (i.e., allocating a domain name) deserve to be ridiculed because they're meaningless distractions. See [1] for discussion.

[1] https://lwn.net/Articles/668695/

All the other formats talk about how easy they are to use, and when you mention security, they throw up their hands and say "I dunno". Flatpak, on the other hand, its entire shtick is how applications are sandboxed for security.

And flatpak is actually sandboxed for security. So what's your point?

The only news the author of this hype website told us is that

1) there was a vulnerability,

2) lots of application have broad privileges and

3) apps have bundled dependencies.


1) Bugs happen. Deal with it.

2) Flatpak actually tells you which privileges an app is going to use. What do you suggest as a solution? Is it a technical problem or a social problem? What is the state of the art? I'd say the state of the art is Android, in that it asks for permissions individually as they're required. Still, this problem is not a fundamental issue in Flatpak. It certainly doesn't deserve a "Flatkill" logo and domain.

3) Is Flatpak actually an exception to the rule?

- Making Python app? Use pyenv, pipenv, venv or ${popular_venv_of_the_year} and bundle all your dependencies!

- Making .NET app? Use NuGet and bundle all your DLLs!

- Making Java app? Use Maven, Gradle, SBT and bundle all your JARs!

- Making Rust app? Use Cargo and glue everything together.

- Etc. Etc.

So yeah, for every programming language we're encouraged to treat our dependencies as unique, fix their versions to prevent possible breakages and take the responsibility for monitoring security issues. Developers just like it. I don't like it but it's everywhere.
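A one-file illustration of that pinning habit (a hypothetical project): a frozen Python requirements file, where every security bump now depends on the app author editing the file and re-releasing:

```shell
# Hypothetical frozen dependency file: exact pins, so security fixes in
# these libraries reach users only when the author bumps the versions.
cat > requirements.txt <<'EOF'
requests==2.22.0
urllib3==1.25.8
EOF
grep -c '==' requirements.txt   # number of exactly-pinned deps; prints 2
```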

And as other posters said, Flatpak actually have runtimes which carry core dependencies like libc that get updated independently of the app.

So what is the novelty here deserving the flatkill domain? Where is that juicy security vulnerability?

Aside from security updates being delivered somewhere between late and never: if they are ever compromised, 1/3 of users will be affected in the first hour and 100% within 24 hours, even if they discover it and take it down in the 25th hour.

In the old-school situation, distributions would still take days to weeks deciding to pull in a new version released through normal channels.

So Flatpaks also install under /var? That's horrible, I thought it was only Snaps that made that decision :( Does anyone know what the official reasoning for this is?

It's mind boggling. The entire premise of being "sandboxed" means unable to escape the box and access the rest of the system. By that, one would think a Flatpak or any similar package would keep itself contained to the user's home directory, so that even if it broke out of the sandbox its path of destruction would be contained to the user's files (still not a great thing but not system-owning).

Where the app is installed and what it can access are separate things.

Also, it's fairly easy to reinstall the OS but people are more concerned about their home directories being damaged or exfiltrated.

The safety of your home directory relies on the integrity of the operating system.

The point is that if you run an app with FS access to your home directory, you are already screwed. OS integrity does not matter in that case.

> So Flatpaks also install under /var? That's horrible


At a minimum they need access to a directory the user cannot subvert. If they were entirely contained to $HOME, then an attacker who found an unlocked shell can subvert all the apps. If the packaging format allows running apps or daemons then that means a system compromise.

It is also extremely common for $HOME to be a network drive or even a USB stick, so better to cache stuff in /var which can be assumed to be local storage.

Flatpaks use a git-like tree of hashes, which can't just be shoved into random places on your filesystem. It has an exports directory you can symlink or add to your path.

In that case it should be in /opt, where application bundles not managed by the system's primary packaging system belong.

This isn’t technically correct when Flatpak is installed via your distribution. Then it is managed by your packaging system, and its data should live in /var/lib.

/opt is for software that doesn’t adhere to the FHS and can’t/shouldn’t be installed in a prefix. Red Hat and Arch install software to /opt via their official package managers. /opt isn’t a safe place to stick arbitrary software, even though it’s commonly used for that — /opt/local is the non-FHS equivalent of /usr/local.

> This isn’t technically correct when Flatpak is installed via your distribution. Then it is managed by your packaging system, and its data should live in /var/lib.

Besides, this is silly: they go in /var/lib/flatpak, which is for flatpak's program state information. (Exported desktop files and whatnot go to /var/lib/flatpak/exports/, which is pointed to by `$XDG_DATA_DIRS`).

I mean, if you want to go all Militant Unix Admin on it and install apps to /opt, you can `man flatpak` and set `FLATPAK_SYSTEM_DIR=/opt/flatpak`, but unless your system spans a dozen hard drives I'm not sure what that's going to achieve.

I guess this is sort of an ownership conundrum: a flatpak app is run _via_ flatpak. On its own, it's just a heap of files.
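For anyone who does want the system-wide store somewhere else, `man flatpak` documents an environment variable for exactly this. A minimal sketch (the /opt path is just an example location, not a recommendation):

```shell
# Relocate the system-wide Flatpak store (default: /var/lib/flatpak).
export FLATPAK_SYSTEM_DIR=/opt/flatpak

# Subsequent system-wide flatpak operations then use that directory, e.g.:
#   flatpak install --system flathub org.gnome.Maps
echo "$FLATPAK_SYSTEM_DIR"
```

Per-user installs (`flatpak install --user`) live under `~/.local/share/flatpak` and are controlled by the analogous `FLATPAK_USER_DIR`.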

Agreed, that's what /opt is for.

Or you can install into your user directory; the current flatpak CLI asks you which to use, and you can also set per-user installation as the default.

Independent of the technical issues being discussed, I find this site an interesting example of the politics around open source development. It is run anonymously (out of Albania, presumably to avoid whois lookups) and does not link to responses or opposing points of view. In addition, it is posted regularly on various technology sites such as Reddit, HN, etc.

Flatpak mixes together application packaging, sandboxing, desktop integration, and an update mechanism. I do not understand the need to have all four in the same tool. I even think it limits the usefulness of each component.

Would a combination AppImage, firejail, and a couple of shell scripts be worse?

Much worse. I encourage you to try some properly sandboxed Flatpak apps. Here's a fun one:


Note that the app has no access to your home directory by default, and this fact is invisible to the end user. The file open and save dialog is handled out of process by a portal, which is a service that allows an application to (with permission) poke holes in its sandbox. By tying a few different things together, this is seamless: the user gets a file chooser dialog and chooses a file, as usual. (As a bonus, it's always a native dialog). Behind the scenes, the application is granted access to read or write to that file.

I don't think this would be easy to manage if Flatpak was split into a bunch of different moving parts, and I can't imagine it providing any meaningful benefit.

With that said, it isn't as monolithic as folks seem to think. It's built on bubblewrap and ostree and a whole bunch of freedesktop specifications.
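The static holes in a given app's sandbox are also inspectable and adjustable from the host. A sketch, assuming flatpak is installed (the app ID `org.example.App` is a placeholder for a real installed app):

```shell
# Inspect and tighten a Flatpak app's sandbox from the host.
# org.example.App is a placeholder; substitute an installed app ID.
if command -v flatpak >/dev/null 2>&1; then
  # List the app's static permissions (filesystem, device, socket holes):
  flatpak info --show-permissions org.example.App || true
  # Revoke home-directory access for this user only:
  flatpak override --user --nofilesystem=home org.example.App || true
else
  echo "flatpak not installed"
fi
```

Files granted through the portal's file chooser are exceptions to this: they are exposed one at a time via the document portal, not through static overrides.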

> The criticial vulnerability "shell in the ghost" was fixed in flatpak about one month after linux distributions.

I can't find any reference to that vulnerability except this page.

The name isn't an exact match but maybe it's this glibc vulnerability:


Edit: nope, this is Ghost in the Shell, from last February


I don't think either one is right.

That first one is from Jan 2015, but according to Wikipedia the first release of Flatpak was Sept 2015. The second one doesn't even seem to be a vulnerability, but rather the general concept of web shells. One of the vulnerabilities mentioned in the article (CVE-2019-0604) is for Microsoft SharePoint, so that's unrelated to Flatpak. Also note the difference in order of the words "ghost" and "shell".

Packaging your software in Flatpak/snaps/appimage:


You have 100% control of your dependencies.


You have 100% responsibility for your dependencies.

In short: more control implies more responsibility.

The moment you delegate your security or stability concerns to a third party, all bets are off. Period. Package maintainers, RedHat, whoever.

Maybe the patch fixes a vulnerability, maybe it creates one. Maybe the company pushing security is really pushing mindshares of ecosystems for subscription revenue and industry influence.

Build from source; keep your packages as minimal as possible; keep your permissions as granular and restricted as you can manage; and pray to your gods of choice.

I can't understand why people are so anxious to embrace so much unneeded complexity. Things that "just work" always just work until they just don't.

Often because you only have so much time in the day, so you can't actually verify all the code and hardware.

Once you realise you're going to need to deal with running stuff you don't entirely trust, it makes some sense to extend this beyond the bare necessities and dip your toe into productivity improvements.

Problem is, much of the industry is full of charlatans that copy and buy as opposed to understand. You end up with orgs that spend $20 million on security products but refuse to pay for the expertise to configure them.

The reality is that we have a lack of skills, especially around InfoSec, and most people just mimic what they've heard rather than really thinking about it. On average, companies don't compete for talent, and they don't have enough skills and discourse to hire better than they have anyway, so it doesn't really matter what they run.

I'm pretty sure "build from source" is out of reach for most people (from a technical ability standpoint), and for those who could do that, most have better things to do with their time.

If your threat model includes intentionally malicious packages from Red Hat, Debian, or whomever, OK, but understand that you're a minuscule fraction of a percent of the user base. And you'd better be auditing the source that you compile as well.

The phrase "threat model" gets thrown around, but I'm not actually sure "our threat model didn't include our OS vendor" would be considered an acceptable excuse.

Sure, we have reasons to trust Red Hat:

- You can look at their working practices ( Good coding hygiene, Peer Review, Tools, etc )

- You can make the commercial argument ( they make money by doing the right thing, it's not in their interest to hack us )

But ultimately we're still explicitly trusting 1000s of people here, so reality is much more nuanced.

- Harden your OS, i.e. only install the bare essentials.

- Monitoring for processes or networking traffic doing unexpected things.

Thus, you evidence that you reasonably trust the vendor, but you'd expect to see an alert were that trust to prove misplaced.

Probably a better use of "threat model" would be, we don't need to place trust in developer end user devices because we instead place our trust in our SCM, CI/CD and the SDLC process.

> The words "threat model" gets thrown around, but I'm not actually sure "our threat model didn't include our OS vendor" would be considered an acceptable excuse.

I can't recall any threat-modeling session I've been a part of, or any infosec security professional I've talked to, who has seriously considered the OS vendor as a threat. "Our threat model didn't include the OS vendor" absolutely is an acceptable excuse.

Your mitigations (only install bare essentials, network monitoring) are good advice in general, and people should be doing those things regardless of any perceived threat from the OS vendor.

Aside from that, really the only way you can absolutely harden yourself assuming a hostile OS vendor is to... not use an OS, and build your entire system from scratch. Which, again, sure, might be a perfectly reasonable thing to do for some ultra-sensitive applications, but -- again -- that is a teeny tiny fraction of a percent of what's actually happening out in the world.

First point, depends on what you work on I guess. Most of the world doesn't patch, but that doesn't mean managing your dependencies isn't important.

On the last point, doing it all yourself probably isn't actually better. Most people don't have the time in the day to invent the universe, so you're always back to figuring out how you establish trust.

That it is difficult and most people have "better things to do", fine, I can agree that this can't work for like, all computer users.

But if you really care about security, and if you are managing a linux OS where you have a lot of deep access to the system, I think saying "I have better things to do with my time than understand the system I'm managing; that's what vendors are for" is a lazy perspective.

Consider this: https://github.com/systemd/systemd/issues/6237

Order of events here:

- RedHat creates systemd to handle init and daemons, something that used to be handled mostly by shell scripting and much simpler tools.

- Debian and RHEL based systems adopt systemd. That's most Linux systems.

- About 6-7 years later, an issue is noticed and reported by a user. If a user creates a service to be run under a username beginning with a number, which some distros allow (not RHEL or Debian though), systemd validates the name (why?) by matching it against a regex.

- If validation on any field fails, they categorically ignore that field and run the service.

- In this case, because it is the user name, it runs the service as root instead of as that user.

- Their response? "This is how the system handles all fields. We do it for consistency and compatibility. Closed."

- That was 3 years ago. This was never fixed. "It's not a bug! Just don't give systemd a potentially valid user name that looks like that!"

So using the daemon system promoted by RedHat (who we trust!) has created a situation where a process--possibly a server or something public facing--could be run as root without you immediately realizing it. You don't care to understand your system, but you do have to operate it. RedHat won't do that part for you. So, you could create these conditions unknowingly.
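A minimal sketch of the failure mode (the unit file, daemon path, and user name are all hypothetical, following the report in systemd issue #6237):

```ini
# Hypothetical unit file sketching the reported behavior. On distros that
# permit it, "0day" is a creatable user name, but systemd's own validation
# rejects names starting with a digit.
[Unit]
Description=Example daemon

[Service]
User=0day                          # fails validation, so the field is ignored...
ExecStart=/usr/bin/example-daemon  # ...and the daemon starts as root instead
```

The point is not that this exact unit exists anywhere, but that a syntactically valid configuration can silently produce root execution.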

It wasn't malicious, but what's important is that it isn't necessary.

Look at what kinds of questions the RedHat maintainers asked.

"I wonder what tool allowed you to generate this file?" A totally useless question, given these files are frequently generated manually. "It's the tool's fault! Whatever it is. Even though it might be a valid username on your system."

A better question might be "Why do we have to validate/parse the username field at all?" After all, parsing bugs are well known for creating security vulnerabilities, and if the name is invalid, then there will be no such user, and the service will fail in the same way it does when the username string is syntactically valid but there is simply no user. In short, they have a product that "helps you manage your OS; and from RedHat (who we trust!)", yet we find that it increases complexity; does unnecessary work; and results in a bug that could allow a process to run as root when it should not.

This scenario is so similar to the flatpak sandbox issue. "It's not a bug. It's just that most people are misconfiguring it." Fine! But you chose to embrace the added complexity of a system (flatpak) that you just don't actually need. It has pros and cons. It's not a matter of trusting RedHat not to be malicious--it's trusting them to make that judgement for you. Their judgement is not perfect.

There is a medium that exists between "audit every line of source in the system yourself" and "trust implicitly everything a vendor says and does". I think that RedHat is reputable and trustworthy, and they are trying to push Linux forward, but they are simultaneously reinforcing their position as industry experts through creating software that "eats linux". It's not malicious, it's just the nature of their existence. Your interests and their interests are not the same, so it makes sense to consider things for yourself.

> Yes, it's possible your linux box has been compromised if you use flatpak, we are literally talking about several months old public exploits. Ever opened an image in flatpak Gimp? The criticial vulnerability "shell in the ghost" was fixed in flatpak about one month after linux distributions.

There were some heavily up voted comments and posts on hn this year about how much people hate how snaps auto update themselves. Does this change anyone’s mind, or do those points still stand?

Those points are completely unrelated to flatpak taking a long time to release patches. Whether flatpak forced updates or not, from TFA it seems as though it's still months behind Linux distros.

This article is written in bad faith.

RE: The sandbox is a lie. No it's not. The technical capabilities are there.

The problem is that the application needs to be written a certain way in order for the sandbox to be enforced without breaking the app. This is not a technical problem but a political and resource one: how do you incentivize developers to rewrite their apps so that they are compatible with that? Given that there is very little money to be made in Linux applications, I can see why developers are not rushing to rewrite. Hence flatpaks allow for an escape hatch in the sandbox so existing apps continue to work. Even Google, with all of its influence on the Android ecosystem and all the money involved in it, had to back off on scoped storage last year because of the developer outcry. Flatpak is moving in the right direction, but because of lack of resources it'll take more time to get there.
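That escape hatch is concrete: an app's static permissions are declared at build time, so a legacy app can simply punch the holes it needs. A hypothetical flatpak-builder manifest excerpt (app ID and runtime version are placeholders):

```yaml
# Hypothetical flatpak-builder manifest excerpt; app-id is a placeholder.
app-id: org.example.LegacyApp
runtime: org.freedesktop.Platform
runtime-version: '20.08'
sdk: org.freedesktop.Sdk
finish-args:
  - --filesystem=home   # broad hole: the app sees the whole home directory
  - --socket=x11        # X11 provides no isolation between clients
```

An app packaged like this technically runs "in the sandbox" but effectively opts out of it, which is the gap between capability and practice being argued about here.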

RE: You are NOT getting security updates

Again the author mixes technical capability with the limitations of a volunteer-driven ecosystem to arrive at a bad conclusion. In 99% of cases flatpaks are provided by third-party volunteers. They have no obligation to stop their lives in order to test and provide timely updates. This is also true for normal deb/rpm packages, by the way. Maintainers of complex packages such as calibre are unsung heroes. However, flatpak's architecture allows software vendors to host their own repositories, and it's easy for users to subscribe to them. Hence if a software vendor wants to provide timely updates to their users, the capability is there and waiting.

RE: Local root exploit? Minor issue!

Hello, nitpicking. thayne did a better job than I could in their post (https://news.ycombinator.com/item?id=23523887). At the end of the day an update was provided for users addressing this. As a user I don't care about how the update was sold. I saw an update available and I updated my system.

TLDR: Flatpak's technical capabilities are fine (or moving in the right direction). The ecosystem does not have the resources to embrace those capabilities fast enough, and that's being used to diminish the technical work being done.

This is a tool. How good it is, is measured by how useful it is to me. If it does not provide something I need it to do, then it's less good for it. If it fails to provide the services it advertised as providing, then it's straight-up bad. I don't care why this happens. Might be technical, might be political, might be resource based. Doesn't matter. The tool either does what I need it to or doesn't. And flatpak does not provide sandboxing or security updates, and that it doesn't do this is independent of whether it's a technical or non-technical problem.

It's one thing to criticise flatpaks on the grounds of not getting the job done because nobody uses it properly and another to say there's no sandbox.

I will use appimage. I refuse to use flatpak or snaps. That's my take as a linux user and admin.


I get the optics aren't great.

But, if you check the source, this is a single HTML + CSS page with zero scripts or forms. Does it matter, technically?

Considering ISPs and Nation states have injected additional scripts into plain HTML payloads, it does matter, technically.

It really, really does.

How? I don't trust the site's author, so from a security standpoint there's nothing gained by distinguishing him from an impersonator.

Do you trust your ISP/Government/Router to not inject spyware/malware into the pages you visit?

That's literally what I just said. I don't trust the site's author any more than I trust an ISP/Government/Router.

I have no reason to trust this site's author any more than I would trust someone impersonating this site's author. What does https gain me?

No one should be linking to N-Gate about an issue like this, and it's embarrassing to see that this page was put up as recently as 2017.

It's just repeating "but I don't need it" over and over, while occasionally breaking off to make mind-bogglingly silly claims like "the security of the things I build are someone else's problem", and "we should just magically fix transport-layer bugs clientside."

And a few unsubstantiated jabs at LetsEncrypt for good measure, because pretending that everyone else is terrible at their job is a lot easier than paying even the slightest attention to what's been the general consensus of the entire Internet security community for over a decade.

I apologize for being a little more blunt and snippy about this than might be necessary, but seeing articles like these tick me off in a weird way (which I'm sure N-Gate would regard as a source of pride). It's a good reminder that people can make just about any poorly-thought-out unsubstantiated argument sound reasonable by just adding a lot of snark and then hoping that readers won't realize there's nothing logically coherent behind their "isn't everyone else except me stupid" hook.

I am sure that N-Gate does a lot of amazing work outside of their blog, and I'm sure that if I met them in person I would think they were very smart and charming -- but would it kill them to occasionally post anything on their site that isn't just a bunch of flippant contrarianism disguised as technical discussion?
