Malware Found in the Ubuntu Snap Store (linuxuprising.com)
342 points by dafran 10 months ago | 220 comments



There is no review process, and no central restrictions on who can upload to the Ubuntu Snap Store, so in a sense this isn't surprising. https://docs.snapcraft.io/build-snaps/publish

Does the name "Ubuntu Snap Store" carry a connotation that code is reviewed for malware by Ubuntu, the way that the Apple, Google, Amazon, etc. mobile app stores are? Or does its presence in the software center app imply that it's endorsed by the OS vendor?

I was at a PyCon BoF earlier today about security where I learned that many developers - including experienced developers - believe that the presence of a package on the PyPI or npm package registries is some sort of indicator of quality/review, and they're surprised to learn that anyone can upload code to PyPI/npm. One reason they believe this is that they're hosted by the same organizations that provide the installer tools, so it feels like it's from an official source. (And on the flip side, I was surprised to learn that Conda does do security review of things they include in their official repositories; I assumed Conda would work like pip in this regard.)

Whether or not people should believe this, it's clear that they do. Is there something that the development communities can do to make it clearer that software in a certain repository is untrusted and unreviewed and we regard this as a feature? The developers above generally don't believe that the presence of a package on GitHub, for instance, is an indicator of anything, largely because they know that they themselves can get code on GitHub. But we don't really want people publishing hello-worlds to PyPI, npm, and so forth the way they would to GitHub as part of a tutorial, and the Ubuntu Snap Store is targeted at people who aren't app developers at all.


I like Arch's package management model, where sources are split between the official repositories, which are manually approved, and the AUR, which everyone knows is not officially endorsed or reviewed, and where you're expected to check the sources and PKGBUILDs for anything sketchy before installing.

The processes for installing from the two are also different enough that the user can't mistake one for the other: official packages are a pacman -S away, but installing from the AUR requires either a git clone and a makepkg -sri, or an AUR helper that bugs you to review the PKGBUILD.
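As a rough sketch of that manual flow (the package name here is only an example):

    git clone https://aur.archlinux.org/yay.git
    cd yay
    less PKGBUILD      # review what it fetches and runs before building
    makepkg -sri       # -s: pull build deps, -r: remove them afterwards, -i: install the result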


Also, compare the wording on the snap store:

> Safe to run - Not only are snaps kept separate, their data is kept separate too. Snaps communicate with each other only in ways that you approve.

Versus the AUR:

> DISCLAIMER: AUR packages are user produced content. Any use of the provided files is at your own risk.


And then a Snap doesn't behave as the user expects (e.g. like a "native" application), and the first solution on Stack-whatever is to install it using the "--classic" switch. Whereupon, there goes your sandboxing.
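For reference, that usually looks something like this (the package name is just a placeholder); the flag drops the snap out of strict confinement entirely:

    sudo snap install some-editor --classic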


The difference being that Snaps always run in a semi-encapsulated environment (a container), whereas the AUR just executes in whatever security context you're issuing commands from. PKGBUILDs are arbitrary shell scripts and they can do anything that the user executing them can do.

I'm not trying to defend the perception that Snaps are immune from malware, but there is a real difference in the default safety of a package off the Snap Store and the AUR.


If the application is trustworthy, it doesn't matter. If not, you should think twice about running it even in a container.


To be clear, I agree. Containers on Linux are very weak security boundaries and should not be considered safe sandboxes for untrusted or dangerous code. In fact, post-Spectre, only physically independent hardware unattached to the network should be considered a reasonably safe sandbox.

However, something is better than nothing, and it's just not true that there's no difference between running something from the AUR and running something in a "confined" snap. There is some crap in the way at least.


Good point. Though the fact that you read your PKGBUILDs before running them (you do read your PKGBUILDs, right?) at least compensates for this.


> Snaps always run in a semi-encapsulated environment (a container)

Even with the `--classic` switch?


I wasn't aware of the "classic" switch; it looks like they added it early last year. They appear to call this "classic confinement mode", and it sounds like it functions essentially like a normal package manager, though they insist on saying it's a "relaxed security model" instead of "no additional security model at all", which appears to be the truth of the matter.

Their site claims that only pre-vetted Snaps can be distributed with "classic confinement", so that's something at least. If that's true, it would allow the comparison between Snaps and the AUR to hold -- Snaps would either be pre-vetted and akin to official package repositories, or unvetted but executed within containers (which is still not really an ironclad security guarantee, but better than nothing).

It is deeply sad to see something that supposedly exists to facilitate and promote a sandboxed distribution model give up and cop out so blatantly though. They should've just named the flag "--make-snaps-worthless-you-should-be-using-apt-instead".

Is there a technical reason to prefer "classic" snaps over packages from the official repos? It seems like the default install path may be different and the libraries/installed files possibly better segmented on the filesystem, but ultimately that's little consolation.


> ... and the AUR, which everyone knows are not officially endorsed or reviewed ...

Uh, not everyone. I ran Manjaro for a bit and found that many of the things I ran were available via the AUR. The usual thing I'd find in a search was a reply like:

    sudo pacman -Sy
    sudo pacman -S yaourt base-devel
    yaourt -Sy
    yaourt -S gpodder

(That's the entire reply, BTW.) At some point I started to wonder what the provenance of these packages was and what the security implications were. I might have looked for information on the security risks of these packages, but this is the first concrete claim I recall seeing about the subject. Probably a good thing I'm not running Manjaro any more.

I do run Ubuntu and have some snaps installed (Go and VS Code, among others), and I'm now wondering whether it would be possible for a malicious developer to substitute compromised snaps for the official ones. My understanding is that they update silently and automatically, so I wouldn't even know about updates if I didn't check logs.


You can't install unofficial packages via pacman. And AFAIK none of the unofficial/AUR package managers, like yaourt, are in the official repositories, so the point where you cross the line is very clear and distinct, and any wiki guide to installing from the AUR makes it clear you are going into unofficial territory.

I haven't used Manjaro but they seem to intentionally hide the distinction between the official and unofficial repos, which is a bad idea.


Also, yaourt has a lot of warnings and prompts at each step along the way to make absolutely certain that you understand what you're getting yourself into with the packages you're installing. The process is certainly not for beginners. Ubuntu seems to go for a more user-friendly process instead.


In Ubuntu, it's even less obvious. Main, restricted, and universe are all checked together by apt, and treated the same.


But main/restricted and universe are both vetted—the only people who can upload to universe are Debian developers (indirectly, via Ubuntu importing from Debian) and Ubuntu developers, and the process of becoming either of these is nontrivial.

Ubuntu's equivalent of the AUR, if I understand the AUR right, is PPAs, which are definitely not enabled by default and are fairly obvious about their third-party-ness.

(Main vs. restricted and universe vs. multiverse is just about licensing, not access control or vetting.)


It took me a good amount of googling to verify that, but you're mostly correct: there are only 132 developers with upload rights to the universe repository. Though I would argue that the distinction isn't just licensing, since Canonical themselves only support main and restricted.


Universe is Canonical's dumping ground. There are millions of vulnerable Redis instances in production today because Ubuntu doesn't feel inclined to issue an update for a major CVE affecting the redis-server package shipped for 14.04.

There are multiple layers of problems here, because as we see with Ubuntu, just because the code was built and uploaded by a trustworthy person doesn't mean it's automatically safe or secure (especially for more nefarious infections that bury themselves deep in the source tree). Remember the pwnage that brought down kernel.org for a few months? That was only a few years ago, but the infiltration had been quite serious, and if I recall correctly, there was some concern that infected code had made it into officially distributed tarballs.

In practice, I don't know that the distinction between distributing packages that are trivially exploitable and distributing packages that have exploits pre-baked in is really that big of a difference. Automated scanners often pick up and automatically exploit exposed instances within a few days.

What it boils down to is that admins need to take responsibility for their own workload and what they choose to execute, no matter the guarantees of the distributor or the claims that $Sandbox_Y is magically impenetrable (which was silly enough before, but completely farcical in a post-Spectre world).


I mean that the distinction between main and restricted and between universe and multiverse is licensing. It's a 2x2 matrix:

              supported   only community
    free       main        universe
    non-FOSS   restricted  multiverse


Oh yeah, definitely misread that


When I began exploring rebuilding an Arch install from source using ABS, it all seemed to blindly trust everything coming from the Arch repos as not being compromised. There was zero signing of anything. I had hoped the package maintainers responsible for the housekeeping of all the associated metadata would have been signing it all with their respective keys.

If someone were to compromise an upstream Arch server I suspect it wouldn't be especially difficult to inject malware or trojans somewhere even those building from source would receive.


I'm pretty sure all packages in the official repositories are signed:

> Official packages: A developer made the package and signed it. The developer's key was signed by the Arch Linux master keys. You used your key to sign the master keys, and you trust them to vouch for developers.

source: https://wiki.archlinux.org/index.php/Pacman/Package_signing


I'm not talking about the binary packages.


A pop-up of the PKGBUILD is almost worthless. It would require the user to personally examine, at the very least, the sources the PKGBUILD is pulling from and the PKGBUILD script itself, and to be able to spot malfeasance, including subtle attempts.

Since skipping that is only a few clicks away, and sufficiently subtle attempts are unlikely to be noticed even by observant parties, this is about as bad as the Windows "hunt down an exe" model, which has been proven for decades NOT TO WORK.

The AUR isn't filled with malware only because Arch is a very small target compared to Windows and its user base is full of observant people.

It cannot possibly scale even to the levels Ubuntu aspires to achieve.


Well, as a lazy Arch user I installed pacaur and just use official and AUR sources without much checking. It's just convenient that there's an AUR package for everything.


Be warned that malware like this is in the AUR all the time. It's so common it's not even newsworthy. They are usually pretty good at handling it though.


I've never heard of this happening and I can't find a single occurrence of it. I mean, I agree with "don't blindly trust everything in the AUR", but this seems wrong.


I maintain 14 AUR packages and have also never heard of this happening.


Also, the AUR community seems to be very active.


Any sources/articles for this?


citation needed


The AUR is really handy, but you do need to be careful. Arch does not pretend to hold the user's hand, and you're not likely to get much sympathy from the community if you get bitten by recklessly installing stuff from the AUR.

Also, as far as I know, pacaur is no longer maintained. I switched to trizen, which prints the PKGBUILD on screen before allowing the user to opt in to executing it. Not going to pretend that I always review the PKGBUILD thoroughly, but I do generally skim them, applying more scrutiny as packages become more obscure.


For packages with many votes this is somewhat fine, but you should still skim the PKGBUILD as the maintainers of even popular packages may change in time.


I'd recommend checking both PKGBUILD and clicking "View Changes" to see who (and what) the last few authors have been up to.

It's relatively common for people to be added as co-maintainers after posting even just one helpful comment (!) in an unpopular package, so it's worth double-checking to make sure a big change hasn't been made recently without the author's permission.
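Since every AUR package is just a git repository, the same check works locally too; a quick sketch (the package name is only an example):

    git clone https://aur.archlinux.org/yay.git
    cd yay
    git log -p -- PKGBUILD    # who touched the build script recently, and what they changed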


If this is your means to secure your system you may be in for a rude awakening.


> Does the name "Ubuntu Snap Store" carry a connotation that code is reviewed for malware by Ubuntu, the way that the Apple, Google, Amazon, etc. mobile app stores are?

As far as I know, Apple is the only company that manually reviews the code of apps, and even they let some (in my opinion) malware through [1]. Everybody else just does some heuristic anti-malware checking and then publishes the app.

1: Uber was permanently fingerprinting devices, even though Apple was disallowing this kind of tracking in their ToS.


Apple is reviewing code? I don't think the blob submitted to Apple includes actual source code. The way I understand it, they (briefly) tap through the app manually, and (like other stores) apply some automated heuristics on the binary.


Firefox addons are reviewed. That's why malicious firefox addons are a lot less common than malicious chrome extensions.


I can confirm that Mozilla's review process is the most sophisticated I've ever experienced. The good thing is that when they have something to complain about, you have a competent person on the other side who really helps you. Not like Apple, who only send you boilerplate in response to all your replies. The downside at Mozilla is that it takes them ages to even start the review; it once took me over two months to get my extension into their catalog.


Why do they allow so many extensions that rely on third-party services, though? I don't like this trend. Especially given there's no network sandboxing of any sort, giving permissions for an extension to access your data locally is very different from allowing it to send that data to a third-party, untrusted server.


Apple doesn’t get the source code of apps. They check what APIs you use so they can prevent you from using non-public APIs, and they run some cursory checks. But there is a lot of crap on the App Store.


I really wish we had app stores actually require vendors to submit source code and build instructions so that the app store would build it themselves and publish it. Something like F-Droid even if the source code is not publicly available.


It's difficult to get useful code review out of colleagues working for the same company. The idea that Apple et al should have a competent reviewer audit each submission is simply not a practical thing for any type of repo that accepts software developed by third parties.


Sometimes a small comment in HN makes one think in a whole new way.

I agree with you that useful code review is a tough nut to crack. Professional editors exist for writing, and science has the peer review process (also flawed).

Reading code is a whole different ball of wax from writing it (and from optimizing it, in some cases) - I can think of few people who are great at both. I have to wonder if we will ever get to the point where "review" sits in an outside role/function that isn't already overloaded (team lead, architect, management).

Does the fact that we don't have dedicated code reviewers speak to its immaturity or (in)effectiveness?


I assume other companies have testing processes that would pick up mining scripts like this. It sounds like "Ubuntu Snap Store" is similar to the AUR (which in fact can have lots of malware) in function. It's just the name is misleading.


I do tend to believe that the presence of a package in the Debian repositories is a limited representation of quality/review, as there is a package-maintainer and apparent community decision as to whether or not to keep it in the distro.

Is that perception correct?


That perception is correct. It's limited because in practice Debian developers (being almost entirely volunteers!) don't have the resources to read and audit every line in an upstream release, so an intentionally obfuscated backdoor from a previously trustworthy upstream would almost certainly get through. But the type of attack in this article, with a new binary and an unwanted line of shell script to run it, would be very unlikely to get through.

There's also a limited set of people who can upload new packages and a separate team that reviews those, so duplicated functionality / low-quality apps are unlikely to make it into the archive in the first place. Yet Another 2048 Clone would probably not be allowed in unless it was part of e.g. an official GNOME game set.

It also helps that Debian insists on recompiling everything from source and does not redistribute binaries from an upstream source, even if freely-licensed source code is provided.


Thanks a lot for clarifying these things. Do they do any identity verification so people can be held accountable after the fact if something shady were to be discovered?


Yes.

> All work in Debian is performed by developers that can be identified. For those using Debian to be able to trust Debian, we feel it is important that our users can identify those that are working on the project and that development is as transparent as is possible. [0]

I don't personally use Debian very much these days -- my desktops all run Fedora, my servers (with a few exceptions) run CentOS and RHEL -- but I used Debian exclusively for many, many years and out of all Linux distributions Debian (IMO, of course) comes the closest to doing things "the right way". In and of itself, that is pretty amazing, I think, considering that there isn't really all that much that has changed in its 25 year history! In other words, they somehow managed to get things right the first time around.

There are a few things that could perhaps be done a little better or different but -- considering that Debian is an all-volunteer project -- I think they manage to do an awesome job with the limited resources available to them.

[0]: https://wiki.debian.org/DebianKeyring


Yes, that's part of why Debian uses PGP keys for package uploads and insists that your key is signed by other Debian developers and not simply anyone in the web of trust. (I am aware of one Debian developer who contributes / is known to the community by a pseudonym, but I'm told that a few other senior people in Debian know this person's legal name for this exact reason.)


While full auditing is impossible for any distribution, Debian has multiple people eyeballing code.

Apart from the package maintainers and contributors, the Security Team can also review critical packages and step in if something looks suspicious.

But, most importantly, the release cycle and the long freeze before releases are all about STABILITY and SECURITY.

Anybody can upload backdoored code on npm/PyPI etc, infect someone and then remove the malicious release without being detected.

Releasing something malicious or with serious bugs before a freeze cycle and going undetected for months is not impossible but much more difficult.


Your perception is entirely incorrect. Debian maintainers don't have the time (or often the knowledge) to review upstream changes. Do you think the Debian Linux, GCC and Xorg maintainers exhaustively review and understand every patch? They don't.

Instead, the reason you don't see malware pushed to those repositories is because the incentives in the free software world don't align to make them happen in the first place. The moment some project would embed phone-home advertising it would be forked and replaced by all the major distros, so it doesn't happen.

There's also an alignment of incentives between upstream and packagers. If e.g. Xorg tried to embed something evil the volunteer contributors to Xorg would pro-actively sound the alarm and tell distros before they shipped that code.

None of this is true in the iOS and Android stores where you have proprietary paid-for apps where the incentive is to extract as much value from the user as the app store in question will allow, and where the upstream maintainers aren't free software advocates but some corporate employees that'll do what they're told at the cost of the wider software community.

It's an adversarial relationship, not a cooperative relationship.


The particular problem with Snappy is that all that's submitted is a single binary blob. Usually with free software, source code is submitted and built, and it is available from multiple mirrors. That alone can make a big difference.


> Debian maintainers don't have the time (or often the knowledge) to review upstream changes

> Do you think the Debian Linux, GCC and Xorg maintainers exhaustively review and understand every patch? They don't.

This is plainly false. While it's impossible to guarantee 100% code review, the number of bugs and vulnerabilities found, reported upstream, and patched by distributions (especially Debian) shows that code is being reviewed.


Every line of code should have been reviewed by at least one DD. But the system is self-policing, so it's hard to guarantee that that's the case. Still, Debian certainly leans towards being a curated collection of software rather than a wild west.


Self-policing? Aren't only DDs allowed to upload to the repositories? From what I understand, dak (the Debian archive management software) won't publish a package which hasn't been approved by a sponsor DD.


The part that's self-policing is that nobody verifies that a DD has in fact reviewed the code that they're signing and uploading (and as another reply points out, for large codebases like the Linux kernel, the maintainer almost certainly doesn't and just trusts the upstream signature).


Ever since I learned about it and then read up on it Anaconda has been my go-to way of using Python. Python packaging is a huge mess.


Glad to hear it. We take these matters very seriously, including good hardening flags etc: https://www.anaconda.com/blog/developer-blog/improved-securi...


I think a big difference between Github and package managers is that on Github everything is prefixed with a user name. It takes two clicks to find out who made jakob/TableTool. It’s obvious that the author is a random dude on the internet.

But the brew cask package “table-tool”? That sure sounds official!


The lesson I learned from this comment is "if you assume security, you won't get security."


While code reviews are helpful, can they really prevent malware?

(since an app can always download and execute extra code after it's installed.)


Yes - the code review can say "This app has functionality to download and execute extra code without the user's active participation/consent, which isn't allowed."

iOS enforces this in several ways. Any executable page of code must be signed by Apple (unless your phone is jailbroken), so you simply can't ship native code outside of the App Store delivery path. Apple looks at what functions you link against and bans "private API", and functions like dlsym() that let you open arbitrary symbols from a runtime string are forbidden. Apple usually disallows things that look like they're downloading and interpreting some language at runtime (though I'm not clear on the current rules for this, and I think things like e.g. Python shells are fine as long as it's user-supplied code). The only exception is JavaScript inside a webview, and that doesn't give you any access to the system without having native code to expose things to JavaScript, and Apple can review that native code.

Debian will enforce this too, for computing-freedom reasons as opposed to platform-control reasons: it's impossible for Debian to say "yes, this is free software" if the code isn't available for Debian to audit. And it's obviously impossible for Debian to check it for malware / unwanted functionality. Applications like Firefox or pip can download and install code at the user's request, but applications that automatically download part of their core functionality cannot go into Debian without being patched to allow Debian to compile and ship those parts as part of the package.


The problem with snaps is that they didn't take security really seriously on desktop: https://www.zdnet.com/article/linux-expert-matthew-garrett-u...

>"X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window," he wrote.

They might have wrapped X protocol to provide more security and control. Instead they decided not to.

They might have created a system which is as bulletproof as on iOS where you can install any apps and be 99.9999% sure that they won't steal your data unless you allow them to. But they created this instead.


Flatpak has been making moves in what I think is the right direction, which is containerization and dynamic privileges[0].

Wayland will also solve a few of these problems.

Personally, I'm of the opinion that the Linux security model is a horribly outdated ticking time-bomb and people really aren't taking it seriously enough. It drives me kind of crazy; a lot of people act like X security is no big deal, like it's fine that our primary security model for Linux is just based on file permissions. I think that once we have a better permissions system in place people are going to look back with 20/20 hindsight and say "Well duh, of course apps should be isolated from each other and the system in general. Everyone knows that."

There are two permissions that my desktop/web/mobile environment doesn't ask me for that would prevent most attacks like this: network access and cpu access.

Network access is obvious. It kind of boggles my mind that apps can by default just access the network and make a request to any server that they want. Blocking that alone would take care of a huge number of crypto miners (and spyware), because they all need network access to operate. There are almost no good reasons I can think of for a desktop app to have network access by default.

The less obvious permission that I think is probably worth exploring is CPU access. I don't necessarily know what a control for that would look like in a standard permission system, but if an app wants to start going crazy with my CPU, whether they're being malicious or just innocently inefficient, my OS/browser/phone should probably bring it to my attention and give me the opportunity to either permanently throttle them or set some kind of ground rules.

[0]: https://www.youtube.com/watch?v=4569sjVer54
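For what it's worth, the closest thing to a per-app network permission on a stock Linux desktop today is an opt-in wrapper rather than a real prompt; a rough sketch, assuming firejail is installed (the app name is a placeholder):

    firejail --net=none some-desktop-app    # run the app with no network interfaces at all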


On Windows, the equivalent to the X problem was called a Shatter attack: https://en.wikipedia.org/wiki/Shatter_attack

Vista and subsequent versions reduced the problem by introducing levels, so that lower-privileged applications can't interact with higher-privileged ones, but as far as I know they can still interact with applications at the same level.


That was fixed in Windows 8 and later with all the store improvements.

And Microsoft is still quite confident that eventually Win32 will join Win16, even if it takes considerably longer than they were initially willing to wait.


The issues with X11 you mention are part of what Wayland tries to fix. And why, early on, seemingly benign things like screenshot tools broke.


Yes, I understand that there are people in the community who try to fix the problems. But it's really unfortunate that Canonical tells us it's secure when it's not: https://snapcraft.io/

>Snaps are containerised software packages that are simple to create and install. They auto-update and are safe to run. And because they bundle their dependencies, they work on all major Linux systems without modification.


From what I gather, Snappy is mostly a marketing gimmick by Canonical. If you want to package apps, you should use something like Flatpak or AppImage.


Do you know whether they're more secure than snaps? I very much like the idea of running whatever I want on my hardware without compromising security.


Wow, that explains why "Shutter" (screenshot tool) doesn't work with Wayland!!


Doesn't Xauthority solve this? I thought we could use xauth to generate an unprivileged cookie and launch the program using it. Then it could not meddle with other X clients or even the clipboard.

Of course you should also prevent the program from reading the original privileged Xauthority data. Running it as a different user does the trick.
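Roughly like this, if I remember the xauth syntax correctly (it relies on the X SECURITY extension, and the paths/names are placeholders):

    # create a separate authority file holding an *untrusted* cookie, valid for an hour
    xauth -f /tmp/xauth-untrusted generate :0 . untrusted timeout 3600
    # point the program at that cookie only
    XAUTHORITY=/tmp/xauth-untrusted some-program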


Xauthority isn't fine-grained. Once you get a cookie, you don't have any restriction on what you can do with the X server.


We could generate an "untrusted" cookie. This prevents clients using it from meddling with "trusted" clients.

It's not really fine grained and also doesn't prevent untrusted clients from meddling with one another, but seems like a starting point for someone inclined to add more security to X.


In a typical Linux distro all apps are run under the same user, which means they can do whatever they want to each other and to the user's files. So the X server being secure or not doesn't really change anything.

By the way Android, unlike Linux, runs every app under a different user account.


In this case, though, it's not really a security issue as you describe, and more of an abusive use of resources. I don't mean to be facetious, but that's also what Electron apps do to some extent.


It might not be a full-fledged security issue, but it's at least a user control issue. Why can't we easily set hard CPU/RAM/Storage limits for Electron apps?

Regardless of whether you love or hate Electron, its rise in popularity has clearly shown that a number of HN users feel like they don't have complete control over their computer's resources - that their only choice is to either avoid an app entirely or slow down their computers.

A user should be able to pick up an application and easily say something like "you can have up to 2 CPUs and 250mb of RAM. If you want more, come back and ask me." And honestly, if Google couldn't trust that most users would give it unfettered access to 4 gigs of RAM, I bet their engine would magically get a lot more efficient really quickly.
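On Linux, cgroups can already enforce a rough version of this today; nothing surfaces it to ordinary users, but as a sketch (the wrapped command is a placeholder, and MemoryMax needs a reasonably recent systemd):

    # cap the app at two CPUs' worth of time and 250 MB of RAM
    systemd-run --user --scope -p CPUQuota=200% -p MemoryMax=250M some-electron-app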


How does that help? You already know that the app is not going to work (well) with 250M; nagging the user for more RAM doesn't solve anything. Either you run an app written with resource-constrained environments in mind or you don't.


It may not be common, but I occasionally run into apps that will work well with limited resources but that will happily expand outwards if given the opportunity. Perhaps a bad example, but the code for this cryptominer itself checks how many CPUs you have before it starts.

You are often right that you either supply the resources an app needs or you don't. However, there are a growing number of apps I'm seeing that act more like goldfish - they grow to the size of the container they are put in.

I also occasionally run into apps where I'm OK with bad performance, I just don't want them to interfere with other tasks that I have.

I might decide that I'm OK with a version of Slack on my work computer that runs poorly and that occasionally starts caching stuff to disk - as long as the rest of my computer doesn't slow down. Not every app that I'm using needs good performance - some are more important than others. This is especially true for background apps like a backup system, file sync, update, anything where I don't really care if a task takes longer to finish.

It also might be worth exposing some kind of more fine-grained policy; something like "I want this app to have full access to my CPU if it's in the focus, but if I minimize it, I want you to reduce its resources or even suspend it."

And of course there is the (perhaps naive) hope that as CPU and RAM become a resource where users control access in the same way that they control location or camera access, developers might start to include resource-heavy features as progressive enhancements. This has... sort of... worked on the web with resources like location. So it's unlikely, but possible.


But this particular issue doesn't have anything to do with X!


One word: lxc


And the explanation for that word?

It's still nothing to do with X, and it's not certain that it would plug the "hole" that is being used here.

This "exploit" requires unfettered (or at least not completely throttled) access to the CPU and a way to send the spoils home via the network.

Would this actually be stopped with "lxc"? I doubt it'd do much to curb the CPU abuse, and network access is something games frequently demand (in something like this for e.g. leaderboards or a social sharing feature), so I'd bet that nobody would bat an eye.


Sorry, I misplaced my answer. It was directed at the commenter above you, concerning how apps in general can wreak havoc on a system. I meant that using containers like lxc can create a sandbox.
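For what it's worth, the LXD front end makes that fairly painless; a rough sketch (the image and names are just examples, and see the replies below about how strong this boundary really is):

    lxc launch ubuntu:18.04 sandbox    # unprivileged container by default under LXD
    lxc exec sandbox -- bash           # run the untrusted software in there instead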


Containers under Linux aren't in themselves very secure at all. This isn't inherently so; supposedly Solaris can do much better, for example.


What security guarantees cannot be upheld by container technologies such as LXC, cgroups, namespaces and Docker?


There's not much to do with X without breaking the protocol.


... without additional overhead and engineering effort. You can sandbox an X application by running a dedicated X server in the sandbox and passing only a secured channel out.
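A nested X server is the usual way to do that; a minimal sketch with Xephyr (display number and app name are arbitrary):

    Xephyr :2 -screen 1024x768 &     # dedicated X server, isolated from your real one
    DISPLAY=:2 some-untrusted-app    # the app can only see other clients of :2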


No need for that overhead: simply disallow untrusted clients, trusted clients that explicitly request to drop down to untrusted (e.g. browsers could do that), and clients the user explicitly marks as untrusted from accessing other clients' data. Xorg already has most of this functionality (AFAIK all remote/socket connections can be made untrusted), and last I heard, it is something some X developers are going to look into improving.

This way you don't have any overhead, you preserve backwards compatibility with the protocol, existing programs that need the ability to access each other's data (window managers, panels, tools like xdotool, screenshotting tools, etc.) can keep working like they always have, and you still get to implement sandboxing (with or without snaps/flatpak/etc.) without breaking the Linux desktop.


If nothing can be done, then there should be no claims of security.


I think securing X would reasonably be viewed as a non starter for those who intend to replace X with wayland.


In the sense that Wayland is what the X.org maintainers came up with as the solution to X11's security design flaws. (There was a previous attempt in XACE, which essentially extended SELinux's security model into the X server, but somehow that never caught on.)


Wayland solves some of these issues.


> used a proprietary license

Does the license actually mention that it mines? I am reminded of a lot of "freemium"/"ad-supported"/etc. software that makes its author money via ads or whatever else --- and you agree to that if you read the license --- and it is a bit shady to name the miner 'systemd', but it seems rather overboard to call this "malware"... when I see that term I think of software that self-propagates and exfiltrates personal data, deletes/encrypts files for ransom, etc.

Also from the page:

> Size 138.8 MB

I'm not really familiar with the latest trends in (bloatware?) development, but a simple game like that taking >100MB would make me suspicious --- even 10MB is in the "questionable" range, and ~1MB would be closer to what I consider "typical". 138MB is bigger than the installed size of Firefox, and that's a far more complex application...


>a simple game like that taking >100MB would make me suspicious

Nah. Games often feature a bunch of textures and video and sound files. Bad compression or too high resolution on those is quite common, which is why games _are_ often that large.

Also proprietary software usually ships a bunch of libraries - games often ship with a premade engine, which are also often quite large.

As a datapoint, I have a copy of "Strata", which is a simple /minimalistic puzzle game, and it _is_ 78MB.


I remember the Facebook app being less than 20 megabytes in size half a decade ago. Now it’s almost half a gigabyte


And for the life of me I can't understand why people use the Facebook app. The mobile web page loads faster, it's automatically sandboxed by being just a browser page and it can do almost anything that the app can do.

Besides on iOS at least, if you click on a link from the Facebook web page, you can take advantage of whatever content blocker you have installed.


> And for the life of me I can't understand why people use the Facebook app.

When I last used it, it just felt less clunky than opening it in a mobile browser tab (Android). I bet most people who do use the app would agree. A mobile browser's address bar is also kind of ugly, so apps often just feel more 'immersive' and therefore 'better'. The ultimate reason however is probably because Facebook is somewhat good at marketing and managed to sell their apps to users better than the mobile version of their site. There are probably more reasons at hand too but I can't remember anymore as I now use neither.

> The mobile web page loads faster

Not really in my experience. Also, apps tend to have more support for gestures than mobile pages. Instagram is probably an easier example. It's more swiping, and less precise tapping. Browsers are also kind of yucky to load if you have many tabs open (Brave seems to be better with this though). Scrolling in apps can often be somehow more pleasant.

The gap between mobile pages and official apps has probably narrowed, but I don't think it's quite correct to imply that the mobile page is always better for everyone (especially those who are still enjoying the 'rewards' of Facebook).

Do you use the Google Maps with its app, or in a browser?


Google Maps while navigating requires features that require an app.


If you log into Facebook on web, then visit any other site, they send your browsing info to Facebook via Like button.

The Facebook app is more sandboxed, since it can't snoop on your web browsing.


The Facebook app can and does track your physical location, among other things.

I think the real answer is "Don't use Facebook" or "If you must use Facebook, do it through Tor Browser".


The Facebook app for quite a while was actually sending Facebook data about your phone calls and SMS, so guess again.


Only on Android....they couldn't do that on iOS.


The like button can't track you if you block it ;-)


I use a separate web browser on my phone for Facebook and Gmail. For everything else, I use Firefox.


For which platform? Google Play Store says the full Facebook app is 73 MB; Facebook Lite is 1.7 MB.


Which is crazy – with my own apps even after they’re packed with features I can barely get above 4MB.

Going to 100x that? Insane.


One word: electron.


Yes, Electron is the opposite of "less is more". It's an abuse of memory and disk. How can someone invent such a bloatware product?


The website version of the game was 150 KB.

You'd be hard-pressed to find an engine or runtime (except Electron, which some people are saying it actually uses) that gets a game like that (literally moving boxes and text) up to that size.

Even if he used static images and TTF fonts, the size is way off. PNGs are a couple to a couple dozen kilobytes apiece. Fonts are a few megs each at most. The single biggest font 'file' I know of/have used for real (except for experiments people might do with the file formats) is the Noto Sans CJK ttc file, and it's not a single font but a collection (and it covers all of CJK[0], which is an insane range).

Entire Minecraft is under 300 megs and that includes the launcher, the language packs, and the entire JRE that is 140 megs in itself (!).

On gamejolt there is a (very nice) small low poly game called The Very Organized Thief, it was made in Unity3D and is just 13 megs in a zip (EDIT: and 35 unpacked).

I couldn't find a low poly game in Unreal Engine 4 nor one that is under 100-200 megs (EDIT: when packed) so maybe Unreal Engine 4 has that high static cost but I'm not sure right now.

In any case: 2048 taking over 100 megs is actually crazy, especially since it's a game so simple you can rewrite it in almost any engine overnight. He/she could have done at least that much.

[0] - https://www.google.com/get/noto/help/cjk/


For ordinary games, maybe - but this is 2048 we are talking about. Sound maybe, but I doubt there are any videos or textures.


For some high-performance games, storing assets with weaker compression means fewer CPU cycles spent.


Not necessarily. Larger assets take more time to read from disk.


That might backfire on an HDD and it's a very convoluted scenario (loading from disk and decompressing when hogging all of the CPU).

Many people even with a quad core i5/i7 might have a small SSD just for Windows and important stuff and any large game goes onto the HDD, the fact you'd do such a trade off and make the game size swell only makes the effect worse and user more likely to use HDD.

Linux (the kernel) is compressed by default (that's why the filename is vmlinuz, vm for virtual memory support, z for compression) and it doesn't impact the startup enough to have many/most distros take it out.

Simple compression might not bring lots of savings but it'll at least help a bit. And with BMP instead of PNG a game would just blow up in size to crazy proportions. There also is some (still patented?) lossy tech to decompress on the GPU, not the CPU: https://en.wikipedia.org/wiki/S3_Texture_Compression

Oodle from RAD Game Tools (kind of a veteran gaming middleware company, with some prominent game devs employed at it and prices being 'contact us') also has some really fast compressors and decompressors (but I've never used them and I don't want to ask for a testing SDK if I don't consider buying their product).

You can also trade time on your end for optimization that is then literally free to the end user, e.g. use pngopti.

Tokyo Dark (a VN-ish game from Japan) was notorious (for like 30 reasons, but this one is very annoying) and loaded the entire several-hours-long, 500+ meg game off the disk at once at startup, for no reason other than it being simple to do or being how their Construct 2 setup worked. Just some compression or pngopti would have helped a lot there; I remember cutting like 5 or 10 percent of game size using just pngopti.

John Carmack said that to make Rage's MegaTexture system assets work (and they still ended up being huge) he used some 200 GB server to optimize it for hours.

Crash Bandicoot also used some smart packers that took ages to run at the time on multi thousand dollar workstations (90s).

All in all: I mean to say that compression and size optimization might still be worthwhile, is cheap and doesn't have to imply a big cost to the end user, more often than not the cost is very asymmetric and the compressor pays way more than the decompressor (e.g. 7zip ultra takes hours to pack many gigs of files and uses 17 GB of RAM but the unpacking of such an archive takes just 10-30 minutes and is actually limited by my HDD speed).


I would be really surprised if the miner is contributing any major portion of that size. I just did a very basic search and a random miner I found has binary sizes <1MB

https://github.com/xmrig/xmrig/releases

edit: apparently the actual game is based on https://github.com/gabrielecirulli/2048 which is HTML+JS, so probably the Snap was bundling in Chromium/Electron, which explains the size.


Yeah, the Coinhive miner is ~115 KILObytes.


Snaps ship with a quarter of a distribution's worth of shared libraries in them, don't they?


> I'm not really familiar with the latest trends in (bloatware?) development, but a simple game like that taking >100MB would make me suspicious

This is very much the idea of these awful (IMHO) ways of distributing software. Bundle all of your dependencies, share nothing, expose users to the risks of exploits in the libraries you've bundled (and maybe statically linked, so no one can even figure out you have done that).

Please stop this madness.


There is no support for specifying the license yet, so all snap packages appear as "proprietary".

Nothing to see here.

edit: https://forum.snapcraft.io/t/snap-license-metadata/856/53 It is still unresolved; they probably use the deprecated licence field for VLC.


You can. The store used to default to "Proprietary" if the developer didn't set it. Now we default to "Unknown" if not specified by the developer.


That is not true. You absolutely can specify a license: https://snapcraft.io/vlc


Off topic: why would someone install VLC over Snap instead of the version from Ubuntu repos?


Because the same snap can run on releases of Ubuntu all the way back to 14.04. 14.04 has VLC 2.1 in the repo. 16.04 LTS has VLC 2.2. The snap store has 3.0.1 in the stable channel, with 4.0 (dev release) in the edge channel. Versions that will never be in the archive of those older releases.

As LTS releases age, the contents of the repo age with them. PPAs are one solution, but they're undiscoverable and not straightforward for new users to setup. Ubuntu has a ton of users who are 'sticky' on old LTS releases.

This enables the VLC developer to have one package that targets millions of users across lots of releases of Ubuntu - and other distros too.
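In practice the channel selection looks something like this (exact channel contents will of course change over time):

    sudo snap install vlc                     # stable channel (3.0.x at the time of writing)
    sudo snap install vlc --channel=edge      # development builds (the 4.0 dev release)
    sudo snap refresh vlc --channel=stable    # switch an installed snap back to stable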


Thanks!


It would still look quite shady. It's also burning electricity, hogging your CPU and putting heat and stress onto your CPU, which is way worse than just displaying an ad, which costs virtually nothing (or 1 or 2 cents for fetching a single png and link).

I use one donationware app, and it's clearly marked that it displays an ad; it explains that they fetch it from their own site via a dumb static image request on startup (it sits under the main GUI; it's not a splash screen or wait screen or anything), and for a donation you can get rid of it. That's leagues above the experience an average website gives you without adblocking software these days.

As for the size - I've no idea what happened, the original page[0] tested by [1] reports as 153 KB. Maybe he just wrapped it in electron?

[0] - http://gabrielecirulli.github.io/2048/

[1] - https://tools.pingdom.com/


> It would still look quite shady. It's also burning electricity, hogging your CPU and putting heat and stress onto your CPU

Like any other Electron based app...


This is false equivalence. The electron app is ostensibly working for your benefit, just in a very inefficient way, not mining a currency using your resources to ship off to someone.

You could argue that making users pay for savings in development by their RAM, Disk, CPU, GPU and electricity costs is also bad (and even bad for the planet) but that's a less shady business strategy for saving cost than covertly mining. I have argued that and similar points with varying success at [0]. Then again someone took it upon themselves to downvote this[2] 30 minute comment of mine already and offered no counterpoint so maybe bloatware is what people want and they will go out of their way to justify it.

Back to this topic: to stop mining requires stopping being shady and turning it off; to stop burning resources with Electron requires huge staff changes, technology changes, mentality changes, etc., and goes against what Joel Spolsky said about rewrite from scratch[1] (but I do feel the irony every time I see Atom or something claim they now have way more performant code by rewriting a part of the program in C++).

It's also very unfair IMO to compare pure-hearted Electron users (whatever you think of people who use it and the tech itself) to someone who stealthily installs malware (which borders on a crime and is way more shady and immoral).

Coincidentally, I found a very neat app for music conversion/extraction from videos today: it's light, starts instantly, the GUI never lags even for thousands of files, its total size is 40 megs, and when I looked into it, it's just an ffmpeg DLL, some other DLL for other media stuff, and the main GUI is in Delphi (and to add insult to injury the author is a Pole, which is ironic because I am also a Pole and I was taught Delphi in high school too - as my first language - and use Lazarus from time to time to play around with Pascal and native GUIs again).

[0] - https://news.ycombinator.com/item?id=15948290

[1] - https://www.joelonsoftware.com/2000/04/06/things-you-should-...

[2] - https://news.ycombinator.com/item?id=17055872


> and goes against what Joel Spolsky said about rewrite from scratch[1]

Joel at one time was the product manager(?) for Excel. Microsoft also got lambasted for trying to use an internally built cross platform solution to use the same codebase for Mac Office and Windows Office back in the mid 1990s. Microsoft even decided that it was better to use native tooling for both platforms.

There is a difference between "rewriting" and using native tools for each platform. Would you still want something using Java+Swing?


Why not? Is Java + Swing very heavy or something? Is NetBeans in Swing or something else? I've used NetBeans for C, C++, PHP and HTML5 on my old Core 2 Duo with 2 Gigs of RAM on Fedora during my uni years and it was fine by me. It lagged some but on that machine everything lagged and it was a full on IDE (not just a code editor) with some really nice features.


A Core 2 Duo should not lag running an IDE. I ran Visual 2008 on a Core 2 Duo with Sql Server running and it had 4GB of RAM.

Netbeans and almost every other Java IDE at the time was one of my major turn offs about using Java compared to using .Net + Visual Studio.

In fact, I still use the same Core 2 Duo 2.66Ghz with 4GB of RAM running Windows 10 as my Plex Server and it can transcode up to two streams simultaneously. I'll still use it interactively when I'm working from home and I'm mindlessly browsing the web while I am waiting on something.


You say "shady" but you don't explain why. I don't understand the distinction. I care about the quality of an app and how much it costs in $/CPU/RAM. Why should I care why it uses the CPU it uses?

If someone (legally) cloned an app, cut its CPU use in half, and replaced half the savings with mining, why would you prefer the original app?


Mining in the background is worse than ads because it will be silently ruining the battery life of your laptop or silently running up your electric bill on your desktop.

Note that at 12c per kilowatt hour 200W of extra draw on a machine that's always running is 2.4c per hour.

Over the expected 5-year lifespan of a machine this could cost you $880; in the EU it would be more like $1980, because electricity is on average more expensive there.

Stealing up to 2K from users isn't much friendlier than cryptolockers.


Think about how much ads steal from you, by tricking your brain into buying stuff you don't need.


A Monero miner is one of the more innocuous forms of malware, compared to a C&C trojan or a keylogger. Some websites will mine Monero in the background. Because it's just a JS script, it's not much different from a banner ad, except it's less intrusive; yet somehow 'currency miner' has more negative connotations than 'ad server'. That is the downside of decentralized mining and ASIC resistance: you end up with a lot of zombie miners.


> Because it's just a js script, it's not much different than a banner ad except it's less intrusive

Tell that to your electricity provider


The same thing can be said about JS ad analytics scripts and ads. Or sites that turn the entire webpage into a JS 'web app' when it'd work fine as HTML with static images and text.


Not the same thing. JS increases the usage a bit. Badly broken JS may use a lot of CPU - but the author still has the incentive to fix it. But mining is a completely different category - it's designed to peg your CPU at 100%, because that's what's profitable.


And what's worse, for every dollar you spend on electricity for CPU-mining, you (or, in this case, someone else) receive 5 cents worth of cryptocurrency.


Or to your laptop/mobile battery lives...


I wonder if it makes sense to implement resource controls for websites, so that users can define the maximum number of CPU cycles allowed, which would give web developers an incentive to write less resource-hungry web apps.


Unlike Flathub, where either the original developer or the Flathub admins take control.

Canonical's Snapcraft literally says "Get published in minutes"

Any random guy can publish his malware with next to no review.

https://dashboard.snapcraft.io/snaps/

Yes, maybe they win the count of published apps compared to Flathub. Congratulations!


I really don't see the use case at all for Snappy. I mean, Flatpak makes sense for devs who want to "package once, run everywhere", but Snappy is Ubuntu-only. The thing is, Ubuntu through Debian is really good at having lots of up-to-date packages. Why abandon that for some crummy app store?


A month or two ago I went digging because I wanted to disable auto-update for something I installed through snap (it stores configuration inside the versioned directory, so one day everything was just gone because snap auto-upgraded it the night before). Completely disabling auto-update is apparently not possible, and by design.

According to the devs involved, on mailing lists and bug reports, the point of snap over apt/etc is the auto updates can't be disabled, so end-users can't put off or forget about security updates. Even adding a way to delay or configure when an update happens seemed to take a lot of convincing before it was added.

(In the end I just disabled the snap service entirely to stop auto-updates. Only downside seems to be that I can't query or install new things through snap without it.)


The "Users by distribution" table at the bottom of the Spotify page is worth a look: https://snapcraft.io/spotify

Snaps are not Ubuntu-only. You can find install instructions for many distros here: https://docs.snapcraft.io/core/install


Snap is a proprietary format that is Canonical-centric. Don't tell me the community made the choice to make the client open source while the official store is both hardcoded (initially) and closed source.

On the other hand, Flatpak is a freedesktop project, done using open standards like OCI and ostree.


No. Snap is not a proprietary format at all. It's a squashfs file with a small amount of metadata. You can make a snap by plopping a file in a folder, adding a simple snap.yaml which describes the application, and then "making" a snap with the common mksquashfs tool.

There is room in the world for flatpak and snap to co-exist. We created snaps as an evolution of clicks on the Ubuntu phone, and they cover use cases that flatpak wasn't designed for.
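Roughly, the by-hand flow described above looks like this; the exact snap.yaml fields are from memory, so treat it as a sketch rather than a reference:

    # 1. lay out the snap's filesystem
    mkdir -p hello/meta hello/bin
    cp hello-binary hello/bin/hello
    # 2. describe it in hello/meta/snap.yaml, e.g.:
    #      name: hello
    #      version: '1.0'
    #      summary: minimal example snap
    #      apps:
    #        hello:
    #          command: bin/hello
    # 3. squash it into a .snap and install the unsigned result locally
    mksquashfs hello hello_1.0_amd64.snap -noappend -comp xz
    sudo snap install --dangerous ./hello_1.0_amd64.snap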


Remember the Ubuntu One storage solution? An open-source client does not make the whole thing free software and an open standard.


Thanks, I did not realize it was cross-platform. It still disregards too many freedesktop standards for me, however. It feels like Unity when it comes to Canonical's NIH syndrome.


> Unlike flahub where either original develop or flathub admins take control

Is this actual policy? How do they determine who is the original developer?


By reaching them via their official website/email. You are aware that most software has an official website, a readme with some copyright note, an authors file, etc.

Yes, it's a policy. Quote:

> If there’s an app that you'd like to be distributed on Flathub, the best first course of action is to approach the app’s developers and ask them to submit it.

https://github.com/flathub/flathub/wiki/App-Submission


Whoa, I made Hextris (https://github.com/hextris/hextris, one of the games removed from the store) a few years ago! Is there any precedent for OSS developers being held responsible for misuse of their code?


From the license you chose:

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

Part of the problem with letting people have freedom is that they have the freedom to make decisions that impact communities in a negative way. But it's usually worth the tradeoff.

This is probably the most relevant line though:

For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software.

i.e. you have nothing to worry about, but you also probably can't do anything to punish the misuse. After all, misuse is subjective.


I thought that well-known software licenses dealt with such concerns.


This is exactly why you should not run random docker images and snaps. Docker images are also run as root in many cases. It is better to build app images from scratch and understand what exactly goes into the image.
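For anything that matters, that can be as little as a short Dockerfile you wrote yourself; a minimal sketch (image name, file names, and tag are placeholders):

    # Build and run your own image instead of pulling an unknown one.
    # Assumes a Dockerfile you wrote sitting next to your code, e.g.:
    #   FROM python:3-slim
    #   COPY app.py /app/app.py
    #   USER nobody
    #   CMD ["python3", "/app/app.py"]
    docker build -t myorg/myapp:1.0 .
    docker run --rm myorg/myapp:1.0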


People tend to take docker images at roughly the same level of audit as a large open source package installed from their favorite repository without realizing that those two things are entirely different.

They feel it is a convenient binary distribution format for software.


"Don't run random Docker images or Vagrant boxes" seems to be something that isn't really emphasized by the providers of these services.

I ran into this realization the other day. I wanted to give Mint a try. I ran over to Vagrant's site, which prominently displays a "Discover Boxes" link, but gives zero indication from the main site that these boxes are not provided by any kind of official maintainer or HashiCorp itself but are community uploads, I suppose... at least I can't find any vetted information about who the uploaders are and why I should trust them.

This should be a big red flag in the quick start guide that screams: don't just download any old box from our site, then load it up with all your customer data and put it into production. Instead it's buried deep in the documentation: https://www.vagrantup.com/docs/vagrant-cloud/boxes/catalog.h...


Why not run random docker images? As far as I understand, docker containers are pretty solid. Not super solid, but solid enough.


Those random docker images are rarely used in isolation. They typically handle your data and often your customers data.

Beyond that, numerous escape exploits in linux containerization (and docker specifically) have popped up over the years, and many more are going to pop up over the coming years. This is not a mature space.

Running random binary code distributed from a non-curated source, even in a "container", is going to end in heartache.


And that’s why you should usually build your own images, and only trust docker images from the same people whose binaries you’d also trust with all your data.

(e.g. a docker image from RedHat may be okay, one from zhenghe8 likely not)


> Those random docker images are rarely used in isolation. They typically handle your data and often your customers data.

Thank you. This is something that I truly hate Google for. They constantly spread this mentality that isolation = nothing bad can ever happen to your data. And then they build a horrible permission system on top of that idea and leave everything else up to the user.

And as a result, the Google Play Store and Chrome Store are the most malware-filled app stores that I'm aware of.

Despite that, you still had people giving Firefox shit for not isolating add-ons, which were, however, thoroughly reviewed and as a result quite clearly less often subject to malicious intent.

And then there's something like the Web of Trust fiasco, where the add-on, as a feature, sends your browsing history out to the internet and bad things happen there (the WoT devs sold the browsing data in an anonymized form that was shown to be rather easily deanonymizable). Neither isolation nor a review can help against that, so we shouldn't act like any security technique is perfect. We still need users to think for themselves, even if that's bad for Google's business model.


Docker was never made for security, only for isolation, and it can have vulnerabilities.


I have used the following image[1] on a server at my old company when we had a problem and no one with privileges was available. It gives you a root shell. I added my public key to root's authorized_keys.

[1] https://hub.docker.com/r/chrisfosterelli/rootplease/
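For the curious, the underlying trick isn't specific to that image; anyone who can talk to the Docker daemon can do roughly this (assuming the host has /bin/bash):

    # Mount the host's root filesystem into a throwaway container and
    # chroot into it - instant root shell on the host
    docker run -it --rm -v /:/host debian chroot /host /bin/bash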


Docker does use user namespaces by default, that's pretty bad.


*doesn't
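Right, it's opt-in. If you do want remapping, it's something along these lines (assuming a systemd-managed daemon and no existing daemon.json you need to preserve):

    # Map container root to an unprivileged UID range on the host
    # (note: this changes ownership semantics for volumes and images)
    echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker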


It's only a matter of time before some major successful linux system attack is delivered via snap/flathub.

Distributions and their package maintainers serve an important role. In the interest of consuming more, faster, people seem to be ignoring that.

I wish we had enough resources in the free software community for all software to be packaged and maintained in the distributions by independent parties unaffiliated with the creators, as a rule.


In theory you can have that last bit with Flatpak (and perhaps snap); the issue here is auditing what goes into the repositories, not the package format.


Have you ever gone through the rigamarole of getting a project into a major distro like Debian?

A bunch of the guidelines governing that process are simply unenforceable in a model where the developer builds and publishes the release.

I am not of the opinion that those rules are irrelevant to the stability and security of our systems.

There's a significant push to establish a more app-store model for linux distributions, taking the distributor largely out of the loop for software that isn't part of the base system.

This has both positive and negative consequences.

Today the negative consequences are largely hand-waved away with something along the lines of "containers will protect you".


I do not really see the difference between the app-store model and the distribution-repository model; at the end of the day, both are models that teach users to only download stuff from a specific place. This introduces gatekeeping middlemen that I'd personally rather do without. But if there is going to be a central """trusted""" place, I'd prefer it to be distribution-agnostic.

(Although even that isn't ideal, since in practice it ends up with the major popular distributions pushing agendas onto the smaller distributions through whatever requirements exist for "compatibility".)


It makes a substantial difference when we're talking about open software the distributor builds from source. The major distributors set a relatively high standard for the build process.

The same cannot be said for developers in my experience, who often don't even see a problem with the build process accessing the network - and frequently will publish releases built from the same host environment they use for their general daily computing.

From the perspective of a distributor, the packages should be buildable (and, for official releases, preferably built) with a toolchain of known provenance, in a clean environment, without network connectivity.

The priorities are quite different for the developer of software and the distributor of systems incorporating that software. It's similar to the tension between system administrators and developers.


This is the Snap distribution system working as intended with unfortunate consequences.


Can somebody explain the need for this new package-management thingy when apt exists and works nicely? Why have two pieces of software to do the same thing?


I agree; I have zero interest in installing software using anything other than my system's default package manager. The whole point of package management is centralization: keeping track of chaotic dependencies and install states. Having multiple package managers on the system that don't know about each other's state defeats the purpose entirely.


There is no review process for sandboxed apps; the manual review is for apps with system-wide access.


Can anyone clarify if this is a possibility for apt packages as well? As far as I understand, there are 4 types of apt repositories (for Ubuntu): Main, Universe, Restricted, Multiverse.

I guess Main is safe since it's handled by Canonical, but the rest?

Moreover, a lot of installers simply add a custom repository to sources.list.

What are some good practices for a novice user, regarding apt?


So, most source-based package managers are going to have higher standards and catch something like this. Not every line is going to be audited, but demanding free licenses, active git repos, and a wide user base goes a long way toward keeping stuff clean. Obviously many valuable packages are left out, and you will be tempted to install the .deb files.

I would say, if you are at all concerned about safety: don't install apps through the .deb files that developers sometimes push. They are generally safe, but there is always the potential that these files are malware.

For instance, lots of people use Atom as their text editor, but Atom does not make it possible/easy for packagers to build Atom from source[1]! Everything used to come with a configure, build, & install script, but I guess it's not hip enough anymore.

[1]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=747824


Indeed, building from source is falling out of fashion, with everyone creating a package manager these days. I find some of these very opaque; for example, Rust has its own package manager which downloads and runs binaries at its own discretion.

https://github.com/rust-lang-nursery/rustup.rs


I only get my packages from the central repos. I would be very wary of downloading snaps unless I knew exactly who was distributing them.

Curated repositories are why Linux distros are free of malware. With more snap-based packages being made available, we are going to see a lot more of this sort of thing.


If you use snaps, be aware that they update automatically on their own. You have the option to set upgrade time windows but you cannot completely disable automatic updates and use your own custom solution to update and administer your system.

Discussion of this issue with snap developers here: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...
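Concretely, about the most you can do (depending on your snapd version) is pick the refresh window and postpone refreshes for a while; a sketch:

    # Constrain automatic refreshes to a weekly window...
    sudo snap set system refresh.timer=fri,23:00-01:00
    snap refresh --time    # show the current schedule
    # ...or hold them off temporarily (a permanent "never" is not supported)
    sudo snap set system refresh.hold="$(date --iso-8601=seconds --date='+30 days')"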


This is very unfortunate. Despite the negative press snaps are getting, I was planning on releasing a commercial product as a snap. One of the key reasons for doing so was that users had control over what the app is allowed to do (via plugging/unplugging slots), a feature Flatpak lacks as far as I can tell.

However, this changes everything. Preventing owners from controlling their software and hardware is not something I want to encourage. We have enough of that from Microsoft (and to some extent the GNOME Shell developers).

Does Flatpak also force developers' intentions upon their users? Or is AppImage my only recourse?

I want a solution that doesn't mistreat customers.


Apple's strategy for their store looks better and better every day.



From looking at the link, it looks like it just sends information from the device to a web server. It doesn't look like it would have access to anything that any other app wouldn't have access to without explicitly asking for the users permission.


I prefer the BSD port system.


Is there a significant difference between BSD ports vs. Portage vs. Arch Build System? Especially in terms of security I think that they all have the same model.

FreeBSD ports has a ton of packages, though. Maybe they have incredible quality control, but I would bet a few of those have some malware in them. That goes for all Linux/BSD build systems, obviously, just huge ones make it more likely.

https://repology.org/statistics/newest


The main reason I brought it up is that, like the Apple "Store", the OpenBSD Ports (in my case) are a curated selection of software. I mostly use pkg_add to install, as it is recommended and takes less time, but Ports is more recognizable; I still prefer it to Apple's approach even if I'm not inclined to install everything from source like Gentoo does. The main difference over the Arch Build System, in my experience, is that most (all?) of the packages are also in Ports. Arch's AUR was/is helpful, but my experience with it was always sloppier, and I've grown disillusioned with user-maintained packages and build systems.

edit: Yes, pretty much the same idea though.


Nice case study in why Arch types are adamant that you should properly take the time to read your PKGBUILDs.


Arch makes it clear for users that AUR packages are not official.

Ubuntu snaps are very easily installable, just a quick command away, which can give users a false sense of security.


It's great to see that the community acts fast and educates others.


While this is obviously malicious, I think I would favour paying for things with a few CPU cycles, as long as it was voluntary and overt.

Want to read this article? Please click here to mine a cryptocoin for 30 seconds. Great, thanks! Here's a cookie so we won't ask you again to mine for a whole month.

I would much rather have this than being shamed into looking at ads. It always struck me as utterly bizarre to be told that not wanting to see ads is somehow immoral.


That may be fine on desktops but absolutely unacceptable on mobile and laptops, due to battery usage.


Wouldn't be a payment if CPU cycles didn't have some scarcity.

My laptop would be fine with 30 seconds of a CPU spike, though. Most phones could probably also tolerate it.


As someone who hasn't yet used Ubuntu 18.04, is the snap store something I'll be using in 5 years time instead of APT, is it just another attempt by Canonical to jump on the app store bandwagon, or is it something completely different?

Excuse my ignorance but I'm intensely suspicious of "stores" on open source operating systems.


You might be using it instead of APT for some things, but it won't replace APT.

Say you want the newest version of LibreOffice for whatever reason. This is a typical use-case where Snaps will come in handy. They have most dependencies bundled into the application, so you don't have to worry about your whole system getting wonky by installing newer versions of those dependencies to go with the newer version of the application.

This is also meant to serve as a way for devs to release software without much hassle. So they don't have to open-source their code, hope that someone finds it and packages it for Ubuntu, and then wait something like five years for it to become available to end users through the repositories.

They also don't have to worry about building a .deb, an .rpm, Arch's format and whatever else there is, including accounting for the differences between distros. So Snaps are supposed to work the same on all distros.

Ultimately, this will bring in more proprietary applications.

Well, and Snaps are sandboxed, so there's some protection, which makes those proprietary applications somewhat more acceptable, but as this piece of news shows, it's not complete protection.

Is it another attempt by Canonical to jump on the app store bandwagon? Most definitely, yes. There's a competing format, Flatpak, which does pretty much the same thing, and also AppImage, which is somewhat older and has no sandboxing. Canonical is mainly just pushing their own format because they'll have control of the store behind it.

Like, it's not impossible to hook up other Snap stores, but Canonical has established their infrastructure as the primary source and then how many users are going to look elsewhere?


Thanks for the info. I like that Canonical is trying to solve the problems with package managers, but it's a shame we can't focus on making APT and friends more capable of the positive parts of app stores without bundling things into binaries akin to Windows executables. Allow multiple versions of libraries to be installed, provide path translation (so systems that store binaries in /opt and others in /usr can still work from the same .deb file), and make sure repositories are updated more frequently. This might require Ubuntu to move to a rolling release, at least for some of their products (maybe they could keep releasing LTS versions every two years but have a rolling release for everything else).

Sandboxing arguably has nothing to do with the package manager or app store - it's the kernel's problem. And bundling dependencies together is bad for security - a critical vulnerability (a recent example being in SSH) could not be patched with one update, leaving unmaintained applications that bundled a vulnerable dependency permanently flawed.

I think I'll wait for this most recent example of the trend to make everything into an app to blow over...


Is there some workable way to just add rando user-requested distros (or, more importantly, Debian) to a PPA? Is there some alternate/sane way to distribute packages for Gnu/Linux without smothering my development process in molasses?

I don't even mind creating a VM for every single distro a user requests, and doing a huge automated binary compilation fest for every release. The only thing I care about is that the software is distributed through channels which make it explicit that the current stable version is the only version I support.


It is not the job of the developer to package software; that is for maintainers. The best you can do is specify dependencies and tell them how to compile. If it is proprietary, then you might have to support a few popular formats (debs and rpms) and make sure they also work on other distributions without much effort. For example, the Arch User Repository has a google-chrome package that repackages it into the native package format, but since all dependencies are known, the binary packaged for Debian also works in Arch Linux.


I've had bad experiences with Snap.

I understand that with Snap, devs have to bundle their own dependencies and take care of upgrading them, which is bad, if I understood correctly.

In my case, a few programs I had installed needed to be connected to other snaps, and they would suddenly stop working for no apparent reason. Only by trying to launch the misbehaving program from the command line would I find out I had to update the connected program(s).

Has never happened to me with Apt, so my opinion so far is that installing .deb files is vastly superior, at the moment.


It's too bad there isn't some sinister cabal of trusted individuals within the Ubuntu project that can review packages for quality and package them securely and in an auditable fashion.


A snap package may ask for elevated permissions. In that case, it goes through manual review.

But if it does not ask for special permissions, then it goes in automatically, because the app is quite confined.

Compared to other package repos, this is somewhat better.


That'd be a walled garden, can't have that.


How does one figure out who a given snapcraft packager is? E.g. Sublime Text says it's packaged by Snapcrafters. Who is that?


Presumably it's https://github.com/snapcrafters , but what links the Snap Store identity to that GitHub org?

Where does snapcraft.yaml get executed? On my computer? On Canonical's infra? On the packager's computer?


The build service at build.snapcraft.io is what builds it. Anyone can hook up their GitHub repo (containing a snapcraft.yaml) and have it automatically rebuild the snap when changes occur in the git repo. It then pushes the snap to the 'edge' channel in the store. The developer validates that build and then pushes it to stable for all users.
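For reference, the manual equivalent of that flow is roughly the following (the snap name and revision number are made up for illustration; assumes snapcraft is installed and you've run snapcraft login):

    snapcraft                                        # build the .snap from snapcraft.yaml
    snapcraft push my-app_0.1_amd64.snap --release=edge
    # after testing the edge build:
    snapcraft release my-app 1 stable                # promote that revision to stable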


Thank you.

As a user, how (other than asking here) was I supposed to convince myself of the identity binding between “snapcrafters” and the GitHub org, and to convince myself that trust in the correspondence between snapcraft.yaml and what I get when I install a snap is rooted in Canonical’s build service, and not in trusting an individual uploader not to inject different binaries?


And what is to stop someone from pushing malicious code to GitHub and you guys distributing malicious packages to end-users via your 'edge' and 'stable' channels?

And who's liable here?


The risk here is not just going to the Snap Store. At least right now on Ubuntu 18.04 if you type a command that's not installed but is provided by a Snap app, the shell suggests that you install the snap the way it suggested an apt command previously.


One needs to take special care with snaps, as they are supposed to be sandboxed GUI apps. According to the OMG! Ubuntu! report, this incident installed system services. And we know that snaps can even ship kernel modules.


I'm not familiar with the Ubuntu Snap Store, but how does it compare with the Google Play Store in terms of security?

For example, do apps need to request permissions for accomplishing specific tasks, or is there any kind of sandboxing involved?


Snap packages can either ask for elevated privileges or ask for standard access. In the former case, they go through some manual review.

I do not know if there are any automated checks when a package is accepted automatically.


Snaps were initially touted by Canonical as the best, most secure container-based application solution back then. It is impossible for the app to steal your data, they said, because of "secure encapsulation" and such. So that means there is no need for a review process for uploads, just to make installing packages even easier than it already is?

I'm sorry, but Canonical and Ubuntu are the point where open source software apparently breaks with its traditions. No review of binary-blob uploads, most certainly built from OSS, when marked "proprietary"? They are kidding, right?


Ubuntu 18.04 is horrible on this.

The default GUI package manager, "Ubuntu Software", shows snap packages just like ordinary packages. It was uploaded by somebody who is not well versed in the domain, and it is badly configured for locales - it can only handle ASCII characters. Probably reviewed by nobody.


>For example, the 2048buntu snap was submitted as proprietary, so we can't actually see the package contents, except for the init script which you can see above.

Unless the Snap Store uses some kind of DRM, I don't see how that can be the case. Just install it and see the contents in your filesystem?
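Right - and you don't even need to install it, since a .snap is just a squashfs image (package name as given in the article; this assumes the snap is still downloadable from the store):

    snap download 2048buntu            # fetches 2048buntu_*.snap plus an assert file
    unsquashfs -ll 2048buntu_*.snap    # list the contents
    unsquashfs 2048buntu_*.snap        # extract to ./squashfs-root
    # installed snaps are also mounted read-only under /snap/<name>/<revision>/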


It probably refers to distribution of binaries without source code, which the MIT license allows. The wording could be better.


All snaps appear as "proprietary" because there is no functionality to specify the license in the package yet. Apparently it is a work in progress; they are trying to decide whether to allow free-text licenses or to support a set of licenses and then allow you to pick one from the list.


Presumably they mean they can't see the source code.


One more confirmation that Ubuntu cannot be recommended anymore.


I think it would be useful if such declarations followed up with an alternative suggestion.

I have no alternative suggestion: Ubuntu is still what I would recommend to new users, even if it's not what I personally use anymore, because they have the track record for getting new Linux users on board and helping them out (via the prominent search results for common problems people come across, etc.). On the whole, I also think their design has been consistently decent (including Unity).


> Nicolas Tomb used a proprietary license for at least some of his snaps. For example, the 2048buntu snap was submitted as proprietary. The game in question, 2048, uses a MIT license

No! MIT is not a proprietary license!


The article doesn’t say it is. The snap uses a proprietary license which is possible since the game on which the snap is based uses MIT - which allows redistribution under a proprietary license.


I don’t think that’s what they’re saying. The developer repackaged the MIT licensed game with the miner under his own, proprietary, license


2048 = MIT
2048buntu = Proprietary



