Does the name "Ubuntu Snap Store" carry a connotation that code is reviewed for malware by Ubuntu, the way apps in the Apple, Google, and Amazon mobile app stores are? Or does its presence in the software center app imply that it's endorsed by the OS vendor?
I was at a PyCon BoF earlier today about security where I learned that many developers - including experienced developers - believe that the presence of a package on the PyPI or npm package registries is some sort of indicator of quality or review, and they're surprised to learn that anyone can upload code to PyPI/npm. One reason they believe this is that the registries are hosted by the same organizations that provide the installer tools, so packages feel like they come from an official source. (And on the flip side, I was surprised to learn that Conda does do security review of things included in its official repositories; I assumed Conda would work like pip in this regard.)
Whether or not people should believe this, it's clear that they do. Is there something that the development communities can do to make it clearer that software in a certain repository is untrusted and unreviewed and we regard this as a feature? The developers above generally don't believe that the presence of a package on GitHub, for instance, is an indicator of anything, largely because they know that they themselves can get code on GitHub. But we don't really want people publishing hello-worlds to PyPI, npm, and so forth the way they would to GitHub as part of a tutorial, and the Ubuntu Snap Store is targeted at people who aren't app developers at all.
The processes for installing from the two are also different enough that the user can't mistake one for the other: official packages are a pacman -S away, but installing from the AUR either requires a git clone and a makepkg -sri, or an AUR helper that bugs you to review the PKGBUILD.
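For comparison, the two paths look roughly like this (a sketch; the package names are just examples):

sudo pacman -S firefox   # official repos: one command, signed packages

git clone https://aur.archlinux.org/some-aur-package.git   # AUR: fetch the build script yourself
cd some-aur-package
less PKGBUILD            # you're expected to review this before building
makepkg -sri             # sync build deps, build, install, remove make deps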
> Safe to run - Not only are snaps kept separate, their data is kept separate too. Snaps communicate with each other only in ways that you approve.
Versus the AUR:
> DISCLAIMER: AUR packages are user produced content. Any use of the provided files is at your own risk.
I'm not trying to defend the perception that Snaps are immune from malware, but there is a real difference in the default safety of a package off the Snap Store and the AUR.
However, something is better than nothing, and it's just not true that there's no difference between running something from the AUR and running something in a "confined" snap. There is some crap in the way at least.
Even with the `--classic` switch?
Their site claims that only pre-vetted Snaps can be distributed with "classic confinement", so that's something at least. If that's true, it would allow the comparison between Snaps and the AUR to hold -- Snaps would either be pre-vetted and akin to official package repositories, or unvetted but executed within containers (which is still not really an ironclad security guarantee, but better than nothing).
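For illustration, the difference shows up right at install time (using the VS Code snap mentioned elsewhere in this thread, which is published with classic confinement; the first snap name is a placeholder):

snap install some-game          # default: strict confinement, runs sandboxed
snap install code --classic     # classic confinement: no sandbox, full system access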
It is deeply sad to see something that supposedly exists to facilitate and promote a sandboxed distribution model give up and cop out so blatantly though. They should've just named the flag "--make-snaps-worthless-you-should-be-using-apt-instead".
Is there a technical reason to prefer "classic" snaps over packages from the official repos? It seems like the default install path may be different and the libraries/installed files possibly better segmented on the filesystem, but ultimately that's little consolation.
Uh, not everyone. I ran Manjaro for a bit and found that many of the things I ran were available via the AUR. The usual thing I'd find in a search was something like:
sudo pacman -Sy
sudo pacman -S yaourt base-devel
yaourt -S gpodder
(That's the entire reply, BTW.) At some point I started to wonder what the provenance of these packages was and what the security implications were. I might have looked for information on the security risks of these packages but this is the first concrete claim I recall seeing about the subject. Probably a good thing I'm not running Manjaro any more.
I do run Ubuntu and have some snaps installed (Go and VS Code, among others), and I'm now wondering if it would be possible for a malicious developer to substitute compromised snaps for the official ones. My understanding is that they update silently and automatically, so I wouldn't even know about updates if I didn't check logs.
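If it helps, I believe you don't have to dig through logs; snapd keeps a visible record (a sketch, using the VS Code snap as the example):

snap changes            # recent operations, including automatic refreshes
snap refresh --list     # updates that are currently pending
snap info code          # shows the publisher, so you can check who a snap comes from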
I haven't used Manjaro but they seem to intentionally hide the distinction between the official and unofficial repos, which is a bad idea.
Ubuntu's equivalent of the AUR, if I understand the AUR right, is PPAs, which are definitely not enabled by default and are fairly obvious about their third-party-ness.
(Main vs. restricted and universe vs. multiverse is just about licensing, not access control or vetting.)
There's multiple layers of problems here, because as we see with Ubuntu, just because the code was built and uploaded by a trustworthy person doesn't mean it's automatically safe or secure (especially for more nefarious infections that bury themselves deep in the source tree). Remember the pwnage that brought down kernel.org for a few months? That was only a few years ago, but the infiltration had been quite serious, and if I recall correctly, there was some concern that infected code had made it into officially distributed tarballs.
In practice, I don't know that the distinction between distributing packages that are trivially exploitable and distributing packages that have exploits pre-baked in is really that big of a difference. Automated scanners often pick up and automatically exploit exposed instances within a few days.
What it boils down to is that admins need to take responsibility for their own workload and what they choose to execute, no matter the guarantees of the distributor or the claims that $Sandbox_Y is magically impenetrable (which was silly enough before, but completely farcical in a post-Spectre world).
            supported    community only
free        main         universe
non-FOSS    restricted   multiverse
If someone were to compromise an upstream Arch server I suspect it wouldn't be especially difficult to inject malware or trojans somewhere even those building from source would receive.
> Official packages: A developer made the package and signed it. The developer's key was signed by the Arch Linux master keys. You used your key to sign the master keys, and you trust them to vouch for developers.
Since doing otherwise is a few clicks away, and sufficiently subtle attempts are unlikely to be noticed even by observant parties, this is about as bad as the Windows hunt-down-an-exe model, which has been proven for decades NOT TO WORK.
The AUR isn't filled with malware only because Arch is a very small target compared to Windows, and because it's full of observant people.
This cannot possibly scale even to the levels Ubuntu aspires to achieve.
Also, as far as I know, pacaur is no longer maintained. I switched to trizen, which prints the PKGBUILD on screen before allowing the user to opt in to executing it. Not going to pretend that I always review the PKGBUILD thoroughly, but I do generally skim them, applying more scrutiny as packages become more obscure.
It's relatively common for people to be added as co-maintainers after posting even just one helpful comment (!) in an unpopular package, so it's worth double-checking to make sure a big change hasn't been made recently without the author's permission.
As far as I know, Apple is the only company that manually reviews the code of apps, and even they let some (in my opinion) malware through [1]. Everybody else just does some heuristic anti-malware checking and then publishes the app.
1: Uber was permanently fingerprinting devices, even though Apple was disallowing this kind of tracking in their ToS.
I agree with you that useful code review is a tough nut to crack. Professional editors exist for writing, and science has the peer review process (also flawed).
Reading code is a whole different ball of wax from writing it (and from optimizing it, in some cases) - I can think of few people who are great at both. I have to wonder if we will ever get to the point where "review" sits in an outside role/function that isn't already overloaded (team lead, architect, management).
Does the fact that we don't have dedicated code reviewers speak to the immaturity or (in)effectiveness of code review?
Is that perception correct?
There's also a limited set of people who can upload new packages and a separate team that reviews those, so duplicated functionality / low-quality apps are unlikely to make it into the archive in the first place. Yet Another 2048 Clone would probably not be allowed in unless it was part of e.g. an official GNOME game set.
It also helps that Debian insists on recompiling everything from source and does not redistribute binaries from an upstream source, even if freely-licensed source code is provided.
> All work in Debian is performed by developers that can be identified. For those using Debian to be able to trust Debian, we feel it is important that our users can identify those that are working on the project and that development is as transparent as is possible. 
I don't personally use Debian very much these days -- my desktops all run Fedora, my servers (with a few exceptions) run CentOS and RHEL -- but I used Debian exclusively for many, many years, and out of all Linux distributions Debian (IMO, of course) comes the closest to doing things "the right way". That in itself is pretty amazing, I think, considering how little has changed in its 25-year history! In other words, they somehow managed to get things right the first time around.
There are a few things that could perhaps be done a little better or different but -- considering that Debian is an all-volunteer project -- I think they manage to do an awesome job with the limited resources available to them.
Apart from the package maintainers and contributors, the Security Team can also review critical packages and step in if something looks suspicious.
But, most importantly, the release cycle and the long freeze before releases is all about STABILITY and SECURITY.
Anybody can upload backdoored code to npm/PyPI etc., infect someone, and then remove the malicious release without being detected.
Releasing something malicious or with serious bugs before a freeze cycle and going undetected for months is not impossible but much more difficult.
Instead, the reason you don't see malware pushed to those repositories is that the incentives in the free software world don't align to make it happen in the first place. The moment some project embedded phone-home advertising, it would be forked and replaced by all the major distros, so it doesn't happen.
There's also an alignment of incentives between upstream and packagers. If e.g. Xorg tried to embed something evil the volunteer contributors to Xorg would pro-actively sound the alarm and tell distros before they shipped that code.
None of this is true in the iOS and Android stores where you have proprietary paid-for apps where the incentive is to extract as much value from the user as the app store in question will allow, and where the upstream maintainers aren't free software advocates but some corporate employees that'll do what they're told at the cost of the wider software community.
It's an adversarial relationship, not a cooperative relationship.
This is plain false. While it's impossible to guarantee 100% code review, the number of bugs and vulnerabilities found, reported upstream, and patched by distributions (especially Debian) shows that code is being reviewed.
But the brew cask package “table-tool”? That sure sounds official!
(since an app can always download and execute extra code after it's installed.)
Debian will enforce this too, for computing-freedom reasons as opposed to platform-control reasons: it's impossible for Debian to say "yes, this is free software" if the code isn't available for Debian to audit. And it's obviously impossible for Debian to check it for malware / unwanted functionality. Applications like Firefox or pip can download and install code at the user's request, but applications that automatically download part of their core functionality cannot go into Debian without being patched to allow Debian to compile and ship those parts as part of the package.
>"X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window," he wrote.
They might have wrapped the X protocol to provide more security and control. Instead they decided not to.
They might have created a system as bulletproof as iOS, where you can install any app and be 99.9999% sure that it won't steal your data unless you allow it to. But they created this instead.
Wayland will also solve a few of these problems.
Personally, I'm of the opinion that the Linux security model is a horribly outdated ticking time-bomb and people really aren't taking it seriously enough. It drives me kind of crazy; a lot of people act like X security is no big deal, like it's fine that our primary security model for Linux is just based on file permissions. I think that once we have a better permissions system in place people are going to look back with 20/20 hindsight and say "Well duh, of course apps should be isolated from each other and the system in general. Everyone knows that."
There are two permissions that my desktop/web/mobile environment doesn't ask me for that would prevent most attacks like this: network access and CPU access.
Network access is obvious. It kind of boggles my mind that apps can by default just access the network and make a request to any server that they want. Blocking that alone would take care of a huge number of crypto miners (and spyware), because they all need network access to operate. There are almost no good reasons I can think of for a desktop app to have network access by default.
The less obvious permission that I think is probably worth exploring is CPU access. I don't necessarily know what a control for that would look like in a standard permission system, but if an app wants to start going crazy with my CPU, whether they're being malicious or just innocently inefficient, my OS/browser/phone should probably bring it to my attention and give me the opportunity to either permanently throttle them or set some kind of ground rules.
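On Linux today you can at least approximate the network part per application, though it's opt-in rather than the default. A rough sketch, assuming firejail is installed (the app name is a placeholder):

firejail --net=none ./some-game    # run the program with no network interfaces at all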
Vista and later versions reduced the problem by introducing levels, so that lower-privileged applications can't interact with higher-privileged ones, but as far as I know they can still interact with applications at the same level.
And Microsoft is still quite confident that eventually Win32 will join Win16, even if it takes considerably longer than they were initially willing to wait.
>Snaps are containerised software packages that are simple to create and install. They auto-update and are safe to run. And because they bundle their dependencies, they work on all major Linux systems without modification.
Of course you should also prevent the program from reading the original privileged Xauthority data. Running it as a different user does the trick.
It's not really fine grained and also doesn't prevent untrusted clients from meddling with one another, but seems like a starting point for someone inclined to add more security to X.
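For anyone who wants to try that approach, a minimal sketch (the user name and app are placeholders):

sudo useradd -m sandboxed                  # throwaway account that can't read your Xauthority
xhost +si:localuser:sandboxed              # allow only that local user to connect to your X server
sudo -u sandboxed -H some-untrusted-app    # the app still talks to X, but as a different user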
By the way, Android, unlike Linux, runs every app under a different user account.
Regardless of whether you love or hate Electron, its rise in popularity has clearly shown that a number of HN users feel like they don't have complete control over their computer's resources - that their only choice is to either avoid an app entirely or slow down their computers.
A user should be able to pick up an application and easily say something like "you can have up to 2 CPUs and 250 MB of RAM. If you want more, come back and ask me." And honestly, if Google couldn't trust that most users would give it unfettered access to 4 gigs of RAM, I bet their engine would magically get a lot more efficient really quickly.
You're right that often you either supply the resources an app needs or you don't. However, there is a growing number of apps I'm seeing that act more like goldfish - they grow to the size of the container they are put in.
I also occasionally run into apps where I'm OK with bad performance, I just don't want them to interfere with other tasks that I have.
I might decide that I'm OK with a version of Slack on my work computer that runs poorly and that occasionally starts caching stuff to disk - as long as the rest of my computer doesn't slow down. Not every app that I'm using needs good performance - some are more important than others. This is especially true for background apps like a backup system, file sync, update, anything where I don't really care if a task takes longer to finish.
It also might be worth exposing some kind of more fine-grained policy; something like "I want this app to have full access to my CPU if it's in the focus, but if I minimize it, I want you to reduce its resources or even suspend it."
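You can already get something like this ad hoc with cgroups via systemd-run; a sketch (the limits and app name are placeholders, and MemoryMax may need a cgroup v2 setup to apply in a user scope):

systemd-run --user --scope -p CPUQuota=200% -p MemoryMax=250M slack   # roughly 2 CPUs and 250 MB of RAM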
And of course there is the (perhaps naive) hope that as CPU and RAM become a resource where users control access in the same way that they control location or camera access, developers might start to include resource-heavy features as progressive enhancements. This has... sort of... worked on the web with resources like location. So it's unlikely, but possible.
It's still nothing to do with X, and it's not certain that it would plug the "hole" that is being used here.
This "exploit" requires unfettered (or at least not completely throttled) access to the CPU and a way to send the spoils home via the network.
Would this actually be stopped with "lxc"? I doubt it'd do much to curb the CPU abuse, and network access is something games frequently demand (in something like this for e.g. leaderboards or a social sharing feature), so I'd bet that nobody would bat an eye.
This way you don't have any overhead, you preserve backwards compatibility with the protocol, and existing programs that need the ability to access each other's data (window managers, panels, tools like xdotool, screenshotting tools, etc.) can keep working like they always have, while you still get to implement sandboxing (with or without snaps/Flatpak/etc.) without breaking the Linux desktop.
Does the license actually mention that it mines? I am reminded of a lot of "freemium"/"ad-supported"/etc. software that makes its author money via ads or whatever else --- and you agree to that if you read the license --- and it is a bit shady to name the miner 'systemd', but it seems rather overboard to call this "malware"... when I see that term I think of software that self-propagates, exfiltrates personal data, deletes/encrypts files for ransom, etc.
Also from the page:
Size 138.8 MB
I'm not really familiar with the latest trends in (bloatware?) development, but a simple game like that taking >100MB would make me suspicious --- even 10MB is in the "questionable" range, and ~1MB would be closer to what I consider "typical". 138MB is bigger than the installed size of Firefox, and that's a far more complex application...
Nah. Games often feature a bunch of textures and video and sound files. Bad compression or too high resolution on those is quite common, which is why games _are_ often that large.
Also proprietary software usually ships a bunch of libraries - games often ship with a premade engine, which are also often quite large.
As a datapoint, I have a copy of "Strata", which is a simple /minimalistic puzzle game, and it _is_ 78MB.
Besides, on iOS at least, if you click on a link from the Facebook web page, you can take advantage of whatever content blocker you have installed.
When I last used it, it just felt less clunky than opening it in a mobile browser tab (Android). I bet most people who do use the app would agree. A mobile browser's address bar is also kind of ugly, so apps often just feel more 'immersive' and therefore 'better'. The ultimate reason however is probably because Facebook is somewhat good at marketing and managed to sell their apps to users better than the mobile version of their site. There are probably more reasons at hand too but I can't remember anymore as I now use neither.
> The mobile web page loads faster
Not really in my experience. Also, apps tend to have more support for gestures than mobile pages. Instagram is probably an easier example. It's more swiping, and less precise tapping. Browsers are also kind of yucky to load if you have many tabs open (Brave seems to be better with this though). Scrolling in apps can often be somehow more pleasant.
The gap between mobile pages and official apps has probably narrowed, but I don't think it's quite correct to imply that the mobile page is always better for everyone (especially those who are still enjoying the 'rewards' of Facebook).
Do you use Google Maps with its app, or in a browser?
The Facebook app is more sandboxed, since it can't snoop on your web browsing.
I think the real answer is "Don't use Facebook" or "If you must use Facebook, do it through Tor Browser".
Going to 100x that? Insane.
You'd be hard pressed to find an engine or runtime (except Electron, which some people are saying it actually is..) that would get a game like that (literally moving boxes and text) up to that size.
Even if he used static images and TTF fonts, the size is way off. PNGs are a couple to a couple dozen kilobytes apiece. Fonts are a few megs each at most. The single biggest font 'file' I know of/have used for real (except for experiments people might do with the file formats) is the Noto Sans CJK TTC file, and it's not a single font but a collection (and it covers all of CJK, which is an insane range).
The whole of Minecraft is under 300 megs, and that includes the launcher, the language packs, and the entire JRE, which is 140 megs in itself (!).
On gamejolt there is a (very nice) small low poly game called The Very Organized Thief, it was made in Unity3D and is just 13 megs in a zip (EDIT: and 35 unpacked).
I couldn't find a low-poly game made in Unreal Engine 4, nor one that is under 100-200 megs (EDIT: when packed), so maybe Unreal Engine 4 has a high static cost, but I'm not sure right now.
In any case: 2048 taking over 100 megs is actually crazy, especially since it's a game so simple you can rewrite it in almost any engine overnight. He/she could have done at least that much.
 - https://www.google.com/get/noto/help/cjk/
Many people, even with a quad-core i5/i7, might have a small SSD just for Windows and important stuff, with any large game going onto the HDD; making that trade-off and letting the game size swell only makes the effect worse and makes the user more likely to put it on the HDD.
Linux (the kernel) is compressed by default (that's why the filename is vmlinuz: vm for virtual memory support, z for compression), and it doesn't impact startup enough for many/most distros to take it out.
Simple compression might not bring lots of savings but it'll at least help a bit. And with BMP instead of PNG a game would just blow up in size to crazy proportions. There also is some (still patented?) lossy tech to decompress on the GPU, not the CPU: https://en.wikipedia.org/wiki/S3_Texture_Compression
Oodle from RAD Game Tools (kind of a veteran gaming middleware company with some prominent game devs employed there and prices being 'contact us') also has some really fast compressors and decompressors (but I've never used them and I don't want to ask for a testing SDK if I'm not considering buying their product).
You can also trade time on your end for optimization that is then literally free to the end user, e.g. use pngopti.
Tokyo Dark (a VN-ish game from Japan) was notorious (for like 30 reasons, but this one is very annoying) for loading the entire several-hours-long, 500+ meg game off the disk at once at startup, for no reason other than it being simple to do, or because that's what their Construct 2 setup did. Just some compression or pngopti would have helped a lot there; I remember cutting like 5 or 10 percent of the game's size using just pngopti.
John Carmack said that to make Rage's MegaTexture system assets work (and they still ended up being huge) he used some 200 GB server to optimize it for hours.
Crash Bandicoot also used some smart packers that, at the time (the '90s), took ages to run on multi-thousand-dollar workstations.
All in all, I mean to say that compression and size optimization might still be worthwhile: it's cheap and doesn't have to imply a big cost to the end user. More often than not the cost is very asymmetric, and the compressor pays way more than the decompressor (e.g. 7-Zip ultra takes hours to pack many gigs of files and uses 17 GB of RAM, but unpacking such an archive takes just 10-30 minutes and is actually limited by my HDD speed).
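To make the asymmetry concrete, a rough sketch (optipng is one such optimizer, which may be what "pngopti" refers to above; the paths are just examples):

optipng -o7 assets/*.png                    # run once on the developer's machine; players never pay this cost
7z a -t7z -mx=9 game-assets.7z assets/      # pack with maximum compression; unpacking stays comparatively fast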
edit: apparently the actual game is based on https://github.com/gabrielecirulli/2048 which is HTML+JS, so probably the Snap was bundling in Chromium/Electron, which explains the size.
This is very much the idea behind these awful (IMHO) ways of distributing software. Bundle all of your dependencies, share nothing, and expose users to the risks of exploits in the libraries you've bundled (maybe statically linked, so no one can even figure out you have done that).
Please stop this madness.
Nothing to see here.
Still, it is unresolved; they probably use the deprecated licence feature in VLC.
As LTS releases age, the contents of the repo age with them. PPAs are one solution, but they're undiscoverable and not straightforward for new users to setup. Ubuntu has a ton of users who are 'sticky' on old LTS releases.
This enables the VLC developer to have one package that targets millions of users across lots of releases of Ubuntu - and other distros too.
I use one donationware app, and it's clearly marked that it displays an ad; it's explained that they fetch it from their own site via a dumb static image request on startup (it sits under the main GUI; it's not a splash screen or a wait screen or anything), and for a donation you can get rid of it. That's leagues above the experience an average website gives you without ad-blocking software these days.
As for the size - I've no idea what happened; the original page, tested with Pingdom (link below), reports as 153 KB. Maybe he just wrapped it in Electron?
 - http://gabrielecirulli.github.io/2048/
 - https://tools.pingdom.com/
Like any other Electron based app...
You could argue that making users pay for savings in development with their RAM, disk, CPU, GPU, and electricity costs is also bad (and even bad for the planet), but that's a less shady cost-saving strategy than covertly mining. I have argued that and similar points, with varying success, in the threads linked below. Then again, someone has already taken it upon themselves to downvote this 30-minute-old comment of mine and offered no counterpoint, so maybe bloatware is what people want and they will go out of their way to justify it.
Back to the topic: to stop mining just requires the developer to stop being shady and turn it off; to stop burning resources with Electron requires huge staff changes, technology changes, mentality changes, etc., and goes against what Joel Spolsky said about rewriting from scratch (but I do feel the irony every time I see Atom or something claim their code is now way more performant because a part of the program was rewritten in C++).
It's also very unfair, IMO, to compare pure-hearted Electron users (whatever you think of people who use it and of the tech itself) to someone who stealthily installs malware (which borders on a crime and is way more shady and immoral).
Coincidentally, I found a very neat app for music conversion/extraction from videos today: it's light, starts instantly, the GUI never lags even with thousands of files, and its total size is 40 megs. I looked into it, and it's just an ffmpeg DLL, some other DLL for other media stuff, and the main GUI is in Delphi (and to add insult to injury, the author is a Pole, which is ironic because I am also a Pole, was taught Delphi in high school too - as my first language - and use Lazarus from time to time to play around with Pascal and native GUIs again).
 - https://news.ycombinator.com/item?id=15948290
 - https://www.joelonsoftware.com/2000/04/06/things-you-should-...
 - https://news.ycombinator.com/item?id=17055872
Joel at one time was the product manager(?) for Excel. Microsoft also got lambasted for trying to use an internally built cross-platform solution to share the same codebase between Mac Office and Windows Office back in the mid-1990s. Microsoft eventually decided that it was better to use native tooling for both platforms.
There is a difference between "rewriting" and using native tools for each platform. Would you still want something using Java+Swing?
NetBeans and almost every other Java IDE at the time were among my major turn-offs about using Java compared to using .NET + Visual Studio.
In fact, I still use the same Core 2 Duo 2.66Ghz with 4GB of RAM running Windows 10 as my Plex Server and it can transcode up to two streams simultaneously. I'll still use it interactively when I'm working from home and I'm mindlessly browsing the web while I am waiting on something.
If someone (legally) cloned an app, cut its CPU use in half, and replaced half the savings with mining, why would you prefer the original app?
Note that at 12c per kilowatt-hour, 200W of extra draw on a machine that's always running is 2.4c per hour.
Over the expected 5-year lifespan of a machine this could cost you $880; in the EU it would be more like $1980, because electricity is on average more expensive there.
Stealing up to 2K from users isn't much more friendly than cryptolockers.
Tell that to your electricity provider
Canonical's Snapcraft literally says "Get published in minutes"
Any random guy can publish his malware with next to no review.
Yes, maybe they win the published-apps counter compared to Flathub. Congratulations!
According to the devs involved, on mailing lists and bug reports, the point of snap over apt/etc. is that auto-updates can't be disabled, so end-users can't put off or forget about security updates. Even adding a way to delay or configure when an update happens seemed to take a lot of convincing before it was added.
(In the end I just disabled the snap service entirely to stop auto-updates. Only downside seems to be that I can't query or install new things through snap without it.)
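(For reference, "disabling the snap service" amounts to something like this, assuming systemd; as mentioned, it also stops snap install/query commands from working:)

sudo systemctl disable --now snapd.service snapd.socket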
Snaps are not Ubuntu-only. You can find install instructions for many distros here: https://docs.snapcraft.io/core/install
On the other hand, Flatpak is a freedesktop.org project, done using open standards like OCI and OSTree.
There is room in the world for flatpak and snap to co-exist. We created snaps as an evolution on from clicks on the Ubuntu phone, and it covers use cases that flatpak wasn't designed for.
Is this actual policy? How do they determine who is the original developer?
Yes, it's a policy.
If there’s an app that you'd like to be distributed on Flathub, the best first course of action is to approach the app’s developers and ask them to submit it.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
Part of the problem with letting people have freedom is that they have the freedom to make decisions that impact communities in a negative way. But it's usually worth the tradeoff.
This is probably the most relevant line though:
> For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software.
i.e. you have nothing to worry about, but you also probably can't do anything to punish the misuse. After all, misuse is subjective.
They feel it is a convenient binary distribution format for software.
I ran into this realization the other day. I wanted to give Mint a try. I ran over to Vagrant's site, which prominently displays a "Discover Boxes" link, but gives zero indication on the main site that these boxes are not provided by any kind of official maintainer or by HashiCorp itself but are community uploads, I suppose... at least I can't find any vetted information about who the uploaders are and why I should trust them.
This should be a big red flag in the quick start guide that screams: don't just download any old box from our site, load it up with all your customer data, and put it into production. Instead it's buried deep in the documentation: https://www.vagrantup.com/docs/vagrant-cloud/boxes/catalog.h...
Beyond that, numerous escape exploits in linux containerization (and docker specifically) have popped up over the years, and many more are going to pop up over the coming years. This is not a mature space.
Running random binary code distributed from an non-curated source, even in a "container" is going to end in heartache.
(e.g. a docker image from RedHat may be okay, one from zhenghe8 likely not)
Thank you. This is something that I truly hate Google for. They constantly spread this mentality that isolation = nothing bad can ever happen to your data. And then they build a horrible permission system on top of that idea and leave everything else up to the user.
And as a result, the Google Play Store and Chrome Store are the most malware-filled app stores that I'm aware of.
Despite that, you still had people giving Firefox shit for not isolating add-ons, which, however, were thoroughly reviewed and, as a result, quite clearly less often malicious.
And then there's something like the Web of Trust fiasco, where the add-on, as a feature, sends your browsing history out to the internet and bad things happen there (the WoT devs sold the browsing data in an anonymized form that was shown to be rather easily deanonymizable). Neither isolation nor review can help against that, so we shouldn't act like any security technique is perfect. We still need users to think for themselves, even if that's bad for Google's business model.
Distributions and their package maintainers serve an important role. In the interest of consuming more, faster, people seem to be ignoring that.
I wish we had enough resources in the free software community for all software to be packaged and maintained in the distributions, as a rule, by independent parties unaffiliated with the creators.
A bunch of the guidelines governing that process are simply unenforceable in a model where the developer builds and publishes the release.
I am not of the opinion that those rules are irrelevant to the stability and security of our systems.
There's a significant push to establish a more app-store model for linux distributions, taking the distributor largely out of the loop for software that isn't part of the base system.
This has both positive and negative consequences.
Today the negative consequences are largely hand-waved away with something along the lines of "containers will protect you".
(although even that isn't ideal since in practice it ends up with the major popular distributions pushing agendas to the smaller distributions through whatever requirements are there for "compatibility")
The same cannot be said for developers in my experience, who often don't even see a problem with the build process accessing the network - and frequently will publish releases built from the same host environment they use for their general daily computing.
From the perspective of a distributor, the packages should be perfectly buildable (and for official releases, preferably so) with a toolchain of known provenance in a clean environment without network connectivity.
The priorities are quite different for the developer of software and the distributor of systems incorporating that software. It's similar to the tension between system administrators and developers.
I guess Main is safe since it's handled by Canonical, but the rest?
Moreover, a lot of installers simply add a custom repository to sources.list.
What are some good practices for a novice user, regarding apt?
I would say if you are at all concerned about safety: don't install apps through the .deb files that developers sometimes push. They are generally safe, but there is always the potential that these files are malware.
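One concrete habit that helps: check where a package actually comes from before installing or upgrading it (a sketch; the package name is just an example):

apt-cache policy vlc            # shows which repository/origin each candidate version comes from
ls /etc/apt/sources.list.d/     # lists extra repositories that installers may have added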
For instance, lots of people use Atom as their text editor, but Atom does not make it possible/easy for packagers to build Atom from source! Everything used to come with a configure, build, & install script, but I guess it's not hip enough anymore.
Curated repositories are why Linux systems are largely free of malware. With more snap-based packages being made available, we are going to see a lot more of this sort of thing.
FreeBSD ports has a ton of packages, though. Maybe they have incredible quality control, but I would bet a few of those have some malware in them. That goes for all Linux/BSD build systems, obviously, just huge ones make it more likely.
edit: Yes, pretty much the same idea though.
Ubuntu snaps are very easily installable, just a quick command away, which can give users a false sense of security.
Discussion of this issue with snap developers here:
However this changes everything. Disallowing owners from controlling their software and hardware is not something I want to encourage. We have enough of that from Microsoft (and to some extent GNOME Shell developers).
Does Flatpak also force developers' intentions upon their users? Or is AppImage my only recourse?
I want a solution that doesn't mistreat customers.
Want to read this article? Please click here to mine a cryptocoin for 30 seconds. Great, thanks! Here's a cookie so we won't ask you again to mine for a whole month.
I would much rather have this than being shamed into looking at ads. It always struck me as utterly bizarre to be told that not wanting to see ads is somehow immoral.
My laptop would be fine with 30 seconds of a CPU spike, though. Most phones could probably also tolerate it.
Excuse my ignorance but I'm intensely suspicious of "stores" on open source operating systems.
Say you want the newest version of LibreOffice for whatever reason. This is a typical use-case where Snaps will come in handy. They have most dependencies bundled into the application, so you don't have to worry about your whole system getting wonky by installing newer versions of those dependencies to go with the newer version of the application.
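For example, getting a newer LibreOffice would look roughly like this (a sketch; which channels exist depends on what the publisher offers):

snap install libreoffice                        # installs from the default stable channel
snap refresh libreoffice --channel=candidate    # opt into a newer channel if one is published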
This is also meant to serve as a way for devs to release software without much hassle. So, they don't have to open-source their code, hope that someone finds it and packages it for Ubuntu, and then wait something like five years for it to become available to end-users through the repositories.
They also don't have to worry about building a .deb, .rpm, Arch's format and whatever else there is, including accounting for the differences between distros. So, Snaps are supposed to work on all distros the same.
Ultimately, this will bring in more proprietary applications.
Well, and Snaps are sandboxed, so there's some protection, which makes those proprietary applications somewhat more acceptable, but as this piece of news shows, it's not complete protection.
Is it another attempt by Canonical to jump on the app store bandwagon? Most definitely yes. There's a competing format, Flatpak, which does pretty much the same thing, and also AppImage, which is somewhat older and has no sandboxing; Canonical is mainly just pushing their own format because they'll have control of the store behind it.
Like, it's not impossible to hook up other Snap stores, but Canonical has established their infrastructure as the primary source and then how many users are going to look elsewhere?
I think I'll wait for this most recent example of the trend to make everything into an app to blow over...
I don't even mind creating a VM for every single distro a user requests, and doing a huge automated binary compilation fest for every release. The only thing I care about is that the software is distributed through channels which make it explicit that the current stable version is the only version I support.
But if it does not ask for special permissions, then it goes in automatically, because the app is quite confined.
Compared to other package repos, here it's somewhat better.
I understand that with Snap, devs have to bundle their own dependencies and take care of upgrading them, which is bad, if I understood correctly.
In my case, a few programs I had installed needed to be connected to other snaps, and they would suddenly stop working for no apparent reason. Only by trying to launch the misbehaving program from the command line would I find out I had to update the connected program(s).
That has never happened to me with apt, so my opinion so far is that installing .deb files is vastly superior, at the moment.
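For what it's worth, those connections can at least be inspected and fixed by hand from the command line (a sketch; the snap and interface names are placeholders):

snap interfaces some-snap                   # show which plugs and slots are connected or dangling
snap connect some-snap:removable-media      # manually connect an interface that didn't auto-connect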
Where does snapcraft.yaml get executed? On my computer? On Canonical's infra? On the packager's computer?
As a user, how (other than asking here) was I supposed to convince myself of the identity binding between “snapcrafters” and the GitHub org and to convince myself that trust in the correspondence between snapcraft.yaml and what I get when I install a snap is rooted in Canonical’s build service and not in trusting an individual uploader not injecting different binaries?
And who's liable here?
For example, do apps need to request permissions for accomplishing specific tasks, or is there any kind of sandboxing involved?
I do not know if there are any auto checks when the package is added automatically.
I'm sorry, but Canonical and Ubuntu are the point where open-source software apparently breaks with its traditions. No review on binary blob uploads, most certainly made with OSS, when marked "proprietary"? They are kidding, right?
The default GUI package manager, "Ubuntu Software", shows snap packages just like ordinary packages. It was uploaded by somebody who is not well-versed in the domain, and it's badly configured for locale: it can only handle ASCII characters. Probably reviewed by nobody.
Unless the Snap Store uses some kind of DRM, I don't see how that can be the case. Just install it and see the contents in your filesystem?
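Right - and you don't even have to install it to look; something like this should work (a sketch; the snap name is a placeholder):

snap download suspicious-game        # fetches the .snap file without installing it
unsquashfs suspicious-game_*.snap    # a .snap is a squashfs image; this unpacks it into ./squashfs-root/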
I have no alternative suggestion, because Ubuntu is still what I would recommend to new users even if it's not what I personally use anymore, because they have the track record for getting new Linux users on board and helping them out (via the prominent search results for common problems people come across, etc). On the whole, I also think their design has been consistently decent (including Unity).
No! MIT is not a proprietary license!