
Why GNU/Linux Viruses Are Fairly Uncommon - jason0597
https://www.gnu.org/fun/jokes/evilmalware.html
======
ohazi
On a more serious note, I'm surprised there hasn't been much discussion about
potential malware in official Linux package repositories vs. developer-centric
source repositories like npm, rubygems, crates.io, etc.

One would hope that the bar is higher with strict maintainership rules, but
there are a zillion packages, and you can't vet them all. Also, practically
everyone installs binary packages, so until we have fully reproducible builds,
hiding malware in some obscure but heavily depended on package could actually
be relatively easy.

~~~
calais
This is a flaw with the repository model for software distribution: it confers
the authority of the OS developers to packages not scrutinized to the same
degree.

Users' mental models of trustworthiness don't track very well with the actual
scrutiny software is subjected to. This might be a problem with any
distribution system.

~~~
ajross
FWIW, as far as "scrutiny" goes, I think the track record of Red Hat and
Debian/Ubuntu on this stuff is about as good as... well, basically any other
distribution system I'm aware of. The Android and iOS stores have hosted
malware, as has Microsoft's. Steam has pushed bad stuff. NPM is a straight up
train wreck...

I mean, I agree in principle that it's a hard problem. But the actual solution
we've landed on in the Linux world seems like... well, just not really the
first thing we should be worried about.

~~~
calais
That's definitely the feeling I've gotten from the Debian repositories, which
are very cautious about what to distribute. I was thinking about it in the
abstract.

------
edoceo
It's a joke, not analysis. Was hoping for the latter.

~~~
chc-sc
I think the real answer is that relatively few people use GNU/Linux

~~~
tombert
That's about 95% of it. There are other things that give Linux a bit more of
an edge in this space as well.

For example, every time I use Windows, it feels like every app is asking to
run as administrator. Admittedly, I haven't used Windows for about a year, but
in Linux, it's pretty rare that I ever do admin/sudo outside of the command
line, and I only ever use it when I know what I'm doing. Obviously this is
something that could be fixed in Windows, and maybe it already has been.

~~~
jwkane
Let's not forget the "curl blahblah.com | sudo bash" bit-o-insanity.

~~~
saghm
I've definitely seen plenty of `curl | bash` installations suggested before
(and can honestly say that I've run some, despite knowing the risks), but
_sudo_? Is that a thing that people actually do?

~~~
jwkane
[https://github.com/nodesource/distributions#installation-instructions](https://github.com/nodesource/distributions#installation-instructions)
I'm sure there are plenty of others.

~~~
robrtsql
At the risk of appearing ignorant... what exactly is the problem here? Do we
trust the nodesource deb repo but we do not trust the bash script which is
coming from the nodesource domain?

~~~
waste_monk
Deb packages have cryptographic signatures that can be verified to confirm a
package actually came from nodesource (or whoever).

If you get MITM'd (admittedly difficult with TLS) or the site gets compromised
(but not the build infrastructure / developer's keyring), it would be possible
to replace the script with a malicious one.

Also, you can detect the curl|bash installation method server-side and serve
different content [1] on that basis, which is not possible with deb packages.

Finally, providing a curl|bash installation method implies that the developers
either do not understand packaging or don't care about it. That can be fine,
e.g. if the developers want to remain platform-agnostic or just can't be
bothered with packaging, but shipping a curl|bash installer still sends the
message "I want to distribute my software but don't care enough to do it
properly / in a standards-compliant way".

Also, and this is more subjective, most curl|bash installers I've seen make
assumptions about their environments that do not necessarily hold in less
common distributions - which makes you wonder why they don't just develop e.g.
a deb if it only works reliably on Ubuntu and maybe Debian.

[1] [https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/](https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/)
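For what it's worth, the usual advice boils down to "download first, verify,
read, then run" rather than piping straight into a shell. A minimal sketch of
that flow, with the download and the vendor's published hash simulated by
local files so it's self-contained (in reality the hash would come from the
project's release page, out-of-band from the script itself):

```shell
# Instead of: curl https://example.com/install.sh | sudo bash
printf '%s\n' 'echo installed' > install.sh   # stands in for: curl -fsSL -o install.sh <url>
sha256sum install.sh > install.sh.sha256      # stands in for the vendor's published hash
sha256sum -c install.sh.sha256                # verify before running; prints "install.sh: OK"
bash install.sh                               # run only after actually reading the script
```

This doesn't get you the offline-key signing of a deb repo, but it does defeat
the server-side curl|bash detection in [1], since the server can no longer
tell a reviewed download from a piped one.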

~~~
phaer
> Deb packages have cryptographic signatures that can be verified to confirm
> it actually came from nodesource (or whoever)

And how do you get the GPG key to verify such signatures from third-parties?
Usually via https from their website, no?

> Also, you can detect the curl|bash installation method server-side and serve
> different content

Yes, but you are supposed to trust the people you get your packages from not
to do that. It should only make a difference if their servers are compromised
_and_ they sign packages with a key stored offline.

I think the latter is much less common than one would hope, with release
processes in CI and such.
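Concretely, a typical third-party apt setup ends up as something like the
following sources entry (paths and the repo line are illustrative, not copied
from NodeSource's instructions). The key referenced by `signed-by` was itself
fetched over HTTPS from the vendor's site, so the initial trust anchor is the
same TLS connection a curl|bash install relies on:

```
# /etc/apt/sources.list.d/nodesource.list (illustrative)
# /usr/share/keyrings/nodesource.gpg was downloaded over HTTPS from the vendor
deb [signed-by=/usr/share/keyrings/nodesource.gpg] https://deb.nodesource.com/node_18.x focal main
```

The signature checking only adds security for _subsequent_ updates, assuming
the key wasn't malicious at bootstrap time.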

------
liability
I've always sort of suspected there might be a _'don't shit where you eat'_
component to it. If you're just being a dick by screwing with people for fun,
not profit, then maybe you screw with the windows users instead of your fellow
linux users. Maybe that's giving them too much credit though.

~~~
pfundstein
Most malware code I've seen is terribly written and barely works. Hobbyist
malware programmers use Windows and write for Windows for the simple reason
that it's all they know.

Some skiddies use Kali Linux but only because they managed to follow step-by-
step instructions on YouTube without which they're lost.

~~~
dead_mall
> Most malware code I've seen is terribly written and barely works.

Yeah, because someone is just going to be kind enough to share their perfect
example of what malware code should look like with the rest of the world...
(actually there's an F ton available on GitHub: quasar, pupyrat, etc.) Don't
be so naive, man. As a hobbyist malware programmer myself, I know that you
don't know what you're really talking about, other than pointing out skids
targeting Windows (because most available malware/RATs are targeted at
Windows) and using Kali because of all the YouTube tutorials.

edit: fwiw, I find it much easier writing malware under *nix because it comes
with Python and a bunch of dev libs, whereas on Windows I have to dynamically
load libraries in sneaky ways. Also, implementing rootkits & RunPEs can be
pretty damn puzzling -- definitely not as easy as you say.

~~~
pfundstein
> As a hobbyist malware programmer myself, I know that you don't know what
> you're really talking about

Let's see some of your code.

~~~
kartoffelwaffel
you write malware in Python..?

~~~
pfundstein
I think you meant to reply to dead_mall

------
thyrsus
2005: The short life and hard times of a Linux virus
[https://web.archive.org/web/20140925152325/http://librenix.c...](https://web.archive.org/web/20140925152325/http://librenix.com/?inode=21)

...although market share certainly also plays a part.

------
zanny
It's a pet peeve of mine that the GNU/Linux world is 99% of the way to an
incredible degree of security, but distros are totally uninterested in going
the last mile, often because it's a lot of boring work.

Mandatory Access Control should have been game changing in the security
sphere, especially in the era of cgroups. Fine grained permissions controls in
the kernel and filesystem would stomp out almost all potential malware vectors
- both exploited software and injected binaries. GUI desktops could have
provided UI to prompt users for unknown programs trying to access specific
things and then send reports upstream of programs allowed for review. It would
practically be a self-building database of program file access if done right.

But that isn't the only avenue to it. You could pressure upstream to include
descriptor files in git repos, in a standardized format describing each
binary's file access, that can be used to generate selinux / apparmor / tomoyo
rules. You could do it the really hard way and just have a sprint to
surface-test every program in the official repos and generate such files
yourself. Programs should almost never be accessing anything outside their XDG
conf file and data dir - they should be linking libraries to provide access to
other stuff (input, gui, etc), and file access should go through the
system-wide file picker.

As it is right now, though, pretty much every executed binary on most Linux
systems is allowed to do whatever it wants with anything it has user-level
access to - especially because most maintainers don't want to deal with the
complaints when confined software constantly breaks trying to read arbitrary
files. The AppArmor profiles of SUSE / Ubuntu or the SELinux policies of
Fedora are largely written for a few specific programs, usually web browsers
and file-sharing daemons, rather than being comprehensive. It's such a shame
that all the technologies exist and are in place to make this work (even Arch
has AppArmor support in its official repos now!) but there is no willpower /
capital / interest in going the last step: being all-encompassing in your MAC
profiles and then locking down unknown programs appropriately. Android pretty
much does this already; it's a shame desktop Linux totally skipped over this
avenue towards secure desktops.

There is of course an argument that if you lock down programs with network
access then that's all you need to secure your desktop, but that position
falters once arbitrary programs that include network access can be run at
random. The goal should be a default restricted profile - one where arbitrary
binaries cannot do whatever they want to your system - backed by a
comprehensive profile database covering 99% of real-world program usage, so
that it doesn't impose a sizable UX burden of constant usage prompts for
common applications.
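For reference, the per-app confinement described above looks roughly like this
as an AppArmor profile (the program name and paths are hypothetical;
`@{HOME}` is a standard AppArmor tunable from `tunables/global`):

```
# Hypothetical profile: confine "someapp" to its own XDG config and data dirs.
#include <tunables/global>

/usr/bin/someapp {
  #include <abstractions/base>

  # its own XDG locations only
  owner @{HOME}/.config/someapp/** r,
  owner @{HOME}/.local/share/someapp/** rw,

  # anything not listed above is denied by default under enforcement
}
```

The point of a "descriptor file" scheme would be generating profiles of this
shape automatically for every packaged binary, rather than hand-writing a few.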

~~~
wahern
The person _best_ positioned to lock down a program is the developer. Only
they know when and how certain resources _need_ to be accessed, and only they
are positioned to refactor the source code to maximize the cost+benefit of
stricter security policies.

But solutions like selinux, apparmor, and tomoyo are built on the premise that
it's the user, administrator, or packager who is best positioned. This is a
false premise. These people are the _worst_ positioned to understand which
privileges are needed, when they're needed, and how to constrain them. They're
certainly not well acquainted with the software and how it operates. The only
tools at their disposal are external policy mechanisms which are often
extremely difficult and complex to use to achieve the desired level of access
with the external environment. And they're incapable of refactoring the target
software to improve the situation. It shouldn't be any surprise, then, why
these resources are underutilized.

seccomp is an improvement but it's _much_ too low-level. Beyond the obvious
issue that a simplistic syscall-filtering mechanism is too brittle, seccomp 1)
doesn't support file paths, and 2) its inheritance semantics make it nigh
impossible to refactor existing code which invokes other programs, while 3)
also setting a high bar of minimal complexity for new code. Again, no
surprise why this resource is underutilized.

This is why OpenBSD's pledge and unveil are infinitely easier tools for
securing programs. They were conceived and refined with the goal of making it
_easy_ to lock down programs, not to maximize purely abstract requirements
like fine-grained control or administrator flexibility. And in any event tools
like file permissions and other access controls remain readily available to
augment the built-in privilege restraints.

Android is a poor example, especially for server systems, because Android
programs don't need to interoperate directly with each other. Whereas on
server systems the degrees of interoperability and dependence of various
pieces of software are extremely complex and varying. One alternative, forcing
everyone to write microservices, at best simply shifts the burden around; it
doesn't help to minimize that burden, nor does it permit us to incrementally
and organically improve the situation.

~~~
gabcoh
Claiming the developer is in the best position to lock down their own programs
is obviously making the assumption that the developer can be trusted. It would
be much easier to audit a single file declaring all of the permissions a
program has been granted than it would be to audit the entire source code of
the program itself. I think it would be a great idea to integrate MAC more
widely in package distribution systems.

~~~
wahern
> It would be much easier to audit a single file declaring all of the
> permissions a program has been granted than it would be to audit the entire
> source code of the program itself

The problem is that it's much more _difficult_ to write software that way,
which is why few people do it. And it doesn't actually solve the problem of
trust because it's exceptionally difficult to prove that those rules capture
and constrain the most security-relevant aspects of the program, so you're
back at square one in terms of trusting the developer and their skill.

Your argument makes the most sense if security were simply a matter of
enumerating filesystem and syscall access. But it's _rarely_ that easy.
Usually you need certain kinds of access at various stages of the program, or
the types of access required are a function of the inputs to the program--e.g.
the files specified on the command-line or the configuration file. Handling
these requirements in the most appropriate ways tends to devolve to a matter
of writing ad hoc code in the context of the peculiarities of the program
architecture. Declarative solutions divorced from the structure of the code
don't work well. What's most important are the time, place, and manner of
constraining particular privileges, as opposed to merely _identifying_ all the
privileges and switching from "insecure" to "secure" at a single point in the
application.

------
bediger4000
It's conceivable that this is the real reason. Maybe Windows is a "dragon
king" statistical outlier in terms of malware. As an example, every once in a
while a file "type" that previously nobody knew could be executed causes
problems because someone finally figures out that it could be executed. MacOS
certainly has problems with malware, but not even to the same order of
magnitude as Windows.

------
ChrisCinelli
Are we talking about just viruses, or other types of malware and rootkits?

Not sure what "uncommon" means but I think
[https://news.ycombinator.com/item?id=20682546](https://news.ycombinator.com/item?id=20682546)
already documented that successful attacks are not rare.

------
JJMcJ
Outlook doesn't run on *nix systems.

------
pjmlp
On the other hand,

"Unpatched KDE vulnerability disclosed on Twitter"

[https://www.zdnet.com/article/unpatched-kde-vulnerability-disclosed-on-twitter/](https://www.zdnet.com/article/unpatched-kde-vulnerability-disclosed-on-twitter/)

------
commandlinefan
Except that now everybody insists that you should be using package managers
like apt and yum instead.

~~~
Godel_unicode
Who are these people? In the Python community, no project ever used yum/apt.
Their versions of basically every package (where they even have it) are
hilariously out of date with what you should use. They are frequently missing
major security fixes, in addition to the usual functionality ones. My
understanding is that this applies to most other languages as well.

To say nothing of docker base images and popular docker containers...

~~~
commandlinefan
> are hilariously out of date

Well that's exactly the problem I run into with most packages (even outside of
programming dependencies like Python libraries), but if ./configure, make,
make install fails for any reason, the first google hit on the error message
is "why are you trying to install this from source? Use the repos, that's what
they're for"...

~~~
Godel_unicode
That doesn't match my experience at all; people usually realize that there are
tons of reasons not to use the repo version. Perhaps you want a newer Python
than the crusty old one that comes with 18.04. Perhaps you want to test
something that doesn't exist in the repos (the horror!). Docker and Elastic
both recommend using their repos or building from source rather than using the
geriatric versions in the Ubuntu repos.

------
TheMagicHorsey
The security of GNU/Linux is based mostly on people not being particularly
interested in exploiting Linux users ... because they are more sophisticated
and because there are fewer of them.

If Linux ever becomes usable for the average consumer and it's adopted widely,
we will see plenty of viruses for it.

I'm more optimistic about systems like Fuchsia, which are capability-based from
the kernel up.

------
Godel_unicode
Ctrl+f "iot" Ctrl+f "Mirai"

Oh, it's a joke. Got it.

~~~
tfolbrecht
Leaving your door wide open isn't anyone's fault but your own. All the
configurations and procedures for secure devices were there, but manufacturers
couldn't be bothered.

