
How the modern containerization trend is exploited by attackers - dsr12
https://kromtech.com/blog/security-center/cryptojacking-invades-cloud-how-modern-containerization-trend-is-exploited-by-attackers
======
LinuxBender
If you didn't build the container, then you are putting all your trust and
your company's private bits in the hands of Joe Random.

Please do not blame the technology. This problem existed long before
containers and will exist long after they are gone. This is people trusting
unknown anonymous third parties to build things that will run in their
datacenter.

~~~
notheguyouthink
Isn't that always true though? This is just one additional layer of trust.
Sure, there are reasonable layers we should care about, but you're rarely, if
ever, going to be doing everything and trust everything.

I.e.:

> If you didn't build the container..

> If you didn't build the package on Debian..

> If you didn't verify the source code when compiling by hand..

etc.

~~~
Bartweiss
> _to be doing everything and trust everything_

Also, it's sort of weird how often people conflate these two things. There's
this idea that home-rolling is naturally safer, and it's simply not true.

Everyone doing anything with software is relying on layers someone else built,
and we should keep doing that. Layers I handle myself are layers that I know
aren't malicious, but that doesn't mean they're secure. The risk of malice
doesn't just trade against convenience, but against the risk of _error_. Using
somebody else's programming language, compiler, crypto, and so on doesn't just
save time, it avoids the inevitable disasters of having amateurs do those
things.

We live in a world where top secret documents are _regularly_ leaked by people
accidentally making S3 buckets public. I'm not at all convinced that
vulnerable containers are a bigger risk than what the same people would have
put up without containers.

~~~
xorcist
There's this idea that as long as everything is not rigorously proven secure,
we might as well grab binaries off file-sharing sites and run them in
production.

This argument tires me. Every time some smug developer asks me if I have
personally vetted all of gcc, with the implicit understanding that if I
haven't we might as well run some pseudonymous binaries off of docker hub, I
extend the same offer to them: get a piece of malware inside gcc and I will
gladly donate a month's pay to a charity of your choice.

Sometimes I have to follow through the argument with the question of whether
they will do the same if I get malware onto Docker Hub (or npm or whatever),
but the discussion is mostly over by then. Suffice to say, so far nobody has
taken me up on it.

The point is that there's a world of difference between some random guy on
GitHub and institutions such as Red Hat, Debian, or the Linux kernel itself.
Popular packages with well-functioning maintainers on Debian will be
absolutely fine, but you probably shouldn't run some really obscure package
just because some "helpful" guy on Stack Overflow pointed to it, and you
certainly shouldn't base your production on some unheard-of distribution just
because the new hire absolutely swears by it.

~~~
msla
Right. All-or-nothing thinking is the bane of analysis, and philosophy in
general.

------
rlpb
The headline is misleading. It's not "the modern containerization trend" that
is the root cause of this. I expected to read something about container
breakout or the difference between container confinement and VM confinement.

Instead, it turns out that it's the "store model" (Docker Hub in this case)
and malware injection into that store that the article is really talking
about.

The article also seems to talk about misconfigured systems permitting some
level of admin access to everyone. That's not really a new "container" class
of vulnerability though; it's the equivalent of leaving root ssh open with a
weak password or similar.

~~~
ploxiln
Even that isn't quite it - this is not a case of people accidentally
downloading and running malicious containers.

People are leaving kubernetes/docker/whatever open to the world, and attackers
are instructing their servers to download and run these containers.

The complaint is that Docker Hub is hosting the attack code for the attackers.
They could have hosted it on their own custom registry server if they wanted.
(But why bother if you can just host it on Docker Hub.) In the same vein, they
could use GitHub to host their attack code. Or they could put it in an S3
bucket...

------
raesene9
Whilst this article has some decent points, I feel it overblows or
misunderstands others.

It's fair to say that downloading and running images from Docker hub without
establishing trust is a dangerous practice.

Similar in danger to using npm, rubygems, nuget, Maven central etc. In that
there is only limited curation of content.

That attackers have "malicious" images on Docker Hub isn't that relevant
unless they can get people to execute them. If they were typo-squatting or
otherwise trying to trick users into running those images, that would be more
relevant. Instead, what seems to be described is the use of Docker images as
part of attacks on other systems (e.g. Kubernetes installs with poor
security).

The bit around running a malicious container instantly leading to root on the
host is just wrong. With a standard, uncustomized Docker install there are
some risks, but unless you do something like run with --privileged, or mount
the Docker socket inside the container, you're not guaranteed to be able to
get root on the host.

(BTW anyone who reckons this is trivial should give contained.af a look)
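As a crude sketch of that boundary (the list is illustrative, not
exhaustive), a CI job could grep `docker run` invocations for the flag
patterns that undo the default isolation:

```shell
# Flag patterns that undo Docker's default isolation: --privileged grants
# all capabilities and host devices; mounting the control socket hands the
# container the host's Docker daemon; host namespaces remove separation.
RISKY_FLAGS="--privileged /var/run/docker.sock --pid=host --net=host"

for flag in $RISKY_FLAGS; do
    echo "review any 'docker run' using: $flag"
done
```

None of these are the default, which is the point: you have to opt in to
giving a container the host.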

------
nassyweazy
_disclaimer: Security Engineer at Docker_

It is _VERY_ hard to do runtime detection of mining apps for two reasons:

1) It's mostly CPU-intensive work, and only if you know upfront the average
amount of compute your application needs can you make a policy decision about
which image to stop and how to adjust cgroup resources. If you don't, you'll
have to build a reference profile of a trusted image anyway to establish the
expected behavior.

2) There is usually no other "malicious" activity for runtime security tools
to report (mining generally doesn't trigger anything blocked by your
seccomp/LSM/filesystem-integrity profiles).
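On point 1, a blunt complement to profiling is to cap resources up front, so
a hijacked container can't monopolize the host even if mining is never
detected. A sketch with placeholder limits and image name, guarded so it's
inert on machines without a Docker daemon:

```shell
# Hard resource ceilings, enforced via cgroups at `docker run` time.
CPU_LIMIT="1.5"      # at most 1.5 CPUs' worth of time
MEM_LIMIT="512m"     # at most 512 MiB of RAM

if docker info >/dev/null 2>&1; then
    docker run --rm --cpus="$CPU_LIMIT" --memory="$MEM_LIMIT" some/image || true
fi
```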

------- How to protect against this -------

The best protection is at the build-chain level. There are tools out there to
"bless" and/or verify an image's content/creator. Notary and Docker Trust (a
higher-level abstraction over Notary, built into Docker) are two tools that
give you:

  - key management
  - signer management
  - trust management

over Docker images.

It is crucial for people out there to make sure they only deploy trusted
images and make decisions on what to run (CI or Prod) based on signature
integrity of trusted images.
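A minimal sketch of what opting in looks like (the image name is a
placeholder, and the docker calls are guarded so the snippet is inert on
machines without a Docker daemon):

```shell
# With content trust on, pull/run refuse image tags that lack a
# valid signature from a trusted signer.
export DOCKER_CONTENT_TRUST=1

if docker info >/dev/null 2>&1; then
    docker pull some/unsigned-image:latest || true   # rejected if unsigned
    docker trust inspect --pretty some/unsigned-image:latest || true
fi
```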

For a quick tutorial on Docker Trust and Notary, check this out:
[https://github.com/dockersamples/dcus-2018-hol/tree/master/s...](https://github.com/dockersamples/dcus-2018-hol/tree/master/security/trust)

Stay safe and do not run unsigned/untrusted images!

~~~
chatmasta
Isn’t the real problem mentioned in this article that people are running their
docker daemon unauthenticated on public endpoints? That’s not the default
behavior right? So people have actually gone out of their way to make
themselves insecure.

Look at the names of the containers in the article. Nobody is pulling these
themselves. The problem is attackers compromising docker hosts and pulling
arbitrary containers.

What safeguards does docker provide against exposing the daemon publicly,
accidentally or otherwise?

~~~
nassyweazy
The daemon listens by default on a non-networked Unix socket, so if you're
exposing it on the network you're already outside the default behavior (which
is totally fine, but it means you've presumably read the instructions/docs on
how to do so, and our docs page on this matter also includes security
guidelines for enforcing TLS verification/whitelisting on the daemon side).

There is currently no "superduper-safe-mode" that enforces `--tlsverify` at
the daemon level to prevent a lack of client verification/whitelisting. This
can be discussed, the issue obviously being the UX (it means getting proper
certs, specifying them in the config, etc.).
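For illustration, a sketch of the daemon-side setup those docs describe.
Certificate generation is not shown, the cert paths are placeholders, and the
file is written locally here rather than to /etc/docker:

```shell
# /etc/docker/daemon.json (sketch): keep the local Unix socket, and only
# accept TCP clients whose certificates are signed by our CA (tlsverify).
cat > daemon.json.example <<'EOF'
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem"
}
EOF
```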

------
johnchristopher
Containers were cool. No config, single deploy, etc.

But now I have to configure many YAML files, launch configs, and build
scripts that do god knows what to databases, plus other config scripts hidden
inside the container.

Some projects are really cool and easy to install, while others are just
piles of hacks.

~~~
majewsky
Stateless services are all nice and dandy, and all the marketing you'll see is
about stateless services. The trouble starts when you:

- run a stateful service that's not explicitly designed to work well in a
distributed fashion.

- have services that need to be started in a precise order.

------
nemanjaboric
> By default, docker containers run as root which causes a breakout risk. If
> your container becomes compromised as root it has root access to the host.

Is this really true, unless you start the container with `--privileged`?
Incidentally, I just read a plan for better security defaults to avoid
`--privileged` (which is not the default, AFAIK) on LWN:
[https://lwn.net/Articles/755238/](https://lwn.net/Articles/755238/)

~~~
blattimwind
It's about UID mappings between namespaces. When you are UID=0 in namespace X
and manage to get out of namespace X, then you are still UID=0 outside X, so
you're root.

It's possible to remap UIDs such that root in namespace X has UID=12340, and
when root gets out of X, then he's nobody.
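Docker exposes exactly this remapping as `userns-remap`. A sketch (written
locally here rather than to /etc/docker; the subordinate UID range is an
example value):

```shell
# daemon.json fragment: "default" makes Docker create a "dockremap" user
# and map container UID 0 to the first UID of that user's subordinate
# range (e.g. an /etc/subuid line like "dockremap:231072:65536"), so
# root inside the container is an unprivileged UID on the host.
cat > userns-daemon.json.example <<'EOF'
{
  "userns-remap": "default"
}
EOF
```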

~~~
raesene9
It's not quite as straightforward as just UID mapping. Assuming a standard
install of Docker, the container processes only have a limited set of
capabilities, have an AppArmor/SELinux profile applied, and have a seccomp
filter applied as well, which makes it harder to break out to the underlying
host.

------
chris408
Maybe this isn’t clear, but something I read a while back on this topic:

Running a container from dockerhub is basically the same as curl piping into
bash.

~~~
theamk
What? No.

Curl piping into bash will trivially steal all of your data at once.

Running a container from dockerhub is much safer, provided you do not give it
privileges using --privileged or bind-mounting system files like docker
control socket.

If your system is up to date and there are no docker 0-days active, the worst
"docker run --rm -it RANDOM-CONTAINER" can do is to use too much resources --
your local secrets would be safe.

~~~
orthecreedence
...unless said docker container is running an app server that has direct
access to your database.

------
himom
If a container system were formally-proven to provide all of these:

- hard limits, prioritization, and accounting metrics for all resources,
incl.: IO, storage, compute, memory, network, kernel structures

- provable isolation / no side-channel leaks

- SELinux

- live migration of processes and storage to different hosts, suspend/resume

- Type 4 hypervisor containers for different kernels, OSes, etc., configured
and managed seamlessly with the same API

Then, and only then, can the jumble and complexity of containers running on
hypervisors go away and become more like SmartOS, with the ability to run
bare-metal without losing the devops flexibility of running T4 hypervisors
under everything.

~~~
yjftsjthsd-h
Okay, quick search failed me: what is a type 4 hypervisor?

------
theptip
One interesting point here, that Matt Levine has pointed out, is that
previously, upon penetrating a company's network, hackers would do malicious
user-harming things like stealing credit card or personal information.

Now, (some) criminals are just running mining software.

As a company, your bottom line might care more about the latter than the
former, but as a consumer, this is great news; if hackers lose interest in our
data and start stealing and re-selling compute cycles, then the chances of
catastrophic identity theft could go down dramatically. Sure, prices for web
services might go up a little, but they'll do so across the board.

------
drblast
I feel like the guys in Office Space here, but I'll buy some subscriptions to
Vibe if somebody can explain theoretically how you can convert a large sum of
ill-gotten cryptocurrency to usable money without getting immediately caught?

From the article it sounds like the attackers' wallet id is hard-coded into
the malware. I'm not familiar with monero, but aren't all transactions in
cryptocurrency public and permanent?

Wouldn't it be obvious from following that who is ultimately benefiting from
this?

~~~
sschueller
Monero has the "laundry" part built in.

I also don't think these criminals necessarily convert to fiat to acquire
what they need/want.

~~~
jandrese
When you're talking about millions of dollars of bitcoin or other altcoin
stolen...well, it's hard to smoke that much crack.

~~~
MrLeap
You can buy gold with crypto.

~~~
jandrese
Millions of dollars' worth? Or is it some penny-ante exchange like most
Bitcoin-to-fiat services? Or will it have strong know-your-customer rules
that make it problematic for these actual criminals?

~~~
LyndsySimon
I don't see why not. Transactions worth tens of thousands of dollars are
relatively common as I understand it.

------
squeaky-clean
I think I'm missing something about this attack, how does one actually get
attacked? From the article it sounds like a combination of bad firewall
configuration and an exposed Docker daemon allows the attacker to install any
image they want onto your host?

Why are so many commenters here saying things about running untrusted Docker
containers then? From my interpretation, the people affected didn't go to
dockerhub.com/docker123321/maliciousCopyCatImage and run it. Instead it was
injected into their otherwise-safe containers and they had no idea this ever
happened until things started acting weird. Am I missing something?

~~~
ericsoderstrom
I also had trouble following this article.

The Tesla vulnerability at the top seems to be simply a misconfigured
kubernetes cluster, through which the attackers were able to get at AWS
credentials. But then all of the subsequent examples are malicious images
hosted on dockerhub. Were malicious images involved in the Tesla exploit in
some way? What's the connection?

------
Bucephalus355
Here is one thing I have trouble understanding, despite working in
cybersecurity.

If I run a VM, I have to harden that VM. If I run a Docker container on top of
that, I now have to harden the Docker container as well. This is more work,
and a greater “surface area” of attack.

Forgive my ignorance, but do most ppl run Docker through managed PaaS services
now, so that they don't have to worry about the double work of hardening the
VM? That's the only way I see it making sense long-term, where the cloud
providers manage the physical infrastructure like they do now _as well as_ the
EC2 / IaaS layer.

~~~
cookiecaper
Most people just assume that Docker is a magic box that solves all of their
problems. They build Dockerfiles that depend on a grotesque cascade of hardly-
vetted parent images, because then their Dockerfile is "only three lines! Ha!
Take that, old guys!"

If people had been asking any of the basic system engineering and security
questions, we wouldn't be talking about this, or to be frank, most of the
stuff that passes for "DevOps" these days.

------
lbenes
Take WordPress, for example. If you've never before set up a LAMP stack to
run WordPress, you'll probably be running a WordPress container. And if you
don't regularly check the log files and monitor system resources, then you're
asking for these problems.

I wouldn't run a server with a pirated version of Windows, and I wouldn't run
a dodgy container from an untrustworthy source. Yet people do. I wouldn't
blame MS for a pirated Windows that came with malware, just as I wouldn't
blame containers for this. It's all about trust.

------
tbronchain
Although I understand that reading the source of all the software we run
would take way too much time to be reasonable, running a quick docker inspect
/ docker history on all the images we use is, in addition to being
interesting, probably a good first layer of protection.
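Concretely, those quick manual checks might look like this (the image name is
a placeholder, and the calls are guarded so the snippet is inert without a
Docker daemon):

```shell
IMAGE="some/possibly-untrusted:latest"   # placeholder

if docker info >/dev/null 2>&1; then
    # Every layer-creating command, untruncated: a crude way to spot
    # curl|sh installs, surprise binaries, or odd entrypoints.
    docker history --no-trunc "$IMAGE" || true

    # The image config: entrypoint, cmd, env vars, exposed ports.
    docker inspect --format '{{json .Config}}' "$IMAGE" || true
fi
```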

Having a tool do this for us - i.e. a sort of Docker anti-malware that would
inspect images and containers for us, without necessarily going through all
the security stuff the official tool checks - would also be very handy.

~~~
jacques_chester
There is an entire genre of such tools now. Some names to look up are
BlackDuck Hub[0] (commercial) and CoreOS Clair[1] (opensource).

At Pivotal we use both -- BlackDuck is built into a number of our pipelines
and Clair is shipped as part of PKS (in the Harbor registry[2] contributed by
VMware).

A lot of our customers also use other security scanning tools that have
expanded to include container scanning.

[0]
[https://www.blackducksoftware.com/products/hub](https://www.blackducksoftware.com/products/hub)

[1] [https://github.com/coreos/clair](https://github.com/coreos/clair)

[2] [https://github.com/vmware/harbor](https://github.com/vmware/harbor)

~~~
tbronchain
I didn't know them. Thanks for bringing them up!

------
jpzisme
You can easily avoid this by only using Official Images and Certified Content
from the Docker Store: [https://store.docker.com/](https://store.docker.com/)

------
godelmachine
I would like to see more research of this sort.

Would someone kindly point me towards it?

Thanks.

