
Steam in Docker - arno1
https://hub.docker.com/r/andrey01/steam
======
fasterthanlime
Interesting approach! I work on the itch.io app (its functionality overlaps
with the Steam client somewhat, but with a different content offering /
different way of running things) and we do address both concerns:

    
    
      * app isn't tied to / doesn't assume a Debian-ish distribution (we ship .deb, .rpm, a PKGBUILD, and a simple binary .tar.xz)
      * app uses firejail on Linux (sandbox-exec on macOS, different user on Windows) to "set up more fences around" games you download from the internet.
    

There's a bunch more features we want to add to the app (live video capture,
see itchio/capsule on github, synced collections, etc.) — but isolating
"downloaded apps" from the rest of the system seemed like a sensible
prerequisite on the road to doing that.
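
A minimal sketch of the firejail fence from the second bullet above (this is
not the app's actual firejail profile, just an illustration of the idea; the
binary name is a placeholder):

```shell
# Launch an untrusted, downloaded game with a throwaway private home
# directory and no network access.
firejail --private --net=none ./downloaded-game
```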

I don't want to spam links, but if you're interested in our approach, you can
probably search "itch.io sandbox" with your favorite search engine and stumble
upon it :)

~~~
arno1
Thanks @fasterthanlime !

Is there something that firejail does better than Docker?

I can see that firejail also uses namespaces and seccomp-bpf.

~~~
fasterthanlime
I suspect that this question was covered in the HN entry for firejail earlier
today:
[https://news.ycombinator.com/item?id=12239840](https://news.ycombinator.com/item?id=12239840)
\- but in our case, it's just that it's lighter.

If I'm not mistaken, containers come with their own userland; if you want a
useful graphical container you're in for a few hundred megabytes of
dependencies, whereas sandboxing approaches (firejail,
projectatomic/bubblewrap, used by Flatpak for example) just try to limit
what a process in the same user space has access to.

I wanted a solution that was low-overhead enough that it was a no-brainer for
users to turn it on. However, it's not perfect: our sandbox policy could use
tightening (as long as it doesn't break too much stuff), and having an
additional SUID binary around is definitely something to look out for.

I'm hoping that more interest gathers around sandboxes and that they become
more mainstream in Linux ecosystems. "Trusting package maintainers" only goes
so far, and doesn't really account for third-parties shipping binary packages!

------
cryptarch
Is this meant to make uninstalling Steam easier than it is now?

Or is this an exercise in getting GUI applications with slightly exotic
features (GPU access) to run?

I'd like to understand why this was made but it isn't described in the usage
instructions.

~~~
arno1
I think it should be pretty obvious why people put things into containers :-)

A few main points, though, which pushed me to make this Docker container:

1\. I want to set up more fences when running code I don't/can't trust;

2\. I don't want to spend time figuring out how to install Steam (and which
dependencies it needs) on a non-Debian (or non-SteamOS) based distro;

3\. I like cleanliness: I can erase Steam and all its dependencies in a matter
of seconds;

4\. Like you said, it was an interesting exercise and it still needs some
polishing :-)

And a few pros from my PoV:

\- I can have Steam on my Ubuntu/openSUSE/[put any other distro I will want to
use] in the short time it takes Docker to download this Steam container;

\- Steam is meant to run on a Debian-based (SteamOS) distro, but that is not a
problem anymore, since it is in a container now.

~~~
Gonzih
So you don't trust application code, but you trust the image maker's code?

~~~
arno1
This thread is not about the extent to which I trust things.

But security-wise, running it in a container is better than running it
without isolation, IMHO.

And of course, no one is forcing you to use an image built by a third party;
the Dockerfile is open, so just build it yourself ;-)

~~~
kordless
There is very little reason to believe that privilege escalation is not
possible within a given virtualized environment, at least with current
technologies.

Containers are great for development and production on your own
infrastructure, or shared infrastructure like GCE or AWS. Security can be had
from doing inspected builds, self signing, etc.

For consumers, however, it's a completely different ballgame.

~~~
MichaelBurge
All those Docker commands usually run as root or something equivalent to root.
So a container breakout could lead to root on the host system.

I think kordless is claiming that using Docker here could increase the
severity of an attack; otherwise it doesn't seem like putting up another
barrier could hurt security, even if it is later broken.

~~~
kordless
My claim applies to all virtualized environments, including containers and
VMs, not just Docker. Microkernels have a decent shot at keeping security
issues at bay, but even they can't keep attackers out forever.

Everything falls to hacking eventually. That's the nature of it, at least till
now.

I would note that Docker is primarily a tool for developers and operations
folks who are also the authors of the software being run. Docker itself is not
the risk here, but using it for some use cases may very well be.

------
notthemessiah
I made a separate user for using Steam (and other games), and it involved a
little bit of routing when it comes to X11 and PulseAudio. My reason for doing
so was primarily because of how games create many dotfiles, and I wanted my
home folder clean.

~~~
AckSyn
I did something similar by installing Steam to a chroot environment.

"Those who do not understand UNIX are condemned to reinvent it, poorly." \--
Henry Spencer, programmer

This goes doubly for "containers" not understanding chroot/jails.

~~~
arno1
Well, there is a distinct difference between a traditional chroot and Linux
control groups and namespaces.

If chroot alone were enough, no one would be investing their time in cgroups,
namespaces, LXC, Docker, etc... :-)

------
voltagex_
This is hardcoding driver versions:
[https://github.com/arno01/steam/blob/master/docker-
compose.y...](https://github.com/arno01/steam/blob/master/docker-compose.yml)

Is there a better way?

~~~
arno1
Obviously I haven't come up with a better idea. :-) Suggestions/PRs are
greatly welcome!

~~~
flx42_
At NVIDIA we maintain this utility: [https://github.com/NVIDIA/nvidia-
docker](https://github.com/NVIDIA/nvidia-docker)

It automatically discovers the devices and the right driver files on the host.

The main goal is compute (CUDA), but we also demonstrated how to run TF2 on
SteamOS during our DockerCon 16 OpenForum presentation.

Nice job! :)

~~~
voltagex_
I'm really not up on how this all works, but isn't
[https://github.com/NVIDIA/nvidia-
docker/blob/master/ubuntu-1...](https://github.com/NVIDIA/nvidia-
docker/blob/master/ubuntu-16.04/cuda/8.0/runtime/Dockerfile#L17) hardcoding
driver versions in a different way?

~~~
flx42_
No, this is the CUDA toolkit, it doesn't depend on the driver version. You can
compile CUDA code without having a GPU (which is the case during a "docker
build").

Edit: in other words, your Docker image doesn't depend on a specific driver
version and can be run on any machine with sufficient drivers. Driver files
are mounted as a volume when starting the container.
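
As a quick illustration of how that is used (going from the project's README
at the time; requires the nvidia-docker wrapper and an NVIDIA GPU on the
host):

```shell
# The wrapper discovers the host's GPUs and driver files and mounts them
# into the container as a volume; the image itself ships no driver.
nvidia-docker run --rm nvidia/cuda nvidia-smi
```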

------
ingenter
I'm running Steam in a systemd container. The main issues were sound and
notifications: I don't understand how PulseAudio works, so I had to share some
system directories with the guest system to get sound working. Notifications
were solved by sharing D-Bus. The GPU was shared by exposing a single
directory in /dev with the correct permissions.

~~~
arno1
@ingenter please refer to the docker-compose.yml file of the source
repository.

To make PulseAudio work in a container, you basically want to pass these
volumes from the host to the container:

\- /etc/localtime:/etc/localtime:ro

\- /etc/machine-id:/etc/machine-id:ro

\- $XDG_RUNTIME_DIR/pulse:/run/user/1000/pulse

And then, this environment variable:

PULSE_SERVER=unix:$XDG_RUNTIME_DIR/pulse/native
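
Put together as a docker-compose fragment, that looks roughly like this
(Compose v2 syntax; the service name here is illustrative, the repository's
docker-compose.yml is authoritative):

```yaml
services:
  steam:
    environment:
      - PULSE_SERVER=unix:$XDG_RUNTIME_DIR/pulse/native
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/machine-id:/etc/machine-id:ro
      - $XDG_RUNTIME_DIR/pulse:/run/user/1000/pulse
```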

------
castratikron
Why would I want to do this?

~~~
tormeh
I don't know much about Docker, but I know Steam for Linux is built for
Ubuntu. Maybe Docker helps for distros like Fedora? The benefit is probably
highest in the case of NixOS, because Steam changes stuff behind the scenes
and so isn't really compatible with declarative package managers.

~~~
vertex-four
NixOS supports Steam wonderfully - it creates a half-container (alternate
filesystem root, shared networking) which looks like what Steam expects, has
full access to graphics drivers and your home directory, and can be executed
like any other app (run "steam"). The full Docker infrastructure isn't really
necessary.

------
oDot
Steam is exactly the kind of software that should be distributed using
Flatpak[0].

[0] [http://flatpak.org/](http://flatpak.org/)

~~~
hobarrera
Sounds like an excellent plan to end up with outdated libraries full of
security holes.

------
chrisper
Why use docker instead of something like flatpak or snap?

~~~
snuxoll
Personally I'd love a Steam flatpak, but since the current images are based on
Fedora AFAICT (no surprise there, GNOME/FreeDesktop project) it would take a
lot more work, since you'd have to create a Debian/Ubuntu SDK to base it on.

------
em3rgent0rdr
I use this on an otherwise free-software-only system (Parabola/Trisquel).
Since the only proprietary software I ever use is games, I try to isolate
proprietary code as much as possible (without the performance loss that comes
with VMs). This sort of goes with arno1's point 1 about setting up more
fences when running code you don't/can't trust.

------
mastazi
How is the GUI accessed? X11 socket sharing? Because I guess that for gaming
both X11 over SSH and VNC would not perform very well...

~~~
arno1
The same way every local application accesses it: via the Unix domain socket
(the /tmp/.X11-unix:/tmp/.X11-unix volume, with DISPLAY=unix$DISPLAY) :-)

This will give you a frame rate identical to your host, so there is no
overhead running your 3D apps in the container.
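
In docker run terms, the sketch below shows the kind of flags involved
(illustrative only; the --device path varies by GPU driver, and the host may
need to allow local X connections, e.g. via xhost):

```shell
# Share the host's X11 socket, point clients at it via DISPLAY, and
# expose the GPU device nodes.
docker run --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=unix$DISPLAY \
  --device /dev/dri \
  andrey01/steam
```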

~~~
Blaiz0r
Is there any overhead at all in using Steam (or anything) through Docker?

I'm not very familiar with using Docker.

~~~
arno1
The overhead is negligible (close to 0). ;-)

------
shmerl
I proposed the same idea for GOG and their Linux games a few years ago. At
that time they didn't get the point.

~~~
ekianjo
They still don't, since they ask their Linux users to install tons of
libraries on their own. Not that they really care about Linux anyway... (still
no GOG Galaxy client...)

~~~
shmerl
Libraries are OK to install, and you can do the same in the Docker container.
What Docker adds is better isolation from the rest of the system. You could do
it yourself with cgroups / LXC, but Docker gives higher-level management.

------
marcosnils
Been there, done that. Here's a video where I run Counter-Strike through Steam
in a Docker container.

[https://youtu.be/ZHWsR8TnKsw?t=801](https://youtu.be/ZHWsR8TnKsw?t=801)

PS: the audio is in Spanish.

~~~
arno1
I bet there are dozens or maybe even hundreds of people who run CS in a
container. What makes the difference, I believe, is sharing reproducible
results. :-)

~~~
marcosnils
The video I posted is a complete tech talk about how I achieved it :)

~~~
arno1
Cool :) Pity it isn't in English.. :)

------
rlpb
How does persistent state work with this? For example, what happens to my
saved games? What if I update the image (for example to pick up a Steam
update)? What will happen to those saved games?

~~~
nostrebored
Mount volumes that correspond to save locations. The volume should be
persistent so even if the image changes the data will remain.

~~~
rlpb
Aren't save locations different on a per-game basis? So does this need
configuration? If not, then how does it work without user intervention?

~~~
nostrebored
If you don't want to do something smarter: by the nature of the file system
Steam rests on, there must exist some folder such that every save location is
a (possibly nested) subfolder of it. You'll use extra space, but you can
definitely do that easily.

Alternatively, you can keep an environment variable GAME_SAVE_MOUNTS to which
you append a -v flag for each desired save mount. You can then start your
container with $GAME_SAVE_MOUNTS and have everything work out of the box.
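
A sketch of that pattern in shell (the two directories here are made-up
examples, not real per-game save paths):

```shell
# Build one -v host:container flag per save directory.
GAME_SAVE_MOUNTS=""
for dir in "$HOME/.local/share/mygame" "$HOME/.config/othergame"; do
  GAME_SAVE_MOUNTS="$GAME_SAVE_MOUNTS -v $dir:$dir"
done

# GAME_SAVE_MOUNTS can now be spliced into the docker run command line:
echo "docker run$GAME_SAVE_MOUNTS andrey01/steam"
```

Note that expanding $GAME_SAVE_MOUNTS unquoted relies on word splitting, so
this breaks if a save path contains spaces.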

------
skrowl
How is the performance on this compared to a bare metal install?

IE how many FPS do you get with a native Steam install vs Dockerized Steam on
the same hardware?

~~~
arno1
It's already been discussed in this thread. The performance impact should be
negligible. I get ~110-150 FPS with my nVidia 560 Ti in CS:GO at 1920x1080.

I haven't done precise testing to measure any increase/decrease, but feel free
to test it. ;)

------
Hurtak
How big is the % FPS decrease if you run the game inside container, compared
to just running it regularly?

~~~
arno1
I haven't spotted the decrease. On the opposite, even some increase :-) (Or
was that just a placebo effect? :) )

There actually shouldn't be any significant decrease since the Docker's
overhead is negligible.

------
lhlmgr
Could this improve or worsen the VAC mechanism?

~~~
arno1
It's absolutely irrelevant since Docker (cgroups) are just the Linux kernel
abstraction which helps to isolate resources of the processes. And with this
image the general idea is that it comes like a package, with "just take & run"
approach, eliminating the need to depend on the specific Debian-based Linux
distro (which is required by Steam and provided with this Docker image).

Then as discussed in this thread, it gives few security advantages, control
benefits since those isolated resources are controllable.

~~~
lhlmgr
thanks a lot for this clarification!

