IMHO these are the kind of simplifications that just make people more confused about a complex topic like this in the long run.
And in addition, the last points also apply fully to micro VMs, yet they're shown as a great benefit of Docker. I can't like it, unfortunately.
The Docker Client and the Docker Daemon (together called the Docker Engine) run on the Host OS.
Each container shares the Host OS kernel.
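You can see that shared kernel directly (a quick sketch; any image works, ubuntu is just an example): the kernel version reported inside a container is the host's.

    # on the host
    uname -r
    # inside a container: prints the same kernel version as the host
    docker run --rm ubuntu uname -r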
Now a container will have an entire copy of the basic OS it builds from (ubuntu, alpine, etc.), so its dependencies are self-contained and it should run the same anywhere. However, those dependencies are usually installed with apt-get or yum or apk commands in the Dockerfile, so if you don't do rebuilds and a security issue turns up in a dependency, it can be difficult to detect.
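For instance, a minimal Dockerfile along these lines (openssl and myapp are just placeholders) freezes whatever package versions the repos serve at build time into the image:

    FROM ubuntu:22.04
    # the package version pulled here is baked into the image until you rebuild
    RUN apt-get update && apt-get install -y openssl && rm -rf /var/lib/apt/lists/*
    COPY myapp /usr/local/bin/myapp
    CMD ["myapp"]

If a CVE turns up in that package next month, nothing in the already-built image changes or warns you unless you rebuild or scan it.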
Security problems are still isolated to that container, but that container shares a kernel. If someone can get root inside your container and use a cgroup exploit, they could break out of the container. It's unlikely, and the layering does provide protection, but it's something to be aware of.
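One common, if partial, mitigation (a sketch; the uid is arbitrary) is simply not running the container's process as root in the first place:

    # run the containerized process as an unprivileged user
    # id prints an unprivileged uid/gid rather than root
    docker run --rm --user 1000:1000 ubuntu id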
A process in the container runs at the same speed as it would running natively, outside the container.
This is the first time I've been anywhere close to understanding the benefits of Docker (I've been watching and not understanding for years!). Thanks for your comment
Of course, the program would likely fail to start because things would be in unexpected paths, it'd see unexpected versions of libraries, etc. So now imagine that there's a Linux feature that allows you to remap filesystem paths for this process and all its children. This is called a mount namespace, but the old school feature called chroot achieves similar goals. If you then add on a few more isolation features, you have a modern container.
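You can poke at those primitives without Docker at all (a rough sketch; you need root, and /srv/alpine-rootfs is a hypothetical directory holding some Linux filesystem tree, e.g. one you untarred from docker export):

    # give this shell its own mount namespace: mounts it makes are invisible to the host
    sudo unshare --mount /bin/bash
    # remap "/" for a process (and its children) to another filesystem tree
    sudo chroot /srv/alpine-rootfs /bin/sh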
It's possible to do this because both of these systems are Linux systems, and the Linux kernel's program-facing surface area maintains compatibility across a wide variety of versions, so programs don't care very much about the underlying kernel version. With containers, the same kernel is just presenting a different view of the system to different applications.
It's not running systemd, it's just running that one process you asked for. It'd be as if you booted the Ubuntu kernel and, instead of running systemd to start all sorts of services, you ran the single process you wanted. You're running on "ubuntu", but not really, i.e. the filesystem is provided by Ubuntu, but you're not using the normal Ubuntu environment with all the services that would start at boot.
You can also have docker containers run systemd if you want, but that's a much more advanced use case.
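You can see the single-process point directly (alpine is just a small example image): the only thing running inside is the command you gave it.

    # ps is the only process in the container, running as PID 1
    docker run --rm alpine ps aux

Compare that with the dozens of services systemd would have started on a normal boot.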
You're trying to containerize a simple football game. To run the game you need to provide a playing field that meets all the requirements of the rules: those are your dependencies.
You could go through all the rules one by one and write down what is required: you need a line here of a certain length, another there that has to be at least x far away but no more than y, and so forth. After the tedious task of collecting all your dependencies you give it a test run and it all looks good, but then, boom, the game suddenly stops. Whoops, you forgot a dependency and didn't provide the goals, or added the wrong ones.
Alternatively you can take the easy route. You know they play regulation football in Ubuntu stadium, so you just take a copy of that. It comes with a lot of overhead - you don't need the seating tiers, concession stands or parking lots to simply play football without all the faff, but it's included because league football needs it.
Edit: When you do take the base image of Ubuntu stadium and docker run it, the default is set to play a game of football, because that's the most likely thing you want to do there. It doesn't activate the media coverage or the stadium announcer, doesn't sell tickets or fill up the parking lot, because none of that is required to play. You could if you wanted to, but you could also tear it all out and the default game would still run.
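Outside the analogy, that default game is just the image's default command. From memory of the official image (so treat the exact default as an assumption), ubuntu's default is an interactive bash shell, and you can override it with whatever you actually want to run:

    # no command given: the image's default CMD runs (for ubuntu, a bash shell)
    docker run --rm -it ubuntu
    # override the default with your own command
    docker run --rm ubuntu echo "kickoff"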