This seems to still be very much an AWS/Amazon project with no clear path to becoming its own independent thing. For example, you want vulnerability scanning on the OS? Well you can use an Amazon product for that, otherwise *shrug* [1]. So I guess as long as you plan to run Bottlerocket in AWS, you're fine.
I wish the Bottlerocket team would do 1 of 2 things. Either own up that this is just an AWS project, or start to solve for things like this and actually be a product that "runs in the cloud or in your datacenter" as they suggest on their website.
To be fair, I think vulnerability management ("VM") on the OS for Flatcar / Bottlerocket / CoreOS is not a requirement in the same way as on RHEL etc.
Do you want to know if you are patched? Are you running the latest version? If so, you have all the available patches.
I appreciate this can cause difficulties in some regulated domains because there's a "vm" box that needs to be ticked on the compliance worksheet.
Most of the reason we need VM on a "traditional" OS is to handle the fact that they have a very broad configuration space and their software composition can be - and often is - pretty arbitrary (incorporating stuff from a ton of sources / vendors and those versions can move independently).
But that's not how you're supposed to use a container OS.
If you do "extra work" to discover vulnerabilities in "latest", you are not really doing the job of a system owner (whose job is to apply patches from upstream in a timely fashion), you are doing the work of a security researcher.
It's not like something is stopping one from doing a vuln scan, right? Like, there's something that SSM's in (or uses the admin container) and then runs the scan. Couldn't you just do the same thing?
Genuine questions, I don't know if this is the case or not.
I just wrote a post on this. We have an eBPF + SBOM based security tool and it runs great due to hooking the kernel directly via Kube DaemonSet: https://edgebit.io/blog/base-os-vulnerabilities/
tl;dr: Amazon prioritizes patching really well, fixing real issues first
Indeed, but it's just an example. Imagine it said "For example, you want Feature X on the OS? Well you can use an Amazon product for that, otherwise shrug" instead if it makes it easier.
Bottlerocket does not offer FIPS mode like most other enterprise *nix distributions.
Just to save anybody the trouble who needs FIPS-approved encryption for host OSes that you use at work for various compliance programs. This makes Bottlerocket a non-starter for us. A very active issue has been open for over 2 years on this and the dev team doesn't seem to be convinced that this is important. We even communicated with the dev team through our dedicated AWS reps and they have no interest in adding this.
Disclosure: I work for Amazon. I’m also the principal engineer for Bottlerocket.
FIPS support continues to be the top customer ask by a wide margin. Unfortunately the timing here is not kind for a new distro with no previous FIPS offering. New FIPS 140-2 certifications are no longer available, and new FIPS 140-3 certifications have to make it through a lengthy queue as the entire industry switches over.
If this were something the dev team could just power through, I assure you it would have happened by now. I apologize for giving the impression that it’s not important. It is, but that doesn’t help the timeline in this case.
Having been around a bunch of former-government people and bumping into FIPS myself a few times (like yubikeys) and reading about it, that's also been my sense, but it's nice to see a formal writeup with examples.
It makes no sense if your goal is "have the most secure system feasible under your resource constraints and usage requirements", which is a reasonable goal.
However, the whole FIPS and USG compliance mindset in general is not that; the goal instead is "be aware of ways in which your system is known to not be secure". The idea that a known flaw is better than a lesser-known fix is infuriating to devs, but from a business standpoint it makes some sense.
That might make sense, but I’ve never seen it with FIPS.
The only changes I’ve seen it force were ones that weaken or remove defenses against known attacks.
I’ve never seen it require additional standard defenses, or identify and propose fixes against attacks that were not already considered and addressed by the existing system.
This looks very interesting but as other commenters pointed out, the path to running it yourself seems to be obscured. Even the GitHub page is listed only on the main page.
Alpine isn't immutable, meaning it opens the door to more user error and security issues by allowing changes to its system partition.
We run immutable container hosts in production because we want to minimize the level of admin interaction. Basically it goes like this. Terraform idempotent setup of VMs with immutable Linux server OS, running containers.
We even disabled login on these in production, only keep it enabled in staging. All changes are tested in staging. If anything happens in prod, instead of logging in and making manual changes we just revert to an earlier state.
There is less need to configure files and services on the OS when everything runs in a container. You set it up once and start the VM.
On one hand, an ncurses tool to install to a disk seems appropriate. On the other hand, the number of times one of these images would be configured for a company is probably pretty small.
I’ll have to spend a bit more time, but this seems like a nice option for orgs that want to run on-prem (e.g. not in cloud), and have a low maintenance container host.
Is this available as an AMI I can use when launching an EC2 instance? If so, how do I specify which container or containers it should run? Do I paste a docker-compose.yaml file into the User Data field in the EC2 launch wizard? Do I send configuration to a certain reserved port with a specially authed HTTP POST? About the only thing I know atm is that I can’t use ssh until a container is deployed.
Yes, it's listed in the Community AMIs section. It's more common to use this alongside Elastic Kubernetes Service or other similar AWS services, though, where you can opt into Bottlerocket as the host OS during configuration.
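For what it's worth, AWS also publishes the current Bottlerocket AMI IDs as public SSM parameters, so you don't have to dig through the Community AMIs list by hand. A sketch (assumes the aws-k8s-1.28 variant on x86_64 in us-west-2 and working AWS credentials; adjust the path for your variant, arch, and region):

```shell
# Look up the latest Bottlerocket AMI ID via the public SSM parameter.
# The variant segment (aws-k8s-1.28) and region here are assumptions --
# substitute your own Kubernetes version / ECS variant and region.
aws ssm get-parameter \
  --region us-west-2 \
  --name "/aws/service/bottlerocket/aws-k8s-1.28/x86_64/latest/image_id" \
  --query "Parameter.Value" --output text
```

You can then feed that AMI ID straight into `aws ec2 run-instances` or your Terraform config.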
OK cool, thanks for that information, but I do wish that someone would explain the mechanism for deploying this OS. Like, if it's part of an ECS/EKS scheme I'll tolerate some magic, but at the end of the day I'm a curious person and I'd like to know the mechanics behind how my software is getting deployed. In general, if I personally can't deploy something to EC2, I feel weird about trusting higher-level abstractions to do something I don't know how to do myself.
Well, the link I provided references the Bottlerocket docs, which explain the control container and the admin container, and also how you can configure Bottlerocket via the User Data field when launching it as an AMI. All the information appears to be in the docs.
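To make it concrete: the User Data is TOML, not cloud-init. A minimal sketch of what you'd paste in when launching the AMI (the settings keys are from the Bottlerocket settings docs; the cluster values are placeholders you'd fill in from your own EKS cluster):

```toml
# Bottlerocket user data is TOML, not a shell script or cloud-init YAML.

[settings.host-containers.admin]
# Enable the admin container up front so a host shell is reachable later.
enabled = true

[settings.kubernetes]
# Placeholder values -- pull the real ones from your EKS cluster.
api-server = "https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com"
cluster-certificate = "BASE64-ENCODED-CLUSTER-CA"
cluster-name = "my-cluster"
```

For plain EC2 (no orchestrator) you'd skip the `settings.kubernetes` table and just use the host-container settings.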
This seems really useful for stuff like AMD SEV-SNP, where we want a measurement of the (kernel + initrd + arguments) to guarantee certain behavior from the machine. Ideally, we could use this as the container hypervisor, and have it produce attestations that bind to the hashes of the running containers. This relies on not having container escapes; not sure what the state of the art on that is right now.
Website says that the OS does not have a shell. I cannot imagine a useful docker container without at least one shell script inside. So, if there is no shell, doesn't it mean that Bottlerocket is generally unusable except niche scenarios?
The docker containers can have shell scripts inside. The host machine doesn't have a shell. You can bring a docker container with a shell, and run it privileged, to have a shell on the host machine.
You can also launch an admin container and type `sudo sheltie` in it to get a root shell on the bottlerocket host OS if you need to debug things.
We've been using Bottlerocket together with its update operator on K8s for about a year now and we are really happy with it as it solves patch management by swapping out an immutable host OS image instead.
Containers which contain shell scripts also contain the shell itself. It is not typical for the host machine's shell binary to be made available to containers running on the host.
The idea with Bottlerocket is that the host itself does not have a direct shell nor a way to access it via SSH or any other method. Instead this responsibility is delegated to the admin container which is where you would actually connect to via SSM/SSH. From here if you needed a root shell you would use the `sheltie` utility to do so.
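For anyone curious, the flow looks roughly like this (a session sketch, assuming the admin container is enabled and you've reached the control container via SSM; the `journalctl` line is just an illustrative thing you might run once you're on the host):

```shell
# From the control container (SSM session), hop into the admin container:
enter-admin-container

# Inside the admin container, break out to a full root shell
# in the host's namespaces:
sudo sheltie

# Now you're effectively root on the Bottlerocket host, e.g.:
journalctl -u kubelet --no-pager | tail
```

If the admin container wasn't enabled at launch, you can turn it on from the control container with `apiclient` before the first step.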
I still don't understand why people are so keen to shoot themselves in the foot and make everything sandboxed containers with virtual filesystems and networks.
Just use the damn OS and hardware directly. SSH into the host whenever you need to see how things are performing.
Kubernetes only works so long as you don't really care about resources being used well.
20 years ago I managed thousands of machines through ssh and was able to maintain them all to the same setup.
Nowadays I see people spend man-years developing tools to ensure consistent deployment on 10 machines. Not only do the tools not even work, they take months to land a change that could be done manually in two minutes.
Bet you didn't do canary or blue/green deploys, or deliver automated telemetry data, or guarantee resource quotas, or provide network attached storage, or etc. etc.
Because Kubernetes is both abstraction and de facto standardized platform across infrastructure providers. All deployments to customers who are large institutions start with provisioning, OS alignment (there are huge differences between RHEL, SLES, Debian or Amazon, then customisations like hardenings are put on top), networking, storage, access rights. You don't want to deal with that from scratch each time. It costs both time and money. Direct dealing with hardware is long gone (Hyper-V and VMware), and now it's time to cut out the upper layers. Also, Kubernetes allows better resource utilisation and scaling.
[1] https://bottlerocket.dev/en/faq/#4_2