And here is a post from AWS with more technical details: https://aws.amazon.com/blogs/aws/bottlerocket-open-source-os...
Consider my interest to be piqued.
Operating system maintenance falls on the customer's side of the shared responsibility model. By using this new OS, would that responsibility shift back to AWS?
You'd need to look to providers that specifically take on more responsibility, like https://compliantkubernetes.com/ (disclaimer: I have worked at Elastisys, the company behind Compliant Kubernetes).
I'm missing the part where it explains how it works. Is it all just containers?
Packages are built with RPM but RPM isn't used at runtime. Instead, the system is image-based and reboots for updates.
> bork: A setting generator called by sundog to generate the random seed for updog, determining where the host falls in the update order.
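For the curious, the mechanism behind that funny name is simple. Here's a hedged sketch of the idea in Rust (the seed range, the wave cutoffs, and all the names below are my illustrative guesses, not Bottlerocket's actual values): each host draws a random seed once, and the seed determines how long after a release the host waits before updating.

```rust
// Illustrative sketch of seed-based update waves; constants are made up.
use rand::Rng;

const MAX_SEED: u32 = 2048;

// (seed_cutoff, hours_after_release): hosts whose seed falls below the
// cutoff become eligible to update once that much time has passed.
const WAVES: &[(u32, u64)] = &[(32, 0), (256, 24), (MAX_SEED, 72)];

fn hours_until_eligible(seed: u32) -> u64 {
    WAVES
        .iter()
        .find(|(cutoff, _)| seed < *cutoff)
        .map(|(_, hours)| *hours)
        .unwrap_or(72)
}

fn main() {
    // The "bork" step: draw the seed once and persist it as a setting.
    let seed = rand::thread_rng().gen_range(0..MAX_SEED);
    println!("seed {}: eligible {}h after release", seed, hours_until_eligible(seed));
}
```

A host with a low seed lands in an early wave; most hosts wait for the later waves, which limits the blast radius of a bad update.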
WiX (the Windows installer creation toolset) has a multi-phase command-line interface where the compiler, linker, etc. have different names indicating the order in which they are applied: candle, light, smoke... Also a working system, I guess.
>It is safe to say that our industry has decided that containers are now the chosen way to package and scale applications.
Curious how the HN community feels about that statement. Not so much about its truth as about the fact that containers are becoming the de facto method of packaging applications.
As the kinks in kernel support and tooling get worked out, and OSs deepen their support, I can't imagine it will ever make sense to say something like "I could have run the process with cgroup and namespace isolation, but I chose to build my own user-level isolation or run everything as root instead."
Arguments against containers as the future based on their complexity may carry weight, but not for long.
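To make that concrete, here's how thin the kernel primitives already are. A minimal sketch, assuming Linux, the `libc` crate, and root (or CAP_SYS_ADMIN):

```rust
use std::process::Command;

fn main() {
    // Move this process into fresh UTS (hostname) and mount namespaces --
    // the same kernel primitives container runtimes are built on.
    let ret = unsafe { libc::unshare(libc::CLONE_NEWUTS | libc::CLONE_NEWNS) };
    assert_eq!(ret, 0, "unshare failed (needs root or CAP_SYS_ADMIN)");

    // Visible only inside the new UTS namespace; the host is untouched.
    let name = b"isolated";
    let ret = unsafe { libc::sethostname(name.as_ptr() as *const _, name.len()) };
    assert_eq!(ret, 0, "sethostname failed");

    // Children inherit the namespaces, so this prints "isolated".
    let out = Command::new("hostname").output().expect("failed to run hostname");
    print!("{}", String::from_utf8_lossy(&out.stdout));
}
```

Everything a container runtime adds (images, cgroup limits, overlay filesystems) layers on top of calls like these.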
Containers as a package format (Docker) are complete rubbish, in my opinion.
simply not true: https://github.com/google/gvisor#why-does-gvisor-exist
It's not a good sandbox, but if we are pedantic about the definition of a sandbox, it fits, especially when we think of the benefits of namespacing (effectively removing access to resources like networks, filesystems, etc).
gVisor is more focused on sandboxing processes specifically, so it's relevant there, but gVisor is not relevant to the wider discussion about a packaging format -- unless you're suggesting running gVisor'd processes instead of containerized ones, and containerization is still beneficial in that scenario.
The only thing that would make them better is if we stopped over-complicating them and made them portable. As in, they could be moved to different disks and run from there without a bunch of hoop-jumping.
For load balancers, databases, and other stateful stuff I still run the binaries on VMs.
But users want to run lots of shady apps, whether found on random websites or in places like the Google Play store.
I find that the key to running desktop OSes/apps is to never use sensitive data and to always be ready to wipe your machine and start over.
Containers have been gaining ground in telecoms since at least 2015 (https://www.sdxcentral.com/articles/analysis/telecom-opens-u...). Network function virtualization solutions rely increasingly on containers.
Now AppImage, that's pretty great. It doesn't really bring any of the security benefits of containerization, but that can be tacked on separately.
As far as complexity goes, I would much rather have nodes where a single application runs and uses 100% of the resources (instead of having containerd or k8s services running), do simple autoscaling, and have access to host-level metrics that I can map back to applications more easily than with Docker, k8s & co. Maybe it is only me, but I care about efficiency. Why waste energy?
The counter-argument is that developer time is more valuable than setting up clusters or autoscaling groups. Well, this breaks down when you have SRE team(s) maintaining the k8s clusters (literally every company I have worked for). If you already have SRE people, either embedded in your dev teams or separate, then you can just build out a CI/CD pipeline that produces that production setup from blueprints. We usually use Terraform and Ansible with template variables (stage = test|qa|prod, cluster size = x, version = y), which makes it easy for everybody to provision clusters on their own. Does this mean more work than k8s deployments? Yes. Does this mean we have less complexity to care about? Yes.

In my experience, containerization is a development tool that makes it extremely easy to achieve fast development cycles, but right now the accidental complexity of taking that with you to production is not worth it. There are very nice projects like LXC/LXD that I would consider for security separation and resource management, but we usually have clusters where 100% of the resources go to a single service. Examples: Hadoop clusters, Elasticsearch clusters, web application (mostly API) clusters. I need to care about the underlying hardware for financial reasons (what is the cheapest node type I can use to run workload X?). k8s would not help here.
To sum it up: I do not think the industry has decided on this. I also think we are in an era of wasteful computing, which will end soon for reliability reasons and because of unnecessary CO2 production. Running containers has to become much less fragile and much more efficient to be considered the way to scale applications. I personally think Firecracker is a step in the right direction here, while Docker & k8s are steps in the wrong one.
Hallelujah! I'd imagine inefficient containerized architectures do more for the cloud provider's bottom line than they help the customer.
How tied in to the AWS model is this? Are the places that would need to be expanded known?
Also, at least at a glance, this is a neat use of real-world Rust.
I don't know how actually tied in it is, but it's not totally surprising to me that it's built for AWS infrastructure.
That being said:
> To start, we're focusing on use of Bottlerocket as a host OS in AWS EKS Kubernetes clusters. We’re excited to get early feedback and to continue working on more use cases!
> Bottlerocket is architected such that different cloud environments and container orchestrators can be supported in the future.
Glad to hear it! Please reach out if you need anything :)
So is the idea that people create a Bottlerocket AMI and use that as their EKS worker node image? Is that correct?
The Cargo.toml workspaces relate more to make, IMHO.
In the context of software development, if you tell someone you're developing a new operating-system you're probably going to conjure up images of writing a new kernel. If you tell people you're developing a new Linux distro, this is closer to what they'll imagine.
> ...a new Linux-based open source operating system that we designed and optimized specifically for use as a container host.
Even the points made for creating this distro were really about stripping unnecessary software out of a default Linux distro install, speeding up startup of the essential userland processes, and ridding the OS of possible bottlenecks and security potholes.
It's a shame, really, that it is built on top of Linux rather than being an actual new Rust operating system by Amazon. I'm not sure why I would use this particular one if it is based on Linux, while it also presents another lock-in opportunity for AWS.
I do wonder if the dual-partition approach was deemed more stable than using OSTree, or why the latter wasn't used.
This is not a container base image, it's a container host OS. It is somewhat similar to Atomic or CoreOS, but in some ways it seems to be a bit more of a radical redesign than those.
How does that work? The explanatory image does not explain it. How is that different from rolling back at the filesystem level?
It's different from filesystem-level rollbacks because it's all-or-nothing: you don't have to worry about update failures after a few packages, and all of the components in a given image are guaranteed to have been tested together, whereas with package-based systems your particular combination of packages may never have been used together by anyone else. In addition, for builders, it's easier to sign, distribute, and verify a single image.
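A hedged sketch of the shape of an A/B scheme (the names and stubs are mine for illustration, not Bottlerocket's actual code):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Slot { A, B }

struct BootFlags { active: Slot }

// Stubs standing in for real partition I/O and signature verification.
fn write_image(_slot: Slot, _image: &[u8]) -> Result<(), String> { Ok(()) }
fn verify_signed_image(_slot: Slot) -> Result<(), String> { Ok(()) }

fn apply_update(flags: &mut BootFlags, image: &[u8]) -> Result<(), String> {
    // Stream the entire new image onto whichever partition is idle.
    let target = if flags.active == Slot::A { Slot::B } else { Slot::A };
    write_image(target, image)?;
    verify_signed_image(target)?;
    // The only mutation of live state is this one flag flip, which makes
    // the update all-or-nothing; the old slot stays intact for rollback.
    flags.active = target;
    Ok(())
}

fn main() {
    let mut flags = BootFlags { active: Slot::A };
    apply_update(&mut flags, b"new-image-bytes").unwrap();
    println!("reboot into slot {:?}", flags.active);
}
```

If anything fails before the flag flips, the machine simply keeps booting the old image.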
In Nix, the Nix store is remounted over itself read-only, but nothing stops someone from ripping out the disk and flipping bits. That is not possible with these kinds of two-partition schemes if you have dm-verity set up.
Also, an attacker could modify the Nix store's SQLite database and spoof the hashes, rendering this check moot.
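For anyone unfamiliar with dm-verity, here's the gist of what it buys you -- a conceptual sketch using the `sha2` crate, not the real on-disk format (which is a multi-level Merkle tree checked block-by-block at read time):

```rust
use sha2::{Digest, Sha256};

// One level of the idea: hash every block, then hash the hashes. The
// resulting root hash lives in the signed boot configuration, so any
// offline bit-flip on the data changes the root and gets detected.
fn root_hash(blocks: &[&[u8]]) -> Vec<u8> {
    let mut top = Sha256::new();
    for block in blocks {
        top.update(Sha256::digest(block));
    }
    top.finalize().to_vec()
}

fn main() {
    let trusted = root_hash(&[b"kernel".as_slice(), b"rootfs"]);
    let tampered = root_hash(&[b"kernel".as_slice(), b"rootfs-flipped-bit"]);
    assert_ne!(trusted, tampered); // offline tampering is detectable
}
```

Because the root hash is part of the signed image, there is no mutable database of hashes left to spoof.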
I wish somebody'd take some VC or R&D money and build distributed-computing features into the kernel itself, so we could quit wasting our collective engineering talent, time, money, and energy on distributed applications that run on non-distributed operating systems. It's like nobody wants to work on creating a round wheel, so instead we're spending all our time building custom roads for square wheels.
Unfortunately, all of them were either research projects, proprietary products, or patches that never made it into the mainline kernel. Nobody has since tried to get the functionality into mainline, so people keep hacking together these non-standard pseudo-operating-systems and jumbles of disparate applications.
You could eliminate 80% of the need for k8s by adding OS primitives to connect and operate namespaces and control groups between nodes, as well as native I/O (block, file, "N-way pipes") between nodes. Once that was done, systemd (or something like it) could manage services across an entire cluster. Applications could communicate between arbitrary nodes without any added functionality. Virtually all of the complexity would live in the kernel and systemd, so apps could be simpler and we wouldn't need 100 layers of userspace junk just to keep an app running on 3 nodes.
1. When are services branded as AWS (AWS Fargate) vs Amazon (Amazon DynamoDB)?
2. Is Bottlerocket a nod to SkyRocket, or to a movie of the same name?
3. Why is it called Fargate?
Disclaimer: I work for AWS, but this is not an official answer. What I said is correct to my best knowledge, but I cannot guarantee its correctness/accuracy.
On Fargate, you got the answer there.
We know this is how you show you care. I don't think there was any reason to downvote or flag. But not everyone on the Orange Site knows one another, so I can see how it could be misinterpreted.
It's getting largely a positive response out there: https://twitter.com/alexwilliams/status/1237773085039722496
It is also interesting to note that every step on that journey seems to have picked the coolest runtime to implement it in (C/early Go, established Go, and now Rust)
One more thing on the good side: the TUF implementation in Rust seems really interesting. I'll be digging some more and may actually steal it for linuxkit (and by extension Project EVE)
Fun fact: a lot of the patches you will find in more system-level packages like GRUB seem to trace their lineage to CoreOS (and potentially Project EVE), but I haven't seen acknowledgments anywhere. This is of course all fine from a licensing perspective -- but I would still be curious to know whether that is indeed where they were taken from.
I'm happy with the usage of systemd if they take advantage of the hardening features in systemd units for core system services. I'm a bit less happy about the continued usage of Docker, but I get why that's happening for this (EKS and ECS both use it, so it helps support that infrastructure).
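For the curious, this is the sort of thing I mean -- a hypothetical unit snippet using real systemd hardening directives (the selection is mine, not necessarily what Bottlerocket ships):

```ini
# example.service -- illustrative hardening for a core system service.
# (systemd only treats whole lines as comments, hence this layout.)
[Service]
# The service and its children can never gain new privileges (setuid etc.).
NoNewPrivileges=true
# Mount the file system hierarchy read-only for this unit.
ProtectSystem=strict
# Hide /home and /root from the service.
ProtectHome=true
# Give the service its own private /tmp, isolated from the host's.
PrivateTmp=true
# Forbid the service from creating new namespaces.
RestrictNamespaces=true
# An empty bounding set drops all capabilities.
CapabilityBoundingSet=
```

Each directive is enforced per-service by systemd, so a compromised daemon has far less of the host to play with.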
You obviously know this, but for everyone else playing at home, "Docker" is made up of three distinct projects: moby (CLI and API), containerd (supervisor daemon), and runc (container runtime core).
Of the three projects mentioned above, only runc is used by nearly all major "container engines" as people call them.
And as pointed out by another poster, you do have the rest in the Bottlerocket tree.
Can you say more about the advantages over Buildroot and Yocto?
So on #1: if you look very casually at Buildroot and Yocto, they will clearly come out on top over this next generation of systems. They appear to have WAY more upstream components already available for you to choose from. Compared to them, the lists here look almost laughable: https://github.com/bottlerocket-os/bottlerocket/tree/develop... and https://github.com/linuxkit/linuxkit/tree/master/pkg The problem, though, is the combinatorial explosion of ways you can compose all these upstream components. The canonical example here is the choice of init system: you pick one, and your choice in everything else gets severely restricted. So to some extent the apparent embarrassment of riches that Buildroot and Yocto offer is misleading.
These next generation systems, on the other hand, don't pretend that you can build a host OS in any shape or form you want (hence very few base packages) but rather that you build "just enough of Linux to run containerd" -- the rest of what you would typically put into your baseOS goes into various containers. This is a very different approach to constructing the bootable system, but subtly so -- which I don't think a lot of people on either side of this debate appreciate.
I honestly think that what makes Yocto and Buildroot difficult is that they want to be all things to all people, and they want it at the level of the base OS -- complexity-wise, this is the wrong approach these days.
That scores one point for these next generation systems in my book.
Question #2 is not even a comparison. In Buildroot, most of the integration/package logic is implemented in Makefiles, and usability-wise (if you're trying to actively change the system or add a new package) it falls apart pretty quickly (it is still great if you're just using what's already there, btw). In BitBake, the codebase is REALLY complex Python, which suffers from the same issue. Contrast that with Linuxkit/Project EVE, where all that logic is Go, and Bottlerocket, which uses Rust, and ask yourself whether you would rather debug a complex issue spread across a dozen Makefiles full of non-trivial recipes or look over a Go/Rust codebase (yes, I know all these things are Turing complete and thus equivalent -- but life is too short to debug Makefiles).
If you don't quite believe me, there have been a number of studies on using Buildroot and Yocto for building containers. Pretty much all of them came back with the same conclusion: the usability aspect of extending them makes it a non-starter. Here's one from the last KubeCon that the VMware folks did: https://blogs.vmware.com/opensource/2020/02/27/distribution-...
This is a valid approach if you want to build something that can only run containers, but IMO is somewhat orthogonal to the Yocto and Buildroot goal of building distros for embedded platforms.
It's awesome that people are making new tools to do similar things to Yocto and Buildroot in this post-container world, but I don't think it's really fair to say that Bottlerocket is a direct competitor to Yocto/Buildroot. It's probably fairer to say that Bottlerocket makes it easier to do things that Yocto/Buildroot aren't really designed to do. Hopefully both live on, each serving its own niche! I'm all for specialised tools rather than generic 'do it all' tools.
What is container-specific about all this? It just seems to be minimal images?