We've been very vocal to AWS product managers and solution architects about our need for an Amazon Linux 3 as a refresh of Amazon Linux 2 (at least 5 years of support with RHEL 8 compatibility, free kernel patching without reboots, official Datadog support, VMware images). Sad that we haven't been heard. We'll now need to plan to move over 20k instances to Rocky Linux.
I suspect that the move to using Fedora has something to do with the changes to the CentOS project that Amazon Linux 2 forked from. Let's hope the beancounters at IBM don't have other plans for Fedora.
Plus, honestly, per-system licensing plus cloud autoscaling isn't really anybody's idea of a good time.
In addition to RHEL compatibility, AL2 worked well because our existing support plan with AWS covered it. It also came with free kernel live patching, unlike most others (Oracle excepted).
What do you think they would potentially do? Fedora is the upstream for RHEL, it's integral to it. Part of the reason they dropped support for CentOS was because it didn't serve to benefit RHEL very much.
It does not, according to Amazon themselves. https://twitter.com/stewartsmith/status/1463594136473202692
Disclosure: I work at Red Hat.
Similarly, had Amazon created their own Debian derivative, I'm not sure it'd be that tempting to users whose baseline is Ubuntu.
No judgement - curious, what does Rocky give vs. al2022? RPM compatibility? Or maybe, what do you like about RHEL compatibility?
Any other Linux (including Rocky Linux) will need to be painfully certified.
I just deal with ISO certification, so I'm not clear on the possible requirements.
While AL2022 isn't a drop-in RHEL alternative, it seems much more likely that vendors will accept it as a commercially supported install base given that it is a first-class citizen on the biggest cloud.
AWS has a secret weapon in compliance: it's called AWS Artifact. Most compliance-certifying agencies will accept AWS Artifact reports as pass-through.
No technical reason. Just sheer Amazon muscle, I guess.
There’s still a whole boatload of things that still need to be covered which do not fall on their side of the shared responsibility model.
If anything, Amazon Linux 2022 clarifies things by indicating they directly track Fedora Linux, branch and stabilize that, and offer their own lifecycle guarantees for that branch.
Like Rocky Linux?
As skeptical as I am about huge tech corps like Amazon, Google, etc., I have to admit I enjoy being their paying customer - nice experience. I find GCP and AWS a pleasure to use.
Looks like they change branding and nothing else. Maybe a backported bugfix here or there.
Find the AMIs (newest last)
$ aws ec2 describe-images --region eu-west-1 --owners amazon --filters 'Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2' 'Name=state,Values=available' --output json | jq -r '.Images | sort_by(.CreationDate)'
"Description": "Amazon Linux 2 AMI 2.0.20211103.0 x86_64 HVM gp2",
You'll also find VirtualBox, Hyper-V and VMware ready images in there.
(and also arm64 ones)
ISTR there’s also an SNS topic you can subscribe to if you want to do something automatically on new AMI release.
Once it's GA, they'll provide VM images and docker containers, so you'll be able to test it offline
Yes there are services like Fargate but they're not cost efficient for many cases.
Or, like you suggest, they could use an AWS service to manage their container, like App Runner, or Lightsail, or EKS, EKS Fargate, EKS Anywhere, ECS, ECS Fargate, ECS Anywhere, ROSA, Greengrass, App2Container, Elastic Beanstalk, or Lambda. There are plenty of guides on AWS's website on how to use them.
Cost is mostly irrelevant to the conversation, as you can run containers anywhere (other than, say, a CloudFlare worker); pay for any infrastructure you want and then run the container there.
This is like updating the direct dependencies of your service itself (e.g. cargo audit -> cargo update), but anecdotally I see many people neglect the image, and some even pin specific versions and miss available updates even when they do later rebuild it.
We take unattended upgrades for granted on Debian-based servers, and that will likely help the Docker host system, but I'm not aware of anything nearly as automated for rebuilding and redeploying the images themselves.
It could be part of your CI/CD pipeline, but that in itself is a lot of extra setup that must not be neglected, and it must be done sensibly: e.g. pin in a way that still picks up security patches, and run a dependency audit as part of CI/CD to report when the patching hasn't been enough (e.g. due to semver constraints).
What you describe isn't hard to achieve. Write a one-line cron job that gets the latest packages for your container's base, writes them to a file, commits it to Git, and pushes it. Then set up a Git webhook that runs a script you have to build your container with a new version and push that to a dev instance. Add some tests, and you have an entire CI/CD process with just one cron job and one Git webhook.
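A minimal sketch of the decision half of that cron job, assuming the base image's package manifest is tracked in Git. The image name `myapp-base` and the file names are placeholders; the Git/webhook plumbing stays in comments since it depends on your hosting:

```shell
# Exit 0 (true) when the freshly captured package manifest differs from
# the committed one, i.e. when a rebuild should be triggered.
manifest_changed() {
  old="$1"; new="$2"
  ! cmp -s "$old" "$new"
}

# In the real cron job the new manifest would come from the base image:
#   docker run --rm myapp-base rpm -qa | sort > new-manifest.txt
# and on a change you'd commit + push, letting the Git webhook kick off
# the container rebuild and dev deploy:
#   if manifest_changed base-packages.txt new-manifest.txt; then
#     mv new-manifest.txt base-packages.txt
#     git commit -m "base package refresh" base-packages.txt && git push
#   fi
```

The comparison is deliberately dumb (byte-for-byte via `cmp`): a sorted package list only changes when a package version changes, which is exactly the signal you want.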
This is literally one of the only things that is not included in a container image. The Linux kernel is the Operating System and you are subject to differences in its configuration depending on where the container is running. You are referring to the distribution.
This is a false sense of reproducibility. I've encountered cases where a container worked well on one machine and crashed or had weird bugs on another.
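A quick way to see the boundary the parent comment describes: the kernel version reported inside a container is always the host's, since the image ships no kernel (sketch; the docker line assumes Docker and the public amazonlinux:2 image):

```shell
# Prints the host kernel release.
uname -r

# Inside any container the result is identical, because the container
# shares the host kernel - the image only supplies the userland:
#   docker run --rm amazonlinux:2 uname -r
```

This is one source of the "works here, crashes there" bugs: anything sensitive to kernel version or kernel config follows the host, not the image.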
I would say that Amazon Linux is likely to have fewer issues with the latest instance types (if they change something hardware-wise; for example, when AWS started exposing EBS via NVMe drivers, there were some issues originally).
Enabling it won't in itself secure your company's applications, as the default policies in Fedora only apply to installed services (e.g. ssh) that have a policy written for them.
This is probably right on the boundary of the shared-security-model, but I think it would be great if they also offered easier ways for application developers to leverage the advertised feature.
AWS is also active in this area, more so with bottlerocket-os https://aws.amazon.com/blogs/opensource/security-features-of...
The recent release even incorporates a feature I had previously dismissed as useless (MCS), which is really quite neat https://github.com/bottlerocket-os/bottlerocket/discussions/...
Surely the fact that 'disabling SELinux' is the top result on the subject on Google or Stack Overflow tells you that you'd be in the minority of developers who like working with it and find it easy to do so.
I think there's more to it than simply running an app without receiving an AVC complaint in auditd: you need to be able to test that the controls you put in place actually protect the application in some way, and that does not come for free with audit2allow and other such generative tools.
If it actually did what it was supposed to do in a reasonable manner, people would use it.
I doubt that.
journalctl has always given me something when there was an actual denial. You might just not have looked in the right place.
But for applications with a large feature set - e.g. a web browser - if the policy author didn't use a particular feature - e.g. U2F security key support - you might be introducing a new source of problems that only advanced users can easily solve.
Not that I imagine Amazon Linux is used for web browsing very often....
It’s also interesting that at the same time Amazon is sponsoring Rocky Linux: https://rockylinux.org/sponsors/ (Which is based on Red Hat Enterprise Linux.)
"We believe that having Fedora as upstream allows us to meet the needs of the customers that we talked to in terms of flexibility and pulling in newer packages."
And as long as the kernel is decently fresh, one can run new shiny software using containers instead.
This is the whole reason people use CentOS and Debian server-side: old, stable, and (most importantly) security-patched. If you need the newest version of your dev stack, just install it on the old stable OS base, so you only have to worry about your dev stack.
Historically, CentOS has been very slow to release security patches, compared to upstream (RHEL).
And for anything non-critical (but still often high severity), Debian stable tends to receive fixes a lot later than unstable, and sometimes never, due to the need to backport.
Source: Fedora developer, previously employed at Red Hat as a Linux packager.
You are correct in saying it is designed for EC2 instances, as it is the de facto default image for EC2 instances, despite many folks choosing an Ubuntu image instead.
Unfortunately I'm just way more used to debian based systems, and I feel like having a mismatch in production would just lead to friction.
However, it became a problem once you used 3rd-party software, as step 1 of most install guides was to disable SELinux.
These come up unconfined by default in RHEL 7, but this behavior changed in RHEL 8.
I don't remember the specifics, but Oracle support confirmed that it should be assigned unconfined in the newer OS.
This is essentially "setenforce 0" for a process, as I understand it.
- create and install SELinux rules for it
- disable SELinux
Unfortunately, most did not bother to learn how to do the first option, so they went with the second.
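For completeness, the first option usually looks like this iterate-on-denials workflow (a sketch from memory, assuming auditd is running and you have root; `myapp` is a placeholder module name, and anything audit2allow generates should be reviewed before trusting it, since it happily allows whatever the app attempted):

```shell
# 1. Reproduce the failure in permissive or enforcing mode, then collect
#    the recent AVC denials for review:
#      ausearch -m AVC -ts recent
# 2. Generate a candidate policy module from those denials and read the
#    resulting myapp.te before going further:
#      ausearch -m AVC -ts recent | audit2allow -M myapp
# 3. Install the module and re-test with SELinux enforcing:
#      semodule -i myapp.pp
```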
So while they are the Linux distribution that takes advantage of almost all the security knobs available (SELinux, seccomp, eBPF, userspace drivers, ...), that is transparent to apps unless they try to look behind the curtain.
* Will be released on a predictable schedule every 2 years, supported for 5 years. Minor releases every quarter.
* GA will be based on Fedora 35. Preview is currently based on Fedora 34.
* There's no official statement regarding compatibility with Fedora packages
* SELinux will be enforcing by default
* Kernel will be a kernel.org longterm version, not the Fedora one
* VM images/docker containers will be officially available at GA. For now you can download images unofficially
* Unofficial ETA is Q2 2022. For reference, AL2 is currently officially supported until June 30, 2023.
I work alongside a small team maintaining quite a lot of machines on AWS. They're struggling (IMHO) to manually apply all of the security patches their scanning tool identifies. My theory is that Amazon Linux gets patched frequently, and so they'd be better off spending time normalizing our EC2 infra so that every instance is running Amazon Linux, and then work on an easy rollout mechanism to deploy the latest version.
Has anyone got any thoughts on this? It wouldn't obviate the need for patching completely, but I feel like AWS is already doing some of this work for us, so we should take advantage.
Sounds like you need a better process / automation for rolling updates. Either continuously rebuilt golden images, rolling security patches, or turning on your distro's unattended-upgrade mechanism could be solutions, depending on your environment.
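For the unattended-upgrade route on Debian-family hosts, the switch is a small apt config fragment (file name as shipped by the unattended-upgrades package; the default policy only pulls in security updates):

```shell
# /etc/apt/apt.conf.d/20auto-upgrades
# APT::Periodic::Update-Package-Lists "1";
# APT::Periodic::Unattended-Upgrade "1";
#
# The same fragment can be written interactively with:
#   sudo dpkg-reconfigure -plow unattended-upgrades
```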
Note: We were already mostly Launch Templates/ASGs, so updates are always new instances (rather than patching long-running ones).
I opted for Packer and I've been very happy with it. Though with that said I'm still using AWS SSM Patch Manager for a few outliers that are long lived.
Like you, Okta AD agent, which can only be programmatically installed using AHK. :-/
I guess I should write up a blogpost, because... the documentation is kinda garbage.
I never got around to using packer properly so can't compare.
We used to have all sorts of distros that people just felt like using without worrying about their maintainability. We kept fighting fires to keep everything running. Once we standardized on a single distro (CentOS at the time), everything started working much more smoothly. We could have picked Debian, Ubuntu, it doesn't matter.
That being said, Amazon Linux 2 is pretty well maintained. Most things (all?) that work on RHEL, will work on it. You may need to use 3rd-party repos if you want really newer stuff (eg. PHP) but that's inherent to such LTS releases. That situation is expected to improve with the improvements that adopting Fedora brings in AL2022 but I need to catch up.
You really can't go wrong with either - use what you prefer.
I have the same question / concern.
I think it's great that AWS are offering their own flavour of Linux, but doesn't that result in added risk of vendor lock-in?
Whereas with an Ubuntu instance, one can change cloud providers quite easily, without being tied into AWS-specific Linux configurations.
I think AWS is great, so this is more a hypothetical scenario.
I say this knowing little about Amazon Linux and the degree to which it has custom configurations that can't easily plug-and-play on other distros.
Would appreciate insight from someone who can speak to this risk from experience.
Ideally you build your software such that the OS is just an implementation detail that's abstracted away. In the server world, a switch from RHEL to Ubuntu is not as hard as a move from, for instance, Google BigTable to AWS DynamoDB.
AL2 is able to run in other places, so there doesn't seem to be much vendor lock-in compared to a service like DynamoDB, though.
* It has support for all new instances/hardware/services on day 1
* It's optimised for AWS, sometimes a bit faster than Ubuntu
* It has a good balance of freshness, stability and long term support
* It comes with integrations to all of AWS services and ecosystem
* It's the default choice, and often the least hassle
* It's fully supported by AWS
* It's the only realistic choice for some services (e.g. Lambda, Elastic Beanstalk)
I'm surprised to read that. Coming from Ubuntu and in the last couple years NixOS I've found the Yum repos for Amazon Linux woefully stale.
It provides newer versions for a few key packages, e.g.: Docker 20.10, PostgreSQL 13, Ruby 3.0, Kernel 5.10, nginx 1.20, PHP 8.0, Python 3.8, Redis 6.2, Go 1.15, Rust 1.47, etc.
Some newer packages, e.g. OpenSSL 1.1.1 and zsh 5.7 are provided in the main repo.
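Those newer versions come through the amazon-linux-extras mechanism on AL2; a typical session looks like this (sketch from memory; topic names such as postgresql13 vary by AL2 release, so list first):

```shell
# On an AL2 host: discover, enable, then install an extras topic.
#   amazon-linux-extras list | grep postgresql
#   sudo amazon-linux-extras enable postgresql13
#   sudo yum clean metadata
#   sudo yum install -y postgresql
```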
Outdated packages weren't a major pain point in my experience. The bigger issue is the relatively small selection of packages.
These are either available via 3rd-party repos (e.g. NodeJS) or EPEL (e.g. libsodium), or by recompiling Fedora SRPMs. That can be an inconvenience, but it's not a big deal overall.
I hope the situation will improve once AL2022 is out, as Fedora comes with a much wider selection of packages.
...and with this new version, great support for SELinux too (thanks to Fedora). Some also dislike Ubuntu's push for Snaps.
I think SELinux is one of the biggest differences and the hardest to adapt to (as changing apt to dnf is not hard).
If you want a good starter on SELinux, my whole book on deployment is SELinux ready with a full dedicated chapter on SELinux and a SELinux cheatsheet. Today also with a 33% off for Black Friday ("blackfriday" discount code at checkout).
Of course I would prefer they use Fedora and contribute to Fedora directly.
Meanwhile, the same issue was actually fixed once and for all on GCP: https://news.ycombinator.com/item?id=8732356
Only remaining piece is the desktop segment
macOS has a unix environment, so it'll stay relevant (for how long?)
Windows has WSL; it's slow, and I don't see myself using it since the host OS is a giant piece of crap.
MS missed a chance with Win11. They could have gone full steam ahead with ARM and a Linux distro: 100% native Android support, 100% cloud-native support, 100% unix support as a host OS. I wouldn't use it myself because I despise the company and its culture, but I can see the potential, and I smell a huge missed opportunity.
Amazon is getting it right, even though it's exclusively targeting cloud usage.
Marketing-wise it's great and consistent with their offering.
The advantages of windows and mac these days are that a lot of stuff 'just works' and secondly that due to their 20-30 year history their desktop application ecosystem is much richer and also much more widely used. User interfaces are also more friendly in general.
There's no technical reason why these faults cannot be overcome, but there are significant hurdles in getting a third OS to gain major market share from the incumbents MS and Apple. The reason they have market share in servers is because the experience is better to develop on and it is free. The reason they have market share in mobile is that it was free and Google could build on top of it, and secondly that mobile was a brand new market so there were no incumbents with decades of history. I don't see those conditions replicated for desktop, especially when you consider that desktop is in decline and therefore less attractive to spend capital on.
If you want to make Ubuntu as nice as MacOS I think you need a private company willing to spend money and time in a concerted effort to get it to that point, which IMO won't happen.
Canonical employees would like a word with you.
It seems to underline the necessary financial backing. Canonical barely pulls in $100 million in earnings.
Why there aren't billions of dollars available for Linux from governments worldwide, for securing and improving it as it becomes the backbone of the worldwide data-processing and security infrastructure, is mystifying.
Why NVidia, Nintendo, AMD, Intel, Sony, Samsung, all the Chinese handset makers, Compaq, Dell, and the rest don't provide a billion a year to the Linux desktop, so they can have market leverage against Microsoft and/or push their hardware out to the OS for use by the public at large within months (as opposed to decades with Microsoft), is beyond me.
The funny thing with the M1 architecture and macOS: it highlights how the entire PC stack, by tying itself to the Microsoft behemoth, has been cornered. How do you move to a competitive PC ARM architecture without waiting about 5 years for MS (even if it has an ARM-compatible Windows, let's face it, it doesn't have the software support or organization behind it) to move its bloated carcass to support it properly at the OS level?
Meanwhile, Linux can support an Arm arch right now with practically all the necessary software.
If someone business-smart were leading Linux, they'd have long ago been coordinating a desktop alliance, secret or not, and actively selling it to the major powers who could fund it with chump change.
Intel makes 20 billion a quarter. AMD 4 billion. NVidia 7 billion.
And then there's the US military.
And then there's google compute, AWS, and all the other cloud wannabes besides Azure. Linux is the reason your platforms mint money. Wouldn't you like companies and people to move to cloud-based desktops?
Why the EU doesn't fund this for economic competition with American software mystifies me. Why Africa, Asia, and South America don't fund it for language support and an affordable computing ecosystem for their countries is beyond me.
I'm not a Torvalds hater, but a super-technical person leading Linux was fantastic for the first 10 years, but really the last 15 needed a different skillset.
Amazon should be interested in providing a great remote desktop solution. IMO that is a huge untapped market, especially in BYOD orgs: sure, bring your crap device, but you RDP in for anything you need business-wise that needs to be secure.
Microsoft are learning - try uninstalling the "Your Phone" application in Windows.
(Google it, paste the command into an Admin Powershell ...)
Linux now bundles an NTFS driver (NTFS3, since kernel 5.15).
What if a rescue distro, that erased \Windows, and ran all the applications under Wine or appropriate emulation, existed at that desupport date?
Microsoft's desktop dominance would be over on that date. They would cling to corporate desktops, but the general consumer market would be lost.
ReactOS is another wildcard for 2024.
Microsoft lost the application war; everything is web-based nowadays.
Developers moved to Linux to deploy their web apps / servers.
What's left? People moved to smartphones, even for games; smartphones generate more revenue than consoles and desktop combined.
WASM showcases it even more: the present and future is web-based interfaces.
Some of them require Active Directory.
At the core of this, Windows is not going anywhere.
Linux could do it a LOT of damage, if this 2024 plan is executed.
Haha, we said the same about Vista, and some even about Windows ME... and nothing has happened or will happen.
Mmm, no, not really; many small and medium businesses still run Windows Server + Exchange + Hyper-V, etc.
> Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough.
Linux might be "better"; unfortunately Windows is just "good enough" for most people to not care.
There are literally millions of Windows workstations or laptops across corporate campuses in the US, because at the end of the day what 95% of white collar workers need to do their jobs is MS Office and *maybe* one industry-specific LoB app.
Yeah, on Windows Server, WSL2 is explicitly made unavailable.
It was also the only real option for me given Ubuntu's continued push for Snap, which I just don't like.
I liked ubuntu when it first released, but modern day Canonical is a shell of its former self. Snap is their sad attempt at EEE from Microsoft's playbook. And I have never liked Debian, even when it's been running on my servers since forever.
Yeah, that's pretty much my methodology for why I went Fedora too. Although I didn't know Red Hat was that deep into upstream work (awesome!), but it makes total sense.
> I liked ubuntu when it first released, but modern day Canonical is a shell of its former self. Snap is their sad attempt at EEE from Microsoft's playbook.
Yup. It's sad all around. Ubuntu was my first ever non-Windows OS back in 2006 or so. I feel like every few years I mentally shout "YOU WERE THE CHOSEN ONE!" to Canonical, haha.
Welp, better luck next time.
A Fedora LTS is exactly what I'm looking for.
So RHEL/CentOS/Alma Linux/Rocky are Fedora LTS.
For what it's worth, I'm currently running openSUSE Leap and MicroOS (the immutable openSUSE Tumbleweed variant), and they hit a nice middle ground, but I still have to do major upgrades for my Leap systems every year or so between their point releases, which is kind of a pain. I just wish I could use Fedora with ~3 years of support, because that's the system and tools I'm most familiar with (we used CentOS at work and recently migrated to Alma).
SELinux by default is a welcome addition but I'm concerned it will break many apps.
Also good that they're finally getting off a Python 2 default in Amazon Linux, only 2.5 years after its EOL.
It's funny, I was just about to move to kernel 5.10 for amz Linux 2. Might just wait a bit for AL2022
If you go here, there are OVA files, VDI, container, and a few other formats, and you can easily translate them into some more esoteric formats.
AL2022 does not make any commitments similar to AL2 (see text from link below).
It is therefore borderline unusable for enterprises that value stability like my current employer.
1) AWS will provide security updates and bug fixes for all packages in core [for 5 yrs]
2) AWS will maintain user-space Application Binary Interface (ABI) compatibility for the following packages in core:
elfutils-libelf, glibc, glibc-utils, hesiod, krb5-libs, libgcc, libgomp, libstdc++, libtbb.so, libtbbmalloc.so, libtbbmalloc_proxy.so, libusb, libxml2, libxslt, pam, audit-libs, audit-libs-python, bzip2-libs, c-ares, clutter, cups-libs, cyrus-sasl-gssapi, cyrus-sasl-lib, cyrus-sasl-md5, dbus-glib, dbus-libs, elfutils-libs, expat, fuse-libs, glib2, gmp, gnutls, httpd, libICE, libSM, libX11, libXau, libXaw, libXext, libXft, libXi, libXinerama, libXpm, libXrandr, libXrender, libXt, libXtst, libacl, libaio, libatomic, libattr, libblkid, libcap-ng, libdb, libdb-cxx, libgudev1, libhugetlbfs, libnotify, libpfm, libsmbclient, libtalloc, libtdb, libtevent, libusb, libuuid, ncurses-libs, nss, nss-sysinit, numactl, openssl, p11-kit, papi, pcre, perl, perl-Digest-SHA, perl-Time-Piece, perl-libs, popt, python, python-libs, readline, realmd, ruby, scl-utils, sqlite, systemd-libs, systemtap, tcl, tcp_wrappers-libs, xz-libs, and zlib
3) AWS will provide Application Binary Interface (ABI) compatibility for all other packages in core unless providing such compatibility is not possible for reasons beyond AWS’s control.