Amazon Linux 2022 (amazon.com)
270 points by gtirloni 12 days ago | hide | past | favorite | 157 comments

As a happy user of AWS Linux 2, it is extremely disappointing that they're no longer providing a drop-in RHEL replacement for EC2. I don't see any mention of things we and many large shops like ours care about - long term support and RHEL compatibility.

We've been very vocal to AWS product managers and solution architects about our needs for an Amazon Linux 3 that is a refresh over AWS Linux 2 (at least 5 years support with RHEL 8 compatibility, free kernel patching w/o reboots, official support from datadog, vmware images). Sad that we haven't been heard. We'll now need to plan to move over 20k instances to Rocky Linux.

I suspect that the move to using Fedora has something to do with changes to the CentOS project that AWS Linux 2 forked. Let's hope the beancounters at IBM don't have other plans for Fedora.

You could pay Red Hat for licenses if it's that important to you. But as far as I can tell Amazon is a major supporter of Rocky Linux so sounds like they heard you and are giving you exactly what you want.

Actually Amazon is just providing them with open source credits, much like they do for every other open source project.


I don’t understand your mindset at all. You’re moving thousands of instances that are likely responsible for millions of dollars over to a community run project that is effectively in the hands of random people? Why don’t you just pay Red Hat for their work?

If it's only thousands of instances that's unlikely to involve enough money for Red Hat to actually provide anything like meaningful support, from what I can tell from talking to RH customers of various sizes - but it is likely to involve enough money that selling it to management as basically a moral license isn't going to work either.

Plus, honestly, per-system licensing plus cloud autoscaling isn't really anybody's idea of a good time.

I work at a large bank and we are also sad they didn't release another RHEL compatible distro. We prefer not to pay RH subscription fees and are now looking at both Rocky and Oracle Linux. Can I contact you to share notes on approaching such a migration from AL2?

In addition to RHEL compatibility, AL2 worked well because our existing support plan with AWS covered it. It also came with free kernel 'live' patches unlike others (except Oracle).

I don’t imagine this is really the kind of thing you can get into publicly but I’d love to know more about why a bank of all places doesn’t want to pay for RHEL and just go with the free fork instead.

My thought? Nobody wants to pay for anything.

They may be willing to pay for RHEL for some critical services, but there are a lot of things where a full RH sub doesn't make sense: dev/test environments, non-critical services, web applications, and applications with high parallelisation. You want to support as few OS variants as possible, so by using RHEL clones there you get "can't tell by the taste" compatibility across all your systems. Plus it makes hiring people easier.

"Amazon Linux 2022 includes kernel live patching functionality" [1]


> Let's hope the beancounters at IBM don't have other plans for Fedora.

What do you think they would potentially do? Fedora is the upstream for RHEL, it's integral to it. Part of the reason they dropped support for CentOS was because it didn't serve to benefit RHEL very much.

>I suspect that the move to using Fedora has something to do with changes to the CentOS project that AWS Linux 2 forked. Let's hope the beancounters at IBM don't have other plans for Fedora.

It does not, according to Amazon themselves. https://twitter.com/stewartsmith/status/1463594136473202692

Disclosure: I work at Red Hat.

It's almost like RH/IBM made this change to prevent AWS/Oracle from cloning their OS!

I work at AWS but not in the AL team. I’m curious what parts of AL2022 don’t meet your needs? It’ll have 5 years of support and live kernel patching. Just like AL2 I’m assuming official partner support is coming soon (AL2022 is still in preview release)

It seemed to read pretty clearly to me that "being its own Fedora derivative and therefore not as easy to trust for RHELish workloads as something like Rocky" is the sticking point, though 'mst needs more coffee' is always a possibility here.

Similarly, had Amazon created its own Debian derivative, I'm not sure it'd be that tempting to users whose baseline is Ubuntu.

> We'll now need to plan to move over 20k instances to Rocky Linux.

No judgement - curious, what does Rocky give vs. al2022? RPM compatibility? Or maybe, what do you like about RHEL compatibility?

Compliance, I suspect. If you're working in compliance-heavy industries, then RHEL certification is signed off on.

Any other Linux (including Rocky Linux) will need to be painfully certified.

That makes sense, but wouldn't you expect al2022 to reach compliance before Rocky?

I just deal with ISO certification, so I'm not clear on the possible requirements.

Why would Amazon Linux be a suitable substitute for genuine RHEL if compliance is an issue?

If compliance is an issue, I doubt whether there is such a thing as a "suitable substitute". Certainly none of the commercial applications I interact with accepted CentOS as an alternative to RHEL, so they certainly won't allow a substitution for Rocky or Alma.

While AL2022 isn't a drop-in RHEL alternative, it seems much more likely that vendors will accept it as a commercially supported install base given that it is a first-class citizen on the biggest cloud.

Amazon Linux is BETTER than RHEL in that regard.

AWS has a secret weapon in compliance: it's called AWS Artifact. Most compliance certifying agencies will accept AWS Artifact reports as pass-through.

No technical reason. Just sheer Amazon muscle, I guess.


We used that for PCI compliance when I was working at Whole Foods. It only covers their part of the shared responsibility model.

There's still a whole boatload of things that need to be covered which do not fall on their side of the shared responsibility model.

So this is a weird thing to think, because Amazon Linux 2 was never a drop-in RHEL replacement. It was a bastard child between RHEL 7 and Fedora stuff, combined with a custom kernel. You didn't have RHEL kABI and you didn't even have complete RHEL userspace compatibility either.

If anything, Amazon Linux 2022 clarifies things by indicating they directly track Fedora Linux, branch and stabilize that, and offer their own lifecycle guarantees for that branch.

I'm confused about what Datadog has to do with running an operating system, and what RHEL 8 compatibility means. Kernel patching is a feature of a good number of free operating systems. As for LTS, I really wish that every company that griped about long-term support for an OS would volunteer to pay 2-3 engineers to contribute to OS development. You'd probably see a much more sustainable OS ecosystem if that happened.

> As a happy user of AWS Linux 2, it is extremely disappointing that they're no longer providing a drop-in RHEL replacement for EC2.

Like Rocky Linux?

I’d go for Alma Linux, but I get your point.

On AWS, I always now use Amazon's Linux distro. They also maintain their own version of OpenJDK.

As skeptical as I am about huge tech corps like Amazon, Google, etc., I have to admit I enjoy being their paying customer - nice experience. I find GCP and AWS a pleasure to use.

Just be aware that it isn't a drop in replacement. We were using AWS Corretto (https://aws.amazon.com/corretto/) and had to back out because we had all sorts of connectivity issues in combination with Mulesoft Mule ESB. I suspect it was because Corretto deprecated a number of cipher suites, but we weren't able to determine for sure.

I doubt that.



Looks like they change branding and nothing else. Maybe a backported bugfix here or there.

How do you develop for it though? Do you install it locally as well? Or do you only do interpreted languages and/or Java? I suppose Go would work across distros also (because it doesn't use libc), but that's all I can think of.

I use their images locally with Qemu, here's an example of an address to a qcow2 image:

How does one find these links you might ask? Well, I haven't found a nice way other than this:

Find the AMIs (newest last)

$ aws ec2 describe-images --region eu-west-1 --owners amazon --filters 'Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2' 'Name=state,Values=available' --output json | jq -r '.Images | sort_by(.CreationDate)'

    {
      "Architecture": "x86_64",
      "CreationDate": "2021-11-09T04:50:55.000Z",
      "ImageId": "ami-09d4a659cdd8677be",
      "ImageLocation": "amazon/amzn2-ami-hvm-2.0.20211103.0-x86_64-gp2",
      "ImageType": "machine",
      "Public": true,
      "OwnerId": "137112412989",
      "PlatformDetails": "Linux/UNIX",
      "UsageOperation": "RunInstances",
      "State": "available",
      "BlockDeviceMappings": [
        {
          "DeviceName": "/dev/xvda",
          "Ebs": {
            "DeleteOnTermination": true,
            "SnapshotId": "snap-0f312650dadc31d95",
            "VolumeSize": 8,
            "VolumeType": "gp2",
            "Encrypted": false
          }
        }
      ],
      "Description": "Amazon Linux 2 AMI 2.0.20211103.0 x86_64 HVM gp2",
      "EnaSupport": true,
      "Hypervisor": "xen",
      "ImageOwnerAlias": "amazon",
      "Name": "amzn2-ami-hvm-2.0.20211103.0-x86_64-gp2",
      "RootDeviceName": "/dev/xvda",
      "RootDeviceType": "ebs",
      "SriovNetSupport": "simple",
      "VirtualizationType": "hvm"
    }
From the information returned you have to stitch the version numbers and filenames into this format:

And if you did it right, you can now download the file.
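The extraction half of that can at least be scripted; given the describe-images JSON, pulling the newest image name is a one-liner (the download-URL template itself was elided above, so you still have to supply that part yourself):

```shell
# Given `aws ec2 describe-images ... --output json` on stdin,
# print the Name of the most recently created AMI
jq -r '.Images | sort_by(.CreationDate) | last | .Name'
```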

https://cdn.amazonlinux.com/os-images/latest/ exists too.

You'll also find VirtualBox, Hyper-V and VMWare ready images in there.

(and also arm64 ones)

The latest Amazon Linux and Windows AMIs are available as public SSM parameters:
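For example, resolving the latest AL2 AMI ID looks like this (a sketch; the parameter path below is the documented public one for the x86_64 gp2 AMI, but verify it for your region and variant):

```shell
# Resolve the latest Amazon Linux 2 AMI ID via the public SSM parameter
aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --region eu-west-1 \
  --query 'Parameters[0].Value' \
  --output text
```

This is much less fiddly than sorting describe-images output, and the same parameter can be referenced directly from CloudFormation or Terraform.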


ISTR there’s also an SNS topic you can subscribe to if you want to do something automatically on new AMI release.

Currently you have to launch an EC2 instance to test it.

Once it's GA, they'll provide VM images and docker containers, so you'll be able to test it offline

You should just develop your apps in/with/for containers. The container contains all the dependencies for your app. This way you never have to think about the host OS ever again; your app "just works" (once you hook up the networking, environment, secrets, storage, logging, etc for whatever is running your container). That sounds like a lot of extra work, but actually it's just standardizing the things you should already be dealing with even if you didn't use containers. The end result is your app works more reliably and you can run it anywhere.

Some of us are systems/infrastructure engineers who have to build the intermediate layer. You can't just lay a dockerfile on top of a kernel and hope the system learns how to run it by osmosis.

Yes there are services like Fargate but they're not cost efficient for many cases.

The person was asking how they should develop their app to run on a particular host. If they need to run/deploy it, they can use the EC2 Instance Launch Wizard to set everything up in the console, log in and install Docker, use Docker.com to pull their container, and then run it.

Or, like you suggest, they could use an AWS service to manage their container, like App Runner, or Lightsail, or EKS, EKS Fargate, EKS Anywhere, ECS, ECS Fargate, ECS Anywhere, ROSA, Greengrass, App2Container, Elastic Beanstalk, or Lambda. There are plenty of guides on AWS's website on how to use them.

Cost is mostly irrelevant to the conversation, as you can run containers anywhere (other than, say, a CloudFlare worker); pay for any infrastructure you want and then run the container there.

This is true, but people focusing on only these benefits often miss the fact that they still have to update the image contents and re-deploy as soon as security patches are available.

This is like updating the direct dependencies of your service itself (e.g. cargo audit -> cargo update) but anecdotally I'm seeing many people neglect the image and sometimes even pin specific versions and miss potential updates even when they do later rebuild it.

We take unattended upgrades for granted on Debian-based servers, and that will likely help the Docker host system, but I'm not aware of anything nearly as automated for rebuilding and redeploying the images themselves.

It could be part of your CI/CD pipeline but that in itself is a lot of extra setup and must not be neglected, and it must make sense, e.g. pin in a way that will still pick up security patches and have a dependency audit as part of CI/CD to report when the patching hasn't been enough (e.g. due to semver constraints).

Docker's website has pretty sweet automation that you can use to re-build your containers automatically when the base image changes.

What you describe isn't hard to achieve. Write a one-line cron job that gets the latest packages for your container's base, writes them to a file, commits the file to Git, and pushes it. Then set up a Git webhook that runs a script to build your container with a new version and push that to a dev instance. Add some tests, and you have an entire CI/CD process with just one cron job and one Git webhook.
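A minimal sketch of that cron job, assuming an apt-based base image; the repo path, base image, and commit message are all hypothetical placeholders:

```shell
#!/bin/sh
# Hypothetical cron job: snapshot the base image's pending package updates
# and push to Git, so a webhook can trigger a rebuild when anything changes.
set -e
cd /srv/myapp-repo                        # hypothetical checkout of the app repo
docker pull debian:stable-slim            # refresh the base image
docker run --rm debian:stable-slim \
  sh -c 'apt-get update -qq && apt list --upgradable 2>/dev/null' \
  > base-packages.txt
git add base-packages.txt
# Only commit (and thereby fire the CI webhook) when something actually changed
git diff --cached --quiet || { git commit -m "base image package refresh"; git push; }
```

The `git diff --cached --quiet` guard is what keeps this from generating a rebuild on every cron tick.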

Why? I develop C++ servers for Linux. I have a script that can build a production server from nothing with all the dependencies needed, deploy the database, and then pull down the source, build the executable, run tests, and install it as a daemon. I test it from scratch every once in a while just in case, and haven't had any trouble for years.

> you never have to think about the host OS ever again

This is literally one of the only things that is not included in a container image. The Linux kernel is the Operating System and you are subject to differences in its configuration depending on where the container is running. You are referring to the distribution.

> You should just develop your apps in/with/for containers. The container contains all the dependencies for your app. This way you never have to think about the host OS ever again; your app "just works" (once you hook up the networking, environment, secrets, storage, logging, etc for whatever is running your container). That sounds like a lot of extra work, but actually it's just standardizing the things you should already be dealing with even if you didn't use containers. The end result is your app works more reliably and you can run it anywhere.

This is a false sense of reproducibility. I've encountered cases where a container worked well on one machine and crashed or had weird bugs on another one.

This happens, but is pretty rare. Using containers generally leads to much more reliable portability than trying to manage all the dependencies by hand.

If I remember correctly, Go does use libc by default if you link with the net package (you can set CGO_ENABLED=0 to disable it, but then you won't get NSS). On OpenBSD it also switched back to using libc.

You can also use a net specific build tag.
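To illustrate the two options (assuming a Go toolchain is available; `netgo` forces the pure-Go resolver without disabling cgo everywhere):

```shell
# Fully static build: disable cgo entirely (loses glibc NSS lookups)
CGO_ENABLED=0 go build -o app .

# Or keep cgo enabled but force the pure-Go net resolver via a build tag
go build -tags netgo -o app .

# Inspect whether the result still links against libc
file app
ldd app || true   # prints "not a dynamic executable" for a static binary
```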

I guess like in the good old UNIX days, by ssh (nee telnet), browser (nee X Windows) into devenvs.

Is there any quantitative data for the performance comparison, like they published for Aurora vs MySQL? Thanks.

Well, it is generally more likely to be tuned for AWS, with the right drivers and tools installed, than a default distro you would download from a website. But the images that are available on AWS would likely be tuned similarly; if there are issues where another image is noticeably worse, its maintainers would look into Amazon Linux and apply the changes from it.

I would say that Amazon Linux is likely to have fewer issues with the latest instance types (if they change something "hardware"-wise; for example, when AWS started exposing EBS using NVMe drivers, there were some issues originally).

I'm a big SELinux fan and user.

Enabling it won't in itself secure your company's applications, as the default policies in Fedora only apply to installed services (e.g. ssh) that have a policy written for them.

This is probably right on the boundary of the shared-security-model, but I think it would be great if they also offered easier ways for application developers to leverage the advertised feature.

FWIW, Docker, podman, LXC, and Kubernetes will apply SELinux policies to containers automatically if you have that support enabled at build time (many distributions do have it enabled, esp Fedora family) and SELinux enabled at runtime. Likewise for AppArmor.


AWS is also active in this area, more so with Bottlerocket OS: https://aws.amazon.com/blogs/opensource/security-features-of...

The recent release even incorporating the use of a feature I had previously dismissed as useless (MCS) is really quite neat https://github.com/bottlerocket-os/bottlerocket/discussions/...

Most servers use Debian or Ubuntu. I think this will be great, and maybe even a killer feature that changes the landscape a little, but I don't think it will have as much impact as we'd wish, at least in the next 5 years.

You're not wrong, but writing selinux policy isn't that complicated. You can easily look at ausearch output to understand why a constrained process failed and brute force a policy using audit2allow. Although as the policy writer becomes more familiar with selinux and their app, they can write better policy.
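The brute-force loop the parent describes looks roughly like this (run as root on the target host; `myapp` is a placeholder module name):

```shell
# Look at what SELinux denied (or would have denied, in permissive mode)
# after exercising the application
ausearch -m AVC -ts recent

# Generate a local policy module from the recent denials
ausearch -m AVC -ts recent | audit2allow -M myapp

# Review myapp.te before loading it: audit2allow happily allows
# everything it saw, including things you may not want to permit
cat myapp.te

# Load the compiled module
semodule -i myapp.pp
```

Repeat until the app runs cleanly, then tighten the generated rules by hand.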

I do know this, I'm currently putting together a training course on authoring SELinux policy.

Surely the fact that 'disabling SELinux' is the top result on the subject in Google or StackOverflow will tell you that you would be in the minority of developers that like working with it and find it easy to do so.

I think there's more to it than just simply running an app without receiving an AVC complaint in auditd, you need to be able to test that the controls you put in place actually protect the application in some way, this does not come for free with audit2allow and other such generative tools.

The problem I found (on Centos 8) is that audit sometimes denies but nothing is logged. I found this is the case when an apache script tries to kill another process. It required 2 separate policies: one of which audit2allow came up with, and another I had to figure out myself after a whole bunch of time scouring stackoverflow. After that I just gave up on selinux and turned it off, as I just couldn't trust it.

If it actually did what it was supposed to do in a reasonable manner, people would use it.

> The problem I found (on Centos 8) is that audit sometimes denies but nothing is logged

I doubt that. journalctl has always given me something when there was an actual denial. You might just not have looked in the right place.

I did. It is trivial to recreate.

For some applications, certainly.

But for applications with a large feature set - e.g. a web browser - if the policy author didn't use a particular feature - e.g. U2F security key support - you might be introducing a new source of problems that only advanced users can easily solve.

Not that I imagine Amazon Linux is used for web browsing very often....

I don’t understand why this is based on Fedora. Isn’t that more of a desktop distro…? And this seems more aimed at virtual machines running on EC2…? Or am I missing something?

It’s also interesting that at the same time Amazon is sponsoring Rocky Linux: https://rockylinux.org/sponsors/ (Which is based on Red Hat Enterprise Linux.)

"Our release cadence (new major version every 2 years) best lines up with a highly predictable release cadence of an upstream distribution such as Fedora."

"We believe that having Fedora as upstream allows us to meet the needs of the customers that we talked to in terms of flexibility and pulling in newer packages."


Stability is important, but fewer and fewer people seem to want to run 4-year-old software.

I think it depends quite a bit on the circumstances.

And as long as the kernel is decently fresh, one can run new shiny software using containers instead.

It depends on how you define stability. Fedora packages are very stable in terms of bugs, but changes between versions might cause extra work. However, many run their services in containers anyway, and you can use the latest packages on your host.

Fedora has a server edition, and everything in RHEL/CentOS is old

> and everything in RHEL/CentOS is old

That is the whole reason people use CentOS and Debian server-side: old, stable, and, most importantly, security-patched. If you need the newest version of your dev stack, just install it on the old stable OS base, so you only have to worry about your dev stack.

Speaking as an ex-platform engineer, we needed to be on much newer kernel versions to leverage ebpf and other modern features

The Oracle UEK wasn't an option?

Oracle was never an option.

I kinda dragged Oracle's UEK over the coals for xxhash. That was a red herring.


> CentOS and Debian as server side old stable and most importantly security patch

Historically, CentOS has been very slow to release security patches, compared to upstream (RHEL). And for anything non-critical (but still often high) Debian stable tends to receive fixes a lot later than unstable, and sometimes never due to need to backport.

Not completely true since the introduction of module packages. You can now get more up-to-date packages within CentOS Stream/RHEL.

Fedora is the source and integration space for many things these days, not just Fedora Workstation any more. It's the upstream for RHEL/CentOS but also has a ton of editions and spins, including Fedora CoreOS, Fedora IoT, Fedora Silverblue, etc.

No, Fedora is general purpose.

Source: Fedora developer, previously employed at Red Hat as a Linux packager.

Fedora works badly on Raspberry PI.

Fedora is the upstream for RHEL, and was the upstream for CentOS. While many folks use Fedora as a desktop OS alternative to Ubuntu, Fedora was not designed with desktops in mind.

You are correct saying it is designed for EC2 instances, as it is the de facto default image for EC2 instances, despite many folks choosing an Ubuntu image instead.

Looks interesting. SELinux by default is certainly a win, it seems that Linux has finally hit a tipping point where SELinux is a reasonable option (ie: someone else is going to do the work for you).

Unfortunately I'm just way more used to debian based systems, and I feel like having a mismatch in production would just lead to friction.

RHEL running with SELinux enabled has been a thing since I worked at Red Hat 12 years ago, and Amazon Linux 2 was based on a CentOS upstream that had the capability of running that way. All certification had to happen with SELinux enabled, any distro-provided service was set up to run with full restrictions, and it was on by default for all Professional Services work.

However it became a problem once you used 3rd party software as step 1 of most install guides was to disable SELinux.

I use systemd units to start Oracle databases.

These come up unconfined by default in RHEL 7, but this behavior changed in RHEL 8.

I don't remember the specifics, but Oracle support confirmed that it should be assigned unconfined in the newer OS.

This is essentially "setenforce 0" for a process, as I understand it.

In RedHat or CentOS it was enabled by default as well for a long while. The problem was that if you installed custom software (not packaged by the distro) you had two options:

- create and install SELinux rules for it

- disable SELinux

Unfortunately most did not bother to learn how to do the first option so they go with the 2nd.

Besides the sibling answers, it has been enabled by default on Android for quite some time now, it is one of the mechanisms how they enforce the NDK being mostly about extending the Java/Kotlin userspace with native code and nothing else.

Yes, Android is in fact one of the major contributors to that 'tipping point' I was mentioning.

The irony with Android, is that from the userspace point of view it doesn't matter it runs on top of the Linux kernel.

So while they are the Linux distribution that takes advantage of almost all security knobs available, LinuxSE, seccomp, eBPF, userspace drivers,..., that is transparent to apps unless they try to see behind the curtain.

Yep, it's great. But they did do a lot of work to get system services and privileged Google Apps to behave in SELinux.

SELinux has always been a reasonable option, but it's just scarier than what people are used to. I used Fedora for a couple of years and was surprised by how straightforward it was once I understood it.


* Will be released on a predictable schedule every 2 years, supported for 5 years. Minor releases every quarter.

* GA will be based on Fedora 35. Preview is currently based on Fedora 34

* There's no official statement regarding compatibility with Fedora packages

* SELinux will be enforcing by default

* Kernel will be a kernel.org longterm version, not the Fedora one

* VM images/docker containers will be officially available when GA. For now you can download images unofficially [1]

* Unofficial ETA is Q2 2022. For reference, AL2 is currently officially supported until June 30, 2023.

1. https://news.ycombinator.com/item?id=29344927

You make no mention of it being a rebuild of RHEL 9 beta, so are they making their own EL based on F35 or is it based on RHEL 9?

It's not based on RHEL. They're making their own distro which is based on Fedora.

This sounds closer to what CentOS Stream is planning to be but with a 2-year release cadence instead of 3?

Perhaps someone could give me some advice?

I work alongside a small team maintaining quite a lot of machines on AWS. They're struggling (IMHO) to manually apply all of the security patches their scanning tool identifies. My theory is that Amazon Linux gets patched frequently, and so they'd be better off spending time normalizing our EC2 infra so that every instance is running Amazon Linux, and then work on an easy rollout mechanism to deploy the latest version.

Has anyone got any thoughts on this? It wouldn't obviate the need for patching completely, but I feel like AWS is already doing some of this work for us, so we should take advantage.

For those few AMIs that are long-lived, AWS SSM Patch Manager is your friend. Naturally, take care to roll out patches in rolling blocks; you don't want to apply a broken patch everywhere on the same day :)


I second this, we use it to manage a bigger fleet with a few hundred machines. One thing to keep in mind though is that it will not apply kernel updates (as those require a reboot) so you still need to account for it.

I haven’t tried this yet (our instances need other changes to get to this point) but AL2 can support live kernel patching: https://docs.aws.amazon.com/systems-manager/latest/userguide...
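From memory of those docs, enabling it on AL2 goes roughly like this (verify against the current AWS documentation; it also requires a recent-enough kernel):

```shell
# Enable the kernel live-patching plugin and repository (AL2)
sudo yum install -y yum-plugin-kernel-livepatch
sudo yum kernel-livepatch enable -y

# Install and start the kpatch runtime that applies the patches
sudo yum install -y kpatch-runtime
sudo systemctl enable --now kpatch.service

# Applied live patches can later be listed with
kpatch list
```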

Every mainstream upstream Linux vendor is continuously pushing updated AMIs. It shouldn’t really matter whether you solve this with Ubuntu or Amazon Linux or RHEL/CentOS.

Sounds like you need a better process / automation for rolling updates. Either continuously rebuilt golden images or rolling security patches, or turning on your distros unattended upgrade mechanism could be solutions depending on your environment.

Recently we moved to use EC2 Image Builder, and it's been working great: https://aws.amazon.com/image-builder/

Note: We were already mostly Launch Templates/ASGs, so updates are always new instances (rather than patching long-running ones).

For the life of me I never got Image Builder working in a decent state.

I opted for Packer and I've been very happy with it. Though with that said I'm still using AWS SSM Patch Manager for a few outliers that are long lived.

Looking at you, Okta AD agent that can only be programmatically installed using AHK. :-/

It was a little strange to set up, I remember it taking a while/a lot of experimentation... But in the end it's just running userdata and/or "component" scripts, and baking that into the AMI. It's been happily updating and switching out Launch Template versions for our ASGs (for some reason each pipeline can only push to 5 LTs).

I guess I should write up a blogpost, because... the documentation is kinda garbage.

I never got around to using packer properly so can't compare.

Yes, one of the core benefits of a provider like AWS is that they provide tooling to treat individual instances as immutable entities that you simply replace without any interruption to your users. You should focus on expressing the infrastructure as code and using mechanisms like ASGs to roll out new instances based on the latest Amazon provided AMIs.

If you can, definitely standardize on as few distros as possible. It'll make applying patches (and learning when things go wrong, because they will) much easier.

We used to have all sorts of distros that people just felt like using without worrying about their maintainability. We kept fighting fires to keep everything running. Once we standardized on a single distro (CentOS at the time), everything started working much more smoothly. We could have picked Debian, Ubuntu, it doesn't matter.

That being said, Amazon Linux 2 is pretty well maintained. Most things (all?) that work on RHEL, will work on it. You may need to use 3rd-party repos if you want really newer stuff (eg. PHP) but that's inherent to such LTS releases. That situation is expected to improve with the improvements that adopting Fedora brings in AL2022 but I need to catch up.

Yep, we do this and it works well. You can either trigger a server refresh from SNS (AWS notifies you of certain AMI updates), or just rebuild the underlying fleet each week with the most current AL2 AMI.

Why do people choose Amazon Linux over, say, an Ubuntu LTS?

There are really two primary camps - RedHat based (CentOS, Rocky Linux, Amazon Linux, etc) and Debian based (Debian, Ubuntu, etc). There are of course many other bloodlines - but these are the most common in production environments and more specifically cloud env. If you are familiar with one version of linux that is RH based, you will tend to gravitate to others with similar DNA. Likewise, if you come from Debian/Ubuntu you will tend to stick with those. At the end of the day they are both Linux, but each has their own approach to configuration, where things go, package management, etc.

You really can't go wrong with either - use what you prefer.

FWIW, the real brunt of my question was why one would go with a cloud-provider-specific operating system over one from a group like Canonical or Red Hat, as I would naively expect it to have less support and particularly less ecosystem-wide understanding and experience while not being available for other systems, and so it would seem like an easily-avoidable source of vendor lock-in. If I were part of Camp RedHat I would personally use CentOS, not "Amazon Linux", unless there was some extremely good reason why Amazon Linux in specific was awesome.

> it would seem like an easily-avoidable source of vendor lock-in

I have the same question / concern.

I think it's great that AWS are offering their own flavour of Linux, but doesn't that result in added risk of vendor lock-in?

Whereas with an Ubuntu instance, one can change cloud providers quite easily [0], without being tied into AWS-specific Linux configurations [1].

[0] I think AWS is great, so this is more a hypothetical scenario.

[1] I say this knowing little about AWS Linux and the degree to which it has custom configurations that can't easily plug-and-play on other distros.

Would appreciate insight from someone who can speak to this risk from experience.

AWS’ flavor of Linux is open source, though. You can run it anywhere, not just Amazon. I don’t see this as a vendor lock in issue, personally.

Ideally you build your software such that the OS is just an implementation detail that’s abstracted away. In the server world a switch from RHEL to Ubuntu is not as hard as a move from, for instance, Google BigTable to AWS DynamoDB

Besides going the path of least resistance in AWS, possibly to get OS & package support from AWS if there's already an enterprise-level support plan in place, rather than needing to buy other support subscriptions (eg, RHEL)?

AL2 is able to run in other places, so there doesn't seem to be much vendor lock-in compared to a service like DynamoDB, though.

While I prefer Ubuntu myself, we use Amazon Linux because:

* It has support for all new instances/hardware/services on day 1

* It's optimised for AWS, sometimes a bit faster than Ubuntu

* It has a good balance of freshness, stability and long term support

* It comes with integrations to all of AWS services and ecosystem

* It's the default choice, and often the least hassle

* It's fully supported by AWS

* It's the only realistic choice for some services (e.g. Lambda, Elastic Beanstalk)

> a good balance of freshness ...

I'm surprised to read that. Coming from Ubuntu and in the last couple years NixOS I've found the Yum repos for Amazon Linux woefully stale.

AL2 comes with amazon-linux-extras.

It provides newer versions for a few key packages, e.g.: Docker 20.10, PostgreSQL 13, Ruby 3.0, Kernel 5.10, nginx 1.20, PHP 8.0, Python 3.8, Redis 6.2, Go 1.15, Rust 1.47, etc.

Some newer packages, e.g. OpenSSL 1.1.1 and zsh 5.7 are provided in the main repo.
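For reference, enabling one of those extras topics on an AL2 host looks roughly like this (the commands below only exist on Amazon Linux 2; the PostgreSQL topic is just an illustrative example):

```shell
# List the available extras topics and check for PostgreSQL
amazon-linux-extras list | grep -i postgresql

# Enable the topic, which wires up its yum repository
sudo amazon-linux-extras enable postgresql13

# Refresh metadata and install from the newly enabled repo
sudo yum clean metadata
sudo yum install -y postgresql
```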

Outdated packages weren't a major pain point in my experience. The bigger issue is the relatively small selection of packages.

These are either available via 3rd party repos (e.g. NodeJS) or EPEL (e.g. libsodium), or by recompiling Fedora SRPMS. That can be an inconvenience, but not a big deal overall.

I hope the situation will improve once AL2022 is out, as Fedora comes with a much wider selection of packages.

A level of support within AWS.

...and with this new version, great support for SELinux too (because of Fedora). Some also dislike Ubuntu's push for Snaps.

I think SELinux is one of the biggest differences and the hardest to adapt to (as changing apt to dnf is not hard).
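For anyone making that jump, the day-to-day SELinux adaptation is mostly a handful of diagnostic commands; a minimal sketch (the `httpd_can_network_connect` boolean is just an illustrative example):

```shell
getenforce                       # current mode: Enforcing / Permissive / Disabled
sudo setenforce 0                # temporarily switch to Permissive while debugging
sudo ausearch -m AVC -ts recent  # show recent SELinux denials from the audit log

# Draft a local policy module from the recent denials
sudo ausearch -m AVC -ts recent | audit2allow -M mylocalpolicy

# Persistently flip a boolean instead of writing custom policy
sudo setsebool -P httpd_can_network_connect on
```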

If you want a good starter on SELinux, my whole book on deployment[0] is SELinux-ready, with a full dedicated chapter on SELinux and an SELinux cheatsheet. It's also 33% off today for Black Friday ("blackfriday" discount code at checkout).

[0] https://deploymentfromscratch.com/

"because of Fedora" <- So, why not use Fedora? Is it Amazon Linux that you find awesome or whatever Amazon Linux is based on?

I would prefer Fedora, but if I am on AWS and Amazon Linux is the one that gets the awesome Amazon support, then choosing Amazon Linux might be compelling.

Of course I would prefer they use Fedora and contribute to Fedora directly.

On AWS, there are serious issues such as https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1331150..., that only requires a simple sysctl workaround, yet AWS would only put the workaround into the Amazon Linux images.

Meanwhile, the same issue was actually fixed once and for all on GCP: https://news.ycombinator.com/item?id=8732356

Lots of developers / dev orgs are AWS users first, Linux users second and so they don't have strong opinions or experience about that.

I can see Linux eclipsing all the current OSes; it already happened with smartphones, IoT devices, and all the other little things (I forget what they are called).

Only remaining piece is the desktop segment

macOS has a Unix environment, so it'll stay relevant (for how long?).

Windows has WSL, but it's slow; I don't see myself using it since the host OS is a giant piece of crap.

MS missed a chance with Win11: they could have gone full steam into ARM with a Linux distro, 100% native Android support, 100% cloud-native support, 100% Unix support as a host OS. I wouldn't use it myself because I despise the company and its culture, but I can see the potential, and I smell a huge missed opportunity.

Amazon is getting it right, even though it's exclusively targeting cloud usage.

Marketing-wise it's great and consistent with their offering.

For Linux to conquer the desktop you'd need it to beat out Windows or Mac for market share. For that to happen, it would need competitive usability for everyday people, and this is still far off. When I use my Linux box at work I have to google things like "how do I enable this resolution for my monitor which isn't showing up" and then punch a bunch of commands into the terminal.

The advantages of windows and mac these days are that a lot of stuff 'just works' and secondly that due to their 20-30 year history their desktop application ecosystem is much richer and also much more widely used. User interfaces are also more friendly in general.

There's no technical reason why these faults cannot be overcome, but there are significant hurdles in getting a third OS to gain major market share from the incumbents MS and Apple. The reason they have market share in servers is because the experience is better to develop on and it is free. The reason they have market share in mobile is that it was free and Google could build on top of it, and secondly that mobile was a brand new market so there were no incumbents with decades of history. I don't see those conditions replicated for desktop, especially when you consider that desktop is in decline and therefore less attractive to spend capital on.

If you want to make Ubuntu as nice as MacOS I think you need a private company willing to spend money and time in a concerted effort to get it to that point, which IMO won't happen.

> If you want to make Ubuntu as nice as MacOS I think you need a private company willing to spend money and time in a concerted effort to get it to that point, which IMO won't happen.

Canonical employees would like a word with you.

You mean because they "failed" with Unity? (which was technically not a failure I'd say, just a political/funding failure)

It seems to underline the necessary financial backing. Canonical barely pulls in 100 million in earnings.

Why there aren't billions of dollars available for Linux alone from governments worldwide, for securing and improving Linux as it becomes the backbone of the worldwide data-processing and security infrastructure, is mystifying.

Why NVidia, Nintendo, AMD, Intel, Sony, Samsung, all the Chinese handset makers, Compaq, Dell, and the rest don't provide a billion a year to the Linux desktop, so they can gain market leverage against Microsoft and push support for their hardware out to the public within months (as opposed to decades with Microsoft), is beyond me.

The funny thing with the M1 architecture and OSX: it highlights how the entire PC stack, by tying itself to the Microsoft behemoth, has been cornered. How do you move to a competitive ARM PC architecture without waiting roughly five years for MS to move its bloated carcass to support it properly at the OS level? (Even if an ARM-compatible Windows exists, let's face it, it doesn't have the software support or organization behind it.)

Meanwhile, Linux can support an Arm arch right now with practically all the necessary software.

If someone business-savvy were steering Linux, they'd long ago have been coordinating a desktop alliance, secret or not, and actively selling it to the major players who could fund it with chump change.

Intel makes 20 billion a quarter. AMD 4 billion. NVidia 7 billion.

And then there's the US military.

And then there's google compute, AWS, and all the other cloud wannabes besides Azure. Linux is the reason your platforms mint money. Wouldn't you like companies and people to move to cloud-based desktops?

Why the EU doesn't fund this for economic competition with American software mystifies me. Why Africa, Asia, and South America don't fund it for language support and an affordable computing ecosystem for their countries is beyond me.

I'm not a Torvalds hater, but a super-technical person leading Linux was fantastic for the first 10 years, but really the last 15 needed a different skillset.

100% agree. Getting an M1 Mac has solidified this feeling for me too; I still want to be a Linux desktop user on philosophical grounds and will never go back to Windows, but as I age I have rapidly lost interest in tinkering and now want it to Just Work. Actually it isn't really the distro which needs fixing, but rather the package management space. AppImage/Snap/Flatpak need to be consolidated into a single open standard that is as easy to use as Homebrew.

Essentially none of the organizations you've mentioned care about desktop Linux. They may care about the kernel, and many of them do help fund aspects of kernel development. But desktops? It's not relevant to them, or their plans.

You're mostly right, but most government orgs care about desktops, Intel/AMD care about desktops, IBM should care about desktops if they could get a foot in the door.

Amazon should be interested in providing a great remote desktop solution. IMO that is a huge untapped market, especially in BYOD orgs: sure, bring your crap device, but you RDP in for anything you need business-wise that needs to be secure.

Maybe they can do it if they 10x their revenue.

Really? And what would they say? Sorry for the blocking "App Store"? For the BS we started with Mir and the Ubuntu Touch we never finished? That all our "home-made" code is proprietary?

> then punch in a bunch of commands into the terminal.

Microsoft are learning - try uninstalling the "Your Phone" application in Windows.

(Google it, paste the command into an Admin Powershell ...)

If Microsoft actually follows through with their plan to desupport Windows 10 in 2024, then (guessing) over half of their installed base will be orphaned.

Linux now bundles NTFS.

What if, by that desupport date, a rescue distro existed that erased \Windows and ran all the applications under Wine or appropriate emulation?

Microsoft's desktop dominance would be over on that date. They would cling to corporate desktops, but the general consumer market would be lost.

ReactOS is another wildcard for 2024.

Microsoft desktop dominance is a myth.

Microsoft lost the application war; everything is web-based nowadays.

Developers moved to Linux to deploy their web apps / servers.

What's left? People moved to smartphones, even for games; smartphones generate more revenue than console and desktop combined.

WASM showcases this even more: the present and the future are web-based interfaces.

There are a lot of .EXE's out there.

Some of them require Active Directory.

At the core of all this, Windows is not going anywhere.

Linux could do it a LOT of damage, if this 2024 plan is executed.

>Linux could do it a LOT of damage, if this 2024 plan is executed.

Haha, we said the same about Vista, and some even about Windows ME... and nothing happened, nor will it.

Well, things happened: Windows Server went extinct ;)

>windows server went extinct

Mmm, no, not really; many small and medium businesses still run Windows Server + Exchange + Hyper-V, etc.

For a historical take - UNIX to Plan 9:

> Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough.[1]

Linux might be "better"; unfortunately Windows is just "good enough" for most people to not care.

[1] http://catb.org/esr/writings/taoup/html/plan9.html

While I would love for this to happen, I just don't see it at all.

There are literally millions of Windows workstations or laptops across corporate campuses in the US, because at the end of the day what 95% of white collar workers need to do their jobs is MS Office and *maybe* one industry-specific LoB app.

Isn't MS Office a web app now?

Yes, and considering Google's office suite, that percentage is a lot lower these days.

Last I checked, WSL wasn't available for Windows Server, unfortunately.

WSL1 only, not WSL2.

You have been able to run WSL2 on Windows 10 Insider builds for a long time now (since 2019?). Is it still not possible to switch WSL versions for a Linux distro running on Windows Server?

> Is it still not possible to switch WSL versions for a Linux distro running on Windows Server?

Yeah, on Windows Server WSL2 is explicitly made not accessible.

What's your use-case for WSL on Windows Server, if you don't mind me asking?

Hopefully Fedora gets a lot of extra eyes and brains because of this. Cool, it's been my desktop OS for a long time.

I used it from 2012-2015 on my main machine, and now after a mix of Windows, macOS and Chrome OS, I'm back to Fedora as of this week. I enjoy Fedora but I really like Gnome.

It was also the only real option for me given Ubuntu's continued push for Snap, which I just don't like.

I have used Arch Linux for 10+ years, then shortly macOS and Windows, and to Fedora as of yesterday as well. Red Hat is employing and paying for most of the desktop and server userspace developers (GNOME, systemd, pipewire, podman, ostree), I might as well use their official distro. It's the closest thing to bleeding edge and forward looking distribution that dictates the pace for everybody else to follow. I like that.

I liked Ubuntu when it first released, but modern-day Canonical is a shell of its former self. Snap is their sad attempt at EEE from Microsoft's playbook. And I have never liked Debian, even though it's been running on my servers since forever.

> Red Hat is employing and paying for most of the desktop and server userspace developers (GNOME, systemd, pipewire, podman, ostree), I might as well use their official distro.

Yeah, that's pretty much my methodology for why I went Fedora too. Although I didn't know Red Hat was that deep into upstream work (awesome!), but it makes total sense.

> I liked ubuntu when it first released, but modern day Canonical is a shell of its former self. Snap is their sad attempt at EEE from Microsoft's playbook.

Yup. It's sad all around. Ubuntu was my first ever non-Windows OS back in 2006 or so. I feel like every few years I mentally shout "YOU WERE THE CHOSEN ONE!" to Canonical, haha.

An LTS distribution based on Fedora (and NOT RHEL) is something I've been wanting for a long time, but I don't think this is really gonna be for the non-cloud general use case?

Welp, better luck next time.

Same here. It's not clear to me if this distro is indeed viable for the desktop, and how exactly they support the Fedora packages past Fedora's lifetime (or whether they'll even supply all of the Fedora repos).

A Fedora LTS is exactly what I'm looking for.

But RHEL is based on Fedora. Every X releases, Red Hat forks Fedora and builds the next major version of RHEL on top of it.

So RHEL/CentOS/Alma Linux/Rocky are Fedora LTS.

But I can't do direct upgrades between RHEL versions. I want Ubuntu LTS style support but with Fedora and direct upgrades between LTS versions.

For what it's worth, I'm currently running openSUSE Leap and MicroOS (the immutable openSUSE Tumbleweed variant), and they strike a nice middle ground, but I still have to do major upgrades for my Leap systems every year or so between their point releases, which is kind of a pain. I just wish I could use Fedora with ~3 years of support, because that's the system and tools I'm most familiar with (we used CentOS at work and recently migrated to Alma).

They yanked my favorite part of AL2, the extras repo with new packages from EPEL and Fedora.

SELinux by default is a welcome addition but I'm concerned it will break many apps.

Interesting. For me, amazon-linux-extras was the most annoying part. Using automation tooling (Terraform for instance deployment and Ansible in the User Data of instances), it was so annoying to work with: falling back to the shell executor in Ansible just to get something installed is a pain.

Also good that they finally move off the Python 2 default in Amazon Linux, only 2.5 years after its EOL.

Big fan of using a largely upstream Linux kernel. In general, I've been very happy with AWS Linux kernels vs. Ubuntu and CentOS in AWS

It's funny, I was just about to move to kernel 5.10 on Amazon Linux 2. Might just wait a bit for AL2022.

Has anyone gotten lld linker to work with any version of Amazon Linux?

Can I get an ISO of this distro without having an Amazon account? I checked the few links on the page but found nothing.

Cloud images are harder to find as ISOs... They use this "cloud image" concept, where they distribute something very similar or equal to an OVA: the hard disk plus a manifest.


If you go here, there are OVA files, VDI, container images, and a few other formats, and you can easily convert them to more esoteric formats.

Looks like a regression from AL2 in the area of compatibility preserving updates.

AL2022 does not make any commitments similar to AL2 (see text from link below).

It is therefore borderline unusable for enterprises that value stability like my current employer.


1) AWS will provide security updates and bug fixes for all packages in core [for 5 yrs]

2) AWS will maintain user-space Application Binary Interface (ABI) compatibility for the following packages in core:

elfutils-libelf, glibc, glibc-utils, hesiod, krb5-libs, libgcc, libgomp, libstdc++, libtbb.so, libtbbmalloc.so, libtbbmalloc_proxy.so, libusb, libxml2, libxslt, pam, audit-libs, audit-libs-python, bzip2-libs, c-ares, clutter, cups-libs, cyrus-sasl-gssapi, cyrus-sasl-lib, cyrus-sasl-md5, dbus-glib, dbus-libs, elfutils-libs, expat, fuse-libs, glib2, gmp, gnutls, httpd, libICE, libSM, libX11, libXau, libXaw, libXext, libXft, libXi, libXinerama, libXpm, libXrandr, libXrender, libXt, libXtst, libacl, libaio, libatomic, libattr, libblkid, libcap-ng, libdb, libdb-cxx, libgudev1, libhugetlbfs, libnotify, libpfm, libsmbclient, libtalloc, libtdb, libtevent, libusb, libuuid, ncurses-libs, nss, nss-sysinit, numactl, openssl, p11-kit, papi, pcre, perl, perl-Digest-SHA, perl-Time-Piece, perl-libs, popt, python, python-libs, readline, realmd, ruby, scl-utils, sqlite, systemd-libs, systemtap, tcl, tcp_wrappers-libs, xz-libs, and zlib

3) AWS will provide Application Binary Interface (ABI) compatibility for all other packages in core unless providing such compatibility is not possible for reasons beyond AWS’s control.


This is not a desktop distribution by any means. Keep the LTT nonsense to YouTube, please.

Lol, obvious! Why so angry?
