
On AWS, I always now use Amazon's Linux distro. They also maintain their own version of OpenJDK.

As skeptical as I am about huge tech corps like Amazon, Google, etc., I have to admit I enjoy being their paying customer - nice experience. I find GCP and AWS a pleasure to use.




Just be aware that it isn't a drop-in replacement. We were using Amazon Corretto (https://aws.amazon.com/corretto/) and had to back it out because we had all sorts of connectivity issues in combination with Mulesoft's Mule ESB. I suspect it was because Corretto deprecated a number of cipher suites, but we weren't able to determine that for sure.


I doubt that.

https://docs.aws.amazon.com/corretto/latest/corretto-17-ug/p...

https://docs.aws.amazon.com/corretto/latest/corretto-11-ug/p...

Looks like they changed the branding and nothing else. Maybe a backported bugfix here or there.


How do you develop for it though? Do you install it locally as well? Or do you only use interpreted languages and/or Java? I suppose Go would work across distros also (because it doesn't use libc), but that's all I can think of.


I use their images locally with Qemu, here's an example of an address to a qcow2 image:

    https://cdn.amazonlinux.com/os-images/2.0.20210721.2/kvm/amzn2-kvm-2.0.20210721.2-x86_64.xfs.gpt.qcow2
How does one find these links, you might ask? Well, I haven't found a nicer way than this:

Find the AMIs (newest last):

    $ aws ec2 describe-images --region eu-west-1 --owners amazon --filters 'Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2' 'Name=state,Values=available' --output json | jq -r '.Images | sort_by(.CreationDate)'

    {
      "Architecture": "x86_64",
      "CreationDate": "2021-11-09T04:50:55.000Z",
      "ImageId": "ami-09d4a659cdd8677be",
      "ImageLocation": "amazon/amzn2-ami-hvm-2.0.20211103.0-x86_64-gp2",
      "ImageType": "machine",
      "Public": true,
      "OwnerId": "137112412989",
      "PlatformDetails": "Linux/UNIX",
      "UsageOperation": "RunInstances",
      "State": "available",
      "BlockDeviceMappings": [
        {
          "DeviceName": "/dev/xvda",
          "Ebs": {
            "DeleteOnTermination": true,
            "SnapshotId": "snap-0f312650dadc31d95",
            "VolumeSize": 8,
            "VolumeType": "gp2",
            "Encrypted": false
          }
        }
      ],
      "Description": "Amazon Linux 2 AMI 2.0.20211103.0 x86_64 HVM gp2",
      "EnaSupport": true,
      "Hypervisor": "xen",
      "ImageOwnerAlias": "amazon",
      "Name": "amzn2-ami-hvm-2.0.20211103.0-x86_64-gp2",
      "RootDeviceName": "/dev/xvda",
      "RootDeviceType": "ebs",
      "SriovNetSupport": "simple",
      "VirtualizationType": "hvm"
    } 
From the information returned you have to stitch the version numbers and filenames into this format:

    https://cdn.amazonlinux.com/os-images/2.0.20210721.2/kvm/amzn2-kvm-2.0.20210721.2-x86_64.xfs.gpt.qcow2
And if you did it right, you can now download the file.
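The stitching is easy to script; a sketch, using the version string from the example links above and the URL format shown in this thread:

```shell
# Build the CDN URL for a given Amazon Linux 2 release version.
VER="2.0.20210721.2"
URL="https://cdn.amazonlinux.com/os-images/${VER}/kvm/amzn2-kvm-${VER}-x86_64.xfs.gpt.qcow2"
echo "$URL"
# Then download and boot (needs qemu and a few hundred MB of bandwidth):
#   curl -LO "$URL"
#   qemu-system-x86_64 -m 2048 -enable-kvm \
#     -drive file="amzn2-kvm-${VER}-x86_64.xfs.gpt.qcow2",if=virtio
```

Note that these images are configured via cloud-init, so you'll likely also need to attach a NoCloud seed ISO to set a password before you can log in.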


https://cdn.amazonlinux.com/os-images/latest/ exists too.

You'll also find VirtualBox, Hyper-V and VMware-ready images in there.

(and also arm64 ones)


The latest Amazon Linux and Windows AMIs are available as public SSM parameters:

https://aws.amazon.com/blogs/compute/query-for-the-latest-am...

ISTR there’s also an SNS topic you can subscribe to if you want to do something automatically on new AMI release.
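For reference, a sketch of the SSM lookup; the parameter path below is the public one AWS documents for the latest Amazon Linux 2 AMI, and the region is chosen arbitrarily:

```shell
# Public SSM parameter for the latest Amazon Linux 2 HVM gp2 AMI.
PARAM="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
echo "$PARAM"
# With credentials configured, this prints the current AMI id:
#   aws ssm get-parameters --names "$PARAM" --region eu-west-1 \
#     --query 'Parameters[0].Value' --output text
```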


Currently you have to launch an EC2 instance to test it.

Once it's GA, they'll provide VM images and Docker containers, so you'll be able to test it offline.


You should just develop your apps in/with/for containers. The container contains all the dependencies for your app. This way you never have to think about the host OS ever again; your app "just works" (once you hook up the networking, environment, secrets, storage, logging, etc for whatever is running your container). That sounds like a lot of extra work, but actually it's just standardizing the things you should already be dealing with even if you didn't use containers. The end result is your app works more reliably and you can run it anywhere.
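Concretely, the "hook up" step above usually ends up as a handful of flags on the runtime; the image name and paths here are made up for illustration:

```shell
IMG="myorg/myapp:1.2.3"   # hypothetical image name
echo "$IMG"
# The host only needs a kernel and a container runtime; everything
# the comment lists is wired in at run time:
#   docker run --rm \
#     -p 8080:8080 \                    (networking)
#     --env-file prod.env \             (environment / secrets)
#     -v "$PWD/data:/var/lib/myapp" \   (storage)
#     --log-driver journald \           (logging)
#     "$IMG"
```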


Some of us are systems/infrastructure engineers who have to build the intermediate layer. You can't just lay a Dockerfile on top of a kernel and hope the system learns how to run it by osmosis.

Yes there are services like Fargate but they're not cost efficient for many cases.


The person was asking how they should develop their app to run on a particular host. If they need to run/deploy it, they can use the EC2 Instance Launch Wizard to set everything up in the console, log in and install Docker, use Docker.com to pull their container, and then run it.

Or, like you suggest, they could use an AWS service to manage their container, like App Runner, or Lightsail, or EKS, EKS Fargate, EKS Anywhere, ECS, ECS Fargate, ECS Anywhere, ROSA, Greengrass, App2Container, Elastic Beanstalk, or Lambda. There are plenty of guides on AWS's website on how to use them.

Cost is mostly irrelevant to the conversation, as you can run containers almost anywhere (other than, say, a Cloudflare Worker); pay for whatever infrastructure you want and then run the container there.


This is true, but people focusing on only these benefits often miss the fact that they still have to update the image contents and re-deploy as soon as security patches are available.

This is like updating the direct dependencies of your service itself (e.g. cargo audit -> cargo update), but anecdotally I see many people neglect the image, and some even pin specific versions and miss potential updates even when they do later rebuild it.

We take unattended upgrades for granted on Debian-based servers, and that will likely help the Docker host system, but I'm not aware of anything nearly as automated for rebuilding and redeploying the images themselves.

It could be part of your CI/CD pipeline, but that is itself a lot of extra setup that must not be neglected, and it must be done sensibly: e.g. pin in a way that still picks up security patches, and run a dependency audit as part of CI/CD to report when the patching hasn't been enough (e.g. due to semver constraints).


Docker's website has pretty sweet automation that you can use to re-build your containers automatically when the base image changes.

What you describe isn't hard to achieve. Write a one-line cron job that gets the latest packages for your container's base, writes them to a file, commits it to Git, and pushes it. Then set up a Git webhook that runs a script of yours to build a new version of the container and push it to a dev instance. Add some tests, and you have an entire CI/CD process with just one cron job and one Git webhook.
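A sketch of the cron half, assuming a Debian-based base image and a Git checkout in the current directory (both hypothetical). The commit only happens when the package list actually changes, so the webhook build fires exactly when the base image drifts:

```shell
# Write the checker script; the cron job just invokes it.
cat > check-base-updates.sh <<'EOF'
#!/bin/sh
set -e
# Dump the would-be upgrades for the base image into a tracked file.
docker run --rm debian:stable sh -c \
  'apt-get update -qq && apt-get -s upgrade' > base-packages.txt
git add base-packages.txt
# Commit and push only if something changed, triggering the webhook.
if ! git diff --cached --quiet; then
  git commit -m 'base image package drift'
  git push
fi
EOF
chmod +x check-base-updates.sh
# crontab entry: 0 4 * * * /path/to/repo/check-base-updates.sh
```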


Why? I develop C++ servers for Linux. I have a script that can build a production server from nothing with all the dependencies needed, deploy the database, and then pull down the source, build the executable, run the tests and install it as a daemon. I test it from scratch every once in a while just in case, and haven't had any trouble for years.


> you never have to think about the host OS ever again

This is literally one of the only things that is not included in a container image. The Linux kernel is the Operating System and you are subject to differences in its configuration depending on where the container is running. You are referring to the distribution.


> You should just develop your apps in/with/for containers. The container contains all the dependencies for your app. This way you never have to think about the host OS ever again; your app "just works" (once you hook up the networking, environment, secrets, storage, logging, etc for whatever is running your container). That sounds like a lot of extra work, but actually it's just standardizing the things you should already be dealing with even if you didn't use containers. The end result is your app works more reliably and you can run it anywhere.

This is a false sense of reproducibility. I've encountered cases where a container worked well on one machine and crashed or had weird bugs on another.


This happens, but it's pretty rare. Using containers generally leads to much more reliable portability than trying to manage all the dependencies by hand.


If I remember correctly, Go does use libc by default if you link the net package (you can set CGO_ENABLED=0 to disable it, but then you won't get NSS). On OpenBSD it also switched back to using libc.


You can also use a net-specific build tag.
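If memory serves, the tag in question is netgo; a sketch, assuming a Go toolchain and a scratch module to build (the tiny program is only there to give the commands something to compile):

```shell
# Create a throwaway module that links the net package.
mkdir -p /tmp/netgo-demo && cd /tmp/netgo-demo
printf 'package main\n\nimport "net"\n\nfunc main() { _, _ = net.LookupHost("localhost") }\n' > main.go
# With a Go toolchain installed:
#   go mod init demo
#   CGO_ENABLED=0 go build .    # no cgo at all: pure-Go resolver, no NSS
#   go build -tags netgo .      # cgo stays available, but net uses the Go resolver
```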


I guess like in the good old UNIX days: ssh (nee telnet) or browser (nee X Windows) into dev environments.


Is there any quantitative data for the performance comparison, like they did for Aurora vs MySQL? Thanks


Well, it is generally more likely to be tuned for AWS, with the right drivers and tools installed, than a default distro you would download from the vendor's website; but the other images available on AWS would likely be tuned similarly. If some other image were noticeably worse, they would look into Amazon Linux and apply the changes from it.

I would say that Amazon Linux is likely to have fewer issues with the latest instance types (if they change something "hardware"-wise; for example, when AWS started exposing EBS using NVMe drivers, there were originally some issues).



