
I stumbled upon this issue recently while setting up GHA and switched to the AWS ECR Public Gallery, which doesn't have these limits.

I wrote a blog post about it: https://avilpage.com/2025/02/free-dockerhub-alternative-ecr-...
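In practice the switch is mostly a matter of repointing image references at ECR Public's docker/library namespace. A minimal sketch, assuming a python:3.12-slim base image (not necessarily the one from the post):

    # before: FROM python:3.12-slim   (pulls from Docker Hub, subject to its rate limits)
    # after: the same official image, served from the ECR Public Gallery mirror
    FROM public.ecr.aws/docker/library/python:3.12-slim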


This is using AWS ECR as a proxy to docker hub, correct?

Edit: Not exactly. It looks like ECR mirrors docker-library (a.k.a. images on Docker Hub not preceded by a namespace), not all of Docker Hub.

Edit 2: I think the example you give there is misleading, as Ubuntu has its own namespace in ECR. If you want to highlight that ECR mirrors docker-library, a more appropriate example might be `docker pull public.ecr.aws/docker/library/ubuntu`.


I can tell you with 100% certainty that ECR definitely has limits, just not "screw you" ones like in the blog post. So, while I do think switching to public.ecr.aws/docker/library is awesome, one should not make that switch and then think "no more 429s for me!", because they can still happen. Even AWS is not unlimited at anything.

Disclaimer: I work for Amazon, but not on ECR or ECR Public.

The rate limit for unauthenticated pulls is 1/second/IP, source: https://docs.aws.amazon.com/general/latest/gr/ecr-public.htm...
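If the unauthenticated per-IP limit becomes a problem, authenticating raises it. A quick sketch, assuming the AWS CLI v2 is configured with valid credentials (ECR Public auth goes through us-east-1):

    aws ecr-public get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin public.ecr.aws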


Weirdly, just this week we ran into Docker Hub limiting our pulls from a CI job in AWS.

Not something we'd encountered before, but it seems earlier than these changes are meant to come into effect.

We've cloned the base image into ECR now and are building from there. This is all for internal, authenticated stuff though.
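For anyone doing the same, it's roughly a pull/tag/push into a private repo. A rough sketch, where the account ID 123456789012, the region, and the repo name base/ubuntu are all placeholders:

    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker pull ubuntu:24.04
    docker tag ubuntu:24.04 123456789012.dkr.ecr.us-east-1.amazonaws.com/base/ubuntu:24.04
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/base/ubuntu:24.04

CI images can then FROM the private copy, so builds never touch Docker Hub.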


To start from scratch, we need to start with a big bang!


After trying out various file managers, I am finally able to do this with the xplr file manager. Not sure if any other file managers support this.


I ran a small POC as you mentioned earlier, and it worked well.

https://avilpage.com/2018/02/deploy-django-web-app-android.h...


2014

Almost 10 years ago, I translated this article into Telugu.

https://avilpage.com/2014/12/python-paradox.html


Each monthly Common Crawl dump is ~100 TB. For some use cases, we don't need the entire dataset, just a subset of it.

In this post, let's see how we can extract a subset of the data from our laptop itself.
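The gist is to query the index for the records you care about and fetch only those byte ranges from the WARC files, instead of downloading whole archives. A rough sketch (not the exact commands from the post) using the CDX index, with CC-MAIN-2024-10 and example.com as placeholder crawl and URL:

    # look up one capture of example.com; each JSON line carries the WARC
    # filename plus the byte offset and length of that record
    curl -s "https://index.commoncrawl.org/CC-MAIN-2024-10-index?url=example.com&output=json" \
      | head -n 1 > record.json
    FILE=$(jq -r .filename record.json)
    OFFSET=$(jq -r .offset record.json)
    LENGTH=$(jq -r .length record.json)
    # fetch only that record's bytes instead of the whole ~1 GB WARC file
    curl -s -r "$OFFSET-$((OFFSET + LENGTH - 1))" \
      "https://data.commoncrawl.org/$FILE" -o record.warc.gz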


We can also use the ssl-cert-check CLI tool. `ssl-cert-check -s avilpage.com -p 443 -x 30` will report whether the certificate expires within the next 30 days.

We can use this command in a CI pipeline or set up a cron job to monitor it.
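For example, a crontab entry along these lines (the mail setup and recipient address are just placeholders) would run the check daily and send the output somewhere visible:

    # run every morning at 08:00 and mail the report
    0 8 * * * ssl-cert-check -s avilpage.com -p 443 -x 30 | mail -s "TLS cert check" admin@example.com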


Does devknox upload app source code to your servers?


No, it is scanned locally

