hkwerf's comments

Back when I was a physics student, I helped engineer exactly this experiment as an off-the-shelf classroom solution for schools! We used PMTs mounted on top of coffee cans:

http://kamiokanne.uni-goettingen.de/gb/kamiokanne.htm

The muons, which travel faster than light does in water, produce Cherenkov radiation in the coffee cans; this light is picked up by the PMTs.

Using this setup gives a much higher count rate, as the detector surface is much larger than that of Geiger tubes. Thus it's possible to quickly capture a sufficient number of muons.


Memory has faded, as this was back in the last millennium... In junior-year physics lab, one of the experiments was muon counting. It used two large NaI scintillators covered in foil, attached to PMTs with a potential > 10 kV. I don't recall exactly how we separated them, probably just some type of stand.

It strikes me that the former would profit from a strong type system, whereas the latter could profit from (enforced) strict aliasing.

Why exactly would it be too low-level for Rust?

I have to install an app to communicate with my child's state-sponsored daycare. I'll have to install an app to communicate with the teachers at his future school. Is this still fine?

It'd be one thing if it were just apps. But all of these apps are essentially just containers for some web application.

Do you get access to the web application without the app? No.

So what's the point of the apps? So they can send you notifications and annoy you with irrelevant updates concerning other groups at the same daycare all day long, because they don't care to filter?


Once they get into school, you’ll need to use separate apps to communicate with their teachers, pay for fees/lunches, etc.

The communication apps are out of control, since the teachers seem to have a choice in what is required, so it changes every year.

My middle schooler needs 3 different apps (and a Chromebook) to check/hand in homework and parents need 2 to receive communication from the teachers.


Here in Sweden, it took me about 5 months but I was able to get the school to send me info by email. They switched to an app-only system, and I have no smartphone.

Setting aside any issues related to privacy or US corporate control over my life, I'm one of the people who doesn't use a smartphone because the temptation to be online, at the drop of a hat, is too much to resist.

I compare it to being like someone who needs to lose weight, so keeps all chocolate out of the house, while everyone seems to expect me to have a luscious bar of high quality chocolate with me all the time, just sitting there, begging me to eat it.

There is an ongoing debate about smartphones at school, and the addiction and distraction they can be for kids. I think my strongest argument is that the addiction and distraction don't simply disappear for adults, and there was no way they were going to force me to get a smartphone.

I don't think that would work for those with a smartphone, but it's a crack keeping an alternative open.


I live in Sweden with two kids in school and can do everything through the web (except BankID authentication, of course).

In January our city switched from a web portal to an app-only system.

Last fall I was working on getting username+password access to the portal, since the students and teachers don't need BankID to log in. I was using my son's login to read the weekly newsletters.

At the time, I warned our school board (skolnämnden) that I would not be getting a smartphone. They switched to app-only anyway, making it impossible for me to get info or to let the school know about absences or late arrivals.

It took talking with the teachers, with the after-school program (fritids), and with the principal to work out an email-based option.


Just an FYI: If I select "Features" / "API First" on your page, I end up at a 404.

Thanks, restructured the docs site recently. Updating now!

I suppose they don't believe certain facts engineers are telling them. During Brexit, such warnings were dubbed "Project Fear". Now they're being told that adding backdoors to an encrypted service almost completely erodes trust in the encryption and, as in Apple's case here, in the vendor. However, I suppose it is very hard to find objective facts to back this up. I'd guess that's why Apple chose to both completely disable the encryption and inform users about the cause.

Now we're probably just waiting for a law mandating encryption of cloud data. Let's see whether Apple will actually leave the UK market altogether or introduce a backdoor.


This is only for Docker Desktop. The Docker Engine itself is free (AFAIK). If you're on Linux, you probably don't care about Docker Desktop at all.

There are many decent alternatives for macOS too. I had good luck with minikube a few years ago. This article seems decent, based on my previous research:

https://dev.to/shohams/5-alternatives-to-docker-desktop-46am

I don’t use Windows, but presumably you can just use its built-in Linux environment (WSL) and the Docker CLI.


Yeah I'm on a Mac. Uh, you know I really had a memory of homebrew getting things out of the .app or something, but I really can't find any evidence that was ever the case. I blame sleep deprivation, this is like the 13th blunder I've made this week haha.

I get why this is annoying.

However, if you are using docker's registry without authentication and you don't want to go through the effort of adding the credentials you already have, you are essentially relying on a free service for production already, which may be pulled any time without prior notice. You are already taking the risk of a production outage. Now it's just formalized that your limit is 10 pulls per IP per hour. I don't really get how this can shift your evaluation from using (and paying for) docker's registry to paying for your own registry. It seems orthogonal to the evaluation itself.


The big problem is that the docker client makes it nearly impossible to audit a large deployment to make sure it’s not accidentally talking to docker hub.

This is by design, according to docker.

I’ve never encountered anyone at any of my employers that wanted to use docker hub for anything other than a one-time download of a base image like Ubuntu or Alpine.

I’ve also never seen a CD deployment that doesn’t repeatedly accidentally pull in a docker hub dependency, and then occasionally have outages because of it.

It’s also a massive security hole.

Fork it.


> This is by design, according to docker.

I have a vague memory of reading something to that effect on their bug tracker, but I always thought the reasoning was OK. IIRC the rationale was that the goal was to keep things simple for first-time users. I think that's a disservice to users, because you end up with many refusing to learn how things actually work, but I get the sentiment.

> I’ve also never seen a CD deployment that doesn’t repeatedly accidentally pull in a docker hub dependency, and then occasionally have outages because of it.

There's a point where developers need to take responsibility for some of these issues. The core systems don't prevent anyone from setting up durable build pipelines. Structure the build like this [1]. Set up a local container registry for any images that are required by the build and pull/push those images into a hosted repo. Use a pull-through cache so you aren't pulling the same image over the internet 1000 times.

Basically, gate all registry access through something like Nexus. Don't set up the pull-through cache as a mirror on local clients; use a dedicated host name instead. I use 'xxcr.io' for my local Nexus and set up subdomains for different pull-through upstreams: 'hub.xxcr.io/ubuntu', 'ghcr.xxcr.io/group/project', etc.
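As a rough sketch of what that looks like from a build's point of view (hostnames as in the example above; the exact paths depend on how your proxy repositories are configured):

    # Dockerfile: base images always come from the internal proxy hostname,
    # never from the implicit Docker Hub default
    FROM hub.xxcr.io/ubuntu:24.04

    # CI and developers pull through the same dedicated hostnames, e.g.
    #   docker pull ghcr.xxcr.io/group/project/app:latest

Combined with blocking direct egress to docker.io, any image reference that sneaks in without the internal prefix fails fast instead of silently pulling from Docker Hub.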

Beyond having control over all the build infrastructure, it's also something that would have been considered good netiquette, at least 15-20 years ago. I'm always surprised to see people shocked that free services disappear when the status quo seems to be to ignore efficiency as long as the cost of inefficiency is externalized to a free service somewhere.

1. https://phauer.com/2019/no-fat-jar-in-docker-image/


> I'm always surprised to see people shocked that free services disappear when the status quo seems to be to ignore efficiency as long as the cost of inefficiency is externalized to a free service somewhere.

Same. The “I don’t pay for it, why do I care” attitude is abundant, and it drives me nuts. Don’t bite the hand that feeds you, and make sure, regularly, that you’re not doing that by mistake. Else, you might find the hand biting you back.


Block the DNS if you don’t want Docker Hub images. Rewrite it to point at your Artifactory.

This is really not complicated, and you're not entitled to unlimited anonymous usage of any service.


That will most likely fail, since the daemon tries to connect to the registry with SSL and your registry will not have the same SSL certificate as Docker Hub. I don't know if a proxy could solve this.

This is supported by the client/daemon. You configure your client to use a self-hosted registry mirror (e.g. docker.io/distribution or zot) with your own TLS cert (or insecure, without one, if you must) as a pull-through cache (that's your search keyword). This way it works "automagically", with existing docker.io/ image references now being proxied and cached via your mirror.

You would put this as a separate registry and storage from your actual self-hosted registry of explicitly pushed example.com/ images.
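Roughly, it comes down to two small pieces of configuration. A minimal sketch, assuming the CNCF distribution registry as the mirror and a placeholder hostname (mirror.example.internal):

    # config.yml on the mirror host: run distribution as a pull-through
    # cache for Docker Hub (optionally add Hub credentials here)
    proxy:
      remoteurl: https://registry-1.docker.io

    # /etc/docker/daemon.json on each client: docker.io pulls are now
    # routed through the mirror transparently
    {
      "registry-mirrors": ["https://mirror.example.internal"]
    }

The registry-mirrors setting only applies to docker.io references, so images you push explicitly to your own example.com/ registry are unaffected, which matches the separate-registry split described above.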

It's an extremely common use-case and well-documented if you try to RTFM instead of just throwing your hands in the air before speculating and posting about how hard or impossible this supposedly is.

You could fall back to DNS rewrite and front with your own trusted CA but I don't think that particular approach is generally advisable given how straightforward a pull-through cache is to set up and operate.


This is ridiculous.

All the large objects in the OCI world are identified by their cryptographic hash. When you’re pulling things when building a Dockerfile or preparing to run a container, you are doing one of two things:

a) resolving a name (like ubuntu:latest or whatever)

b) downloading an object, possibly a quite large object, by hash

Part b may recurse in the sense that an object can reference other objects by hash.

In a sensible universe, we would describe the things we want to pull by name, pin hashes via a lock file, and download the objects. And the only part that requires any sort of authentication of the server is the resolution of a name that is not in the lockfile to the corresponding hash.

Of course, the tooling doesn’t work like this, there usually aren’t lockfiles, and there is no effort made AFAICT to allow pulling an object with a known hash without dealing with the almost entirely pointless authentication of the source server.


Right but then you notice the failing CI job and fix it to correctly pull from your artifact repository. It's definitely doable. We require using an internal repo at my work where we run things like vulnerability scanners.

> since the daemon tries to connect to the registry with SSL

If you rewrite DNS, you should of course also have a custom CA trusted by your container engine as well as appropriate certificates and host configurations for your registry.

You'll always need to take these steps if you want to go the rewrite-DNS path for isolation from external services because some proprietary tool forces you to use those services.
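For Docker specifically, the trust part is just a couple of files. A minimal sketch using Docker's certs.d convention; the hostname here is a placeholder and the directory name must match whatever host the daemon actually contacts after the rewrite:

    # make the daemon trust your internal CA for that registry hostname
    sudo mkdir -p /etc/docker/certs.d/registry.example.internal
    sudo cp internal-ca.crt /etc/docker/certs.d/registry.example.internal/ca.crt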


You don't have to run Docker; containerd is available.

It's trivial to audit a large deployment: you look at the DNS logs.

This is Infamous Dropbox Comment https://news.ycombinator.com/item?id=9224 energy

They didn't say it's easy to fix, just detect.

Is there no way to operate a caching proxy for docker hub?!

There are quite a few docker registries you can self-host. A lot of them also have a pull-through cache.

Artifactory and Nexus are the two I've used for work. Harbor is also popular.

I can't think of the name right now, but there are some cool projects doing a p2p/distributed type of cache on the nodes directly too.


> I don't really get how this can shift your evaluation from using (and paying for) docker's registry to paying for your own registry

Announcing a new limitation that requires rolling out changes to prod with 1 week notice should absolutely shift your evaluation of whether you should pay for this company's services.


Here's an announcement from September 2024.

https://www.docker.com/blog/november-2024-updated-plans-anno...


You're right, that is "an announcement":

> At Docker, our mission is to empower development teams by providing the tools they need to ship secure, high-quality apps — FAST. Over the past few years, we’ve continually added value for our customers, responding to the evolving needs of individual developers and organizations alike. Today, we’re excited to announce significant updates to our Docker subscription plans that will deliver even more value, flexibility, and power to your development workflows.

> We’ve listened closely to our community, and the message is clear: Developers want tools that meet their current needs and evolve with new capabilities to meet their future needs.

> That’s why we’ve revamped our plans to include access to ALL the tools our most successful customers are leveraging — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud. Our new unified suite makes it easier for development teams to access everything they need under one subscription with included consumption for each new product and the ability to add more as they need it. This gives every paid user full access, including consumption-based options, allowing developers to scale resources as their needs evolve. Whether customers are individual developers, members of small teams, or work in large enterprises, the refreshed Docker Personal, Docker Pro, Docker Team, and Docker Business plans ensure developers have the right tools at their fingertips.

> These changes increase access to Docker Hub across the board, bring more value into Docker Desktop, and grant access to the additional value and new capabilities we’ve delivered to development teams over the past few years. From Docker Scout’s advanced security and software supply chain insights to Docker Build Cloud’s productivity-generating cloud build capabilities, Docker provides developers with the tools to build, deploy, and verify applications faster and more efficiently.

Sorry, where in this hyped-up marketingspeak wall of text does it say "WARNING: we are rugging your pulls per IPv4"?


That's some cherry-picking right there. That is a small part of the announcement.

Right at the top of the page it says:

> consumption limits are coming March 1st, 2025.

Then further in the article it says:

> We’re introducing image pull and storage limits for Docker Hub.

Then at the bottom in the summary it says again:

> The Docker Hub plan limits will take effect on March 1, 2025

I think like everyone else is saying here, if you rely on a service for your production environments it is your responsibility to stay up to date on upcoming changes and plan for them appropriately.

If I were using a critical service, paid or otherwise, that said "limits are coming on this date" and it wasn't clear to me what those limits were, I certainly would not sit around waiting to find out. I would proactively investigate and plan for it.


The whole article is PR bs that makes it sound like they are introducing new features in the commercial plans and hiking up their prices accordingly to make up for the additional value of the plans.

I mean just starting with the title:

> Announcing Upgraded Docker Plans: Simpler, More Value, Better Development and Productivity

Wow great it's simpler, more value, better development and productivity!

Then somewhere in the middle of the 1500-word (!) PR fluff there is a paragraph with bullet points:

> With the rollout of our unified suites, we’re also updating our pricing to reflect the additional value. Here’s what’s changing at a high level:

> • Docker Business pricing stays the same but gains the additional value and features announced today.

> • Docker Personal remains — and will always remain — free. This plan will continue to be improved upon as we work to grant access to a container-first approach to software development for all developers.

> • Docker Pro will increase from $5/month to $9/month and Docker Team prices will increase from $9/user/month to $15/user/mo (annual discounts). Docker Business pricing remains the same.

And at that point, if you're still reading, this bullet point is coming:

> We’re introducing image pull and storage limits for Docker Hub. This will impact less than 3% of accounts, the highest commercial consumers.

Ah, cool, I guess we'll need to be careful about how much storage we use for images pushed to our private registry on Docker Hub and how often we pull them.

Well, it's an utter and complete lie, because even non-commercial users are affected.

————

This super long article (1500 words) intentionally buries the lede because they are afraid of a backlash. But you can't reasonably say “I told u so” when you only mentioned in a bullet point somewhere in a PR article that there would be limits impacting the top 3% of commercial users, then 4 months later give one week's notice that image pulls will be capped at 10 per hour LOL.

The least they could do is introduce random pull failures with a probability that increases over time until it finally fails entirely. That's what everyone does with deprecated APIs. Some people are in for a big surprise when a production incident causes all their images to be pulled again, which will cascade into an even bigger failure.


None of this takes away from my point that the facts are in the article, if you read it.

If the PR stuff isn't for you, fine, ignore that. Take notes on the parts that do matter to you, and then validate those in whatever way you need to in order to assure the continuity of your business based on how you rely on Docker Hub.

Simply the phrase "consumption limits" should be a pretty clear indicator that you need to dig into that and find out more, if you rely on Docker in production.

I don't get everyone's refusal here to be responsible for their own shit, like Docker owes you some bespoke explanation or solution, when you are using their free tier.

How you chose to interpret the facts they shared, and what assumptions you made, and if you just sat around waiting for these additional details to come out, is on you.

They also link to an FAQ (to be fair we don't know when that was published or updated) with more of a Q&A format and the same information.


It's intentionally buried. The FAQ was significantly different in November; it did say that unauthenticated pulls would experience rate limits, but the documentation for those rate limits didn't give the 10/hour figure and instead talked about how to authenticate, how to read limits using the API, etc.

The snippets about rate limiting give the impression that they're going to be at rates that don't affect most normal use. Lots of docker images have 15 layers; doesn't this mean you can't even pull one of these? In effect, there's not really an unauthenticated service at all anymore.

> “But the plans were on display…”

> “On display? I eventually had to go down to the cellar to find them.”

> “That’s the display department.”

> “With a flashlight.”

> “Ah, well, the lights had probably gone.”

> “So had the stairs.”

> “But look, you found the notice, didn’t you?”

> “Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.”


I'm certainly not trying to argue or challenge anyone's interpretations of motive or assumptions of intent (no matter how silly I find them - we're all entitled to our opinions).

I am saying that when change is coming, particularly ambiguous or unclear change like many people feel this is, it's no one's responsibility but yours to make sure your production systems are not negatively affected by the change.

That can mean everything from confirming data with the platform vendor, to changing platforms if you can't get the assurances you need.

Y'all seem to be fixated on complaining about Docker's motives and behaviour, but none of that fixes a production system that's built on the assumption that these changes aren't happening.


> but none of that fixes a production system that's built on the assumption that these changes aren't happening.

Somebody's going to have the same excuse when Google graveyards GCP. Till this change, was it obvious to anyone that you had to audit every PR fluff piece for major changes to the way Docker does business?


> was it obvious to anyone that you had to audit every PR fluff piece for major changes to the way Docker does business?

You seem(?) to be assuming that this PR piece, which first announced the change back in Sept 2024, is the only communication they put out until this latest one?

That's not an assumption I would make, but to each their own.


Sure, but at least those of us reading this thread have learned this lesson and will be prepared. Right?

Oh definitely.

This isn't exactly the same lesson, but I swore off Docker and friends ages ago, and I'm a bit allergic to all not-in-house dependencies for reasons like this. They always cost more than you think, so I like to think carefully before adopting them.


“But Mr Dent, the plans have been available in the local planning office for the last nine months.”

“Oh yes, well as soon as I heard I went straight round to see them, yesterday afternoon. You hadn’t exactly gone out of your way to call attention to them, had you? I mean, like actually telling anybody or anything.”

“But the plans were on display …”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a flashlight.”

“Ah, well the lights had probably gone.”

“So had the stairs.”

“But look, you found the notice didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.”


> I don't get everyone's refusal here to be responsible for their own shit

No kidding. Clashes with the “gotta hustle always” culture, I guess.

Or it means that they can’t hide their four full-time jobs from each of the four employers as easily while they fix this at all four places at the same time.

The “I am owed free services” mentality needs to be shot in the face at close range.


Documentation on usage and limits from December 2024.

https://web.archive.org/web/20241213195423/https://docs.dock...

Here's the January 21st 2025 copy that includes the 10/HR limit.

https://web.archive.org/web/20250122190034/https://docs.dock...

The Pricing FAQ goes back further to December 12th 2024 and includes the 10/HR limit.

https://web.archive.org/web/20241212102929/https://www.docke...

I haven't gone through my emails, but I assume there was email communication somewhere along the way. It's safe to assume there's been a good 2-3 months of communication, though it may not have been as granular or targeted as some would have liked.


Hey, can I have your services for free because I also feel entitled?

I mean, there has never not been some issue with Docker Desktop that I have to remember to work around. We're all just collectively cargo culting that Docker containers are "the way" and putting up with these troubles is the price to pay.

If you offer a service, you have some responsibility towards your users. One of those responsibilities is to give enough notice about changes. IMO, this change doesn't provide enough notice. Why not make it a year, or at least a couple of months? Probably because they don't want people to have enough notice to force their hand.

> Why not make it a year, or at least a couple of months?

Announced in September 2024: https://www.docker.com/blog/november-2024-updated-plans-anno...

At least 6 months of notice.


I use docker a few times a week and didn’t see that. Nor would I have seen this if I hadn’t opened HN today. Not exactly great notice.

If you had an account, you'd have received an email back in September 2024. I received one…

That does not announce the 10 (or 40) pulls/hr/IP limit.

Didn't they institute more modest limits some time ago? Doesn't really seem like this is out of nowhere.

Yes they have. They are reducing the quota further.

They altered the deal. Pray they don't alter it any further.

s/Pray/Pay/

What principle are you suggesting that responsibility comes from?

I have a blog, do I have to give my readers notice before I turn off the service because I can't afford the next hosting charge?

Isn't this almost exclusively going to affect engineers? Isn't it more the engineer's responsibility not to allow their mission-critical software to have such a fragile single point of failure?

> Probably because they don't want people to have enough notice to force their hand.

He says without evidence, assuming bad faith.


You don't. You have a responsibility towards your owners/shareholders. You only have to worry about your customers if they are going to leave. Non-paying users, not so much: you're just cutting costs now that ZIRP isn't a thing.

If this were a public company, I would put on my tinfoil hat and believe that it's a quick-buck scheme to boost CEO pay, a short-sighted action that is not in the shareholders' interest. But I guess that's not the case? Who knows...

At this stage of the product lifecycle, free users are unlikely to ever give you money without some further "incentives". This shouldn't be news by now, especially on HN.

If your production service relies on a free tier someone else provides, you must have some business continuity built in. These are not philanthropic organisations.


It's a bait and switch with the stakes of "adopt our new policy, which makes us money and which you never signed up for, or your business fails." That's a gun to the head.

Not an acceptable interaction. This will be the end of Docker Hub if they don't walk back.


To think of malice is mistaken. It's incompetence.

Docker doesn't know how to monetize.


Yes. But they are paying for this bandwidth, authenticated or not. This is just busy work, and I highly doubt it will make much of a difference. They should probably just charge more.

They charge more.

You're essentially suggesting a Drake equation [1] equivalent for the number of security vulnerabilities based on NLoC. What other factors would be part of this equation?

[1] https://en.wikipedia.org/wiki/Drake_equation


Language or framework definitely plays a role (isn't that what the Rust people are so excited about?). Think of it as the materials/tools used.

There's definitely some measure of complexity. I still like simple cyclomatic complexity, but I know there are better metrics out there that try to capture the cognitive load of understanding the code.

The attack surface of the system is definitely important. The more ways that more people have to interface with the code, the more likely it is that there will be a mistake.

Security practices need to be captured in some way (maybe as a factor that gets applied). If you have vulnerability scanning enabled, that's going to catch some percentage of bugs. So will static analysis, code reviews, etc.
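Putting those together with the parent's NLoC term, a purely illustrative Drake-style product (none of these factors are standard metrics) might look something like:

    expected_vulns ≈ LoC
                     × defect_rate(language, framework)
                     × complexity_factor
                     × attack_surface_factor
                     × (1 − detection_rate(scanning, static analysis, reviews))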


How close to the Ballmer Peak the programmer was when he wrote the code.


Correlated or inversely correlated?


While it's not implemented, Rust even has a keyword reserved [1] for musttail, "become": https://github.com/phi-go/rfcs/blob/guaranteed-tco/text/0000...

[1] https://doc.rust-lang.org/reference/keywords.html
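For illustration, the proposed usage from the linked RFC looks roughly like this; it's a sketch only, since `become` is merely a reserved keyword today and compiles on no current Rust toolchain:

    // Hypothetical syntax per the guaranteed-TCO RFC: `become` takes the
    // place of `return` and guarantees the call reuses the current stack
    // frame, so arbitrarily deep recursion cannot overflow.
    fn countdown(n: u64) -> u64 {
        if n == 0 {
            return 0;
        }
        become countdown(n - 1); // guaranteed tail call
    }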


And like everything else in Rust, it will forever live in the "unstable" pile of features that are required to use the language without standing on your head (looking at you, pattern matching through boxes).
