lots of people have a dynamic ip. if you don't want fb (or any other web service) to correlate an ip with you, you should get an isp that gives you a dynamic ip.
maybe i am wrong, but i think you did not read the full article?
"This article estimates that the average password is merely 40 bits strong, and that estimate is already higher than some of the others. In order to guess a 40 bit password you will need to test 2^39 guesses on average. If you do the math, cracking a password will take merely a minute on average then. Sure, you could choose a stronger password. But finding a considerably stronger password that you can still remember will be awfully hard."
I did, but that assumes the average person. This is HN, not Facebook, so we can make different assumptions.
E.g., we can assume that everyone has read the famous XKCD on passwords and is using passwords that are a bunch of words strung together in lowercase ascii, which would get us somewhere around 80 bits of security while still being memorable.
That xkcd comic recommends a password that by its own estimate is 44 bits of entropy. Why do you assume an HN user would choose something stronger than that?
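For reference, the comic's own arithmetic as a quick sketch (the ~2048-word list size and the 4-word passphrase are the comic's assumptions):

```shell
# xkcd 936 assumes 4 words drawn uniformly from a ~2048-word list,
# so entropy = 4 * log2(2048) = 44 bits
awk 'BEGIN{printf "%.0f bits\n", 4*log(2048)/log(2)}'
# to get near 80 bits from the same list you'd need roughly 8 words:
awk 'BEGIN{printf "%.0f bits\n", 8*log(2048)/log(2)}'
```

So the "bunch of words" scheme only reaches ~80 bits if you nearly double the comic's word count.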
you still can write `<html><body><h1>hello world</h1></body></html>`, there is nothing wrong with it!
but once you have written a file browser with an embedded editor, or anything else at a more complex level, you will consider using react and all the things. because complex things are complex.
again, for simple things you can easily use simple tools - maybe you should. but of course once you are using react day in day out because you need it for other things - you might just use it for anything because you are familiar with it.
i think this is only one possible outcome. sure it makes sense, but imho this cannot happen overnight, and that is where this theory breaks.
they can't force "their web" upon the rest of the world. there will _always_ be some form of web where one can be anonymous, where you can run your own server without any hardware from "them", using standard protocols.
maybe the business people and most people who don't care about their privacy will use everything "they" throw at them. that does not mean the web is dead, only that the "mainstream" will live in a separate web. maybe this (their) web will be much bigger than our web. even then, the cool kids will switch to the cool punky web again :D
I think you're totally right. An analogy is the stuff around politicians trying to "ban encryption" all the while not understanding that you can't ban an idea (not meaningfully).
So, yeah, maybe 10 years from now things do look almost entirely like the article predicts, but for most consumers that probably really isn't a problem at all except in some theoretical ethical/privacy sense.
For the people who really do care, they'll just go make their own custom shit like you describe.
i guess the easiest way would be to run a registry in minikube.
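a rough sketch of that (minikube's registry addon is real, but the port and the `myapp:dev` tag here are assumptions - check `minikube addons list` for your version):

```shell
# enable minikube's built-in registry addon
minikube addons enable registry

# tag and push a local image into the cluster-internal registry
# ("$(minikube ip):5000" assumes the addon's default port)
docker tag myapp:dev "$(minikube ip):5000/myapp:dev"
docker push "$(minikube ip):5000/myapp:dev"

# pods inside the cluster can then reference it as localhost:5000/myapp:dev
```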
another approach would be to run docker inside a k8s pod (docker-in-docker) - that way you can run images without having to push them to a registry, but still test them in a k8s environment (at least to some extent).
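a related shortcut with minikube that also avoids the push step is building straight into the cluster's docker daemon (sketch only; `myapp:dev` is a made-up image name):

```shell
# point your local docker client at the docker daemon inside the minikube VM
eval "$(minikube docker-env)"

# images built now land directly in the cluster's daemon - no push needed
docker build -t myapp:dev .

# set imagePullPolicy: Never (or IfNotPresent) in the pod spec so k8s
# doesn't try to pull myapp:dev from a remote registry
```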
in order to test your app in a prod-like env you need to run a prod-like env locally, i.e. a k8s cluster that is close enough to prod. for that you will have to at least simulate a multi-node setup and run all the cluster add-ons like in production.
i am excited about this move from docker, but i don't think it will solve all the problems. i think once you have a bigger team it is worthwhile to run a second k8s cluster alongside prod where people can just test things. otherwise it is actually not that hard to run a local k8s cluster with vagrant - not sure how docker wants to top that; i think there is no need to top vagrant.
I believe when you say "simulating a multi-node setup and addons" you're seeing it from an operations perspective. Thing is, those concerns don't need to be repeated for every single application. When a consumer says "test", they mean testing functionality, not testing operations like network I/O bandwidth, sysctl parameters, rebalancing, etc. The expectation is that operational folks (kubernetes integration tests, ops integration tests, GKE tests, etc.) have already tested and verified all of that.
This is not like ops vs dev. When you use a library or framework (say, spring), you don't test whether HTTP MIME types are working correctly in spring. You assume the library already has all that tested and covered, and as a consumer you write tests for what you code. The library's code (and tests) are abstracted from you. This is similar, except for operational stuff. In fact, there is no major difference between them. It's just layering and separation of concerns.
There is no such thing as 100% prod except prod. Even for rocket launches. 90+% is good enough for the majority of cases, and is already on the higher side.
Not necessarily - we have been using k8s for a while now where I work, and what we've been doing is running a simple minikube setup locally with the production add-ons (DNS, nginx ingress controllers, etc.), circumventing whatever is AWS-related.
It's been working quite well, no need for multi-node-setup so far that I'm aware of.
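A minimal sketch of that kind of setup (addon names assumed from minikube's defaults; verify with `minikube addons list` on your version):

```shell
# single-node local cluster
minikube start

# nginx ingress controller, as used in prod
minikube addons enable ingress

# DNS is typically enabled by default; check the addon list to confirm
minikube addons list
```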
It will be possible to run all your add-ons in the local setup, including networking. Multi-node is essential for some use cases, but arguably not critical for most people, and it is coming in the future.