How can the "Internet routes around failure" be trusted without testing? Everything needs regular exercise or it atrophies.
These days there are lively public internet exchanges up and down both coasts, in Texas, Chicago, and elsewhere. A well-placed backhoe can still make a big mess, many 'redundant' fibers share the same conduit, and the last mile is fragile. But if my ISP's network is up to the local internet exchange, there are many distinct routes to the other coast, and a single fiber cut is unlikely to route my traffic around the world.
I bet it's a lower number than anyone's actually comfortable with, though it would be rather difficult to pull off.
We've seen this one before... BGP doesn't (usually) look at link utilization, just link status and whether the BGP session times out, and even a massively overloaded link tends to have room for BGP keepalives, so it's less a netsplit and more a blackhole.
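A rough sketch of why, with made-up but typical numbers (a 180-second hold timer, and TCP retrying the pending keepalive roughly once a second):

    # Back-of-envelope: why the BGP session survives a link that is useless
    # for real traffic. Numbers are illustrative assumptions, not measurements.
    hold_time = 180            # seconds before the peer declares the session dead
    loss = 0.90                # 90% packet loss on the overloaded link
    tries = hold_time          # ~one (re)transmission opportunity per second
    p_session_drop = loss ** tries
    print(f"P(no keepalive delivered in a hold interval) ~ {p_session_drop:.1e}")
    # ~ 5.8e-09: the session almost never drops, while ordinary user flows
    # at 90% loss are effectively blackholed.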
I don't think BGP hijacking is really the way to achieve this one.
edit: disk is for diskette
Call it BBP Protocol... "Blue Ball Prevention"
"A few hubs"? Just Hurricane Electric has presence in many, many IXPs, and they're not even the largest transit provider:
AT&T, the largest transit provider, has a ridiculous number of peers:
And there are several different major global providers:
A lot of the Big Tech companies are also building their own private fibre networks so they don't even have to worry about sharing infrastructure with the telcos:
Source: I was sitting next to the person who fixed it.
I had a (now long-lost) picture from when they were doing some street construction outside and marked all the buried cables with spray paint. Lines going everywhere…
Next off, their front page says:
> Carrier-Neutral with Access to 200+ Carriers
You're telling me these 200+ carriers have no other POPs?
Going to their "Connectivity" page, they show four trans-Pacific cables that land there:
Meanwhile there are a whole bunch of other trans-Pacific cables landing on the US West Coast:
As someone who lives in Toronto, I certainly worry about how concentrated TORIX is, but even it has three physical locations (IIRC), and most folks connect elsewhere as well from the spelunking I've done in BGP.
So if one of these major carrier hotels does have a problem, I don't doubt that there would be repercussions, but I'm not worried about "the Internet" as a whole.
But look, I live in Hong Kong and I don't feel that way: there are other backends, we could survive without the caching for a while, and Google is forbidden for 1.4bn people who get on with life very well...
Depends what you call the internet. Yes, Facebook and WhatsApp are gone the minute one of those 3 companies screws up.
Bing might be able to auto-scale using Azure infrastructure but there might not be enough hardware even there.
Yes, for the majority of users the two may be the same, but "The Internet" is the IPv4 and IPv6 network on which the web runs. The equivalent companies to what you named are the transit network operators, companies like Hurricane Electric, AT&T, CenturyLink, etc. The list of transit networks is very long, much longer than the list of major web hosting companies.
Now, that said, it is absolutely true that there are ways where major outages can be the result of just one failure. Last year there was a major outage on the US east coast after a Verizon fiber optic line was severed in NYC; it turns out that this line carried many supposedly redundant links, all bundled into a single cable. The failure of that line then caused a cascade of failures as rerouted traffic began overwhelming other systems, but in the end the outage was contained in a relatively small geographic region, albeit one with many Internet users.
The Internet is much more reliable now.
I studied CS in Rome at "La Sapienza" and I was merely 10 steps away from INFN (the national institute for nuclear physics), where GARR physically resided.
I spent more time there than in my class.
That's how I got involved in this new "internet thing".
The "gigalapse" seemed like a big deal because proportionally a billion user hours in 1995 is everything doesn't work for days, or most things don't work for several weeks... but today of course it could also be everybody in the world is inconvenienced for about long enough to make breakfast. Oh no.
The larger TLDs aren't quite as diversely hosted and certainly aren't amenable to long-term caching, but it should take a major f-up to break those too.
Some of the minor TLDs, even the more popular ones, do screw up from time to time though.
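If you want to eyeball how diversely a given TLD is hosted, something like this works (a sketch assuming the dnspython package):

    # List a TLD's nameservers and the NS record TTL (long TTLs help caching).
    import dns.resolver

    for tld in ["com.", "io."]:           # one big TLD, one smaller one
        answer = dns.resolver.resolve(tld, "NS")
        names = sorted(ns.target.to_text() for ns in answer)
        print(tld, len(names), "nameservers, TTL", answer.rrset.ttl)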
I dunno man, my used T420 laptop is serving several sites over a residential symmetrical fiber connection just fine.
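For scale, a low-traffic static site needs almost nothing; even the Python stdlib would do (a minimal sketch, not a production setup):

    # Serve the current directory on port 8080; put it behind a port forward.
    from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

    ThreadingHTTPServer(("", 8080), SimpleHTTPRequestHandler).serve_forever()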
Since about 1999 I have never NOT been running a server of some kind off my home connection. I wouldn't run a business off of it that way, but it's reliable enough to count on, which has to meet any sane definition of semi-reliable. The two biggest problems I've had were essentially unrelated to the internet: times when I was violating the ISP's TOS, and unreliable power in the place I lived.
From yesterday. Note the bit about being on a Pi. Stood up quite admirably to an HN hug.
From 1998 to 2004 I served a fairly large amount of traffic out of my home-built Pentium in a trailer out in the woods with an ISDN line. We stood up to several media mentions OK.
It was down when I tried it yesterday.
Download being 300 Mbps but upload 20 Mbps IS kind of irritating though.
When did 100Mbps become popular for home LANs even?
I'm pretty sure new PCs and laptops have had gigabit standard for probably about 10 years.
Enthusiast and prosumer motherboards are now coming with 2.5 gigabit networking.
Also, smartphone proliferation made streaming from home NAS very convenient, as well as home security cameras and smart home features.
I disagree that current upload capacities are adequate. With 1Gbps+ upload connections standard, we might actually see privacy-forward solutions that do not require us to depend on cloud services.
These systems could handle a ton of traffic very reliably. Consider that there was very little dynamic content, and what there was barely taxed the CPU at all (e.g. a Perl CGI script to query and display some tiny amount of data from (LOL!) an Access database).
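That era's "dynamic content" was roughly this much work; a sketch in Python terms, with a hypothetical guestbook.db (sqlite3 standing in for the Access database):

    #!/usr/bin/env python3
    # CGI-style script: print a header, query a tiny database, emit HTML.
    import sqlite3

    print("Content-Type: text/html\n")
    db = sqlite3.connect("guestbook.db")          # hypothetical example DB
    rows = db.execute("SELECT name, msg FROM entries ORDER BY rowid DESC LIMIT 10")
    for name, msg in rows:
        print(f"<p><b>{name}</b>: {msg}</p>")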
EDIT: I have a soft spot for the person who is so over it that they sit around pontificating on the meaning of the word "broken" instead of actually getting up and helping in the midst of an emergency. Because half the time that's me. When prod is on fire every day, eventually that becomes the norm.
Beyond some level of complexity a system isn't one, anymore, it is an emergent phenomenon that has many of the aspects we ascribe to "life." Our inputs affect the systems, surely, but not in a predictable or even repeatable way.
A cow is an "inefficient means" of turning grass into tasty beef, by some accounting. But the cow is, as far as it is concerned, fully efficient at being alive. Our notion of "what is the internet" fell out of sync with the realities sometime around '94, I think.
Anti-Fragile would imply that each doh-inducing failure resulted in internet wide improvements. Something that famously is not happening.
The Internet: "It's Anti-Fragile!" (looking up)
The Internet: "It's Duct Tape all the Way Down" (seen from above)
Seriously though, it seems like every form of infrastructure we rely on is held together in such a fragile manner. I hate to think of the chaos should there be a major Internet and physical infra failure in close proximity, time-wise.
As far as I can tell this is a truism across all fields: farming, skilled trades, build systems, transportation infrastructure, housing…
Having ready spares is great but I'm sure the HN crowd knows how much crib-death there is on new, replacement bits of all kinds.
Until it's actually been run-in for a while how do you really know it's going to work when you unbox it to replace a truly dead piece?
The upshot of this is that 'RAID-6'-type designs are the most reliable in the real world, since when one part fails you are at least leaning on other parts that have been run in and are past the leading edge of the bathtub curve.
We had the unfortunate experience of installing 4 new 16-bay chassis with brand new drives (10+ years ago now). We designated one of the 16 as a hot spare for each chassis, plus had 2 cold spares for each as well. 72 brand new drives in total, all from the same batch from the manufacturer. Set them all up on a Friday and configured all for RAID5 (pre-RAID6 availability). The plan was to let them build and have some burn-in time over the weekend for possible Monday availability. Monday provided us with multiple drive failures in each chassis. The drive manufacturer confirmed a bad batch from whichever plant, replaced all of them, and delivered larger sizes for the replacements. Luckily, they failed during burn-in rather than a week after deploying.
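As a toy model of why tolerating a second failure matters so much when drives fail together (the failure probabilities are made up for illustration):

    # P(array loss) = P(more drives fail in some window than parity tolerates).
    from math import comb

    def p_array_loss(n_drives, tolerated, p_fail):
        return sum(comb(n_drives, k) * p_fail**k * (1 - p_fail)**(n_drives - k)
                   for k in range(tolerated + 1, n_drives + 1))

    n = 16
    for p in (0.01, 0.10):    # a healthy batch vs. a bad batch
        print(f"p_fail={p:.2f}: RAID-5 {p_array_loss(n, 1, p):.4f}, "
              f"RAID-6 {p_array_loss(n, 2, p):.4f}")
    # p_fail=0.01: RAID-5 0.0109, RAID-6 0.0005
    # p_fail=0.10: RAID-5 0.4853, RAID-6 0.2108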
Screwdrivers are more of a consumable than a capital good.
So presumably just another programmer making up an analogy with insufficient background knowledge so that we can nitpick the details of it.
In fact, so much care is put into infrastructure that most people/shops have lots of tools they have designed and built themselves at great expense to streamline their operations.
As I was writing that, I was just thinking of the recent Surfside collapse, and what would have happened if the region's data networks had gone down simultaneously (by chance). During a major event two decades ago, cellular networks were overwhelmed and calls could not be made. I dare say we're more reliant on those networks today, as well as on the Internet.
Not that I would expect it to happen, but it was just a thought.
This is why you can't just drive a car until it breaks, you get it checked out.
Then again, cars are a private good. When it's your property vs our property you have more of an incentive to take care of it.
Getting it out won't be fun.
but in a shop, if you can't take it out using other means (pliers, for example, or a nut splitter), you can simply weld a bolt onto the screw's head and use a wrench to unscrew it.
or drill a hole through it after removing the head, and use another screw to take the remaining stub out from the other side.
Bad screws are more common than bad screwdrivers and even a brand new screwdriver could lead to the same result.
as the original post said "If it breaks all the time, everybody is highly experienced at patching together new workarounds"
All the steps you outlined risk damaging the part you're working on, or wasting time. Let's just say you value your time as a mechanic at only $30 an hour: if getting that screw out takes you another 20 or 30 minutes, you would have been better off buying a $5 screw bit.
But what the original post said is not that you shouldn't buy a new screwdriver, but that if you know something will break, you learn how to fix it.
If your brand new screwdriver ends up stripping that screw anyway, and you don't know what to do next because you thought it would never happen with a working screwdriver, well, you're screwed :)
sometimes there are no good solutions, only bad solutions
that's what keeps things working.
If everyone believed that spit and baling wire were a no-go, because they are bad solutions and there's always a better one (i.e. fixing the root cause, after having identified it), there would be no internet as we know it today.
I'd be far more concerned with agricultural logistics, though we've never starved to extinction, either.
Perhaps we've reached a point of positive no return: We can no longer cease to exist!
A lot of starvation events have of course occurred across many different nations, peoples, civilizations. Europe, as one example, was numerous times ravaged by extreme starvation events that collectively killed millions of people across the 19th and 20th centuries. I think your agriculture concerns are well placed.
The various Internets have gone down routinely for all sorts of reasons.
Note that the argument is not dependent on any sort of cause, such as climate change or whatever. It is entirely probabilistic.
I'd prefer if we kept "deprecated" and "removed" as two different terms. It sounds like Level3 deprecated it, and everyone else removed it. To me (and in most definitions I can find) deprecated basically means "don't start using it; if you are using it, stop; we will remove it soon but have not done so yet for compatibility reasons".
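In code terms, the distinction is roughly this (a generic Python illustration; the function names are hypothetical):

    import warnings

    def new_api():
        return 42

    def old_api():
        # Deprecated: still works, but warns and is slated for removal.
        warnings.warn("old_api() is deprecated; use new_api()", DeprecationWarning)
        return new_api()

    old_api()      # deprecated: runs fine, emits a warning
    # Removed would mean old_api no longer exists at all: a NameError.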
There are 1000s of these pictures.
The entire world's IT infrastructure has been held together with spaghetti-noodle-cabling and bodged patches for over three decades now. I'm even guilty of it. Most IT guys are guilty of it.
Don't even get me started talking about how bodged together corporate codebases are. Banks are perhaps the worst offenders. Old hardware running with bandaids and bubblegum. Software that more than 5 different teams have had their fingers in, mucking about, and some codebases have orders of magnitude more than that.
Anyone that didn't know this, doesn't pay attention, or just started internetting.
If you (the collective you, everyone reading this) haven't, read it.
And yet, perhaps counterintuitively, the main downside of such systems is that they are slow and expensive to change, not that they are unreliable.
They are slow and expensive to change because of how they were made tolerably reliable in the first place: at great expense, and at great cost in both failures and remediation efforts, they have been tuned to work (while still being extremely fragile) so long as things stay exactly within certain expectations, expectations which have often been narrowed from the intended design based on observed bugs deemed too expensive to fix. And it is inordinately difficult to modify them without causing them to revert to a state of intolerable unreliability.
They are systems around whose bugs generations of business operations have been reshaped, to make them "reliable".
As an adult, you realize the world is a patchwork of semi-functioning systems and it’s a miracle the whole thing works as well as it does.
BGP4 in general dates back to a time when everybody trusted each other and the global routing table/network engineering community was a much smaller thing. We've been trying to glue better things on top of it for 25+ years.
As I understand it, it is a continuously, almost hand-tuned machine, and with enough misconfiguration, or entities simply ceasing to exist, much of the transit capacity would be gone. It works well enough when it is not touched and there is someone on call to fix it, but what if the latter were gone? How robust and self-correcting is it really? My guess is probably not at all...
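One of the layers glued on top is RPKI route-origin validation; a hedged sketch of checking one route against it via RIPEstat's public API (endpoint and response fields as I recall them from its docs, so treat those as assumptions):

    import json, urllib.request

    # Does AS3333's announcement of 193.0.0.0/21 match a published ROA?
    url = ("https://stat.ripe.net/data/rpki-validation/data.json"
           "?resource=AS3333&prefix=193.0.0.0/21")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    print(data["data"]["status"])     # e.g. "valid" when a ROA covers it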
But I read stuff like this, and in this case it's Krebs, so I have to expect these kinds of issues will pop up. The article mentions the FB outage and most everyone on my FB feed was freaking out over not being able to access it, and for the most part it's not a critical service. And when they came back online some of the conspiracies they were sharing about what/why it happened were way over the top.
From my perspective it feels like everything on the internet is just one missed tap on a keyboard from breaking.
Most modern, massive systems seem to be held together with only spit and baling wire. This might not be true for infra like roads, bridges, and dams. And there are things, buildings, and structures that are tens or hundreds of years old that still work. Power plants and cars, for example.
Not really. What it shows is the stark difference between two ideologies. The first camp contains people who believe in Postel's Law, "be conservative in what you do, be liberal in what you accept from others". The second camp has people who recognize that the current world is not a cooperative network of researchers: "all input is untrusted".
Krebs is absolutely in the second camp.
The "liberal in what you accept" part is mostly honest acceptance of the reality of the network: you cannot control the information sent to your service, only how you respond to it.
I'd say the internet has some sort of a "biological" architecture. Robust in the sense that organisms are robust; extremely messy, sensical from a high-level view, chaotic from a low-level view.
It's the same quality that's claimed by proponents of "decentralized autonomous organizations" (DAO), though for that I'm not yet convinced of its practicality.
Someone on this thread mentioned the book, Antifragile: Things That Gain from Disorder.
> Just as human bones get stronger when subjected to stress and tension, and rumors or riots intensify when someone tries to repress them, many things in life benefit from stress, disorder, volatility, and turmoil. What Taleb has identified and calls “antifragile” is that category of things that not only gain from chaos but need it in order to survive and flourish.
This reminds me of the concept of "emergence" in living systems.
> the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems
When the internet has trouble, it's usually a mistake by a giant centralized service.
Most of the time only that service is affected by its own mistakes, but sometimes that service hosts a lot of others, or is so massive that its failure causes DDoS-like load, as when Facebook went down and its clients spammed DNS servers.
Seems the internet is fine. Centralized services not so much.
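The standard client-side fix for that kind of retry stampede is exponential backoff with jitter, so millions of clients don't re-query in lockstep. A generic sketch (not Facebook's actual client code):

    import random, time

    def retry_with_backoff(op, attempts=6, base=0.5, cap=60.0):
        for i in range(attempts):
            try:
                return op()
            except OSError:
                # Full jitter: sleep a random amount up to the capped backoff.
                time.sleep(random.uniform(0, min(cap, base * 2 ** i)))
        raise TimeoutError("service still unreachable")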
I wish our government spent billions on funding open source software to reduce that complexity and cost, instead of on introducing/hoarding vulnerabilities.
This is an appropriate task for government funding (better yet, multilateral/international funding). Nobody else is going to provide the resources to get it done. Clearly.
Two factor authentication and account verification is really an elaborate corporate sham to get people's phone numbers and PII for free. It doesn't do anything new or good for consumers in terms of security over time. There, I said it.
I prefer the old Internet. All these new-fangled "fixes" are only making it worse, more expensive, and overly complicated. :/
Pricing on cert services is also far too high when everyone's concern and agreement should be security as a basis for operations. It's not something that should be an upcharge or income opportunity.
You buy a door lock for your home once, and it works as long as you don't compromise the key. If you buy a house, door locks are expected to come with the house in most circumstances.
The pricing on Let's Encrypt is literally zero, and they provide (also free of charge) the `certbot` utility which you can run as a cronjob and which will automatically renew your certificates for you. The whole thing comes extremely well documented and with install scripts that take less than a minute to download, verify and run. If you think even that is too much of a burden I don't think any topic in programming is simple enough.
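Concretely, the whole "cronjob" amounts to a single crontab line around certbot's documented renew command (the schedule here is just an example):

    0 3 * * * certbot renew --quiet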
Building a site from scratch in this day and age is a lot more analogous to building your house from scratch. Nobody to blame but yourself if you buy substandard locks and thieves get in. Only here the metaphor breaks down, because if you aren't encrypting your HTTP traffic and it is intercepted, it's your users who suffer, not the site owner.
I, too, pine for the days of simpler internet. But that was a function of the user base, not the technology. It was always insecure... it simply hadn't been exploited yet. Now that it has, and is, site administrators owe it to users to secure their connections.
On a house you own, you can change locks and keys any time you want to keep security up to date (for example).
no house in "move in ready condition" comes without sufficiently keyed door locks of some kind (on day1).
In a lot of cases, SSL is not expensive or time consuming. It is a single line in cron. I appreciate that this is not the case for your hosting, but economic pressure is one of the main ways SSL can be more utilised. The fact that you’re considering moving away from them, suggests that their business will suffer in the long term, if they don’t make integrating SSL easier/less expensive. This is good economic pressure, and its likely the best pressure that can be applied right now, considering the glacial pace of technology laws in almost all countries. You seem to be generalising your situation and applying the blanket “it’s too expensive” argument to everyone, even though it’s mostly a non-issue for people who have better hosting providers or not as much legacy.
Arguably, building a website with a login is a LOT easier and cheaper now than it was 10 years ago, because Let’s Encrypt is such a well known option. If they wanted to do so 10 years ago, they would have most likely had to pay through the nose for an expensive certificate. You seem to also have forgotten about these people with your blanket statement about hosting websites being more expensive for everyone.
Is the security provided significant in simple sites? Probably not. However, having SSL be a default is good overall. It gives less chances for operators to screw up because non-HTTPS raises very user-visible alarm bells. If your site is small and non-revenue generating, then why does the security alert even matter? It doesn’t prevent anyone from accessing the website.
Your 2FA argument is wrong. Sure, there may be multiple reasons for mandating it, but for regular users, 2FA is good defense in depth, that offers protection against password compromise. Again, the average consumer doesn’t necessarily have strong passwords or unique passwords across services. 2FA is good protection for them.
Also, if mining user data was the main reason for 2FA, big tech wouldn’t support hardware security keys for 2FA. Mobile 2FA is a usability compromise because it targets a lowest common denominator that (almost) everyone has.
The number of sites that should have had SSL but didn't was laughable and justification enough for browsers to require SSL.
I don't know if you're being deliberately alarmist, but 2FA is a huge peace-of-mind win when done correctly with one-time codes. Those don't require phone numbers and are the properly secure method.
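One-time codes boil down to a shared secret plus the current time, no phone number anywhere; a sketch assuming the pyotp package:

    import pyotp

    secret = pyotp.random_base32()    # enrolled once, e.g. via a QR code
    totp = pyotp.TOTP(secret)

    code = totp.now()                 # what the authenticator app displays
    print(totp.verify(code))          # True: the server checks the same math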
Sure, the old internet was a bit more fun and carefree, but it became far less fun when you had your online accounts compromised because of weak or nonexistent security.
Previously it was only required for secure transactions, like purchases and working with health care records, etc... And very rightfully so.
Now Google Chrome flags even simple (informational) sites for not being encrypted, and (quite possibly) rightfully so because of the potential for tracking/abuse, but adding encryption to a site is costly for independent sites (those not hosted on social media or corporate platforms, like blogs etc...).
You shouldn't be required to encrypt a baking-recipe site if you don't want to... Ultimately, laws should discourage data abuse, and/or encryption should be inherently provided for every site/app uniformly by all web host providers (natively, and at a far lower price than it is now, generally speaking).
Too many people are running widely varying encryption measures, and implementing security in too many different ways to ensure that it is stable across the Internet. Security is best when it is uniform, fortified by rules and regulations, and updated ritually.
So when web traffic is uniformly not encrypted that's more secure than if it is encrypted by varying degrees and implementations?
Tbh your complaint reads along the lines of "Perfect is the enemy of good enough". SSL may not be perfect, but it sure as hell is better than running pretty much the whole web in the clear.
Particularly as pretty much all of your complaints do not really have anything to do with SSL itself, but rather with how Chrome surfaces the lack of an SSL certificate and how your hoster handles installing certificates.
Both of which are things you can personally change something about.
No, because "how Chrome surfaces a lack of SSL" is about your users' Chrome, not your own instance of Chrome. You cannot personally change the Chrome that is run by your users.