All of the tech exists in some form or another, but if it were well packaged, it's not hard to see there being a sufficient distribution of members to get people connected easily.
Perhaps in an event like that a decision would be made to temporarily open up the spectrum, but even then there are only so many of us, only so many transceivers out there.
I feel the HAM net would be more useful after a natural catastrophe, where the infrastructure would be physically destroyed. Which is exactly what a lot of us are preparing for.
Or, depending on who shut it down, you may have to deal with jamming.
The amateur radio regulation regime and common ham radios work well for small numbers of small messages sent around in a well-regulated way, without the government initiating a frequency-band jubilee. But beyond that, ham radios are limited even when modded, and cheap SDRs now cost less than Baofeng handhelds, so where does that leave amateur radio in a real frequency free-for-all? I think what would matter is, first, as mentioned above, the availability of SDRs, and second, parties of people tracking down transmitters that are messing up the ad-hoc SDR wireless nets.
And there are caveats anyway.
For local connections, some kind of WiFi mesh might still be the best option.
For long distance, I don't think you can currently use anything but proper HAM equipment, and at fairly high power at that. For a reliable connection, especially at good bandwidth, you need lots of power and a good antenna. But if you blow standards out of the water and start pumping out huge bandwidths at huge powers, you run again into a tragedy of the commons: you're taking up large chunks of spectrum over entire continents.
There is no free lunch.
The (VHF and UHF) ham bands will send data a few tens of kilometers with good messaging bandwidth. Here in Seattle there's a Monday-night digital net through a repeater on the Columbia tower where folks send text messages and some (slow) photos on 444.55 MHz. Typical speeds are like 9600 baud.
As you mention, when you go down to the HF bands like 20m you can transmit and receive around the world, but there's far less bandwidth. The tragedy of the commons is right on.
I've operated PSK31 (which runs at 31 baud) worldwide on 20m and it's pretty much a chatroom. You can get a lot said with that, but you certainly won't be browsing the web.
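To put those speeds in perspective, here's a back-of-the-envelope calculation. Treating baud as bits per second is only approximate (PSK31 uses varicode, and packet radio has protocol overhead), and the 100 KB photo size is an assumption:

```python
# Rough transfer-time estimates; overhead ignored, numbers are ballpark.
def transfer_seconds(size_bytes, bits_per_second):
    return size_bytes * 8 / bits_per_second

photo = 100 * 1024  # a heavily compressed 100 KB photo

transfer_seconds(photo, 9600)  # ~85 s over a 9600-baud packet link
transfer_seconds(photo, 31)    # ~26,000 s, i.e. over 7 hours via PSK31
```

Which is why PSK31 works as a chatroom but nothing more: text is tiny, everything else isn't.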
It would be cool to play around with connecting 2.4 GHz local wifi meshnets with ham repeaters at ranges of, say, 50 km. Then you'd have nice fast local communication with reasonable long distance.
I don't know if there's any legal precedent or official policy regarding digital signatures; I would guess that they're probably okay because they don't obscure the meaning of the communication and anyone can verify them against the sender's public key (assuming that the public keys are published somewhere).
Communication with no privacy but with cryptographically secure signatures might be acceptable for emergency situations. It's unfortunate that ham rules are restrictive enough that most of the tools we use on a day-to-day basis wouldn't be legal to use without substantial modification. But then again, we wouldn't want people trying to log into Facebook/YouTube/Reddit etc. when the network runs at 1200 baud (if it's packet radio on the 2 meter or 70 cm bands) or maybe in the low Mbps (if it's over some kind of 802.11b/g/whatever mesh network operating under part 97 rules).
Fortunately, while part 15 rules are pretty restrictive about power, they're less strict about antenna gain, so it's at least theoretically possible to make multi-mile connections without having to operate under ham rules. Building a large network out of point-to-point links with directional antennas, though, would be pretty difficult and laborious even in a non-disaster situation. So realistically, I think the best local disaster communications option at the moment is to just use APRS and analog voice over 2 meters, and accept that 1980s technology that sort of works is better than a modern internet experience that requires a lot of infrastructure that isn't working or available.
What's the downside?
And there are a lot of them, and at this point if you're into ham radio, it's for the love of it, and you tend to be pretty proficient.
A zero-privacy internet might be better than nothing, but I'm not 100% sure of that.
and you've basically got the web ;p
at least in a one-way communication network form
If there's a nuclear war, I want to be in the hypocenter of the first detonation, because we're not climbing out of that hole as a species in any meaningful way. I suppose that reality is why so many have turned to fantasies of a worldwide EMP of some exotic type which is at least survivable in their imagination.
Me? I've read my history, if civilization comes tumbling down, my plan is to eat a gun. Meanwhile I'll live my life without terror, and plan only for disasters that can reasonably be managed without dedicating my limited lifespan to it.
The blasts themselves seem relatively harmless to civilization, beyond the hundreds of millions they just outright kill, that is. Radiation we're characteristically paranoid about, but it can actually be dealt with, at least in many remaining areas. The main issue under debate is whether a nuclear winter of substantial duration would form or not, which depends on the scope of the fires, so on the flammability of urban environments and the like. We can't pretend to know the real answer, but it's certainly possible.
And then there's the issue of whether the southern hemisphere could avoid that fate even in such a case, due to weather patterns, provided there are no detonations there (as there are no weapons there).
Now that doesn't seem substantially different from any large-scale warfare civilization has easily survived before, like World War II: urban devastation and millions of dead. Hardly a civilization-ending event.
So I grant you that something a bit less hopeful and dramatic than 'Mad Max' is more than possible, but only as part of a long slide into the end of our species.
While "Humans" would almost certainly survive (for a while), the odds of any given human (i.e. you or me) surviving are pretty poor. At best you'd be looking at generations of struggle and misery, and then what? Our way of life came to pass by a number of factors including the ready availability of coal. That's... not coming back either. Resources that don't' require extreme mining are generally depleted already, from fossil fuels to various metals.
The various steps that brought us up from mud huts don't necessarily work for another round.
IDK, for one I'm not even sure a major exchange is as deadly as supposed here (much depends on who else joins the party, I guess). Sure, lots of land on some continents at least would be badly irradiated, but the majority not quite as badly; most radiation decays quickly, some simple measures help, and besides, not everyone needs to survive every year After Launch anyhow, and non-extreme radiation doses kill only statistically. Much depends, as you say, on how much of the state command structure is able to survive and organize the remains, and that could easily vary from state to state on the planet.
I lived through a minor war in my youth, with the frontline maybe 2 km away at its closest approach and regular shelling for, IDK, a couple of years. It was a remarkably well-ordered affair, considering. The fact that half the GDP evaporated and the rest was put under direct government command for war and other logistical purposes didn't really constitute a panicked collapse of societal order or anything like that, and the return to some kind of (low-budget) normality was quite quick. Kind of remarkable in retrospect, given how badly we react to small upsets, like a high-single-digit GDP loss in a recession, yet tolerate such major disasters...
If the number of people directly killed is on the order of a few hundred million, that's not a substantial part of humankind, so it may not be too different -- for people living in places too boring to have been hit by a nuke and not too close downwind from something interesting, of course.
Guess it becomes worse if all the players hit all the other players, and no further advances in reducing nuclear inventories are made before it hits etc.
But again, yeah, a societal collapse is certainly a kill-myself kind of event for me too, because as you say - we're never gonna rebuild if we fall to that point. I'm just more sanguine about nukes being lobbed about not necessarily causing this I guess.
While internet failure mitigation isn't an explicit goal of such projects, their resources might make a good starting point.
After the significant inaccuracies and frequent unsubstantiated speculation in Schneier on Security, I don't think credible security researchers can take his analysis at face value. Additionally, the halo effect of his actual expertise, cryptography, convinces people who aren't security experts that his opinions and speculations are correct. Worse, he rarely frames his speculation as such; he states conjecture as fact. This is counterproductive and leads to confusion among journalists and eventually the general public.
To the imminent downvoters, I'm not offended; I expect it with an unpopular opinion. I'd prefer you engage with a reply in addition to the downvote so we can have a discourse. I think it's important that I add my dissent to the conversation.
Yes, I agree the article is vague, and I'd like to learn more. But this is typical for this kind of backchannel intel. From some sources, through some channels, for some kinds of info - this is all you get. This is business as usual.
Take it in for what it's worth. It's a signal from a sea of noise, nothing more. Maybe it's actionable, but perhaps it's not. Just learn to deal with ambiguity; the world at large is quite different from the rigid boolean-logic computer systems you're interacting with on a daily basis.
You're shitting me, right?
Computer engineers are the last people who think in rigid, boolean-logic ways. It's the general population that does that. If you do any serious thinking in any STEM field, you quickly learn that the world is probabilistic in nature, and ambiguity is what you eat for breakfast. What the technical fields do to manage it is learn to quantify the exact nature of the ambiguity. When you do that, by means of probability theory, you learn that ambiguity doesn't mean "anything goes"; there are rules it follows.
Like, backchannel intel may be vague, and this also implies it's likely to not be true (unless you can pull out additional evidence in its favour, like e.g. good track record of the person delivering this backchannel intel; that point is discussed in parallel threads). In a sea of noise, the "signal" you see is most likely a coincidence. Not comprehending this (aka. "seeing patterns everywhere") is one of the biggest sources of irrationality in people.
This rhetoric is patronizing and doesn't contribute to the conversation.
This is a limitation of computers, not the engineers. The engineers are happy to deal with ambiguous information as long as you don't mind ambiguous results.
His post is the very definition of "taking it for what it's worth".
He also cites anonymous sources. These sources agreed with each other and with the public report from Verisign. He explained why he was keeping those sources anonymous.
That is just good journalism.
My comment was juxtaposing it with the accuracy of Schneier's blog and his public statements on computer security.
On this topic, he chose the third option, because he felt that people needed to know, even though he couldn't give specifics. It sounds like you wanted him to pick the first option. If he did, though, it would be the last time he would be able to do so, because his information would dry up.
That's the pragmatic argument. There are also some of us who feel, when you tell someone that you aren't going to blab what they told you in confidence, that you should keep your word...
I do say that it's inappropriate to expect implicit trust after all his previous integrity failures (conjecture as fact, etc). I want to believe this article. I do believe it. But I also can't rely on it, as his track record shows that given the topic of computer security, he will even present unfounded speculation to Congress as fact if given the opportunity.
That's a pity, but I guess it makes sense for him if he wants to exert influence.
Unfortunately too many laypeople take everything he's writing as gospel. Remember when he clearly misunderstood the "xkcd scheme", was called out by pretty much everyone and couldn't even admit that and post a correction? You can be sure that lots and lots of people will dismiss everything looking like it (Diceware!), simply because Schneier erroneously piled heaps of ridicule on it.
Yes, appeal to authority and all that, but I don't have time to fully learn a field to find out if a cryptographer is mistaken.
Also, the point I was making is that if he wants to leave work uncited, it should at least be the work he has actual credibility in.
(Interestingly enough, that sounds like a point Schneier would make :-p )
In fact, there's a general problem in belief updating (Bayesian or otherwise) where you may over-credit others' opinions by treating them as independent when they were both actually relaying the same data point. You can only detect this error if you can inspect the source of those opinions.
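A toy numeric illustration of that double-counting error, in odds form (the prior and likelihood ratio here are arbitrary numbers picked for the example):

```python
def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

prior = 1.0  # 1:1 odds on some hypothesis
lr = 4.0     # each report, on its own, favors the hypothesis 4:1

# Treating two reports as independent applies the evidence twice: 16:1.
naive = update_odds(update_odds(prior, lr), lr)

# But if both reports merely relay the same underlying data point,
# only one update is justified: 4:1.
correct = update_odds(prior, lr)
```

The naive update ends up four times overconfident, and nothing in the two opinions themselves reveals the error; only tracing them back to their shared source does.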
> Also, the point I was making is that if he wants to leave work uncited, it should at least be the work he has actual credibility in.
A totally valid point. Way too often, people smuggle credibility from an area where they have expertise (and therefore deserve the credibility) to areas where they don't. In this case, though, the real credibility is Schneier's honesty, not his expertise, since he's passing on (obscured) reports from others.
I think it's absolutely valid for tptacek to demand citations from Schneier!
What are you referring to here?
And, taking your statement at face value: If he speculated, and was clear that he was speculating, and was wrong, that doesn't destroy his honesty - merely his reputation as a speculator.
His shortcomings are especially apparent when applied to APT, memory corruption, and computer network intrusion/defense.
This is the modern way to ask for upvotes.
Ask for upvotes straight out? The community saw it one too many times and doesn't anymore.
Mention you expect downvotes and that it's an unpopular opinion? People agreeing with you, which there almost always are anyway, will show you support while making potential downvoters think twice.
I was going to upvote, but never mind.
> someone has been probing the defenses of the companies that run critical pieces of the Internet.
> China and Russia would be my first guesses.
I don't understand the reverence around Schneier. I first saw him give a talk in 2009, and it was an 'insert town name here' speech about stuff that was blazingly obvious to people who should already know (topic: social engineering and passwords). Yet people were fawning over the talk. It really struck me as a guy who was once great, but is now resting on his laurels - that halo effect you mention.
I get exactly the same feeling from this article. There is nothing in it that we don't already know. What, there are state actors in Russia and China that are effectively at cyberwar with us? Quelle surprise! DDoS attacks are getting more sophisticated? Quelle surprise again! He takes one issue in tech that actually has filtered through to the general public, and makes it sound like he has the inside story. DDoS attacks pick up where they left off last time? Must be the work of an evil genius - no mere mortal could think of that!
I also get that the article is for a general audience, but in that case, the "oo, I can't share details!" bit is just populism. In short, I find his writing on tech to be lots of fluff and little meat.
Perhaps I'd have a different opinion if I grew up with him in his glory days, or if I was more interested in crypto and read his more technical papers, but while I've been on HN, I've never been enlightened by a linked article of his. This is all, of course, personal perception, and he may be a downright top bloke to someone more in the know.
In retrospect we've learned a lot since then, and no one (including the author) would recommend developers read that book first, or even at all. Now we've come to the understanding that folks are much better served by opinionated cryptosystem design ("no sharp edges") and texts like "Cryptography Engineering" that have a better focus on failure modes.
Anyway, he's not the be all, end all expert but he has been thinking about this stuff for a long time and often has perspectives that are worth thinking about. Some of them, like his views on airline security etc are now so mainstream that you wouldn't realise he was a big part of why they are now widely held.
But mainly it's that he has a lot of pretty high level gov and industry connections that I would at least entertain his conjecture here.
Let's safely assume that these servers, every single one of them, are subject to DDoS attacks all the time and have at least some experience in handling them, and have a backup scenario ready for a serious attack. One of the reasons why the root servers are not centralized is to avoid the kind of disaster that Schneier predicts.
Also, what if I maintain a list of IP addresses of the websites I visit most and update that list daily? When the "big attack" strikes, I put that list in /etc/hosts. Would I still be able to do my holiday shopping on Amazon? Would I still be able to read the logs on my VPS by ssh'ing to its IP? How long would such an attack be sustained before BGP modifications start blackholing the sources? Long enough for the average TTL cache to expire?
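A minimal sketch of that fallback logic (the hostnames and pinned addresses below are placeholders; 203.0.113.x is a documentation range, not anyone's real IP):

```python
import socket

# Snapshot captured daily while DNS still works (placeholder values).
PINNED = {
    "example.com": "203.0.113.10",
    "myvps.example.net": "203.0.113.20",
}

def resolve(name, pinned=PINNED):
    """Prefer the pinned snapshot; fall back to live DNS if it's available."""
    if name in pinned:
        return pinned[name]
    return socket.gethostbyname(name)  # raises if DNS is unreachable

resolve("example.com")  # -> "203.0.113.10", even with DNS down
```

This is essentially what an /etc/hosts entry does, and it keeps HTTPS working too, since the browser still validates the certificate against the hostname, not the IP. The catch is that big sites rotate IPs behind load balancers, so a day-old snapshot may go stale fast.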
Would an attack on the root servers really take down the internet? Or in case Schneier isn't talking about that, what kind of attack on the decentralized internet is actually able to take it all down? I'm not saying he is wrong, but I have a hard time thinking about how we should prepare and protect our infrastructure if he doesn't want to share the intel he knows instead of some generic warnings.
But I never do anything about it.
If they disable/crack/overwhelm the major routers connecting different ISPs (e.g. zero-days or backdoors for router OSes, BGP attacks with cooperation or cracked credentials from some major ISP insiders), then the internet is not going to work for you because your ISP will be simply unable to route your data to where you want.
Are there any good reasons to believe that all major router models don't have backdoors inserted by state actors, whether by bribing an insider engineer ten years ago or by having the manufacturer of some secondary on-board chip (one with direct memory access) insert a hardware backdoor? We've detected such attempts before; there's every reason to expect some are active and undetected right now.
* Find a couple of remote security holes in Windows and Android, maybe iOS and Macs as well (Linux would be good too, as lots of servers run Linux and have big bandwidth).
* Write a self-propagating worm which uses your holes to infect a large chunk of machines currently attached to the internet.
* Set your worm so that, after an hour or so, it starts hammering the root servers.
That mess would be almost impossible to sort out, particularly if you were clever about the traffic you created so it was hard to filter.
The only reason I can think no-one would do this is it's MAD -- no-one's internet would work, why would Russia or China or the US want to take down everyone's internet?
No-one's "internet" would work except for states that had a backup network. In the event of war such a tool would be useful; imagine the panic and chaos.
Another situation could be a major power trying to destabilise another's economy, fiscal warfare?
Could be an interesting, peripherally relevant talk...
Moreover, I don't think it's even possible to reply to posts made from shadowbanned accounts.
I didn't look at the poster's history, I just saw a constructive-looking comment that seemed to be modded to oblivion, and jumped to conclusions.
I had to vouch for the post before HN would let me reply, which seems consistent with how shadowbanned accounts are handled here.
That needs to change, and the author is right that while there seems to be little to do now, people should be aware of it.
Even if a cyber attack were the "plan a" for quickly and untraceably taking those systems out, the US has an easy enough "plan b" that testing "plan a" isn't going to be a major concern. Add that to the fact that the US has a lot more to lose if it gets caught attacking internet infrastructure than China and friends do (even just tests like this) and I would be surprised (honestly not shocked, but definitely surprised) if the USA is behind these shenanigans.
Actually, if the US wanted to test something like this on a service in a friendly country, I would expect the NSA to approach the infrastructure company and say something like, "We're concerned that $enemy_of_free_speech may be planning an attack on your service, and we would like to wargame that scenario with you. What time(s) would an outage have a minimal impact to your bottom line?"
So can you give me the address of the rock you've been living under for the past 3 years?
Sure, it throws its weight around when asking various social media platforms to censor certain types of content, and it has a no-holds-barred approach to intercepting data traffic, but it generally draws the line at knocking services entirely offline.
> it generally draws the line at knocking services entirely offline
Well if you're gonna play the card "I'm reading a different Internet than you are", sure whatever.
Remember the U.S. government has plans to effectively destroy the world with nuclear strikes as a contingency in war. Do you think they would hesitate to prepare plans to take down a data network? It's immoral? Unthinkable?
I'm not criticizing such plans. War is death and destruction, and the U.S. must be prepared.
The U.S., and all nations and citizens, also should do everything to prevent situations where war is the best remaining option. This requires sober, expert foresight in foreign policy and politics, anticipating 2nd-, 3rd-, and n-order effects, not emotional, knee-jerk ideology and amateur foreign policy.
As you say, we need to be active as citizens in ensuring either that such a war never occurs (in which case, let's be realistic, a loss of the internet is going to be the prelude to mushroom clouds), or that conflict is minimized and, if necessary, occurs through proxies. It's ugly, it's not the way we should do things, but it is the way we do things.
Could they make it look like China was at fault? Also almost certainly.
Would they? Well, they'd need a good reason. What would a good reason be? To hone their attack skills? Perhaps. (I would expect - though I have no proof - that many of the American pieces of internet-critical infrastructure are more hardened against attacks than many other countries' stuff, because the American stuff gets actual attacks more often. If the NSA can attack our stuff to the point of breaking, it can probably break other countries' stuff.)
Would the NSA do it to hone peoples' defensive capabilities? To show them what a real nation-state attack might look like? Also perhaps. (Or perhaps it could even have both goals.)
Would the NSA be in very deep trouble if they ever got caught at that game? Probably. Deep enough to get them to not do it? I don't know.
TL;DR: The NSA could be doing this. I'm unsure how probable I consider that option.
What exactly rules out America? NSA wants to see how an attack might unfold, or wants to see how to actually shut things down in case of insurrection, a coup, or pitchforks. Does some hard probing. Things get bad, and companies call in ... the NSA, who then get to do unfettered battle damage assessment.
That said, Schneier obviously has more information than he's currently sharing.
They are already trying to pin the DNC insider leaks on Russia. Clinton is threatening physical war for "cyber attacks".
Face value is worthless.
This internationalism amplifies the net's vulnerability. Coupled with the net's dependency on a huge infrastructure (as per the article posted a couple of days ago on the grid's susceptibility to overload and the resulting brown/blackouts), it meshes quite neatly with the aims of those who don't give a shit who suffers as long as it's someone who might be responsible for their woes. Someone desperate enough to eradicate the bulk of digital information would likely be motivated by larger issues like debt, weapons manufacture, or something similarly transnational.
China in particular has been building their own parallel internet universe. If Google goes down, most of us are going to feel it - but not China.
1) The US has nothing to gain by taking down the internet infrastructure via malicious means.
2) The US has better access to these systems to take them out directly rather than forcing them down via DDOS attacks.
I have no doubt, however, that we likely have developed plans around this sort of thing in a defensive or response capacity.
In this case, they don't overtly control the assets under attack, but would still want to know how resilient our networks are "in the real world" -- not always as a "friendly" drill, a la Red Cell.
They'll learn a lot more from them. ;)
Well, a mere suspicion doesn't properly rule out anything. It's like a quantum wave function with a maximum of probability density on China, but non-zero values everywhere.
If a cyberconflict escalated to the point where disrupting core Internet infrastructure to (temporarily) disable most of the Internet became a possibility, it would most likely be China or Russia wanting that result, and the USA/NATO actually wanting the opposite.
It's not the designers' fault that so many people are dumb enough to happily give one company a near-monopoly over certain forms of communication.
It's very simple: stop using Facebook for everything. Use different sites/services, or switch to a decentralized service like Diaspora. Otherwise, stick with Facebook for everything, stop complaining when it bites you in the butt, and suffer the consequences when disaster strikes.
IP datagram authentication at the lowest level is required so that anyone on the route can detect forgery, errors, or tampering with the data. This would allow tracking the real sources of a DDOS attack, diagnosing the cause, and fixing it.
What's the point of digging ever deeper trenches?
This should be a top priority change of the Internet. There was no incentive to move to IPv6. Now there is one to move to a more secure Internet.
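As a sketch of the idea, here's per-datagram authentication using an HMAC tag. This uses a shared key for brevity; a real design would need asymmetric signatures or per-hop key agreement so that "anyone on the route" can verify, and that key distribution is exactly the hard, unsolved part:

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 output size

def seal(key: bytes, payload: bytes) -> bytes:
    """Append an authentication tag to a datagram payload."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def open_sealed(key: bytes, datagram: bytes) -> bytes:
    """Verify the tag; raise if the datagram was forged or tampered with."""
    payload, tag = datagram[:-TAG_LEN], datagram[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return payload
```

Note the per-packet cost: 32 extra bytes plus a hash computation at line rate, on every router that wants to verify. That overhead, more than the cryptography, is why proposals like this have gone nowhere.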
See you in thirty years.
Also, IP authentication doesn't help you. DDOS traffic often carries real IP source addresses. It tells you that the traffic is coming from several hundred thousand home PCs. Now what?
We wrote about one way to do this about ten years ago, but no-one was really interested at the time: http://www0.cs.ucl.ac.uk/staff/M.Handley/papers/terminus2007...
Unfortunately, even if you know that the source address isn't spoofed, such a mechanism would itself be abused to deny service.
And, since each additional node in the bot net has zero marginal cost, why bother trying to hide the device anyway?
Collecting the source IP addresses of a DDOS attack is the first thing that could be done. Then progressive pressure could be applied to get the infected computers fixed and the bots removed. OSes with weak security would then feel the pain.
The day this is done, the next step will be to use forged source IP addresses. What would be the incentive for ISPs to pay the price of filtering packets? As long as no one can prove that a packet is forged, they won't do anything.
If coordinated with an attack against the root nameservers so we couldn't change the .com and .net nameservers, DNS would become a real disaster. If combined with some BGP trickery, you could even see domain names being poisoned.
We should be able to work around the damage eventually; but so much of the internet relies on so few root servers/hosts/routers.
When you look up google.com, these root nameservers are queried for com, and they return the results (name and IP) for the nameservers for .com
These nameservers for com are then queried for google.com, which then return the results for the nameservers for google.com.
Google's nameservers are then queried for google.com, and an IP is returned.
So yes, given how DNS works, all .com and .net domains would stop resolving if the Verisign nameservers for .com and .net were to go down. Most people go through caching nameservers, which would retain the values for google.com, and continue to return them, up until the time to live on those records expired, at which point they too would stop returning any values if the upstream servers hadn't returned before then.
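The delegation chain above can be modeled in a few lines. The zone data and final address here are made up for illustration (203.0.113.x is a documentation range), and real resolvers handle caching, multiple servers, and failures:

```python
# Toy model of iterative DNS resolution, following delegations
# from the root down to an authoritative answer.
ZONES = {
    "root":       {"com": "ns.tld-com"},           # root delegates .com
    "ns.tld-com": {"google.com": "ns.google"},     # .com delegates google.com
    "ns.google":  {"google.com": "203.0.113.80"},  # authoritative (fake) IP
}

def resolve(name, zones):
    """Ask the root, then follow each delegation until a server returns an IP."""
    server = "root"
    while True:
        answer = next(v for k, v in zones[server].items() if name.endswith(k))
        if answer in zones:   # it's another nameserver: follow the delegation
            server = answer
        else:                 # it's an address: done
            return answer

resolve("google.com", ZONES)  # -> "203.0.113.80"
```

The fragility is visible in the structure: if the "ns.tld-com" hop (Verisign, in reality) stops answering, the loop stalls for every .com name at once, exactly as described above. Only cached answers survive, and only until their TTLs expire.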
As in all areas of life, association is a security risk. By depending upon any centralized authority (such as a server or domain name registrar) you are open to being censored, either by them or by an attacker.

At this point, however, decentralized web-hosting solutions still rely upon centralized clearnet port checkers, which is (of course) an issue. The best the community can do is help raise awareness of decentralized web hosting, in the hope that more people will adopt it, leading to a higher likelihood that these problems will be solved.
Or unless it's the US itself. Not the most likely possibility I think, but still a possibility.
I'm not doing the snarky "citations pls" thing; I don't dispute it happened. I just want to know more.
Cyber-warfare is the 'new' war and just like any war, misinformation plays an important role.
It's what the author is being told by the people he has spoken to. Maybe a lazy assumption on their part, but it's not lazy writing. And your point is directly addressed in TFA:
"The data I see suggests China, an assessment shared by the people I spoke with. On the other hand, it's possible to disguise the country of origin for these sorts of attacks."
It would be interesting to know the sort of resources needed for this kind of attack/probing. Is it limited to state actors, or could we all play? Is the objective simply to be prepared, or is there a plan afoot?
Per the article, no, we can't all play. We don't have either the bandwidth or the expertise.
Not quite, it says "If the attacker has a bigger fire hose of data than the defender has, the attacker wins" and "the size and scale of these probes—and especially their persistence—points to state actors" which is not quite the same as saying you need to own the bandwidth. For example, DNS amplification can be used "to turn initially small queries into much larger payloads, which are used to bring down the victim’s servers".
So maybe there are other techniques which might allow for similar leverage. Neither is the article conclusive about "state actors", they are merely "pointed to". As for expertise ... I don't doubt there are people out there who have it or might acquire it. So it's still an interesting question imo.
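On the amplification point, the arithmetic is simple. Both sizes below are assumptions for illustration, not measurements of any specific attack:

```python
# Back-of-envelope DNS amplification factor.
query_bytes = 64       # small spoofed DNS query sent by the attacker
response_bytes = 3000  # large response (e.g. ANY query with DNSSEC records)

# Each byte the attacker sends yields ~47 bytes arriving at the victim,
# whose address was forged as the query's source.
amplification = response_bytes / query_bytes  # -> 46.875
```

So an attacker with modest upstream bandwidth can direct tens of times that much traffic at a target, which is one reason raw bandwidth ownership alone doesn't settle the state-actor question.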