In the 2010s, the realisation is slowly dawning that there is no 'other' informatics ecosystem to which security, privacy, and surveillance practices and principles separately apply.
(I would argue that the fact that governments try to have their own air-gapped packet-switched networks for secure communications is a large part of the reason that governments don't invest much in making the regular Internet secure.)
And, corroborating, from Wikipedia:
SIPRNet was one of the networks accessed by Bradley Manning, convicted of leaking the video used in WikiLeaks' "Collateral Murder" release as well as the source of the US diplomatic cables published by WikiLeaks in November 2010.
Ultimately, the argument that convinced them was that they wanted to eventually connect it to the “real” Internet, once work was completed on the multi-level secure gateway that the NSA was developing.
I convinced them that there would be no way for them to communicate with the real owners of those IP addresses, and that they would need to use the DNS to communicate with the hosts and domains on the other side of that gateway.
It was a major face-palm moment for me, but I was glad that I was the DISA.net Technical POC at the time, that my boss (the DISA.net Admin POC) trusted me, and that I had built a good reputation by helping them get the first CERT inside DOD up and running inside of a week on ASSIST.mil, back at a time when there was just the one NIC for the entire Internet and root zone updates happened only once a week.
Thank $DEITY for that MLS gateway that the NSA was developing, because otherwise SIPRNet would probably still be using HOSTS.TXT files and random IP addresses pulled out of their ass.
This was also the event that convinced me I needed to get out of DISA quick, because I couldn’t keep saving the entire agency from making seriously brain-damaged decisions like the one I saved them from.
And that without IP and network isolation, HOSTS.TXT was an utterly meaningless obfuscation / network isolation mechanism (which was apparently not ... apparent ... to them)?
The only bureaucracy more SNAFUd than government and military is private sector. You just don't hear so much about it except through lawsuits and leaks -- no congressional investigations or constituent concerns. Representation has some advantages.
If I had to take a wild guess, an executive at some point said "why do I NEED to switch between these two terminals? Can't it all be secure on just one of them? I just want to be able to reply to outside emails on the SIPRNet."
And thus it happened, or probably something like that. Never underestimate the "executive factor".
And EF is very QED in DJT.
... largely as a consequence of several factors: interconnectedness, device development costs, functional flexibility, and winner-take-all market-capture dynamics.
Interconnectedness means that even if your secure, classified, compartmentalised, encrypted, logged information begins or exists on bespoke kit, there are extraordinarily good odds that it will transit or reside on other systems either on its way there or after being created. Networks are complex, with many components, and ensuring all kit is fully certified and cleared is all but impossible.
The cost of developing new devices is both falling (Moore's Law) and rising (hardware, firmware, driver, and software design are all rapidly rising in complexity). In virtually all cases, it is tremendously cheaper to begin with COTS (commercial off-the-shelf) hardware or software than to do a ground-up, greenfield, clean development. And where classified development exists, keeping up with improvements in consumer-grade kit is either impossible or remarkably expensive.
This claim rests on publicly available information, and it may well be that there are some exceptions. A few datapoints at least point to the likely costs. The Xen hypervisor has an advantage over VMware in that, by serving as a "shim" over the Linux kernel, it inherits the full class of Linux-supported devices. VMware, at least through the early 2010s, was reliant on its own device development and featured a highly restricted HCL (hardware compatibility list).
The list of publicly-known top-500 supercomputers consists entirely of Linux-based systems. There is no public investment in either proprietary or bespoke supercomputer operating systems. Any classified work would have limited leverage from publicly-available development. Many of the publicly known supercomputers are engaged in highly-sensitive classified work. Evidence suggests limited, or exceptionally expensive, alternatives, if any.
Functional flexibility: what is still being called a "phone" is in fact a general-purpose computer. And, for what it's worth, a general-purpose surveillance-capitalism, state-surveillance, and APT-surveillance platform, but I digress.
A slim glass-fronted slab provides voice comms, text comms, email, camera, calendar, notekeeping, Web access, geolocation, mapping, directions, e-book access, and a myriad of applications (though many of highly dubious utility). Both Android and iOS offer commandline access, though of varying completeness, reliability, and utility. (Termux now offers over 1,200 packages; the equivalent on Apple's iOS remains fairly nascent.) This includes multiple development environments and potential well beyond the limitations of the native Android and iOS platforms (already quite extensive).
A single "do everything" tool, that's sufficient for most of those tasks, will replace special-purpose bespoke tools in practice. It's an inevitable Desire Path (https://en.wikipedia.org/wiki/Desire_path). As a practical matter, workforce, corps, or agent discipline will be broken, and such devices will be used.
Winner-take-all market dynamics arise from several mechanisms, most especially positive-feedback network-effect loops of manufacture, development, sales and supply channels, developer ecosystems, and marketshare. Which means that not only will consumer-grade devices dominate, but a very small number of device or platform variants will dominate. Even highly-capitalised and capable firms experience frequent failure in attempting to dislodge established incumbents: Microsoft Mobile in devices, Google+ in social networks, Intel's Itanium in CPU design, Apple in cloud services, Amazon's Fire Phone. The point isn't the specific products, but that these are the biggest players in the tech world taking on an entrenched incumbent and failing spectacularly.
Niche security projects are effectively playing in this field. They do not have to compete in the market with commercial offerings, but they do have to compete for talent, mindshare, tools, skillsets, and concepts. And they will all but certainly have to either interoperate with, or take into consideration the functions and features of consumer-grade kit.
Upshot: the worlds aren't separate, closed secure systems development competes poorly, and effective practice will blur all boundaries regardless.
There is no 'other' informatics ecosystem. We've got to make the one we've got healthy.
Just as there is no 'away' to throw things, there is no 'secure' storage system.
Though what I'm immediately addressing is the notion implicit in Barr's proposal that there are two separate universes of information tech. There really aren't.
The question of whether infotech presents a fundamentally different "data physics", or if it's just an extreme form of one we've had for some time (information has an inclination to wander, digitised information more so), isn't entirely clear. That however is another question I've put some thought into.
No clear conclusions as yet, though I find myself preferring paper for many forms of recordkeeping.
The rule was NOTHING electronic went in that wasn't accounted for, and NOTHING ever left. We sent people onsite with what were effectively disposable laptops that were single use items and never left the location.
Showing up at the gate with anything extra was said to be a very bad idea. It never happened so I don't know what would happen.
You drove to the site in a car with only what you needed in it. Your license, keys, the equipment you were scheduled to bring. Smartphones weren't ultra common yet, but were absolutely forbidden. The car itself was searched too even though it was in a parking lot far from anything sensitive. You were warned that anything suspicious would not be in the car when you came back (like the extra stuff rule I don't know of anyone that happened to as nobody was foolish enough to drive out with anything but a clean rental car).
I honestly think that is the only security that makes sense. That should be the status quo for some areas of the White House too, IMO. At least as far as meetings and such.
Maybe one day we get physical switches that power off cameras and mics, but it's hard to trust anything until that day... and maybe not even then.
The price paid is immense.
Norbert Wiener, following WWII, noted that the secure, classified treatment of military R&D proved a bigger impediment to the Allies than to their enemies. There were multiple independent efforts investigating the same or similar technologies that didn't know of one another and could not share experiences. At the same time, much of the information either leaked out or was available to Axis forces (or the Soviets), though development lead-time made this a fairly minor concern.
J. Robert Oppenheimer had similar observations over the Manhattan Project, and there's a famous anecdote by Richard Feynman of travelling to Oak Ridge, where uranium processing and enrichment was occurring, and realising that plant practices were at severe risk of resulting in critical masses of refined material. That wouldn't have vapourised the plant, but could have resulted in deaths, meltdowns, and extensive radioactive contamination. Feynman had to fight to make this information available to plant management and workers, so that the US wouldn't unintentionally sabotage its own efforts.
That day could be quite soon – the Purism Librem 5 is supposed to ship at some point in Q3: https://puri.sm/products/librem-5/
More about the physical kill-switches: https://puri.sm/posts/lockdown-mode-on-the-librem-5-beyond-h...
Also, one of the Snowden disclosures was that apparently the NSA can power on the cameras and mics of Android phones even when the phone is off. I don't know exactly how it works, but it does, probably based off the residual current drawn from the battery and the circuitry that handles the power button. Smartphones are basically never safe, although I've heard iPhones are significantly safer than Android.
I mean, even if you could turn on the mic/camera, I would think you still need to possibly save it to storage for retrieval later which pretty much requires the OS to be running.
Otherwise, maybe, you could somehow transmit the data but that would require the ability to communicate with the device via Bluetooth/WiFi and for the data from the camera to be passed to the wireless interface.
Unless a device was designed with that sort of functionality, I'm not sure how the NSA could just turn that on.
Jumping from one exploit target to another in a chain is pretty much how all modern exploits work. To say that because they would need to find an exploit in the USB driver (or some other hardware or software interface), not only states the obvious, but more importantly misses the point: it's far more plausible than most engineers intuitively believe.
The depth of modern exploit chains is incredible, and while the conceptual difficulty has gone up, the pace hasn't seemed to abate. That is clear evidence that our intuition about "too difficult", and about the elasticity of the exploit supply and demand curves, is woefully inadequate for accurately gauging risk.
Part of the problem is that the complexity of the systems grows at the same time as improvements in security and correctness. Sometimes it grows faster, sometimes slower, but it's dynamic.
I believe that this has actually been pretty well established in the community, but I don’t have the evidential links immediately at hand.
The tone of incredulity in your comment suggests this practice is extraordinary, when it would be considered SOP; tools and crims behind bars make the most unlikely bedfellows.
With SMD parts on a multilayer PCB how do you know that that switch controls the only path to the camera?
Do you have any idea what was being secured on the site? A research lab or something to do with analysing cyber weapons?
It was a military-related site, remote, and our people were actually blindfolded from the gate all the way to the equipment needing service; all you saw was something indistinguishable from a small data center.
They had a pretty good system for keeping a tight ship, at least as far as threats from outside equipment.
He said there was a shooter at some point, and nobody could call for help, so they changed the rules.
You mentioned tweeting, was he doing NATSEC-relevant work from the phone or was he using an official device for those purposes?
I don’t think it’s possible to get high level pols to forgo their devices and follow good security hygiene. It’s in their nature to be communicative and available.
Seems like an unsubstantiated claim. Also, Android is a broad umbrella term where security varies widely across implementations and devices. Do you have any concrete information about reference Android devices (i.e., Pixels)?
It absolutely isn't. Evidence supporting his claim is easily found online, both in leaked documents as well as from other first- and second-hand sources.
I would hope there's no classified data access via WiFi at the WH, so perhaps that device's radios are of little importance. I'm not going to address the possibility of the radios being used to gather emanations, but keeping it on for very limited time periods is a mitigation.
(Yes, even so, the radios and microphone could perhaps be used for clever side channels for exfiltrating data during the times when it's powered on, but I'm sure there's lots of those side channels. This is partly why we have SCIFs, so again, not an issue.)
In any case, provided all he does with that device is tweet, provided when he's not tweeting it's powered off and locked away, provided the device is never ever taken into a SCIF, and provided there's no access to classified data over WiFi at the WH, I'd tolerate the President (whoever it might be) tweeting from a consumer-grade mobile device. That said, my preference would be for the President to use a dedicated room, with wired devices maintained for the purpose of tweeting.
The twitter account itself is of relatively low value.
Of course, perhaps he and his staff are quite careless with that device. Perhaps that device has been the source of many leaks! I thought the NSA wouldn't let the President conduct the nation's business on an unsecured device, so I assume he only uses it for tweeting; thus carelessness would be about the microphone, camera, and radios.
The case seems to be that the government and military have almost no special product offerings, so they use consumer tech; therefore, weakening consumer tech weakens the government and military. This is not a robust argument imo.
The stronger argument is about how a whole economy would spring up that would fill office building after office building with full-time hackers trying to dox, blackmail, MITM, or steal from every non-banking, non-crypto-approved communication in the world. The internet would eventually just die off as a communications platform as the public completely lost trust in it (though not politicians: they would be approved to use the secure channels and would not understand the issue).
edit: typo & wording fix
Especially in areas such as RF, optics and positioning, the military still has access to stuff the general market can only dream of.
I mean the small cassegrain telescopes in missiles are probably bleeding edge for that size and weight for instance. However, I think this fact falls into "weird flex, but ok" category.
A lot of the confusion comes because it's a combination of multiple different cameras that capture wide angles and infrared, plus a separate camera that can focus on areas of interest within the field of view. So if the entire image were at maximum resolution you would get into insane territory, but that's not how it works.
It is not possible to have communication that is secure by definition and facilitate a backdoor at the same time. The two concepts are strictly mutually exclusive.
In the United States you have the amazing freedom to communicate in any language you desire (including one unintelligible to a would-be snoop), and the State cannot force you to translate your communications just because they want to hear them. We should not be so eager to give up that freedom in our digital lives.
Courts sometimes say otherwise, but they have to use twisted, convoluted reasoning that doesn't really stand up to the written letter of the constitution. Given that, nothing more we could add to the constitution would be immune from that either, so what is the point of another amendment?
Waterboarding was treated as a crime by the US during WW2:
And then conveniently it was not a crime after 9/11:
Access to the escrowed keys may also be conveniently reclassified in the future.
And that assumes that the law is even followed: you can't trust that the "court ordered access" will remain only court ordered. Law enforcement agencies have violated laws in the past as a matter of policy:
- Obviously you can't reuse the same master key, so now you need 180+ keys, meaning 180+ backdoors into the system. It is ridiculous to expect that all of these will remain secure; the United States can't even secure the OPM database, so even ours will probably be leaked, and countries with more bribe-prone law enforcement will give theirs up even faster, and then everyone is pwned.
- Law enforcement are frequently the bad guys. YMMV on how often this is the case in the United States but it's certainly inarguably the case in many places abroad. Mandatory backdoors means no possibility for secure communications about dissidents and "inconvenients" in those countries.
Universal escrow == universal access. Leaking == global compromise.
There's a place for personal recovery key quorums, where multiple parts are joined to create an alternate recovery key, but that involves key management for each such key served, which is a Very Large Problem.
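The "multiple parts joined" idea can be sketched minimally as an n-of-n XOR split; a real quorum scheme would be k-of-n (e.g. Shamir's secret sharing), and all names here are illustrative, not any particular product's API:

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split a key into n shares; ALL n are needed to reconstruct (n-of-n)."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:                          # fold the random shares into the key
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def join_key(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

recovery_key = secrets.token_bytes(32)
parts = split_key(recovery_key, 3)
assert join_key(parts) == recovery_key        # all three parts recover the key
assert all(p != recovery_key for p in parts)  # no single part reveals it
```

The key-management burden mentioned above is exactly the storage, distribution, and rotation of those shares, per key served.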
Might be possible to take it on, but the underlying problem of identity remains: The question "who are you?" is the most expensive one in information technology. No matter how you get it wrong, you're fucked.
Deny access to the right party: fucked.
Allow access to the wrong party: fucked.
The only advantage of security-based DoS over security-based unintended disclosure is that denied access doesn't propagate. Published data cannot be unpublished, at least not at any reasonable cost:assurance level.
Because of the key management issues, most seriously-proposed data-backdoor systems revolve around one or more of:
- Workfactor reduction, in which known keys or values are used in key generation. The resulting keys are weak to anyone who knows the secret elements of those inputs, which includes at least state-level actors.
- Specific escrow keys. No workfactor, just key access. Widespread key access is a Very Bad Day.
- Specified access accounts. Like above, but worse.
- Specific system bypass. Alternate paths to data access on systems.
- Alternate data submission. Various "phone home" mechanisms or intercepts of in-the-clear transmissions, either through design or software/device compromise.
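The first item, workfactor reduction, can be shown with a toy sketch: if key generation draws on only 16 bits of true entropy (standing in for any keyspace reduced enough for state-level brute force), an attacker who knows the derivation recovers the key exhaustively. Everything here is hypothetical illustration; SHA-256 stands in for whatever KDF a real system would use.

```python
import hashlib
import secrets

def weak_key(seed: int) -> bytes:
    # Key "generation" drawing on only 16 bits of real entropy (toy example).
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()

secret_seed = secrets.randbelow(2**16)
key = weak_key(secret_seed)

# Something observable that depends on the key (a ciphertext or MAC in a real
# system); here just a fingerprint, for illustration.
fingerprint = hashlib.sha256(b"fp" + key).digest()

# An attacker who knows the reduced keyspace simply tries every seed.
recovered = next(s for s in range(2**16)
                 if hashlib.sha256(b"fp" + weak_key(s)).digest() == fingerprint)
assert weak_key(recovered) == key
```

A 2^16 search runs in seconds on a laptop; the same logic applies to any "reduced" keyspace a state-level actor can afford to sweep.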
As a practical matter, alternate information channels (usually metadata), public data, standard detection, bug/zeroday exploitation, and various sideband attacks (Maginot compromise: don't go through, go around) tend to be used, though cryptographic attacks have some utility.
1. No one trusts the government not to abuse such a power. "Only usable with a valid court order" my ass.
2. The government doesn't trust citizens not to change around the escrow keys used (which would prevent them from decrypting things)
You can already do that by voluntarily providing access to your devices to anyone you want, including law enforcement. Other people don't necessarily share your ideology and accept that there could exist "legitimate law enforcement searches" of their private communications.
Stop trying to build scenarios where key escrow solutions are technically sound. They are not. This is an intractable problem that is not solved by these half-cocked technical measures. Key escrow cannot, by definition, be secure, and wasting time trying to invent solutions just confuses the matter, weakens security, and leaves us all vulnerable. Basically: Sssssssh, or the politicians might actually believe this fantasy.
On paper, sure. In practice, not so much.
Because "we've" spent decades trying to figure out a method that will actually work, and universally failed?
Governments have shown time and again that they abuse any power they get - I wouldn't trust them with this.
What if you simply claim to encrypt it twice, but instead place random noise wherever the "encrypted for gov't" data should be? If the system works as intended, it shouldn't be possible to attempt decrypting it (and thus verify if you encrypted it properly) without a search warrant based on probable cause, as the key material would be in escrow and not accessible to anybody including law enforcement and intelligence agencies before a warrant is served.
So you can't have proactive detection of that, and everybody who wants to encrypt communications with criminal intent can and will continue to do so, with the same consequences as right now: they can be pressured to reveal the keys, but their communications are otherwise secure. The process would risk the privacy of honest citizens (if the escrow system is broken) but not hamper the bad guys at all.
I'm assuming that bad guys can modify the software they use, which seems to be a reasonable assumption supported by practice.
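The "random noise in the escrow slot" point can be made concrete. Below, a toy XOR stream cipher (SHA-256 in counter mode, purely illustrative, not real crypto) stands in for the encryption: a cheater fills the government's slot with random bytes, the recipient's copy still decrypts, and only someone actually holding the escrow key can tell the second slot is junk.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

msg = b"meet at noon"
recipient_key = secrets.token_bytes(32)
escrow_key = secrets.token_bytes(32)   # held in escrow, per the proposal

honest = (xor_crypt(recipient_key, msg), xor_crypt(escrow_key, msg))
cheat = (xor_crypt(recipient_key, msg), secrets.token_bytes(len(msg)))

# The recipient decrypts fine either way...
assert xor_crypt(recipient_key, honest[0]) == msg
assert xor_crypt(recipient_key, cheat[0]) == msg
# ...and only the escrow-key holder can tell the second slot is garbage.
assert xor_crypt(escrow_key, honest[1]) == msg
assert xor_crypt(escrow_key, cheat[1]) != msg
```

Since a well-constructed ciphertext is indistinguishable from random bytes without the key, no network-level inspection can catch the cheater before a warrant unseals the escrow key.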
And given the proposed rules of key escrow, the government has to assume that they use it as-is, because doing traffic inspection to ensure that they really do so is impossible without a specific warrant; so whenever they do get a warrant and pull the keys out of escrow, that's the first moment they can tell "ah, we actually can't decrypt Bob's messages because he's not using that key".
So the proposed naive system of simply "encrypt the message twice so that either the intended recipient or the government with 5 HSMs can decrypt it" won't work. You can have a more complicated system that works around my objections above, but that would be a different system with other drawbacks. Cryptosystems are very hard in general; all the small details matter. A random proposal that hasn't undergone significant expert analysis has almost a 100% chance of being fundamentally flawed, and doing reasonable escrow will have all kinds of "interesting" consequences and potential attacks. I am not aware of any public proposals for the specific details of a mass-escrow system that would have reasonable consequences.
This story should be repeated whenever anyone brings up "solutions" involving key escrow. Bruce warned us in 2006 that this was a backdoor; ten years later, we found that not only was it implemented by Juniper, the backdoor was itself backdoored by unknown (and potentially malicious) actors. Really, this should be the last word on why key escrow and cryptographic backdoors in general are a terrible, terrible idea.
In other news giving someone root access to your machine gives them root access.