Anyone having a hard time swallowing this? They broke in and were totally undetected when they planted a... rootkit (unclear from the article). While undetected, they silently stole user credentials... Then, to cover it all up, they ransomware'd the computer?
It's like breaking into a house and putting a bug in the wall, and then to cover the tracks you smash in the front door and leave the water running in the sink.
If the attacker was completely undetected, why intentionally jeopardize that?
So the premise is that they clear the ransomware and think it's over. But it's not.
Unless of course you don't provision machines, or keep backups. In which case hiding on a machine being "cleaned" would be simple.
And on this whole area, I would not encrypt someone's machine - unless you are trying to scare someone, it would be better if they never knew.
Think of it this way: if you are responsible for security, you never assume that you're fully protected. If you're a smart hacker, it should be the same thing: you never assume that you won't be detected.
I think it's more akin to breaking into a house, putting a bug in the wall, and then stealing some cash in case the homeowner had a silent alarm or nosy neighbors or some other way you could be caught.
If your shoe prints only led to the safe and back out, it would be obvious what you stole; if they're all over the house and the whole place is a wreck, then it's harder to tell what you were (actually) there for.
I think that all these NSA problems show bad management. They should be reorganized, or maybe even abandoned. They cost more than they deliver, and even cost us our privacy. Probably they are still breaking US and international laws on that. Breaking up the NSA could allow the FBI and (open) security companies to take over cybersecurity.
I suspect that we will soon have leaks of CIA tools too. But thanks to wikileaks companies can prepare for these future problems.
We can go deeper into who got these tools and who is using them. Some may even argue that the CIA leaked the NSA tools to weaken the NSA. Or worse, that some in the NSA want to create cyber chaos to push for more control over the internet in the future.
The article mentions the popular political scapegoats, and as usual this is just speculation. To solve the NSA's problems we have to demand very concrete evidence; otherwise we are just being played with.
Article: > The Shadow Brokers resurfaced last month, promising a fresh load of N.S.A. attack tools, even offering to supply them for monthly paying subscribers — like a wine-of-the-month club for cyberweapon enthusiasts.
This shows how we are being played with. The NSA could already have published the security details of all leaked tools, so we could all have protected our computer systems. We could have prevented Wannacry.
NSA did exactly what you said and went to MS months before wcry hit, once it was clear what shadowbrokers had, in order to patch the vulnerability that wcry exploited: https://technet.microsoft.com/en-us/library/security/ms17-01...
unfortunately, not everyone keeps their systems totally up to date for various reasons.
Because patching requires a quick risk calculation. Should I patch to the bleeding edge and get the latest security fixes but risk a regression bug, or do I wait a bit so I can run a full regression test?
On my machines, sure I like to stay on the latest and greatest. But I'm sure there are plenty of companies that got bitten because some critical software they rely on didn't play well with the latest OS upgrade. Blame game notwithstanding, it comes down to a business disruption risk.
Of course, the right answer is to test the patches as they come out in a non-production environment, and go from there based on results. But I can see where some companies wouldn't have the resources devoted to do that on a frequent basis, which is unfortunate.
And I'm a security guy. If I don't care enough to update every day, what do other people do? (There's not much reason to keep a personal laptop up to date when you're not interesting enough to be targeted and not in the habit of running random internet programs. Or rather, a few months of lag is fine, as long as you're paying attention to what the updates are for.)
I think we just don't like to acknowledge the fact that updates sort of suck, yet we bash and shame people for not doing them. I mean, we don't have many other tools to force them to update, but surprise, surprise when people turn out to be concealing the fact that they don't update at all. Even in corporate settings.
I'd be interested to hear a heavyweight enterprise sysadmin's take on this, but my experience and read is that it's "toss it at the wall and see what sticks".
Which is crazy when MS knows all of the following things: (a) which files the patch changes, (b) how it changes those files, (c) which programs a customer uses, & (d) which MS files get loaded by which customer programs.
In an age when we can do ISA-to-ISA conversion on the fly, you'd think it wouldn't be rocket science to be able to say "Warning! This patch may affect the operation of commonly used programs X, Y, and Z". Or at least have an admin tool to provide that information.
This is part of what I do. We do a risk assessment -- SMB and Kerberos get patched no matter what, everything else depends on a test cycle and may be deferred for up to 6 mo.
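A policy like that is simple enough to encode. A toy sketch of the triage rule described above (the component names and the exact deferral window are illustrative, not anyone's actual tooling):

```python
from datetime import timedelta

# Illustrative triage rule: SMB and Kerberos patches go out
# unconditionally; everything else waits for a test cycle and
# may be deferred for up to ~6 months.
CRITICAL_COMPONENTS = {"smb", "kerberos"}
MAX_DEFERRAL = timedelta(days=180)

def triage(component: str) -> str:
    """Return the handling policy for a patch, keyed by affected component."""
    if component.lower() in CRITICAL_COMPONENTS:
        return "apply-immediately"
    return f"test-cycle, defer up to {MAX_DEFERRAL.days} days"

print(triage("SMB"))       # apply-immediately
print(triage("Explorer"))  # test-cycle, defer up to 180 days
```

The point of writing it down as a rule is that the risk decision gets made once, not per-patch under time pressure.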
What? With the large monolithic patches that Microsoft has moved to, they have got worse, much worse, recently. It does make it easier to patch everything at once quickly, but if one thing goes wrong, you have to back it off and lose all protections.
But my point was more towards MS programmatically alerting customers as to which programs and subsystems patches might affect.
As far as I've seen, they give you a file list, some brief notes on what the patch is for, and assurance that they internally QA'd it.
But I can't see why there's any technical reason that my system can't warn me that an MS library called into frequently by a particular program I use every day is modified in this patch.
Which is something I care about almost as much as "MS QA passed this patch" (side note: thanks so much to all the unloved, unknown internal QA folks out there, keeping things from breaking!).
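The cross-referencing step itself is just set logic. A toy sketch, assuming you've already gathered the patch's file manifest and each program's observed module list (e.g. scraped from `tasklist /m` output); every file and program name below is made up:

```python
# Hypothetical data: the files a patch changes, and the libraries each
# program has been observed loading.
patch_files = {"srv2.sys", "mrxsmb.sys", "ntdll.dll"}

loaded_by = {
    "outlook.exe": {"ntdll.dll", "kernel32.dll", "mso.dll"},
    "lob_app.exe": {"ntdll.dll", "mrxsmb.sys"},
    "notepad.exe": {"kernel32.dll"},
}

# Flag any program whose loaded modules intersect the patch manifest.
at_risk = {prog: sorted(mods & patch_files)
           for prog, mods in loaded_by.items() if mods & patch_files}

for prog, mods in sorted(at_risk.items()):
    print(f"Warning: this patch may affect {prog} ({', '.join(mods)})")
```

The hard part isn't this computation; it's getting reliable per-customer module inventories, which is presumably why no vendor ships it.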
The LTSB version is designed to essentially stay the same with regards to applications/apis/etc while still proffering up security updates as fast they have them.
Not to mention that afterwards there were also Windows 7 patches that kept one CPU core 100% busy for days!
Windows 7 is the new holdout, and people aren't eager to swallow 10.
Except when you are "updated" from a full, paid version to a spyware/adware-ridden version.
Seriously: I think Windows 10 is great, technically and usability wise.
But MS need to learn that they can't have it both ways:
Paid xor ads. (I can think of two exceptions: inside the Store app and inside Settings for OneDrive.)
There is no good reason to excuse Microsoft for maliciously disguising updates as security patches in order to manipulate the non tech-savvy into switching to their new unprecedentedly invasive OS. Especially when, as you said, the UI is worse.
The entire UX of Windows 10 is worse, every time I have to use it for something I get physical anxiety from claustrophobia.
I really don't like that I have to install untrusted 3rd party software on my computer in order to prevent my operating system from automatically ruining my user experience and spying on me.
Then, not having learned their lesson, they pulled similar crap with the Windows 7->Windows 8 transition which pissed people off so badly that they refused to go to Windows 10 and are currently suing Microsoft for attempting to shove it down people's throats.
Insecure old version of Windows are Microsoft's own damn fault.
Moving them out of the kernel is a great engineering decision, infinite backwards compatibility isn't feasible. Sometimes breaking changes are needed.
The idea of non-privileged drivers is neat, but in general it is not worthwhile, because the driver has to somehow access the hardware, which for a significant number of device/platform combinations leads to access to arbitrary memory locations.
Edit: a perfect example is GPU drivers, which have long been typically composed of a small privileged kernel driver with all the complex logic in userspace. In many cases the interface between these two components could be abused to get code execution in kernel context (in 2k/XP times there was even RCE in kernel context triggered by displaying a properly crafted image in IE)
This means you aren't preventing drivers from having full access. You just need to prevent more unintended side-effects.
Suddenly, people can run that single, business-critical old driver as unsafe while running the other drivers safely.
Alas, some manager at Microsoft decided it was more important to get his numbers up this quarter so he could get his bonus. In so doing, Microsoft orphaned a bunch of people on XP just like they orphaned a bunch of people on VB6.
Microsoft made its own bed; now it has to lie in it.
There are plenty of companies where "most of the time it's not broken" is a big problem. Possibly bigger than the cost of a security issue (I don't know, because this is both hypothetical and I'm an outsider to the issues at play).
To not acknowledge that others would value a different part of the risk curve lacks perspective.
With all due respect, if that's the case you should not be running Windows.
I can't buy an Emerson control system for a small reactor getting reconfigured every other week, and LabView on an un-networked Windows computer is perfectly fine.
I would not use a PC (with any OS) to control a 10 kg reactor though. At least directly. I think it'd be okay to use a PC to coordinate discrete controllers as long as they couldn't change state without a command (i.e. latching valves and the like) and as long as there was a backup safety that didn't have a computer in the loop.
Safeties that do things like shut off furnaces if temperature sensors break or valves that shut off flow if it becomes too high or detect a flame are common, analogous to a fuse on a circuit board. You sure hope not to use them, but they'll suffice for unexpected situations.
But there's definitely a risk that has to be managed, and connecting infrastructure and industrial equipment to the internet is not managing it very well!
All OSes vulnerable to wannacry are still under support.
Contrary to popular belief, wannacry will not spread to windows XP, it will just crash it. It will run on it if you fire the executable manually, but it will not infect an XP host through SMB.
I.e. the average citizen.
Q: Is the NSA there to support the average citizen, or are they there to support the bureaucracy / power structures?
Q: depending on the previous answer, does your security or privacy matter to the NSA?
>Probably they are still breaking US and international laws on that.
Do the people who enforce the law get punished when they break it? On the whole, I don't think that's a clear "Yes". More of a "maybe, sometimes".
In a democracy they do ...
I will grant that there is a strong correlation between (the presence of) rule of law and democracy, but it's debatable in which way a causal relationship might flow or if they might both be caused by something else. (For example, economic prosperity seems to help)
Meanwhile, in a democracy, I can see you could have a group not under the rule of law _iff_ any actions they take do not impact anyone who is going by the law, including an impact such as "gives an advantage or benefit that those under the law can't get". Otherwise the government, whilst possibly being "by the people", would no longer be "for the people" but instead "for the preferred classes".
You could maybe finagle something that some people could do, that was illegal for others, but doesn't give the first group any benefit. But I don't think you can practically; so practically then democracy would be predicated on rule of law even if not technically required.
I'll grant that in practice it is very difficult to have rule of law without democracy. It is interesting to think about as a "gedankenexperiment" though. :)
Lee Kuan Yew was a benevolent dictator, except for his political opponents. Dictatorship is a spectrum :)
If everybody must follow the same laws, and nobody can change the laws at will, you will get a democracy. If some people can steer lawmaking at will or law does not apply to them, it doesn't matter if you vote on public representatives, they won't be submitted to your wills.
Some may argue a lot of things that may or may not be true.
This will probably always be an issue so long as there are classified operations... Everyone who isn't privy will have their own interpretation of what really happened.
All of the stuff recently leaked by Wikileaks (including today) was from CIA.
People didn't (or couldn't) update their systems with the appropriate patch.
How much have their tools cost folks in time, money, hardware, and personnel, even in just the last year? How much have their unpatched exploits cost?
How's that all compare to their (mostly undocumented) annual budget, anyway?
Why people run any systems on windows is beyond me (not that others are more secure, but windows is a bigger target)
Windows gets hacked because it is the most used desktop OS. If everybody started using Linux, it would get hacked as often as Windows. Quoting your comment again:
>because perfect security is impossible
I'm guessing you've never been near a large company that has a large investment in Windows and applications that run on Windows.
Many IoT companies, in contrast, ship root-level vulnerabilities that are laughably trivial to exploit, and some don't seem to care much at all when exploits are discovered (https://www.trustwave.com/Resources/SpiderLabs-Blog/Undocume...). Windows is imperfect I'm sure, but I'd rather connect a Windows machine to the public network than any so-called "smart" appliance at this point.
And non-techies in these places couldn't live without Office. I once got an email with an attached Word document that contained an embedded Excel spreadsheet with exactly two cells -- an IP address and a hostname -- for a DNS entry I needed to update.
At least you could copy and paste. In some work environments, you would have been sent a JPG screenshot of the IP address and name.
Now enjoy the discussion of images in mail messages in https://news.ycombinator.com/item?id=14567074 (-:
I have watched very closely for 20 years now for any chatter or evidence of a FreeBSD/OpenBSD zero-day vuln that was weaponized or sold or used for state-sponsored covert action and I am not aware of any.
None of the leaked information from Snowden et al. gave any evidence of their existence either.
Technically speaking they should exist but I'm not seeing them ...
Further, if you were worried about attackers or hardening a system, etc., you wouldn't be running samba (or anything like it).
I'm talking about a remote-root vuln in the FreeBSD kernel or in any of the default system daemons that were bundled with an OS release (like openssh, crond, syslogd, etc.)
> Not the case in enterprise ;)
that's a funny way of spelling "subsidies"
Imagine if the creators of WannaCry had decided to brick everything they could, instead of _just_ holding data for ransom. What then?
Ben-Oni (from the article) says he sees it as "life-and-death". I agree. We're simply not prepared for a well-coordinated attack. I think it will take a true catastrophe before anyone really understands just how vulnerable the Internet is.
It utilized scary "cyberweapons" as a plot point but the goal was to steal.
2. Radio static
A radiolocation map of the planet showed hundreds of transmitters of white noise, which merged into shapeless blotches. Quinta was emitting noise on all wavelengths.
In the Cold War theory: “What came to mind was an image of “radio warfare” taken to the point of absurdity, where no one any longer transmitted anything, because each side drowned out the other… All bands of radio waves were jammed. The entire capacity of the channels of transmission was filled with noise. In a fairly short period of time the race became a contest between the forces of jamming and the forces of intelligence-gathering and command-signaling. But this escalation, too, penetrating the noise with stronger signals and in turn jamming the signals with stronger noise, resulted in an impasse.”
Other hypotheses considered: “The noise was either the scrambling of broadcast signals or a kind of coded communication concealed by the semblance of chaos.” [It’s a consequence of the Shannon–Hartley theorem that the maximum information is transferred on a channel in the form of white noise.]
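That bracketed aside can be demonstrated empirically: an efficiently compressed stream is statistically close to white noise. A quick sketch (the filler text and vocabulary are made up purely for illustration):

```python
import math
import random
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# Filler text with a tiny vocabulary: highly redundant, low entropy.
random.seed(0)
vocab = ["alpha", "bravo", "charlie", "delta", "echo",
         "foxtrot", "golf", "hotel", "india", "juliet"]
text = " ".join(random.choice(vocab) for _ in range(20000)).encode()

compressed = zlib.compress(text, 9)

# The compressed stream is much shorter, and its byte distribution is
# close to uniform, i.e. statistically it resembles white noise.
print(entropy_bits_per_byte(text))        # low (small, skewed alphabet)
print(entropy_bits_per_byte(compressed))  # high, approaching 8 bits/byte
```

Which is exactly why "coded communication concealed by the semblance of chaos" is a plausible reading: a capacity-approaching signal and jamming noise look alike to an outside observer.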
Actually, I can imagine a future where we use AI to eliminate biological pathogens, but digital pathogens proliferate.
That's pretty much how viruses worked before people figured out how to monetize them. A whole industry was built to defend against bored, malicious teenagers.
I share your fear. At the same time, it's also important to remember that most other instruments trade at way deeper order books, with orders of magnitude greater liquidity and massively better controls (e.g. circuit breakers).
Or how bad an idea it is to connect anything and everything to that Internet, particularly if it does anything important or potentially dangerous. If the Internet is one of the best ideas humanity ever had, the Internet of Things may prove to be one of the worst.
My personal nightmare involves a vulnerability in a popular model of remotely connected and semi-autonomous or autonomous vehicle. I don't think Western governments have any idea how much harm something like that could do or how plausible it actually is, and I don't think the auto industry executives care enough to stop it.
Disconnection can stop drive-by malware, people trawling for additions to their botnet collections. Someone who wants to launch a coordinated attack will have no problem getting behind the firewall or across the air gap at enough interesting networks to cause serious harm.
We have to actually write secure software.
I've been thinking about this all week. I discovered a fairly big vulnerability in our software the other day that allows anyone in the company to access data they shouldn't; not national-secret-level data, but enough that it could be somewhat valuable. We also have a number of people of a certain nationality that's somewhat hostile to the West; many of those people are programmers.
How would you differentiate incompetence that lead to the vulnerability from maliciousness that intentionally caused it?
Sounds like the vulnerability isn't your primary problem.
The CAN bus is a fundamentally insecure system. Devices accept that you are the device ID you say you are. The only way for a device to vote you out is for it to see its forged ID go out on the bus and then trash the bus. Not remotely failsafe.
Increasingly vehicles are networked systems. Devices need to act like it - encrypt data between themselves and authenticate each other. Without subsystem-level access controls (should the head unit be talking to the brake controller?) there is just too much attack surface.
You can no longer be secure on a "friendly bus", this is now a mini-WAN as far as attack surface, and has been since wifi/bluetooth/cellular basebands were put on the bus. Firmware updates need to be cryptographically signed (or jailbreakable with a user-selectable root CA cert).
Everything else is vehicle manufacturers whistling past the graveyard. The CAN bus is dead, it passed away probably 10 years ago, it's just zombie companies who refuse to re-engineer appropriately when they can just ignore the problem instead (recalls don't happen right?). It's cheaper just to put the new head unit in.
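A minimal sketch of what per-frame authentication could look like: 4 payload bytes plus a 4-byte truncated HMAC tag, fitting the classic 8-byte CAN data field. The key handling is hypothetical, and real designs (e.g. AUTOSAR's SecOC) also need freshness counters to stop replays, which this deliberately omits:

```python
import hashlib
import hmac

# Toy sketch only: key distribution and replay protection are elided.
KEY = b"hypothetical per-link shared secret"

def make_frame(can_id: int, payload: bytes) -> bytes:
    """Build an 8-byte data field: payload + truncated HMAC over ID||payload."""
    assert len(payload) == 4
    tag = hmac.new(KEY, can_id.to_bytes(4, "big") + payload,
                   hashlib.sha256).digest()[:4]
    return payload + tag

def verify_frame(can_id: int, frame: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    payload, tag = frame[:4], frame[4:]
    good = hmac.new(KEY, can_id.to_bytes(4, "big") + payload,
                    hashlib.sha256).digest()[:4]
    return hmac.compare_digest(tag, good)

frame = make_frame(0x244, b"\x01\x02\x03\x04")
print(verify_frame(0x244, frame))                            # True
print(verify_frame(0x244, b"\xff\xff\xff\xff" + frame[4:]))  # False: tampered
print(verify_frame(0x621, frame))                            # False: wrong ID
```

Binding the tag to the CAN ID is the part that kills the "I am whoever I claim to be" problem: a forged ID fails verification even with a replayed payload.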
lol. So is the PCI bus inside your PC. It's not fair to sneer at something that doesn't do something it was never intended for.
Airports have everything from huge IoT systems managing the climate of the terminals, thousands of monitors for public information, controls for automated baggage handling, traffic/parking management sensors, vast sewer systems with sensors to manage storm/sanitary/glycol recovery/water, security systems (tens of thousands of cameras/doors being monitored), and then hundreds of smaller systems doing everything from managing ground transportation through to emergency dispatch.
I'm not sure it's even feasible to air gap all of that - the loss of productivity and additional cost would be far greater than the perceived security risks. Airports typically don't have large IT departments either, they outsource much of the work to consultants and cloud solutions. Critical systems should be, and are, air gapped but if something took out some of the systems connected to the internet it would be chaos.
One random aside... I work at an airport and did a tour of the ATC tower when I started. One of my first questions was how do they handle a loss of critical systems for landing planes. They proudly whipped out a massive signaling light (https://en.wikipedia.org/wiki/Aviation_light_signals) and explained how they use it. I actually found it quite reassuring that despite all of the technology they clearly had contingency plans in place.
Which of those systems would suffer from decreased productivity were they disconnected from the Internet? Indeed, I imagine that they'd experience increased productivity: there's no need for an air conditioning system to get its updates over the Internet rather than, say, by a human being with a thumb drive. Ditto monitors, ditto baggage handling, &c.
Emergency dispatch for example - the airport acts as a PSAP (https://en.wikipedia.org/wiki/Public-safety_answering_point) and requires integration into regional systems. They have radio backups but my understanding is that they pull a lot of data via the internet. The first responders also have cloud apps that help them route to a location or see the position of other nearby resources.
There is also considerable coordination with regional/national infrastructure owned by the airlines for managing when aircraft will depart/arrive. That would be much harder without an internet connection.
The airport will continue to operate safely if they lose internet connected infrastructure but the efficiency will drop quickly and the national airspace is like a busy road network - congestion in one area can rapidly cascade and cause chaos.
See the SR-71 fuel aft transfer switch. Maintaining a forward CG keeps the bird in the air. Control is intentionally manual.
Average age has just ticked up to 11.6 years: http://www.autonews.com/article/20161122/RETAIL05/161129973/...
Regarding "IoT": There's no reason your light switches should talk to the Internet, even for home automation purposes.
Not very. In many areas, it is either already a regulatory requirement or about to become one that any new vehicle implements an automatic system that will notify emergency services in the event of an accident where no-one on board is able to call for help, sending information about the location of the vehicle and the nature of the accident. That inevitably requires both remote communications capability and integration with some of the other safety-critical systems in the vehicle. While this particular application may be a worthy goal that will genuinely save lives, the architecture it implies will inevitably also be more at risk of security vulnerabilities than an entirely disconnected vehicle.
I don't want to discuss it in depth for the same reasons, but joking around with friends has led to feeling physically ill and bouts of drinking because of the extremely dangerous possibilities that exist.
It's a fundamental mistake to network all of society to make it efficient: inefficiency is security. (In a broad, theoretical sense -- from capitalism to government to IT.)
Aren't we? We handle natural disasters, large-scale power grid failures, etc. If WannaCry bricked everything it touched, the results would have been much worse and more tragic but would they be worse than an earthquake or hurricane in a major metropolitan area?
So many people would starve because food couldn't be delivered with the efficiency it takes to feed the population. You've seen grocery stores wiped out during hurricanes, and those have several days of lead time to prepare. We are much more dependent on technology than a lot of people realize.
1 - https://tools.ietf.org/html/rfc2827
I keep hearing that the government is taking cyber security seriously, but I see no evidence. Where is the DHS funded formal verification tool or subsidized penetration testing for critical infrastructure? I'm not saying these are the right ideas, but I don't see anything at all. Perhaps I just don't know of the programs that already exist though - looking forward to being educated here!
The fact it isn't implemented is down to apathy, ignorance and people making excuses.
I don't think throwing your hands up and saying "oh, this is probably hard" is very constructive on this front. Stop making excuses for people whose lack of action puts everyone else in danger.
A possibility I recently heard about and thought sounded interesting would be requiring certain kinds of companies hold security insurance, and allowing damages for things like DDoS attacks. Then, if the insurance is functioning properly, doing things like this would decrease premiums. Mostly though, I'm just bummed that I don't seem to hear much about any ideas for how to actually attack the problem after things like WannaCry or big data leaks happen. This is clearly a systemic problem, and nobody really seems to be attacking it at that level.
You make DDoS mitigation sound easy, but most of what I would call successful attacks use traffic that looks real at the ISP level and are relatively low bandwidth. Attackers achieve success by locking up some aspect of their victim's architecture.
Most websites are not prepared for large fluctuations relative to their normal traffic, which look like a drop in the bucket when you are at an ISP level. I don't blame websites for this because mitigation at this level can be expensive.
I think legislation for something like this would be a mess, because the problem isn't simple and it already has a technical solution.
And something like 90%+ of IP endpoints already have; that last remaining percentage is a real bitch though.
Um, no. Amplification attacks, and direct attacks from compromised hosts, such as IoT, are the majority of the traffic. BCP38 at the edge doesn't protect against devices spoofing their internal network address from someone else in their same subnet/routing block.
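For reference, the BCP38 check itself is just a prefix-membership test at the provider edge; as the comment above notes, it cannot catch a host spoofing an address from within the same permitted block. A sketch with made-up (documentation-range) prefixes:

```python
import ipaddress

# Toy BCP38-style ingress filter: a provider edge forwards only packets
# whose source address falls within the prefixes delegated to that
# customer port. Prefixes here are illustrative.
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def permit_source(src: str) -> bool:
    """Permit the packet only if its source is inside a delegated prefix."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(permit_source("203.0.113.57"))  # True: legitimate customer source
print(permit_source("198.51.100.9"))  # False: spoofed, dropped at the edge
```

So it stops cross-network spoofing (and most amplification-attack triggers) but does nothing about un-spoofed floods from compromised hosts.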
If I'm correct, we're going to need this sooner rather than later. As IoT gets rolled out (I had to remove my new AC from the WiFi, since it was easier to let the installer hook it up than to explain why it's a bad idea), we need to have someone be accountable for security. IoT manufacturers should be on the hook, but so too should the network admins.
Attacks from compromised hosts is another thing entirely. But even on those, IP spoofing makes it impossible to block the attack at the ISP level.
It also raises concerns about traffic control. What if my entirely legitimate but small service in Australia is blocked by US DDoS mitigation? They have no reason to care and I may have no legal or monetarily reasonable recourse.
Really properly permanently bricking is much harder, and hardware-specific.
Or even worse, a state actor with nuclear weapons to back it up.
Someone will get root somewhere and shut down some small percent of light vehicles (i.e. not medium and heavy trucks) on the road at an inconvenient time causing a massive economic panic and screwing up the used car market in a way that makes cash for clunkers seem well thought out.
Even a complicated attack vector that results in a narrow target selection (e.g. android malware -> infotainment system on a particular few model years of one brand) would have a massive psychological impact.
Edit: a similar occurrence could happen by accident (edge cases, poor testing and so on).
Edit: meant to reply to a comment. Oh well.
With stuxnet, there developed the sense that Windows received conspicuous attention from a special class of mysterious operators.
At this point, given the tiny cottage industry that feeds a handful of starving security analysts, I feel it's reasonable to presume that Windows is built to be as secure as possible, and that what's possible is mostly intentional and understood as a known quantity for special populations.
you may not have enough information.
You're basically saying Microsoft has perfected security to the point that they can't overlook something or make a mistake?
Cyber weapons are not as dangerous as nukes, but much easier to copy, and much harder to know who attacked you.
The NSA/CIA have been very lax, allowing their weapons to be copied.
Which means they are, in fact, as dangerous as nukes.
Good luck compromising nuclear arms C&C. There's a reason a lot of it runs on systems from the 70s.
On 21 September 1997, while on maneuvers off the coast of Cape Charles, Virginia, a crew member entered a zero into a database field causing an attempted division by zero in the ship's Remote Data Base Manager, resulting in a buffer overflow which brought down all the machines on the network, causing the ship's propulsion system to fail.
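The lesson generalizes: operator-entered fields need validating before they reach arithmetic. A trivial sketch of the kind of guard that was missing (obviously not the actual Yorktown code):

```python
def safe_rate(numerator: float, field_value: float) -> float:
    """Divide, but treat a zero database field as invalid operator
    input and fall back, rather than letting the exception propagate
    and take the rest of the network down with it."""
    if field_value == 0:
        return 0.0  # in real code: log it, alert, and use a safe default
    return numerator / field_value

print(safe_rate(100.0, 4.0))  # 25.0
print(safe_rate(100.0, 0.0))  # 0.0 instead of an unhandled exception
```

The deeper failure on the Yorktown wasn't the division itself but that one bad record could cascade across every machine on the network.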
Global systemic risk is increasing. Massive infrastructure and response disruption can be devastating.
What was the worst powerplant disaster in history? What was the primary kill mechanism?
A rogue nuclear weapon doesn't scare me. Increasingly automated cars, unpatched cellphones, legal IDs that are hard to fix when stolen, or compromised bank/medical/government databases all worry me more, since they can impact me and happen a lot. SCADA too, if non-nuclear, given it might be my power plant or utility.
Though I also make the argument that the book hasn't been closed on our nuclear disasters, and won't be. For tens of thousands of years.
Banqiao, however, has been fully resolved. (It occurred in 1975.)
Given that "cyber attacks," that is, remotely tampering with telecom equipment, are as old as the Arab-Israeli war of 1948, I'm not ready to be reassured by the antiquated hardware the US and Russia use for C&C.
Suggested setup: I click on bad link, malware installed on my computer that infects my disk firmware. I remove the malware from my computer. My disk is no longer compromised.
Also, I plug a USB stick into my computer. It gets compromised. I unplug it. It gets uncompromised. No more USB spread malware.
Back in the '80s, most devices used ROMs, PROMs, or EPROMs. ROMs were burned at the factory. PROMs could be user-programmed once. EPROMs were erasable and reprogrammable, if you put the chip under a UV lamp. As far as I can tell, this is a forgotten technology.
Later on came EEPROMS, electrically erasable programmable read only memory. Then someone had the bright (!) idea of connecting the write-enable line to the internet so anybody in the world could update anybody else's firmware, and welcome to the hell we have today.
Option A) What we have now
Option B) Code in PROM with physical interlock (e.g. push button to enable write)
Option C) Code in ROM
Downsides of A are obvious, we see them now.
Downside of B is that automatic updates are impossible, so the majority of devices that are network connected will remain vulnerable. At the peak of Code Red, it took less than 5 minutes for a fresh Windows install to be compromised.
You also will have people who will forget to disable the network before pushing the button, or who may be tricked into pushing the button which will allow malicious code to persist anyways.
C) Absent a hardware recall, people will just use devices that are infected within 5 minutes of bootup.
That implies a lot of infected machines. As I mentioned earlier, machines getting disinfected at every reboot reduces the number of infected machines at any point in time. It may reduce it far enough to provide "herd immunity": the same reason only about 90% of the population needs to be immunized against measles to keep it from propagating.
Furthermore, if people are aware that a reboot clears malware, machines can be set up to reboot regularly. For example, I could set my router to reboot once an hour. Since it reboots quickly, that would be only a minor inconvenience.
A regular reboot can be triggered by one of those simple hardware-store lamp timers, or the device itself can contain a hardware circuit (outside of software control) that reboots it on a schedule.
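The reboot-as-disinfection argument can be sanity-checked with a toy simulation: clean devices pick up non-persistent malware at some per-hour rate, and a scheduled reboot clears it. The rates and sizes below are made-up illustrative numbers, not measurements:

```python
import random

def final_infected_fraction(n_devices=2000, hours=1000,
                            p_infect_per_hour=0.01,
                            reboot_every_hours=None, seed=1):
    """Toy model: each hour a clean device is infected with probability
    p_infect_per_hour; a scheduled reboot clears RAM-resident malware.
    Returns the fraction of devices infected after the last hour."""
    rng = random.Random(seed)
    infected = [False] * n_devices
    for hour in range(1, hours + 1):
        reboot = reboot_every_hours and hour % reboot_every_hours == 0
        for i in range(n_devices):
            if reboot:
                infected[i] = False  # reboot wipes non-persistent malware
            if not infected[i] and rng.random() < p_infect_per_hour:
                infected[i] = True
    return sum(infected) / n_devices

# Without reboots, nearly everything is eventually infected; with hourly
# reboots, the infected fraction stays near the per-hour infection rate.
never = final_infected_fraction(reboot_every_hours=None)
hourly = final_infected_fraction(reboot_every_hours=1)
print(f"no reboots: {never:.1%}, hourly reboots: {hourly:.1%}")
```

Note the caveat from earlier in the thread: malware that persists in disk or USB firmware survives a reboot, so this only helps against RAM-resident infections.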
By the way, do you really want automatic updates to your disk drive firmware? Your USB stick firmware? I don't. They don't happen anyway, yet I've seen articles about how those get infected with malware.
A better approach would be to create simpler systems that are easier to verify and monitor, and harder to corrupt. Another approach is to use a microkernel architecture and a container-oriented system.
Finally, the NSA's backdoor strategy is now proven to be harmful. With all their tools, they should at least know who is behind it.
> Worse, the assault, which has never been reported before, was not spotted by some of the nation’s leading cybersecurity products, the top security engineers at its biggest tech companies, government intelligence analysts or the F.B.I., which remains consumed with the WannaCry attack.
> “The world is burning about WannaCry, but this is a nuclear bomb compared to WannaCry,” Mr. Ben-Oni said. “This is different. It’s a lot worse. It steals credentials. You can’t catch it, and it’s happening right under our noses.”
This attack and WannaCry use the same exploitation vector (EternalBlue). It seems that his company was targeted with a custom payload, which is definitely unfortunate, but that is not related to the exploit itself; it is just another form of custom code being used to perform further actions (instead of simply encrypting files as WannaCry did). This is probably even easier for an attacker now that there is a Metasploit module for MS17-010.
> The attack on IDT went a step further with another stolen N.S.A. cyberweapon, called DoublePulsar. The N.S.A. used DoublePulsar to penetrate computer systems without tripping security alarms. It allowed N.S.A. spies to inject their tools into the nerve center of a target’s computer system, called the kernel, which manages communications between a computer’s hardware and its software.
This is not a "step further", though. DoublePulsar is the implant injected by EternalBlue and was certainly used in WannaCry. I am not sure why they did not even take the time to verify this; even the WannaCry Wikipedia page states it (https://en.wikipedia.org/wiki/WannaCry_ransomware_attack). Again, this is the same exploitation vector and the same implant, but with a modified payload that, it seems, specifically targeted IDT.
> For his part, Mr. Ben-Oni said he had rolled out Microsoft’s patches as soon as they became available, but attackers still managed to get in through the IDT contractor’s home modem.
This tells me:
1. Even though machines internally were patched, a contractor was allowed to connect to the network with an unpatched machine.
2. If machines were internally patched, how would an infected contractor be able to do damage? I am not clear on this. They might be saying the network itself was not attacked, but rather that the attacker was able to log in with the legitimate employee's credentials and cause damage that way (in which case something is very wrong internally if this was possible).
I know it is not nice to victim-shame over security issues, and I am trying not to, but the story here seems phrased in a slightly disingenuous manner. It is essentially this: an IDT contractor with an unpatched machine and privileged network access was targeted using EternalBlue with a custom payload to steal their credentials. It worked. After that, it is unclear whether the stated network intrusion occurred because EternalBlue spread (which would not make sense if machines were patched) or because the contractor's credentials were used "legitimately" (which indicates poor access control and monitoring).
Why is the NYT using this term? It was invented by the NSA to redirect defense funding toward their mass-surveillance activity. Shouldn't journalists point out things like that?
Despite his much-lauded protection, I find it hard to believe that he's not running Snort, and the lack of actual specifics makes me believe this is really a piece for the mentioned Israeli security company with a "blackbox" IDS.
A cyberattack can easily be overcome using the steps below:
1) Don't connect your computer to the internet physically.
2) If you want to use the internet, use another PC which holds no credential data.
3) For banking, shopping online, etc., use a dedicated device which can't be reprogrammed.
Sacrifice a little convenience for safety. That is it.
Why paint a target on your back saying things like this?
There’s not enough time to pursue hundreds of new attackers every day, most of whom are not competent enough to be a serious threat. Presumably a big part of Ben-Oni’s job is to figure out which attackers can safely be ignored and which ones he needs to worry about tracking down.
What a time to be alive
Why do journalists even try to explain things like this? Do they ever get it right? Does it ever not just go over people's heads?
I was wondering if people go through an alternative site/app to serve the articles, bypassing the paywall, or if everyone's just in the same boat as me? Some other forums I'm on tend to make note of a paywalled link.
A little bit of taking responsibility, please? They could at least lead the charge to get this stuff dealt with now.
Noobs: follow @hackerfantastic
Suggesting that the USA get rid of the NSA is like saying "Crap, terrorists got hold of a nuclear weapon, let's unilaterally get rid of all of our weapons and hope for the best!"
That gives them 30 days to use 0-day exploits, but they can still be effective contributors to greater overall security.
Terrorists WILL do a lot of damage (as demonstrated) with these exploits... the NSA might too. The world, and in particular US interests, are far better served with secure systems all around.
That's cute. I wonder if he means Microsoft, people who use Microsoft products in safety-critical systems or maybe some nuke-capable nation state hiding behind tor, VPN, custom IoT botnet, another layer of tor and another VPN?