Live map: https://intel.malwaretech.com/WannaCrypt.html
Relevant MS security bulletin: https://technet.microsoft.com/en-us/library/security/ms17-01...
Edit: Analysis from Kaspersky Lab: https://securelist.com/blog/incidents/78351/wannacry-ransomw...
Wouldn't you want to hide a kill switch?
How is he able to add new supernodes to the cluster? I would expect a supernode to have some sort of credentials that are used for authentication. If not, isn't it possible to neutralize the botnet by overloading it with supernodes that don't send malicious commands?
So in some cases the only requirement for a node to be a supernode is that it can receive incoming connections. I take this to mean that any computer that is (1) infected with the botnet program and (2) able to receive incoming connections becomes a supernode. Under those circumstances there's no need to reverse engineer the botnet program: all you have to do is set up a vulnerable computer, allow it to be compromised so that it becomes a supernode, then monitor the traffic of incoming connections.
He later mentions that supernodes can be filtered based on "age, online time, latency, or trust." This tells me that certain botnets do have a level of trust that is defined in each peer list.
I believe your last question refers to the concept of sinkholing or blackholing. These methods have been used by the FBI to take down botnets through DNS hijacking, I think.
This would just discover supernodes though, right? Or does every node at some point broadcast itself as a supernode?
From what I understand the process is:
1. Write a program to pretend to be a compromised peer requesting a connection to a Supernode in order to obtain a peer list of other Supernodes.
2. Recursively crawl for existing Supernodes + the list of Supernode IPs. Store all addresses found.
3. Set up one or more Supernodes and 'infiltrate' the peer list of already established Supernodes. Log incoming connections from Workers.
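The crawl in steps 1-2 can be sketched as a breadth-first traversal. Everything here is hypothetical: the `get_peers` callback stands in for whatever wire protocol the real botnet speaks, which you'd only know after reverse engineering it.

```python
from collections import deque

def get_peer_list(supernode):
    """Pretend to be an infected peer and request this supernode's
    peer list. Hypothetical stand-in for the real wire protocol."""
    raise NotImplementedError("replace with the botnet's actual protocol")

def crawl(seed_supernodes, get_peers=get_peer_list):
    """Breadth-first crawl: ask each known supernode for its peer
    list and enqueue any supernodes we haven't seen yet."""
    seen = set(seed_supernodes)
    queue = deque(seed_supernodes)
    while queue:
        node = queue.popleft()
        try:
            peers = get_peers(node)
        except OSError:
            continue  # node unreachable or uncooperative; skip it
        for peer in peers:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen  # every supernode address discovered
```

Step 3 (running your own supernode and logging worker connections) is then a matter of answering that same protocol instead of speaking it.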
I'm just curious and would like someone with more experience to weigh in.
EDIT: To add to my question, I wonder why it doesn't use a terrain / city / province overlay instead of all black? It seems it would be much more useful to the network and sysadmins out there, just in case we realized "Oh, hey, that dot is right on top of where we work. I should probably fire up Wireshark or something and test for infected systems."
It uses a vulnerability in SMB, the protocol used for Windows file and printer sharing, and that's usually blocked at your router
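For reference, SMB listens on TCP 445 (with legacy NetBIOS on 137-139). On a Linux gateway, blocking it from the outside looks roughly like this; these are illustrative iptables rules only, not a complete firewall policy:

```shell
# Drop inbound SMB and NetBIOS from the WAN side (illustrative only;
# a real policy would scope these rules to the external interface).
iptables -A INPUT -p tcp --dport 445 -j DROP
iptables -A INPUT -p tcp --dport 139 -j DROP
iptables -A INPUT -p udp --dport 137:138 -j DROP
```

The worm still spreads freely between machines on the same LAN once a single host behind the router is infected, which is why the router-level block alone wasn't enough for large internal networks.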
... the lab with ties to Russian intelligence, who are suspected of leaking the NSA tools.
> "The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets."
It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network. Of course, as @mhogomchunu points out in his comment - is this the sort of thing where only one weak link is needed?
Still. Maybe this will help the proponents of keeping government systems updated? And/or, maybe this will prompt companies like MS to roll out security-only updates, to make it easier for sysadmins to keep their systems up-to-date...?
(presumably, a reason why these systems weren't updated is due to functionality concerns with updates...?)
This is secondhand information (so take it for what it's worth, there could be pieces I'm missing), but I talked with a startup that was focusing on this problem, and the issue was not quite the computers and servers that IT were using (although sometimes it was), it was that many medical devices (like CT scanners, pumps, etc) come shipped with old outdated versions of operating systems and libraries.
No big deal, right? Just make sure those are up to date too? Well, many times the support contracts for these medical devices are so strict that you can void the warranty by installing third-party software like an antivirus, or even by running Windows Update.
Even worse, many hospitals don't even know what devices they have -- it's easy for IT to know about laptops and computers, but when every single medical device more complicated than a stethoscope has a chip in it and may respond to calls on certain ports, it's a tougher picture to know.
The startup was https://www.virtalabs.com/ by the way, they really are doing some cool things to help with this.
Basically, the rule should be: if you are using general purpose consumer software, then you should be doing updates; if you are in an environment where updates are considered too risky, then running commodity software should also be considered too risky and you should be building very small locked down systems instead. Ideally without a direct internet connection (they can always connect through an updatable system that can't actually cause the medical device to malfunction, but can be reasonably protected against outgoing malware as well).
 I would be ok with some of these devices running a stripped down Linux (or NT) kernel, just not a full desktop OS. If you need a fancy UI, then that can be in an external (hopefully wired, not IoT) component that can be updated.
Moreover, the manufacturer has an obligation to assess every known bug of every SOUP (software of unknown provenance) and provide fixes if it can endanger the patient.
The issue is that to prove that a device is safe you have to run costly tests. For a device I have been working on, we do endurance tests on multiple systems to simulate 10 years of use. Even with intensive scenarios running on multiple systems, it can take a few months. And if we encounter a single crash we reset the counter and start again. So in the end the product is safe, but it is costly. This is why most of the time it is actually better to have the simplest possible stack on bare metal. But sometimes mistakes have been made, and you inherit a system full of SOUP, and that is a nightmare to maintain.
I actually expect a shitstorm on Monday morning. Luckily I am working more on the embedded side, so no Windows for me, but some other divisions will be affected.
Except that people don't want to learn a new GUI for every machine...
Except that people want to be able to use a tablet for the interface...
Except that people want to control things from their phone...
Here's the reality: The end user doesn't give one iota of DAMN about security. People want to control their pacemaker or insulin pump from their PHONE. Ay yai yai.
Even worse: can your security get in the way when someone is trying to save a life? That's going to send you to court.
Life critical systems should be small, fully open stack, fully audited, and mathematically proven to be correct.
Non-critical systems, secondary information reporting, and possibly even remote control interfaces for those systems should follow industry best practices and try to do their best to stay up to date and updated.
Most likely many modern pieces of medical technology have not been designed with this isolation between the core critical components that actually do the job and the commodity junk around them that provide convenience for humans.
which is what happens when your whole computing network is remote-killed
It's not like medical devices have an entertainment system like cars and airplanes.
This is all doable, but it adds a bit of BOM cost and changes the development model.
The long and short of it: don't use standard desktop Windows (or even standard embedded Windows), Linux, or macOS to run these devices.
If someone has a computer hooked to an MRI machine and to the hospital network, and it runs outdated/insecure software then someone made a mistake somewhere.
If you want a system to reach 100%, it can't rely on people not making mistakes. If all operating systems are supposed to be updated, then this has to be enforced in software; the software, for example, shouldn't accept traffic unless it's up to date.
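A toy sketch of that kind of enforcement, where a service refuses to talk until the host's patch level is recent enough. The threshold and the check itself are entirely made up for illustration; a real implementation would query the OS update subsystem:

```python
import datetime

MAX_PATCH_AGE_DAYS = 30  # hypothetical policy threshold

def patched_recently(last_patch, today=None):
    """True if the host was patched within the policy window."""
    today = today or datetime.date.today()
    return (today - last_patch).days <= MAX_PATCH_AGE_DAYS

def accept_connection(last_patch, today=None):
    """Refuse service outright on stale hosts, so the machine cannot
    participate on the network until someone updates it."""
    if not patched_recently(last_patch, today):
        raise RuntimeError("patch level too old; refusing traffic")
    return "accepted"
```

The point of the design is that falling behind on patches becomes an immediate, visible outage rather than a silent risk.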
It's certainly ridiculous if you don't keep it utterly sandboxed and limited to only required use.
Also ridiculous is anyone falling for - or being allowed to fall for - a mail based phishing attack anywhere in the organisation.
This is a failure of management to properly train their employees.
I could understand if 1 would be a violation, but perhaps, after today, the FDA could fast track manufacturer patches to run software loads on VMs?
I don't imagine 2 would solve current infrastructure issues any time soon given the size of investments in current equipment, but could it be a best practice going forward?
In 2006 this involved a nice virus that sent all your photos and emails off to people they were not intended to go to. There was a psychological aspect to what was going on with this payload, plus a full-spectrum-dominance aspect: the media were briefed with the cover story, but I don't think any journalists deliberately infected a machine to see for themselves.
At the same time that this was going on there were some computer virus problems in U.K. hospitals, those same Windows XP machines they have today. The Russian stock market was taken down around this time too.
I tried to put two and two together on this, but with the 'fog of war' you can't prove that correlation equals causation. The timing was uncanny though: a 'cyberstorm' exercise going on at the same time that BBC News on TV was showing NHS machines taken out by a virus.
So that was in 2006. A decade ago. If you found a hole in a hospital roof a decade ago they would have had ample opportunity to fix it. They had a good warning a decade ago, things could have changed but nothing did.
I had the pleasure of a backroom tour of a police station one night, don't ask why, luckily I was a 'compliant' person, no trouble at all, allowed to go home with no charges or anything at all. An almost pleasant experience of wrongful arrest, but still with the fingerprints taken - I think it is their guest book.
Every PC I saw was prehistoric. The PC for taking mugshots was early-1990s vintage, running Windows 95 or 98. I had it explained to me why things were so decrepit.
Imagine if, during the London riots of 2011, the police PC network had been taken down, with all of that police bureaucracy becoming unworkable?!? I believe that the police computers are effectively in the same position as the NHS, with PCs dedicated to one task, e.g. mugshots, and that a takedown of this infrastructure would just ruin everything for the police. Targeting the UK police, getting their computers compromised (with mugshots, fingerprints, whatever), and then asking the police to pay $$$ in bitcoin before they were locked out for good the next week: that would have made me chuckle with schadenfreude.
Anyone considering disclosure, responsible or not, should be aware of these types of secondary effects. Had these vulnerabilities hypothetically been discovered by a white hat or found their way to a leak-disseminating organization, the discoverers and gatekeepers should consider that not everything can be patched, and the ethical thing to do here would have been to notify Microsoft and wait for a significant product cycle to release technical details. I somehow doubt the Shadow Brokers had that aim, though. And it's saddening that even in the hypothetical case, many people would choose "yay transparency!" over a thoughtful enumeration of consequences.
We saw a fictional example of a scheme like this on Battlestar Galactica. Officers phoned and faxed orders around the ship, using simple devices that did not execute software. The CIC had its data punched in by radar operators, instead of networking with shipwide sensors. It was a lot of work, but it did keep working in the face of a sophisticated, combined malware/saboteur attack.
> security incidents
Also, the doctor's computer pretty much needs to interface with the system(s) that handles patient billing (and thus non-medical companies) and the system(s) that handle patient scheduling, reminders, etc.
Not really an issue in the NHS, apart from the occasional non-resident foreign national.
(The "fundholding" system does mean there's a certain amount of internal billing which the patient is never aware of, but the beating Bevanite heart of the free-at-point-of-use system is still in place)
A private practice where everything is paid by the patient in full by cash or CC could do without any integration with external systems (just run a standard cash register), but as soon as someone else is paying for it, you generally need to link the doctor's office systems to that in some way.
And yes, surely they should have super limited network features. The important word is "should."
Workstations absolutely should be patched with security updates. Running an intranet-wide update server is non-trivial, but is well within the reach of a competent sysadmin. And failing to do it is negligent.
In previous environments I've worked in that were "regulated", any change to the environment, such as a firmware upgrade, triggered an entire re-regulation process (testing, paperwork, etc).
In this specific case, there are mitigations available that do not require installation of software, but merely a configuration change. Also in this specific case, the people who run IT at NHS are completely incompetent, and this has been well-documented for several years.
In the general case, "I have a lot of machines" is an excuse provided by the unable to evade being held responsible by the uninformed.
The phone company used to have (they still might, I'm not in the business anymore) large labs that were small replications of their network. I've been in meetings where the goal was to decide if we should try to get our latest release through their process - if yes and we were successful they would pay big for the latest features, but if yes and it failed [I can't remember, I think we had to pay them for test time, but the contract was complex]. A lot of time was spent looking at every known bug to decide if it was important.
That's what you get when you defund critical services.
Again, the problem is that rolling out patches quickly often leads to unplanned problems that can't be easily detected or rolled back from. That can cause problems worse than leaving security issues unpatched.
The business of cybercrime is changing. With the growing popularity of ransomware, we should expect a gradual decrease in the time between a published remote vulnerability and your systems getting attacked. It may be useful to delay patches by a day to see if there aren't any glaring problems encountered by others - but it's not a reason to leave open holes that were patched in March. Frankly, there was no good reason why this attack hadn't happened a month ago; next time the gap may be much smaller.
Yes, there is a chance that installing a security update will break your systems. But there's also a chance that not installing a security update will break your systems, and that chance, frankly, is much higher.
Furthermore, "That can cause problems worse than leaving security issues unpatched" seems trivially untrue. Every horrible thing that might happen because of a patch broken in a weird way may also happen because of an unpatched security issue. Leaving security issues unpatched can take down all your systems and data, plus also expose confidential information. A MS patch, on the other hand, assuming that it's tested in any way whatsoever, won't do that - at most, it will take down some of your systems, which is bad, but not as bad as e.g. Spain's Telefonica is experiencing right now. What patch could have caused them even worse problems?
You just download the monthly rollup: http://www.catalog.update.microsoft.com/search.aspx?q=401221...
Any competent sysadmin will have these available on their internal update server and push updates+restart during off-peak hours.
There is such a thing as staged rollouts for this exact type of scenario.
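A staged rollout can be pictured as patching in progressively larger rings with a health check between them. A schematic sketch; the ring sizes and the health-check callback are invented for illustration:

```python
def staged_rollout(machines, ring_fractions=(0.01, 0.1, 1.0),
                   healthy=lambda patched: True):
    """Patch machines in progressively larger rings, halting the
    rollout if the already-patched population reports problems."""
    patched = []
    for frac in ring_fractions:
        target = int(len(machines) * frac)
        ring = [m for m in machines[:target] if m not in patched]
        patched.extend(ring)       # "apply the patch" to this ring
        if not healthy(patched):
            return patched, "halted"  # stop before wider damage
    return patched, "complete"
```

A bad patch caught by the canary ring affects 1% of the fleet instead of all of it, which is the whole argument for not pushing updates everywhere at once.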
Only non-critical machines can just automatically apply software patches from Redmond (or anybody). This is not laziness or incompetence: only a few weeks ago, military-grade exploits from the US government were leaked onto the internet, and they are currently being re-purposed for non-spying applications. Does anyone think any organisation is prepared for this? Chinese chatter indicates that the MS17-010 SMB patch doesn't even fix all cases! Many organisations will have been saved by infra guys making sure MS17-010 was rushed through and that McAfee sigs were updated 'just because'.
edit: fixed CVE (Eternalblue)
They should have formally validated software running on formally validated deterministic realtime hardware, running in non-networked environments (but with telemetry and remote control from networked computers if that's convenient). We just don't bother because it's cheaper, and legal, to get away with selling hacky nonsense.
Now the machine that you pull up the images on is most likely going to be a general purpose PC/Mac. You still need to patch that. Your IT dept needs to have patch cycles that deploy in sets, so all mission critical equipment can be tested before everything gets patched. It takes diligence, and planning. If you prepare at a very large hospital with two MRI machines, then a bad patch can leave you degraded, but not totally offline.
Yeah, not gonna happen.
As long as the chance of cyberattacks is larger than the chance of horrible patches, you simply accept the risk of horrible patches and install them anyway. Or keep the system totally isolated from everything, if it's that critical.
The IV drip machine is plugged into the wall, and is operated by buttons on the front.
Unfortunately, I think the active hours period cannot be set to more than twelve hours, which is less than the time required for some surgical interventions. I can almost imagine it: OK everyone, ten-minute break while Windows installs its updates, this guy who's been on life support for the last ten hours can wait a little longer.
That's why updates are not forced on business-grade installs, and forcing them would be a very, very stupid decision.
Forced updates make sense for home users, since Microsoft can't depend on someone requiring them to keep their networks secure. For other types of users, second-guessing update policies is always a bad idea.
If someone is going to die if a computer stops working for any reason at all, it should not be running Windows, or Linux, or macOS. It certainly shouldn't be connected to the internet or to any other network.
When we treat computers as nice-to-have mixed-use machines with all the bells and whistles, you need to treat them like nice-to-haves and not need-to-haves.
Surgeon workstations can absolutely be restarted once per month to install the monthly roll-up.
The article mentions patient records servers and receptionist computers being affected by the ransomware. Not life support equipment.
I was replying to the part about forcing updates. I didn't know about the group policy setting (rightfully pointed out by sp332); without it, you don't wait a month, you wait at most 12 hours :-).
And there have already been situations where updates have caused problems. Maybe not as severe as a full-on attack, but enough to potentially disrupt production and thus risk someone's job.
There was a Windows script file on the desktop, something like "UPS tracker.js", but it disappeared before I could grab it and a free space recovery didn't return it. (Possibly due to TRIM, it was on an SSD workstation.)
The problem with Windows is that software cannot be upgraded without stopping your workflow and rebooting.
With Linux distros you can upgrade packages in the background without even rebooting afterwards (I think that's because Linux lets you replace a file that is currently being executed, since the running process keeps the old inode open, while Windows locks executing files, but I'm not sure). You can even patch your kernel without a reboot.
In Windows you have to stare at an upgrade screen for an hour with no opportunity to do anything useful, and after that you have to reboot. That sucks.
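The Linux-side claim can be checked directly: a running process holds its executable's inode open, so the on-disk file can be atomically swapped without touching the process. The `/tmp/demo-sleep` path is just for this demo:

```shell
#!/bin/sh
# Copy a harmless long-running binary to a demo path and start it.
cp "$(command -v sleep)" /tmp/demo-sleep
/tmp/demo-sleep 300 &
PID=$!

# Atomically swap in a "new version": mv replaces the directory
# entry, while the old inode stays alive for the running process.
cp "$(command -v sleep)" /tmp/demo-sleep.new
mv /tmp/demo-sleep.new /tmp/demo-sleep

# The process started from the old file is still running.
kill -0 "$PID" && echo "still running after replacement"
kill "$PID"
```

Windows instead returns a sharing violation when you try to overwrite a mapped executable, which is why updates there queue file replacements for the next reboot.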
I do not believe outsourcing saves money. It only does so either by cutting quality of service, or in cases where the IT department was heavily mismanaged anyway. Bring in capable management and you don't need to outsource.
Special-purpose stuff can still be cheaper to outsource, though. If I need something to work next week and it would take my staff a month to get up to speed, I'd spend the money on outsourcing it.
This shows that no agency is immune from leaks and when these tools fall into the wrong hands the results are truly catastrophic.
That's been well known for a long time. During the Cold War a lot of Russian weapons were based on US designs. There is a TV series, The Americans, which shows how people are manipulated and secrets stolen. Even atomic bomb secrets were stolen (by Klaus Fuchs and others).
So I guess a lot of people in the military-industrial complex make a lot of money on these exploits, PRISM, and other projects. And they just don't care about society as a whole.
But if you phrase it to something like "Can the government be trusted with backdoors to protect us from terrorists and Chinese hackers", then suddenly public sentiment will change dramatically.
> Göring: Oh, that is all well and good, but, voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country.
Patriotism is both a wonderful and terrible thing, and it is made worse by fearing the "other". Any time people create a boogeyman (China, Mexico, Muslims, what have you), be on the lookout for what the true motivations are.
I find that hypothesis widely accepted, without much to support it.
Patriotism fuses core values like freedom or solidarity with a flag. That's why it is easier to pervert.
Patriotism tells people that because some others were born within the same borders as you, you should be proud of what they do, and you should help them first.
Patriotism distorts history.
> "Fourteen thousand years ago, Sweden was still covered by a thick ice cap." https://sweden.se/society/history-of-sweden/
Bullshit. Sweden didn't exist 14,000 years ago. All history is taught as if today's countries were an inevitable result, projected back thousands of years. World history, human history, gets displaced in order to build a national sentiment.
> "The colonial history of the United States covers the history of European settlements from the start of colonization until their incorporation into the United States of America"
Again, we get that feeling of predetermination. As if those people weren't free to choose their future; as if they weren't individuals but just a means to create a country.
Patriotism narrows the mindset of populations. I don't see the usefulness. Anything that people do out of patriotism would be better done for freedom, equality, fraternity, etc.
Why is patriotism a wonderful thing? What arguments am I missing?
Now we have more than enough resources to provide basics for all 10 billion of us (and decreasing) so patriotism has largely been confined to friendly rivalry around sports and regional cuisine. It was just a matter of mapping out the world's local customs and needs so the resources could be distributed intelligently.
And even at that, only about 4% of GWP goes to basic food, shelter, health, education, and cultural-ecological preservation these days. Entertainment and luxury goods make up the rest. This was unthinkable in the 2020s, but there was a lot of duplication of effort due to the maintenance of corporate moats in the basic sustenance industry at that time.
Sent from my iPhone 16S
Keep the temperature up, and it eventually leads to civil war, just like amped up patriotism / nationalism leads to wars between states.
This can be a wonderful and terrible thing.
But it's often patriotism that is credited with enabling things like the congressional Republicans of the Nixon era authorizing the special investigations that brought him down.
That's only one example - there are plenty of others where an individual puts the interest of the group ahead of themselves. That isn't always a bad thing: the alternative is the tyranny of the strong, where the strongest individual has the most say.
Patriotism always leads to "us" vs "them", it seems.
But the implications of it are not. Otherwise, no one (including heads of TLAs) could continue to claim that gov't backdoors are a good idea without being widely perceived as an idiot.
The ethical concern here is whether the NSA should have reported the holes to the manufacturers, and its failure to handle its privileged knowledge in a safe manner.
But every time they ask for there to be legally mandated backdoors - they need to be reminded of these incidents.
The NSA actively wants there to be "faults" like these. They just only want the "good" guys to have access to them.
(edit: my logic and phrasing was really bad)
I don't think they'd win; the ransomware authors and operators are the ones who perpetrated the act. The U.S. government probably wouldn't be found negligent since the software was stolen. NHS carries partial liability since it was negligent with its patching, according to industry-wide IT security standards.
Comparing it to firearms, I can be held partially liable for a wrongful death if I leave my Colt 1911 out on my porch; it's different if a burglar stole my gun safe and committed a crime.
(obligatory disclaimer that I am not a lawyer, I just play one on Hacker News)
I'd say the NHS is far more at fault than anyone else here.
FBI's (recently fired) James Comey has been asking for an encryption backdoor for the past 3 years:
At that time, he said unbreakable encryption should be illegal: http://www.newsweek.com/going-not-so-bright-fbi-director-jam...
2015 (asking for a backdoor): https://www.theguardian.com/technology/2015/jul/08/fbi-chief...
2016 (same): https://arstechnica.com/tech-policy/2016/03/fbi-is-asking-co...
2016 (tried to force apple to create a backdoor for the iphone): https://www.apple.com/customer-letter/
And then here recently, he's upped it to an international agreement to create a backdoor: https://www.techdirt.com/articles/20170327/10121437009/james...
He's not the first, only, or last person to ask for it.
A couple of bad business decisions and they are where Yahoo is today. So be smart about how you use these services, and educate the non-technical folks around you.
Since email in the modern world has this type of importance, what should I do? If you say gmail can't protect their data forever, do I not use gmail for email? What do I use then? No service will be free from data leakage, even an email server I run myself.
Distribute risk. Use multiple accounts. Don't handle all work/financial stuff on a single account. Keep work and personal accounts separate.
Reduce the number of hours you spend online being a data milch cow for these corps. This automatically reduces dependence. Don't allow messenger chat transcript backups to happen by just uninstalling the app every other night. Don't restore any saved transcripts on disk on reinstall.
I could go on and on but basic rule is use your imagination. Don't use these tools the way they want you to use them. Use them as you would use a tool in a workshed as an aid, not as a drug you are dependent on.
Sad that managing our own multi device services is so time consuming.
Deleting all my email would be a big cost to pay for a gain that I can't exactly quantify; I would have to figure out the likelihood of my data being leaked over time and the cost to me if the data was leaked. That isn't readily obvious what the risk factor is for me, but I KNOW the cost factor.
Personally, where I would point the finger squarely at Microsoft is in its recent attempts to conflate security and non-security updates. Plenty of people, including organisations who are well aware of what they're doing technically, have scaled down or outright stopped Windows updates since the GWX fiasco and other breaking changes over the past few years.
This also leads to silliness like the security-only monthly rollups for Windows 7 not being available via Windows Update itself for those who do update their own systems (not that this matters much if Windows Update was itself broken on your system by the previous updates and now runs too slowly to be of any use). Instead, if you don't want whatever other junk Microsoft feel like pushing this month, you have to manually download and install the update from Microsoft's catalog site. Even then, things like HTTPS and support for non-IE browsers took an eternity to arrive, and whether the article for the relevant KB on Microsoft's support site includes things like checksums to verify the files downloaded were unmodified seems to be entirely random.
I get that Microsoft would like everyone to use Windows 10, but for some of us that isn't an option or simply isn't desirable. We bought Windows 7 with Microsoft's assurance that it would be supported with security patches until 2020, so this sort of messing around is amateur hour, and they really should be called out on it a lot more strongly than they have been.
Also, does Windows 10 Pro attached to a domain controller still have the same aggressive updates? Or do domain admins dictate that policy?
At one company I worked at, everyone in IT could volunteer for the patch group to get security patches a few days before the rest of the machines. That seems to work pretty well. Is there any evidence there might have been a 0 day involved that wasn't patched? I find it disheartening that so many machines in large managed networks like telecos and hospitals could be so far behind on patches! (3 months is A LOT in Internet time).
If people are just doing really basic stuff like order entry for doctors/nurses, we really need to get away from the full PC model. Seems like most of these machines should just be Chromebooks, Linux boxes that boot straight to a browser or something of that nature instead of a full PC/Macs. Lower the attack surface with something that's easy to update. Those machines would be lower cost too and easier to manage/patch -- moving back to the terminal/thin-client model.
BMJ released a report just two days ago alleging that up to 90% of NHS trusts are still running XP.
> Many hospitals use proprietary software that runs on ancient operating systems. Barts Health NHS Trust’s computers attacked by ransomware in January ran Windows XP. Released in 2001, it is now obsolete, yet 90% of NHS trusts run this version of Windows.
Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.
Good developers are rare enough, but good IT security people and security-minded developers are even rarer. And it's rarer still that they decide to work in healthcare.
There just aren't enough of you to go around, and you can't be everywhere.
Even if you can afford to have a dedicated pentesting team (I'd like to work at a healthcare system/hospital network that did), physical security is still a major problem if only because it's very easy to impersonate people.
Military drones were using XP until they just had too much spyware on the machines to operate the drones.
While this is true, it doesn't address the point that you were responding to:
> this is an excellent example that we can all reference the next time someone says that governments should be allowed to have backdoors to encryption etc
...where "should be allowed to have" is interpreted as "should be given by software manufacturers".
The NSA has a specific mission to secure the nation's infrastructure. In withholding key information from US companies, it's failing that mission.
Edit: Apparently, not zero days. Vulnerabilities were patched months ago. I think the point still stands, which is that this outcome really has little to do with debate over encryption backdoors.
2nd Edit: On second thought, there is an argument that, if a backdoor were in place that only government agencies had access to, the means to access it could be leaked just as easily and in a similar manner to the way that information about these vulnerabilities was leaked. Then, we'd really be fucked since a backdoor could likely not be "fixed" with a simple patch (it might be fundamental to the design of a system). Considering this, I'll have to walk back my earlier statement and agree that the topic of backdoors is quite relevant here.
The exploits released by Wikileaks' Vault 7 dump went public months ago. Calling them 0-days is about as accurate as calling JFK's assassination breaking news.
The NSA leaks contained previously undisclosed security vulnerabilities that were patched only because they were stolen. In MSFT's case it was less than 30 days, and they basically skipped a patch week to make it happen.
It's manifestly obvious that 0day and 30day can both be considered extremely dangerous in the real world.
This issue is apparently based on a more recent leak by the Shadow Brokers, containing content from NSA and some other DoD elements who worked on offensive cyber operations.
Even if every off-the-shelf and open-source product had escrow unlocking keys compiled in, hackers would just find those code paths and remove them. Encryption works because of certain mathematical principles and laws.
Backdoors will only let governments look at legitimately encrypted data and not anything made by criminals who know how technology works.
There's a bigger question here: what if the NSA or CIA or some other intelligence/defence organisation discovers a solution to solve some of these hard problems in polynomial time .. and then doesn't release that information so they can use it to spy.
In that situation you're going even further: you have agents who are literally holding back scientific research that could change the entire field of mathematics and human understanding, research that could advance number theory by orders of magnitude (a jump equal to going from the first flight at Kitty Hawk to the Saturn V rocket), for limited political gain.
So "If encryption had a backdoor" is meaningless. It's really "If a given encryption implementation had a back door" and no one is making the criminals use certain algorithms.
Anyone who actually knows what they are doing, and is prepared to break the law, would just use AES. All of those law-abiding institutions would be forced to use a weak encryption scheme.
Sure, it might help stop script kiddies, but it won't help to stop professionals, and professionals are the ones that you have to worry about, since they end up hosing 45,000+ installations in a day.
This is the flaw in the logic. "Encryption" can't have a backdoor any more than math can have a back door.
Specific types of encryption can. But there's nothing to stop a malicious user from using a non-backdoored encryption algorithm or inventing their own.
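To make that point concrete, here is roughly how little it takes to roll your own encryption with no vendor in the loop: a toy one-time pad in a few lines of standard-library Python. (It is only secure if the key is truly random, at least as long as the message, and never reused; this is a sketch to show there is nowhere for a mandated backdoor to live, not production crypto.)

```python
import os

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # With a random, message-length, single-use key this is the
    # one-time pad: information-theoretically secure, pure math,
    # no product or vendor involved.
    assert len(key) >= len(message), "key must cover the whole message"
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"meet at dawn"
key = os.urandom(len(msg))          # one-time, random key
ciphertext = otp_encrypt(msg, key)
assert otp_decrypt(ciphertext, key) == msg
```

Any criminal who can copy twelve lines of code can have unbackdoorable encryption, which is the whole problem with legislating backdoors.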
Maybe one day, as a species, we'll learn not to create these kinds of devices.
(sorry if the message seems too exaggerated)
Revealing the vulnerability would place the US Govt at a distinct disadvantage.
Your point is actually valid, but that doesn't mean I intend to pardon the NSA for having compromised the network of my university, the same network I used every single day during my studies (and no, I am not a terrorist, nor do I know anyone involved in terrorism, child pornography, or whatever else they had in mind).
Sorry to say, but "everyone is doing it" is not an excuse or a reason for doing something.
If, instead of exploiting half of the world, they had dedicated their expertise to making their (and everyone else's) infrastructure safer, by sharing security-conscious design concepts and considerations with software developers and hardware manufacturers, we probably would not now have massive botnets, exploitations, and leaks (last but not least, the political consequences of perpetuating and sustaining these kinds of decisions).
At what point does maintaining one country's supremacy over the others through deceit, intrigue, and espionage cost too much in terms of negative outcomes?
For me, that line, for the US and many others, was crossed a long time ago. But that's just my humble opinion; everyone is free to draw their own conclusions from their own point of view.
(Not that it's your fault, it's somewhat germane to the overall issue of government, I'm just whining)
All he did to get infected was plug his laptop into the network at work (University of Dar es Salaam).
The laptop is next to me and my task this night is to try to remove this thing.
I would suggest that you and your father spend the evening reading up on backup practices, and reconsider the value proposition of open source software.
I hope I am not coming off as a smug jerk. My hope is that rather than becoming frustrated and demoralized after an evening of fruitless hacking, you and your uni will recover, and become resilient against future attacks.
I personally use Linux, and my GitHub repo is here, where I have a bunch of encryption-related projects (zuluCrypt, SiriKali and lxqt_wallet). The last Windows computer I used ran Windows XP.
I don't want to move him to Linux because I am not always around, and when he is on Windows he can ask other people for help.
My mother is in a similar situation. She is an elementary school teacher, and has little time for unrelated endeavors like this. What time she does have, is spent in the garden, as it should be.
Nevertheless, we are now seeing that the time-cost of closed source software, is greater than that of open-source software. My solution has been to prepare a KDE based distro for her, to work with her, side by side, whenever she needs to learn new tools. It is a good bonding experience, when both people can maintain a positive attitude about it.
The solution to the problem of malware, is education.
The solution to malware is obscurity. Have an OS that no one wants to break into, and you won't be broken into.
In the end, the software that we depend on, must be reviewable by anyone who is concerned about it. A prerequisite for that, is that software should be as small, clean, and simple as possible, to encourage such scrutiny. IIRC, the real problem with heartbleed, is that the OpenSSL codebase was a mess, and no-one wanted to work on it.
... and you'll have an OS for which neither malware authors nor legitimate software developers want to write applications.
There's a trade-off involved. We could all use pen and paper and be invulnerable to malware, but then how would we post on HN?
Certainly Windows has its issues, but its biggest 'flaw' when it comes to malware isn't that it's closed-source, but that it's ubiquitous and therefore a highly attractive target.
Heck, some of my relatives are good with an iPad for 90% of their online activities.
My thoughts on the matter: this is all a pointless waste of time and effort, or otherwise put, an arms race of exploits and bugs that will go on and on and produce nothing of value, except justifying military budgets in various governments.
If they truly were doing their jobs and being of benefit, we wouldn't have the corruption we do, the paedo rings, the drug cartels etc.
To be secure, you have to beat the smartest people on the planet, I would have thought, and unless you have a nation's resources, that's tricky. I'm not sure tightening laws is the answer either; it feels like human nature expressed in Internet terms.
One of the problems here, is that large organizations are reluctant to update software across a large population of computers. If those updates were smaller, more transparent, and could be separated based on whether they are a security fix, a new feature, or a new tool that allows a 3rd party to monitor user activity, then the sysadmins would be empowered to close security issues quickly, while introducing minimal risk.
% ssh -V
OpenSSH_7.4p1, LibreSSL 2.5.0
One of the reasons the infection rates are dropping off is that the malware had some kind of poorly implemented sandbox detection, where it would attempt to resolve a non-existent domain. However, now the domain has actually been registered by a researcher, so now every new infection thinks it's running in a sandbox.
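The check described above can be sketched in a few lines. The domain below is a made-up placeholder, not the malware's actual hardcoded string; `.invalid` is reserved (RFC 2606) and will never resolve on the real internet.

```python
import socket

# Placeholder standing in for the malware's long pseudo-random
# hardcoded kill-switch domain (hypothetical name, will never resolve).
KILL_SWITCH_DOMAIN = "iuqerf-example-killswitch.invalid"

def should_run() -> bool:
    """Crude sandbox check: analysis sandboxes often answer every DNS
    query, so if the domain resolves, assume we're being watched."""
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
    except socket.gaierror:
        return True   # no answer: looks like the real internet, detonate
    return False      # domain resolves: assume sandbox, bail out
```

Once a researcher registered the real domain, the lookup started succeeding everywhere, so every new infection concluded it was sandboxed and stopped, which is why the infection rate fell off.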
This is the work of someone who doesn't really know what they're doing, and they probably copied a large chunk of the code from somewhere else.
I'm intrigued because I've seen people claim that "linux is just as vulnerable as windows to user stupidity," but I have a hard time understanding how. The vast majority of windows infections occur because somebody got tricked into running an executable file.
On every Linux distro I've used, scripts and binaries need the executable bit set or be explicitly run through the desired shell. As far as I know, no browser sets the executable bit on downloads. To run scripts, you need to know what you're doing.
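A small sketch of that behavior on Linux (assumes a POSIX system with `/bin/sh`): a freshly written file, like a browser download, has no execute bit, so launching it directly fails until the user explicitly marks it executable.

```python
import os
import stat
import subprocess
import tempfile

# Write a small script the way a download would land: mode 0600, no +x.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("#!/bin/sh\necho pwned\n")
    path = f.name

# Executing it directly fails with EACCES: the kernel refuses to exec
# a file without the execute bit.
try:
    subprocess.run([path], check=True)
    ran_without_x = True
except PermissionError:
    ran_without_x = False

# Only after an explicit chmod +x does it actually run.
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
out = subprocess.check_output([path], text=True)
os.unlink(path)
```

That extra deliberate step is the barrier being discussed; as the sibling comment notes, piping a download straight into a shell sidesteps it entirely.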
curl http://... | sudo sh
Wipe the laptop and reinstall. It's more certain, and probably won't take much longer than trying to remove the malware. If the malware infects firmware or other subsystems below the OS, and thus won't be removed by a reinstall, buy a new laptop if that's an option.
So the first person on a network has to have fallen for the phishing attack, but once the malware is inside the network it can spread via the ETERNALBLUE exploit.
If you can get the right network packets to an unpatched machine, you can infect that machine.
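Not exploit code, but the reconnaissance half of that spreading step can be sketched: the worm looks for hosts with TCP 445 (SMB) reachable before attempting the exploit. The `port` parameter is generalized here purely so the sketch is testable.

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    This is only the scanning precondition: a worm probes the local
    subnet (and random addresses) for reachable SMB endpoints, then
    fires its exploit at the ones that answer. The exploit itself is
    deliberately not reproduced here.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Any unpatched machine for which this returns True on port 445 is a candidate victim, which is why blocking SMB at the network edge blunted the worm's spread.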