Hacker News
Victoria Police cancel hundreds of speeding fines after WannaCry virus attack (theage.com.au)
90 points by jaimex2 on June 23, 2017 | hide | past | favorite | 55 comments



No mention that this security hole has been kept secret by US government and leaked with devastating consequences around the world.

You can't imply that it's solely the customers' fault by saying that it's "the easiest thing to do - update operating system" without mentioning that the NSA didn't cooperate with Microsoft to patch the hole.

Many setups require certification that is voided by modifications, which may include system updates, so it's not "the easiest thing to do". It needs to go through the certification process again.

Even without certification requirements it may not be as trivial as it looks at first to "just update OS" on every device.

The "easiest thing to do" is for the government to report holes just like everybody else does.

Isn't it kind of strange that the only ones that fail to report security issues are a) criminals and b) government?


> No mention that this security hole has been kept secret by US government and leaked with devastating consequences around the world.

This is irrelevant in this case. The infection happened months after the security holes had been made public and patches had been released. The same could have happened if the US government had immediately reported the security holes instead of keeping them secret.

> Many setups require certification which is void after modifications which may include system updates so it's not "the easiest thing to do". It needs to go through the certification process again.

That seems like an utterly useless certification process then. Any certification worth its salt should refuse to certify a networked Windows computer that doesn't automatically install security updates, since things like this WILL happen.


It doesn't matter if it's utterly useless if it's legally required.


In this case it wasn't networked, operators infected the cameras using USB sneaker-net.


I don't see any mention of USB in the article. Can you link a source?


https://www.theguardian.com/australia-news/2017/jun/22/traff...

> The department of justice said Victoria’s infection was not the result of a targeted attack, but was caused by a contractor mistakenly connecting infected hardware to cameras.

Some sources are calling this a USB stick, the Guardian is being a bit more cautious.


Interesting, I always assumed the cameras transmitted the photos and data back to a source over mobile networks.

Technically this means that if someone gets snapped and is desperate enough (will lose their job as a result of losing their licence, etc.), they can turn back and destroy the evidence.


Now that you mention it, I recall reading about a local case where someone tried to take out a camera using dynamite.

This revealed that the actual image storage was in a hardened case buried in the ground some distance from the camera.

I suspect though that these are older installs. And that newer ones, particularly those that issue tickets based on time between two cameras, are networked.


Why is a comment like this always at the top of one of these HN threads? Does everyone here actually believe that the NSA shouldn't hoard vulnerabilities? I find it hard to believe that people here would collectively be that naive and simple-minded in their thinking. The NSA hoards vulnerabilities for the same reason the military has guns. Because other countries have guns too. This is too obvious a point to be lost on the readers here. So I'm left wondering who it is that's upvoting these things.


  The NSA hoards vulnerabilities for the same reason
  the military has guns.
Vulnerabilities are fundamentally unlike guns.

Because vulnerabilities can be independently discovered or accidentally released, then reproduced in vast quantities and used against the public and civilian infrastructure of both us and our allies - largely with impunity.

If wannacry was a gun, it'd be a gun that fired backwards and sideways at the same time as forwards, and you can't stop it firing once it's started, and sometimes it starts firing on its own.


... and sometimes it makes new guns in other places.


Correct me if I'm wrong but WannaCry used vulnerabilities that already had patches. How does reporting these vulnerabilities earlier instead of keeping them fix this situation? You'd still have the problem of slow updates regardless.


Reporting them would have triggered the normal process: Microsoft would have had time to work on patches, during which the bad guys wouldn't have been writing WannaCry.

Normally full disclosure happens after about 45 days (I'm not an expert, I don't know exactly), but in special cases the time is extended.

This would probably have been considered a special case, as Microsoft exceptionally released updates for unsupported, old versions of Windows, and the hole itself was critical.

Please note that WannaCry hit in mid-May - not that long ago.

The Shadow Brokers' public disclosure of the tools stolen from the NSA happened in April.


Microsoft pushed out fixes in March, so WannaCry occurred two months later.

There would have been even less time if this was indeed a security researcher using a 30-45 day time period.


If the bug were responsibly disclosed to Microsoft, there'd be no proof of concept in the wild, available for anyone to integrate into their ransomware.

Instead, intelligence agencies irresponsibly hold onto them. And so they get leaked at best, or at worst end up in the wrong hands.


It sounds as if your argument is a variant on 'security by obscurity', here hoping that malware creators don't reverse engineer bug fixes (they do).

Bug fixes are reverse engineered, so in your example the malware could have been created just as it was. And the patches had been out for months while the affected machines went unpatched, so again -- what difference would it have made?


Sometimes, a bit of obscurity will improve security. To get something like WannaCry to work from a security patch, you'd have to do the following:

  1. Analyze the update, determining what parts of the system it changes
  2. Analyze how the system behaved before the update (i.e. find the vulnerability)
  3. Find suitable parameters for the vulnerability to reliably work
  4. Build a proof of concept exploit
  5. Integrate it into your ransomware
Getting a working proof of concept from a leak saves you 4 out of 5 steps. If you are a financially motivated cyber criminal (and if you are distributing ransomware, you are), that can mean the difference between a waste of your time and a juicy return on investment.
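The first step above can be sketched in a few lines. This is purely illustrative (the byte patterns and merging heuristic are made up; real patch diffing works on disassembled code with dedicated tooling): diff the pre-patch and post-patch images to find what the update touched.

```python
# Hypothetical sketch of step 1: locating what a patch changed.
# Real-world patch diffing compares disassembled functions, not raw
# bytes; this toy version just reports byte ranges that differ
# between a pre-patch and a post-patch copy of the same binary.

def changed_regions(before: bytes, after: bytes, gap: int = 16):
    """Return (start, end) offsets where the two images differ,
    merging differences that are within `gap` bytes of each other."""
    diffs = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
    regions = []
    for i in diffs:
        if regions and i - regions[-1][1] <= gap:
            regions[-1][1] = i          # extend the current region
        else:
            regions.append([i, i])      # start a new region
    return [(s, e + 1) for s, e in regions]

# Invented example: the patch flips one conditional jump opcode.
old = b"\x90" * 64 + b"\x74\x10" + b"\x90" * 64   # jz  +0x10
new = b"\x90" * 64 + b"\x75\x10" + b"\x90" * 64   # jnz +0x10 (patched check)
print(changed_regions(old, new))                   # [(64, 65)]
```

Even this crude diff narrows the search from the whole binary to a handful of regions, which is why the remaining steps (understanding the pre-patch behaviour and weaponizing it) are the expensive part.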


Slippery slope though, tools like Metasploit are extremely important for security auditing and are generally regarded as a good thing for that reason, but your logic would apply to it as well.


Metasploit is a bit like a knife. You can use it to chop vegetables or stab people, and depending on who wields it, and in what circumstances, either of the outcomes is more likely.

I'm not arguing against the development of Metasploit though, and neither do I want to make an argument against vulnerability research. Every time Tavis Ormandy takes a shower, an AV vendor runs for cover; and on Christmas each year, Karsten Nohl cancels the vacations of some legacy system developers. That's a good thing, because those guys report their findings. They push vendors to fix the vulnerabilities, and they improve the security of systems we all depend on, every day.

Governments should do the same thing. I am all in favor of investing more in vulnerability research, but we need a process of disclosure. Stockpiling vulnerabilities puts everyone at risk, with little benefit.

Circling back to Metasploit: Yes, it makes work easier for cybercriminals. But even just the knowledge that a vulnerability will be available as a module quickly may be enough to make some vendors think twice about not reacting to a disclosure email, whether it's from Project Zero, independent researchers, or (hopefully more often) government CERTs.


Wait, so when it's your (side's) turn, you(r side) start(s) claiming "vulnerabilities can be independently discovered", but when it's my (side's) turn, your argument is "but there'd be no proof of concept"?

So are you arguing people going to discover these independently anyway, or not? Pick one and stick with it. You can't have it both ways...


> Does everyone here actually believe that the NSA shouldn't hoard vulnerabilities?

Nope! We know it's more complicated than that. But we think the NSA strikes the wrong balance between offense and defense. If you want to understand where people are coming from a little more you could start with some of Bruce Schneier's articles:

https://www.schneier.com/blog/archives/2016/08/the_nsa_is_ho...

https://www.schneier.com/blog/archives/2017/06/wannacry_and_...

... where, for example, Schneier proposes that the NSA should keep vulnerabilities for no more than six months, based on how little of a lead over its adversaries it actually appears to have.

The general point Schneier tends to make is: we are a huge fat defensive target with limited offensive targets. Vulnerabilities hurt you in proportion to how much infrastructure you run; we run a lot. Sitting on a vuln so you can use it a dozen times, while powering your society with billions of machines that have the same vuln, is a bad tradeoff.


>The NSA hoards vulnerabilities for the same reason the military has guns. Because other countries have guns too. This is too obvious a point to be lost on the readers here

Not remotely the same thing. The Army hoards guns in case we enter a war and need them. The NSA hoards vulnerabilities so it can spend more time using them on our allies, fellow citizens, and our enemies.


We use guns plenty on our allies and fellow citizens. We just call it 'police' when we do that.


I do get the impression a fair number of people actually believe the NSA shouldn't do this. Whether it's out of naiveté or wisdom is harder to tell, but I definitely don't get the impression everyone here is speaking from, say, 50 years of wisdom or experience in domestic and foreign policy or (cyber)warfare or what have you.


First mover advantage goes to those with already formed opinions. Or, shoot first, ask questions later.

It would be nice to see HN do some A/B testing. Divide viewers randomly into two bins: those who see posts in FIFO, vs those who see posts LIFO; see if there is correlation between which comments are upvoted.

Also, it may be that contentious comments push a thread comment rate up, more partisan opinions cause article upvoting too, collectively pushing the article up in rankings. Less emotive topics/replies let an article slide down into the bitbucket.
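The A/B split proposed above could be sketched like this (the viewer IDs, comment data, and 50/50 split are all invented for illustration): assign each viewer deterministically to a bin, then serve the same comments oldest-first or newest-first.

```python
# Hypothetical sketch of the proposed experiment: bucket viewers
# randomly (but deterministically per viewer) into FIFO vs LIFO,
# then serve the thread's comments in that order. Upvote patterns
# could then be compared between the two bins.
import random

comments = ["c1-oldest", "c2", "c3-newest"]  # posting order

def serve_comments(viewer_id: int, experiment_seed: str = "fifo-vs-lifo"):
    # Seeding with a string keeps the assignment stable per viewer
    # across page loads, without storing any per-viewer state.
    rng = random.Random(f"{experiment_seed}:{viewer_id}")
    bin_ = "FIFO" if rng.random() < 0.5 else "LIFO"
    ordered = comments if bin_ == "FIFO" else list(reversed(comments))
    return bin_, ordered

# Every viewer sees all comments; only the order differs.
for viewer in range(3):
    print(viewer, *serve_comments(viewer))
```

Since both bins see identical content, any systematic difference in which comments get upvoted would isolate the effect of ordering (i.e., first-mover advantage).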


Another analogy: the NSA has found a vaccine for the Zika virus, but hasn't released it to the public because, as of now, only they have weaponized the virus. If some rogue group manages to weaponize Zika, they release the vaccine. Though vaccinating the whole world would take a few years, the US could do so in a matter of months. It may cost millions of lives in poor countries and thousands in the first world, but that's just the cost of doing business.

Unfortunately, for all the three-letter agencies of the world, the analogy may not be far-fetched.


Bad analogy if you consider reality of Zika vaccine:

"Army is planning to grant exclusive rights to this potentially groundbreaking medicine–along with as much as $173 million in funding from the Department of Health and Human Services—to the French pharmaceutical corporation Sanofi Pasteur. "

https://www.thenation.com/article/the-government-created-thi...


A better analogy might be, the NSA knows that every other country in the world has weaponized a wide variety of viruses. The NSA has also weaponized and created vaccines for a wide variety of viruses. They know that they can't vaccinate the US population without tipping off the enemy to what they have. If they tip off the enemy to what they have, all their weaponized viruses become ineffective, and all of the sudden, the enemy is the only one with effective weaponized viruses.


> The NSA hoards vulnerabilities for the same reason the military has guns. Because other countries have guns too.

The key difference is that we can't make the enemy's guns ineffective by accumulating even more guns. The NSA could weaken the enemy's vulnerability stockpile by aggressively chasing vulnerabilities and releasing the information to vendors.


Sure, but as you well know, two distinct sets of actors working to find vulnerabilities are going to find some non-overlapping sets of vulnerabilities. We can't know which ones they've found. In fact, one good way to know what the enemy has is to have some of our own, so we can do some espionage. Which, of course, requires us to possess unpatched vulnerabilities.


Vulnerabilities go to the highest bidder. With our military budget being what it is, I'm pretty sure we could outspend anybody out there. If you start offering millions of dollars for serious vulnerabilities, people will turn them in, including employees of foreign intelligence agencies.


One possible argument: You can't get "mutually assured destruction" from vulnerabilities. With guns you can say if you invade here I'll shoot you, if you were to bomb me, I'd bomb you back. But with vulnerabilities you can't even say you have them as that would help the other party find them. You can't say unleash a cyber attack on me and I'll do the same back in the same way. It seems rather than being both an offence AND defence like guns, they are an offence at the expense of your defence.


The point is not to use them#, the point is that the opposition believes that you have them, they are effective and that you will use them. In that sense the Snowden leaks have been a powerful propaganda win for NSA offensive cyber: everyone knows they have real capability. Likewise we know that the Russians have offensive capability against civilians (DNC, Ukraine power grid, etc), a propaganda machine, and a counter-cyber team (Shadow Brokers). What we don't know is how good the Russian/* military cyber capability is, and how strong the defence would be.

Personally I think most defences are rubbish, it is MAD, and the financial implications would be dire.

It reminds me of a classic line from Spies Like Us: "A weapon unused is a useless weapon."

#Except against dissidents.


Why do so many commercial embedded devices use Windows, and generally old versions like NT or XP?

Do vendors get a kickback from Microsoft to use Windows in systems that don't even have a display?

Otherwise I don't see why they would license Windows instead of using a no-cost BSD or Linux derived OS.


I have an oscilloscope which you can regularly find on eBay at a fairly high price. It's an awesome scope that runs XP and will always run XP, because the vendor has moved on and no longer supports it. Given the age, it's unrealistic to expect that, had they used a BSD or Linux kernel, it wouldn't be vulnerable to any number of attacks. Thus, I would suggest the question isn't why manufacturers are using Windows but rather: how can we get to a point where vendors either support their products for the anticipated life of the device or allow end users to upgrade the kernel and related packages themselves?


I think a lot of it is programmer expertise. There's a lot of people out there who only use Windows and have never used a Unix and wouldn't know where to begin programming for it. Especially when you get into lowest-bidder and outsourcing situations, where the technology used isn't even a consideration when planning the project.


I have extensive experience in the embedded field and the answer is that most programmers and technicians know Windows and don't know Linux. Then of course there might be a few benefits to Windows in specific cases, but it's mostly the case of simply being familiar with it.


Concerning the display: I've often seen, at random public places, that when a technician tries to fix something on these devices, he or she will connect an external device with a display and touch screen (at least when Windows 8+ is used).

My guess is that maintenance of such systems is just much cheaper. Virtually everyone knows the Windows GUI.


Laziness. Even embedded linux things (routers, cameras, etc.) tend to have outdated packages and unnecessary binaries (netcat, etc.) left from the 'standard install'.

Lots of people only "do Windows" so that's the choice they make for the company.


And what about having a firewall on the devices? I could understand slow patching, crappy processes and all. But if the cameras are networked, why is there no firewall on them?


They might not be networked; could have been infected while an operator was collecting photos. But probably networked.


Every cloud has its silver lining.


Indeed, that's too funny for words.


I am completely failing to see how WannaCry could have affected the operations of those devices AND at the same time leave some data "good enough" to produce the fines.

I presume that essentially it is a photo-taking device that superimposes a date/time and detected speed (and maybe also OCRs the license plate).

More or less three or four pieces of data:

1) picture

2) date/time

3) speed

4) (maybe) OCR'ed license plate number

If any of these items were encrypted, it would be evident, and the following step (looking up the owner of the license plate number in a database) would have returned a null result.

The only way for this to actually work (creating a wrong fine) would be if ONLY the license plate number was encrypted by WannaCry AND the encrypted string matched another existing license plate number.
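The lookup argument above can be shown in a few lines. Everything here is invented for illustration (the registry contents, and base64 standing in for WannaCry's actual encryption): the point is only that a scrambled plate matches no registry entry, so no fine can be issued.

```python
# Toy model of the pipeline's final step: look up the owner of an
# OCR'd plate. base64 is just a stand-in for the ransomware's
# encryption; any transformation of the plate string has the same
# effect, namely a failed lookup rather than a wrong fine.
import base64

registry = {"ABC123": "J. Citizen", "XYZ789": "A. Driver"}  # made-up data

def lookup_owner(plate):
    return registry.get(plate)  # None -> no owner found, no fine

plate = "ABC123"
scrambled = base64.b64encode(plate.encode()).decode()  # "QUJDMTIz"

print(lookup_owner(plate))      # J. Citizen
print(lookup_owner(scrambled))  # None
```

So unless the scrambled string happened to collide with another valid plate, encryption of the plate field could only suppress fines, not misdirect them.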


I wonder if it has infected any critical military infrastructure in the US? And is there any real way to know about it?


Critical military infrastructure isn't connected to the Internet, so it's unlikely to have been hit as part of this ordinary epidemic.


Iran's nuclear fuel processing systems weren't connected to the internet either...


[serious] Wasn't the Internet basically invented to support critical military infrastructure? Or do they have their own parallel thing?


Not "parallel" but a few networks.

NIPR - Unclassified DoD internet plus powerful defensive capabilities when traffic goes between the intranet and public internet.

SIPR - Private "internet" for military and defense contractors and such to allow for handling SECRET and below. Dedicated circuits.

JWICS - Like SIPR but for TS and SCI channels.

Then there are other dedicated networks for international partnerships, which run on dedicated circuits and satellites.


The technology of the internet was definitely invented to support it (i.e resilient networks). The public internet that we use is separate though.


But it is connected to an internet, which might be just as good.


Oh no


"I cancelled the fines because I think it's important the pubic has 100 per cent confidence in the system" [emphasis added]

o_O


The Age is giving the Graudian a run for its money. There is almost nobody left working at Fairfax outside of management that was born last century.


saw that too... confused me at first



