I don't even get how this happens. Surely these things are not just plugged straight into a modem? There has got to be some kind of LAN involved. If there is, then there should at least be an edge firewall, or even a simple garden-variety gateway with NAT, which would already prevent all of those open ports from being accessible. So what gives? Are people deliberately hooking this gear up to the internet, exposing ports without taking security into consideration? That is patently insane.
It's something like the Idiocracy Effect. In Idiocracy, the present-day world's best minds devote all their energy to eliminating hair loss and prolonging erections, eventually leading to a collapsed society. The real world is not too far off. Our best minds are dedicated to improving ad targeting and keeping users in vanity-gratifying loops on social media, while our industrial control systems that literally run the modern world wallow in the 70s.
Everyone has some culpability in that. How many of us yearn to work for the local power utility (or one of their upstream vendors) rather than Facebook, Google, or the latest VC pump-and-dump scheme -- err, I mean, hot SV unicorn? Even if they did compensate competitively, are there enough hackers with a sense of [realism/duty/patriotism/INSERT_OTHER_SOCIAL_VALUE_HERE] to heed the call?
There's definitely a role for some type of regulation or standards here. Industrial controls should be considered vital infrastructure that requires serious and immediate investment. A brief visit to Shodan will show thousands of exposed industrial resources, and that should scare the pants off you.
It's definitely a tragedy of the commons, idiocracy style.
Lots of embedded work related to critical infrastructure is now outsourced to Eastern Europe or Asia, as the good people in this field are mostly graybeards who are deemed too expensive by management.
I actually would (or a vendor as you say) if it didn't mean earning 1/3 or less as much money. I mean, what better way to give back to the world than to apply one's hard-earned skills to improving the technologies (water, gas, power, manufacturing) that make modern society possible? It's like real life Ark or Infinifactory.
But the pay is just not viable, and there are often stricter credential requirements than in valley software engineering.
I walked out of my IA job with the boss screeching about how I was "making the biggest mistake of my life", how "nobody can match our prestige"... after a four-year pay freeze. Nearly a 20% pay rise right off the bat, and that was the initial offer.
The other thing about IA is there's no career progression for engineers unless someone higher-up quits (which usually triggers a deckchair shuffling or a hiring). If you're good at sales, you can progress into that -- otherwise your only option is likely to be management.
If you want to make cool things, IA is not the industry for you.
If you're happy making the same thing day-in day-out, just with a different coloured box? You'll fly.
If you can play politics and ruthlessly spike your colleagues, you'll get all the way to the boardroom.
Few of the incentives in that industry lead to good security. More details here: https://news.ycombinator.com/item?id=3260127
It’s the path of least resistance in action.
If you make it significantly more work to do something than not then people simply won’t do it.
As the difference between secure and insecure is invisible to management, it slides.
As an industry, we've all failed horribly at making secure the easy default option.
And that’s going to haunt all of us.
This. Combined with the fact that the handful of people who have the knowledge/experience to properly secure the systems are probably too busy/overworked to expend the time to improve the situation. Note that the time to improve the situation includes not only the actual time to implement the changes, but also the time to play politics and get funding/resources.
At least at the company I work at, management cared about security for about a month after Wannacry happened. Once all the PCs were patched for that specific issue, it was back to business as usual.
That's not true. There are tons of people who could set up VPNs or whatever for them that are better than what we see. The actual problem is that management doesn't care about security, since in most cases there are no financial consequences for ignoring it. So they don't try to secure the project. I've known a few engineers doing embedded systems who told me their customers almost always ignored warnings if the dollars required were anywhere above zero.
> Note that the time to improve the situation includes not only the actual time to implement the changes, but also the time to play politics and get funding/resources.
That's the problem. Explaining to the beancounters why this is needed and why Susan can't just RDP in from home anymore. The implementation of these things is fairly trivial; the hand-holding takes forever. Add the fact that lots of us got into IT to avoid playing these games and you've got the current state of affairs.
I work on a safety-critical piece of software. Giving QE more than a day or two to test it is a fight every single release. After many unsuccessful attempts I stopped tilting at windmills. We pulled about half of the software releases we made last year due to patient safety issues discovered by customers. Management still wants "agile".
Agile and safety critical is fine. It works.
"Agile" is a problem, safety-critical or no.
Nor should they, if they act rationally.
To be honest, I've started thinking that having a lot stronger regulation for punishing companies is the only way to improve the situation. GDPR is a good start in the EU, although it is privacy-focused.
The free-market solution to this is that users (be they consumer or B2B) start shunning the companies and products with the worst security records, but the problem is that security (and privacy, it seems) are so hard to grasp that it's just not happening. Companies exist to make money, and it's sad how many of them won't actually care about anything else. So if that is the only thing that they care about, the only option to make them care about security is to make insecurity a bad road business-wise.
This will never and can never happen. Security and sharing are at odds.
If I don't want connectivity, I don't even buy a connected controller as it is more expensive. So, the fact that a connected controller got bought means someone wanted connectivity.
Of course, the person who wanted connectivity didn't allocate any actual money to the person actually doing configuration, maintenance, auditing, updating, etc.
Only at this point do we have the fact that configuring a secured device is a pain. So, the default is to turn off all the security to get it up and running. After that, if someone can't tweak the software to turn the security back on in a couple of hours, it gets left off (remember: nobody actually allocated budget, so these are unbillable, wasted hours).
All of this together collides into a nice big pile of fail.
> This will never and can never happen. Security and sharing are at odds
I don't think the GP was suggesting that products be made so secure as to be functionally useless. There's a wide spectrum between completely open and completely locked down. But right now the gradient to go from open -> secure is so steep that most people aren't willing to put in the effort.
For example, have you ever tried using OpenSSL to create a self-signed certificate? The defaults are probably not what you want. Adding on stuff like the proper certificate extensions is even more work. Products should be designed so that it's not a herculean task to add security when you're ready to take that step.
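To illustrate how non-default this is, here's a minimal sketch (assuming OpenSSL 1.1.1 or later, and using `device.local` as a placeholder hostname): getting a self-signed certificate with a usable subjectAltName takes a deliberately non-default invocation.

```shell
# Self-signed cert with a subjectAltName in one step.
# -addext requires openssl >= 1.1.1; "device.local" is a placeholder,
# substitute your own hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout device.key -out device.crt -days 365 \
  -subj "/CN=device.local" \
  -addext "subjectAltName=DNS:device.local"

# Verify the extension actually made it into the certificate:
openssl x509 -in device.crt -noout -ext subjectAltName
```

Before `-addext` existed, the same thing required writing a throwaway config file just to add one extension, which is exactly the kind of friction being described.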
The way I understand it (and what my brief experience writing software for industrial use confirms), it's the usual: the people making purchase decisions have little to no clue, which, in the absence of regulations, allows the companies selling these products and services to not care, employ people without a clue, and/or purposefully trade off quality for sales points (think of how many pseudo-features you can add to a PLC once you hook it up to the cloud).
When asked, customers don't want to pay any more for secure remote access.
I wonder if there's room to use this software to provide direct feedback to the organizations and let them know without being prosecuted?
Shodan Monitor is to the Internet as Google Alerts is to the web. And the membership (one-time payment of $49 for a lifetime upgrade) lets you monitor up to 16 IPs.
Disclaimer: I'm the founder of Shodan.
If you look at the pattern of really nasty cyber attacks against infrastructure and industry, they usually are the other way around. Stuxnet was the US attacking Iran, Ukraine was attacked by Russia.
Some ISPs that should be named and shamed for allowing this to be going on:
SingleHop - a US based cloud provider
CariNet Inc from San Diego - another small cloud provider
M247 Europe - a Colo provider in Romania
The above have been found to be hosting one or more of the servers that do the actual Shodan scanning. The servers are named censusX.shodan.io, where X is a single digit.
I suggest that everyone annoyed with Shodan's activity email those service providers and tell them what they think about it.
Making people jump through hoops to get a port scan also seems to be part of the problem.
The network operator community is slowly progressing on building out the necessary infrastructure (both institutional and technical) to deal with DDoSes in a kind of automated way. (Initially BGP null route / blackhole propagation, now some extra computation - so just drop certain percentage of packets, that match this trivial bitmask, etc.)
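For the curious, the blackhole-propagation part of that infrastructure is usually remotely triggered black hole (RTBH) filtering: the victim network announces the attacked address tagged with the well-known BLACKHOLE community (65535:666, reserved by RFC 7999), and the upstream drops traffic to it at its edge. A hypothetical BIRD 2.x fragment (the IPs and ASNs below are documentation-range examples, not real peers) might look roughly like:

```
# Hypothetical sketch: announce a victim /32 tagged with the RFC 7999
# BLACKHOLE community so the upstream drops the traffic for us.
protocol static blackhole_routes {
    ipv4;
    route 203.0.113.7/32 blackhole;   # the address under attack (example IP)
}

protocol bgp upstream {
    local as 64512;                   # our ASN (example, private range)
    neighbor 192.0.2.1 as 64496;      # upstream's session address (example)
    ipv4 {
        export filter {
            if net = 203.0.113.7/32 then {
                bgp_community.add((65535, 666));  # BLACKHOLE
                accept;
            }
            reject;
        };
    };
}
```

The trade-off is that this completes the DoS against the one blackholed address; the newer, fancier approach the comment alludes to (dropping only packets matching a bitmask) is what BGP Flowspec-style filtering adds on top.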
All this is because people forget how easy it was to get your unpatched Win XP machine owned by Blaster/MyDoom. And it's not much harder today with IoT devices, countless unpatched WordPress, phpMyAdmin, Django, RoR sites.
Of course, I'm not particularly happy about this state of things, but I have no issue with scanning; I have a much bigger problem with real malware left unchecked and with opportunists being mistaken for real abusers.
There's enough 'internet background radiation' (script kiddies and viruses scanning the internet) that a heavier-weight scanner with a few agents to do the scanning, one that is somewhat clever about how it allocates IPs and ports to those agents, can disappear into the background noise.
And if you really want to spread out the load, it'd probably be depressingly easy to recreate the 'Internet Census 2012' / Carna botnet: https://en.wikipedia.org/wiki/Carna_botnet
NB that having something like Shodan is invaluable to defenders in identifying potential hosts for botnets.
Continuing my "trying door handles on cars or houses" analogy: imagine there is a service that has people walking from car to car and from house to house, covering whole cities. Then, once it compiled a database of which houses/flats/cars tend to be unlocked, it made a business out of selling access to that DB. Would you have no moral reservations about that?
Almost everyone is for openness in matters of software vulnerabilities so companies are forced to fix them, but most 0day researchers still give a heads-up to the companies, and shortly afterwards to mailing lists, about the vulnerabilities they discover before openly publishing all the details, including an example exploit. So openness has widely accepted limits. Making a business out of selling information about which third parties are vulnerable is way past those limits.
It won't force people to do anything, unless by forcing them you mean it will send attackers their way, they will get pwned, their systems will get screwed up, and they will learn their lesson the hard way. You may well be in favour of this kind of mass education of inept admins, but I'm not.
As for "whining" about morality: you know, humans invented it and use it when making decisions because it is useful in preventing conflict. What happens when companies don't give a fuck about the morality of their actions? People hurt by them use forceful means to "make things right" in their minds.
ICS networks are often designed to be 'air gapped'. All too often the air gap is broken via a vpn into the network so that someone can RDP to a windows scada machine (that doesn't receive updates because it can't reach the internet itself).
How about not connecting your ICSs directly to the Internet?
The other thing we've noticed is that people are putting the ICS devices on non-standard ports in an attempt to hide them from Internet crawlers. This means that there are people that know this is a bad idea and instead of putting it behind a VPN or something more secure they just decide to change ports and leave it at that.
I've never heard of Shodan, it seems like a valuable service and seems like you care. I'm not in the 'industrial control systems space', but am in an industry which is 'sensitive'.
The 'last line of defence' is often audit. Are you able to reach out to auditors (the Big 4) and regulators and educate them on this service? Auditors often have a financial background (CPA etc.), and it's rare to find one with a deep understanding of technology; MBA programs, which a lot of company heads might have taken, tend to be rooted in finance and lack much real information-technology content. I'm thinking this could be a business-development route for a valuable service; make it a win-win for them too.
In Poland, every couple months I see news that our government/military is supposedly creating some sort of "cyber force". If one day they actually create it, I hope this kind of stuff will be its focus.
They can act (and have acted) in an advisory capacity, but they have no regulatory authority to force companies to secure their shit.
Now whether such an organisation should exist that’s an interesting question I guess.
Over here in the UK we have a similar body.
As someone in the UK tech industry I’m not sure what they actually do on the ground.
Their main requirement was and is SIGINT. That drives about all their budget and power. Their secondary requirement was to protect the government and/or military (not sure) with communications security (COMSEC). They may have expanded that to computer security. They were also supposed to protect defense contractors since they were an extension of the military. That's why their most secure stuff is unavailable to average American but defense companies can buy it. Also, the penalties for failing to stop the next 9/11 are astounding compared to failing to prevent... (checks today's articles)... a Fortune 500 company from leaking 264GB in client, payment data.
So, in the general case, they deny us good stuff and weaken what we have wherever possible. Some tiny number of them in Information Assurance give us tools and guides to help us. The NSA can't be trusted to protect us. I do think the people in IA who gave us the best tools should be hired by whatever organization will protect us. :)
If China wants a model, the TCSEC is a decent start at one. It was made for military requirements, though. Like MLS. The next approach should focus on commercial needs. Also, both TCSEC and Common Criteria were paper heavy with long evaluations after product development was done. The next should focus on actual code with reviewers getting into the process early on, reviewing deliverable by deliverable, so they have better insight into what's going on with faster time to market. Lots of room for improvement over the current model.
Example of what industry was doing under TCSEC
Modern example from that lineage:
All talk, nothing's done.