State of Industrial Control Systems in Poland and Switzerland (medium.com/woj_ciech)
183 points by achillean on June 9, 2019 | 61 comments



This is mind boggling. Who installs these systems? Who maintains them? Surely this is supposed to be done by someone with at least a certain amount of clue? Enough clue not to hook up your insecure gear to the internet? No?

I don't even get how this happens. Surely these things are not just plugged in to a modem? There has got to be some kind of LAN involved. If there is, then there should at least be an edge firewall. Or even a simple garden variety gateway with NAT, which would already prevent all of those open ports from being accessible. So what gives? Are people deliberately hooking this gear up to the internet, deliberately exposing ports without taking security into consideration? That is patently insane.
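For what it's worth, the default-drop edge posture described above is only a handful of lines of firewall policy. A minimal, illustrative sketch in nftables (table name and comments are mine, not from any real deployment):

```nftables
table inet edge {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept   # allow replies to outbound traffic
        iif "lo" accept                       # allow loopback
        # deliberately no rule exposing PLC/HMI ports to the WAN side
    }
}
```

With a policy like this (or plain NAT without port forwards), none of those control-system ports would be reachable from the outside in the first place.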


How much do engineers specializing in these systems get paid? I interviewed at a couple of places that do industrial-style systems and in one case, I got an offer that clocked in far below the requested rate (not a negotiation tactic, they declined to revise it), and at another, the recruiter couldn't hang up fast enough when she heard salary requirements.

It's something like the Idiocracy Effect. In Idiocracy, the present-day world's best minds devote all their energy to eliminating hair loss and prolonging erections, eventually leading to a collapsed society. The real world is not too far off. Our best minds are dedicated to improving ad targeting and keeping users in vanity-gratifying loops on social media, while our industrial control systems that literally run the modern world wallow in the 70s.

Everyone has some culpability in that. How many of us yearn to work for the local power utility (or one of their upstream vendors) rather than Facebook, Google, or the latest VC pump-and-dump scheme -- err, I mean, hot SV unicorn? Even if they did compensate competitively, are there enough hackers with a sense of [realism/duty/patriotism/INSERT_OTHER_SOCIAL_VALUE_HERE] to heed the call?

There's definitely a role for some type of regulation or standards here. Industrial controls should be considered vital infrastructure that require serious and immediate investment. A brief visit to Shodan will show thousands of exposed industrial resources, and that should scare your pants off.


Don't know about the US market, but in the EU, salaries in embedded/automotive/industrial control are crap compared to frontend/app kiddies.

It's definitely a tragedy of the commons, idiocracy style.

Lots of embedded work related to critical infrastructure is now dumped on Eastern Europe or Asia, as the good people in this field are mostly graybeards who are deemed too expensive by management.


How many of us yearn to work for the local power utility

I actually would (or a vendor as you say) if it didn't mean earning 1/3 or less as much money. I mean, what better way to give back to the world than to apply one's hard-earned skills to improving the technologies (water, gas, power, manufacturing) that make modern society possible? It's like real life Ark or Infinifactory.

But the pay is just not viable, and there are often stricter credential requirements than in valley software engineering.


As someone who worked in this industry -- it's peanuts, but they'll tell you it's gold.

I walked out of my IA job with the boss screeching about how I was "making the biggest mistake of my life", how "nobody can match our prestige"... after a four-year pay freeze. Nearly a 20% pay rise right off the bat, and that was the initial offer.

The other thing about IA is there's no career progression for engineers unless someone higher up quits (which usually triggers a deckchair shuffle or a hiring round). If you're good at sales, you can progress into that -- otherwise your only option is likely to be management.

If you want to make cool things, IA is not the industry for you.

If you're happy making the same thing day-in day-out, just with a different coloured box? You'll fly.

If you can play politics and ruthlessly spike your colleagues, you'll get all the way to the boardroom.


May I ask how you got out of industrial automation and where you went after?


I used to work on those sorts of systems. I'm now making 3x as much at FAANG. I love working on systems where you own it top-to-bottom, but not going to turn down extra piles of cash every month.


It’s not deliberate in the sense they want people to be vulnerable.

It’s the path of least resistance in action.

If you make it significantly more work to do something than not then people simply won’t do it.

As the difference between secure and insecure is invisible to management, it slides.

As an industry we've all failed horribly at making secure the easy default option.

And that’s going to haunt all of us.


> It’s the path of least resistance in action

This. Combined with the fact that the handful of people who have the knowledge/experience to properly secure the systems are probably too busy/overworked to expend the time to improve the situation. Note that the time to improve the situation includes not only the actual time to implement the changes, but also the time to play politics and get funding/resources.

At least at the company I work at, management cared about security for about a month after Wannacry happened. Once all the PCs were patched for that specific issue, it was back to business as usual.


"This. Combined with the fact that the handful of people who have the knowledge/experience to properly secure the systems are probably too busy/overworked to expend the time to improve the situation"

That's not true. There are tons of people who could set up VPNs or whatever for them that would be better than what we see. The actual problem is that management doesn't care about security, because in most cases there are no financial damages for ignoring it. So they don't try to secure the project. I've known a few engineers doing embedded systems who told me their customers almost always ignored warnings if the dollars to be invested were anywhere above zero.


The above commenter said:

> Note that the time to improve the situation includes not only the actual time to implement the changes, but also the time to play politics and get funding/resources.

That's the problem. Explaining to the beancounters why this is needed and why Susan can't just RDP in from home anymore. The implementation of these things is fairly trivial; the hand-holding takes forever. Add the fact that lots of us got into IT to avoid playing these games and you've got the current state of affairs.

I work on a safety-critical piece of software. Getting QE more than a day or two to test it is a fight every single release. After many unsuccessful attempts I stopped tilting at windmills. We pulled about half of the software releases we made last year due to patient-safety issues discovered by customers. Management still wants "agile".


"Agile" and safety critical, that is terrifying to me.


Not sure if your comment was aimed at agile vs. "agile".

Agile and safety critical is fine. It works.

"Agile" is a problem, safety-critical or no.


It's common nowadays. The new edition of IEC 61508 that's being worked on is supposed to incorporate agile workflows.


> The actual problem is that management doesn't care about security, because in most cases there are no financial damages. So they don't try to secure the project.

Nor should they, if they act rationally.

To be honest, I've started thinking that having a lot stronger regulation for punishing companies is the only way to improve the situation. GDPR is a good start in the EU, although it is privacy-focused.

The free-market solution to this is that users (be they consumer or B2B) start shunning the companies and products with the worst security records, but the problem is that security (and privacy, it seems) is so hard to grasp that it's just not happening. Companies exist to make money, and it's sad how many of them won't actually care about anything else. So if that is the only thing they care about, the only way to make them care about security is to make insecurity bad for business.


It was painful to admit as much myself in a counterpoint I wrote to Bruce Schneier:

https://www.schneier.com/blog/archives/2011/06/yet_another_p...


> As an industry we've all failed horribly at making secure the easy default option.

This will never and can never happen. Security and sharing are at odds.

If I don't want connectivity, I don't even buy a connected controller as it is more expensive. So, the fact that a connected controller got bought means someone wanted connectivity.

Of course, the person who wanted connectivity didn't allocate any actual money to the person actually doing configuration, maintenance, auditing, updating, etc.

Only at this point do we get to the fact that configuring a secured device is a pain. So the default is to turn off all the security to get it up and running. After that, if someone can't tweak the software to turn the security back on in a couple of hours, it gets left off (remember -- nobody actually allocated budget, so these are unbillable, wasted hours).

All of this together collides into a nice big pile of fail.


> > As an industry we've all failed horribly at making secure the easy default option.

> This will never and can never happen. Security and sharing are at odds.

I don't think the GP was suggesting that products be made so secure as to be functionally useless. There's a wide spectrum between completely open and completely locked down. But right now the gradient to go from open -> secure is so steep that most people aren't willing to put in the effort.

For example, have you ever tried using OpenSSL to create a self-signed certificate? The defaults are probably not what you want. Adding on stuff like the proper certificate extensions is even more work. Products should be designed so that it's not a herculean task to add security when you're ready to take that step.
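To illustrate the point: with OpenSSL 1.1.1+, getting a self-signed cert with a proper subjectAltName means remembering a handful of non-default flags. A small Python sketch that assembles the invocation (the hostname and file paths are placeholders, not anything from the thread):

```python
# Sketch: build the openssl argv for a self-signed cert with a
# subjectAltName extension. OpenSSL 1.1.1+ supports -addext.
# Common name, SAN, and file paths below are illustrative placeholders.
def selfsigned_argv(cn, san_dns, key_out="key.pem", cert_out="cert.pem", days=365):
    return [
        "openssl", "req", "-x509",          # self-signed, not a CSR
        "-newkey", "rsa:2048",              # generate a fresh key pair
        "-keyout", key_out, "-out", cert_out,
        "-days", str(days),
        "-nodes",                           # don't encrypt the private key
        "-subj", f"/CN={cn}",               # skip the interactive prompts
        "-addext", f"subjectAltName=DNS:{san_dns}",  # the easy-to-forget part
    ]

argv = selfsigned_argv("plc-gw.local", "plc-gw.local")
# hand argv to subprocess.run(argv, check=True) on a machine with openssl installed
```

None of this is hard individually; the problem is that nothing about the defaults nudges you toward it.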


I used to work in the industrial controls industry. The systems are often designed by application engineers working for the industrial control equipment’s manufacturer or distributor. In the case of distributors, the engineering work is often provided for “free” and paid for with the markup the distributor applies over their cost to purchase the components direct from the manufacturer. Those same application engineers will be involved with helping to make the sale. If a customer asks “can I connect this to the Internet?”, any response other than “of course!” is liable to result in a talking to from the sales manager for that account.

Few of the incentives in that industry lead to good security. More details here: https://news.ycombinator.com/item?id=3260127


> Surely this is supposed to be done by someone with at least a certain amount of clue? Enough clue not to hook up your insecure gear to the internet? No?

The way I understand it (and what my brief experience in writing software for industrial use confirms), it's the usual: the people making purchase decision have little to no clue, which - in absence of regulations - allows companies selling these products and services to not care, employ people without clue, and/or purposefully trade off quality for sales points (think of how many pseudo-features you can add to a PLC once you hook it up to the cloud).


I don't know about ICS, but CCTV cameras, HVAC controls and parking gates are too often installed by technicians with little to no IT knowledge. They ask the local IT to open a port (i.e. port forwarding) for remote management or servicing without knowing the implications. When offered a VPN they often refuse -- they don't want it or don't know how to use it.


My (not vast) experience with this stuff is that it’ll often get plugged directly into an LTE-Ethernet modem. Or, depending on the vintage of the equipment, the chain looks more like LTE-Ethernet modem connected to a “serial port concentrator” with Ethernet on one side and a bunch of serial ports on the other side. Sometimes the installers will turn on IP Whitelisting on the inbound ports, but sometimes that gets skipped.


I can't say much about Switzerland, but on a slightly related note, in Poland we have automatic doors on trains that don't bother detecting whether someone is in the doorway before closing. I'm honestly surprised these haven't killed a child yet.


Security is not mentioned in the specification for the control system, so the company building it does nothing.

When asked, the customer does not want to pay any more for secure remote access.


SCADA and industrial stuff are absolutely terrible. There are plenty of places where relays will happily change state when you send alternating packets of all 0s and 1s via UDP. Everything from heating/cooling, building lights and alarm systems to industrial processes is wide open. This is definitely not limited to certain countries.


Organizations using ICS equipment could use this tool to find their own systems that are accessible from the internet. However, companies responsible enough to perform checks like these hopefully already have procedures in place to prevent issues like this.

I wonder if there's room to use this software to provide direct feedback to the organizations and let them know without being prosecuted?
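A first-pass self-check doesn't even need a third-party service: a few lines of stdlib Python can probe an address you own for ports that answer. The port list below is just an illustrative sample of common ICS and remote-access defaults, and this should only ever be pointed at your own ranges:

```python
import socket

# Minimal self-audit sketch: try to connect to each port from *outside* the
# plant network. A completed connection means the port is reachable.
# The port numbers are a sample of common ICS/remote-access defaults.
ICS_PORTS = [502, 102, 44818, 3389]  # Modbus, S7comm, EtherNet/IP, RDP

def reachable(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

def audit(host):
    return {port: reachable(host, port) for port in ICS_PORTS}
```

Anything that comes back True from an external vantage point is exactly what Shodan would eventually find.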


Shodan actually has a service that will notify you when it discovers a public industrial control system:

https://monitor.shodan.io

Shodan Monitor is to the Internet what Google Alerts is to the web. And the membership (one-time payment of $49 for a lifetime upgrade) lets you monitor up to 16 IPs.

Disclaimer: I'm the founder of Shodan.


Have you noticed any significant change as part of your work with Shodan? If you contact them, do the organizations even do fixes at a steady rate? What's the situation?


We've had great success working through other CERTs and enterprise customers that have existing relationships with the affected organizations. Reaching out ourselves has been a mixed bag. We have more success working directly with vendors and trying to make sure that, moving forward, devices are properly configured -- and letting them know who's impacted so they can follow up as part of their regular support services.


I love how humble/modest your profile is ^_^


I know someone who works in cybersecurity for an oil company; he uses Shodan to double-check whether they have exposed anything to the big scary internet.


It depends on Polish law, and how likely they would attempt to prosecute you for a polite letter mailed to them.


This illustrates something I'm worried about. Cyber is a battle space, and the extreme vulnerability of some countries negates some of the traditional strategic advantages that superpowers have had. That will rebalance in time. But I worry that for an up-and-coming power with a kick-ass cyberwarfare operation, there is no better time to start a war than right now.


One saving grace is that if a cyber attack were bad enough, it would likely result in retaliation in the physical space, provided attribution could be proven. Superpowers generally have armed forces far superior to asymmetric attackers and would be able to inflict punitive damage far beyond the cost of the initial cyber attack. So there is some deterrence against the worst attacks, e.g. knocking out a power grid.

If you look at the pattern of really nasty cyber attacks against infrastructure and industry, they usually go the other way around: Stuxnet was the US attacking Iran; Ukraine was attacked by Russia.


This whole submission looks like an ad for Shodan. For those who don't know: Shodan is basically a search engine on top of a DB created by mass port scanning. If that sounds shoddy as fuck to you, you'd be right. They basically managed to find a few ISPs that disregard the hundreds or possibly thousands of abuse notifications they must be receiving, and they are monetising their find.

No doubt someone will reply "but port scanning is not illegal". Well, walking from car to car in a supermarket car park and trying door handles to see if any are open is also not illegal, but don't be surprised if you get the security guy's baton treatment when you're spotted doing it. My point is: it is not illegal, but it is also not acceptable. Don't believe me? Try to do a mass port scan on any normal ISP's connection. You'll be getting a phone call or a letter in the post telling you to stop, or they'll disconnect you. Same with AWS, Azure, Rackspace and any other reputable cloud provider.

"Oh, but we provide a much needed service to companies that need to be notified if any unsecured devices pop up on their network", they'll say. My answer to this is that there are hundreds if not thousands of white-hat scanning companies that will happily provide you with a scanning service if you prove you own the range. It is only Shodan that will preemptively scan everyone and then let people search their DB. This is basically equivalent to a script kiddie running nmap on 0.0.0.0/0. Seriously, this is not OK.

Some ISPs that should be named and shamed for allowing this to go on:

SingleHop - a US-based cloud provider
CariNet Inc from San Diego - another small cloud provider
M247 Europe - a colo provider in Romania

The above have been found to be hosting one or more of the servers that do the actual Shodan scanning. The servers are named censusX.shodan.io, where X is a single digit.

I suggest that everyone annoyed with Shodan's activity emails those service providers and tells them what they think about it.


People freaking out about port scans are pretty much part of the problem, not the solution. They're the same people who block ICMP, breaking path-MTU discovery, etc.

Making people jump through hoops to get a port scan also seems to be part of the problem.

The network operator community is slowly building out the infrastructure (both institutional and technical) needed to deal with DDoSes in a semi-automated way: initially BGP null-route/blackhole propagation, now with some extra computation -- e.g. drop a certain percentage of packets matching a trivial bitmask.

All this is because people forget how easy it was to get your unpatched Win XP machine owned by Blaster/MyDoom. And it's not much harder today with IoT devices, countless unpatched WordPress, phpMyAdmin, Django, RoR sites.

Of course, I'm not particularly happy about this state of things, but I have no issue with scanning. I have much more of a problem with real malware left unchecked and with opportunists being mistaken for real abusers.


Shodan is far from the only 'player' in that space. And the scans aren't necessarily as noisy as you think.

There's enough 'internet background radiation' -- script kiddies and viruses scanning the internet -- that a heavier-weight scanner with a few agents doing the scanning, one that is somewhat clever about how it allocates IPs and ports to those agents, can disappear into the background noise.

And if you really want to spread out the load, it'd probably be depressingly easy to recreate the 'Internet Census 2012' / Carna botnet: https://en.wikipedia.org/wiki/Carna_botnet

NB that having something like Shodan is invaluable to defenders in identifying potential hosts for botnets.


Hiding our heads in the sand only means that the vulnerabilities will not be fixed and that only especially crafty attackers (i.e. the most dangerous ones) can exploit them. We need more openness, not security based on someone's feelings of morality.


How does Shodan promote openness? Anyone who wants to scan their own network can already do so with paid-for and free tools. What this service does is scan your network whether you want it or not, then let others search for vulnerable hosts. It's a script kiddie's wet dream come true.

Continuing my "trying door handles on cars or houses" analogy: imagine a service that has people walking from car to car and house to house, covering whole cities. Then, once it has compiled a database of which houses/flats/cars tend to be unlocked, it makes a business out of selling access to that DB. Would you have no moral reservations about that?

Almost everyone is for openness in matters of software vulnerabilities so that companies are forced to fix them, but most 0-day researchers still give a heads-up to the companies, and shortly after to mailing lists, before openly publishing all the details, including an example exploit. So openness has widely accepted limits. Making a business out of selling information about vulnerable third parties is way past those limits.


If it forces people to invest in securing their systems, it will have already done more good than whining about morality.


>If it forces people to invest in securing their systems, it will have already done more good than whining about morality

It won't force people to do anything -- unless by forcing them you mean it will send attackers their way, they'll get pwned, their systems screwed, and they'll learn their lesson the hard way. You may well be in favour of this kind of mass education of inept admins, but I'm not.

As for "whining" about morality: you know, humans invented it and use it when making decisions because it is useful in preventing conflict. What happens when companies don't give a fuck about the morality of their actions? The people hurt by them use forceful means to "make things right" in their minds.


Industrial control systems are experiencing the growing pains of letting go of older technology that was designed prior to security being much of a concern.

ICS networks are often designed to be 'air-gapped'. All too often the air gap is broken via a VPN into the network so that someone can RDP to a Windows SCADA machine (which doesn't receive updates because it can't reach the internet itself).


Ugh. None of the industrial protocols like BACnet or Modbus have any built-in security, so this looks pretty bad.
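To make that concrete: a complete Modbus/TCP read request is twelve bytes, and not one of them carries a credential. A sketch of the framing, using the stdlib (register address and count here are arbitrary example values):

```python
import struct

# A full Modbus/TCP "read holding registers" request. Note what's absent:
# no session, no password, no signature -- anyone who can reach port 502
# can issue this. The address/count values are arbitrary examples.
def read_holding_registers(transaction_id, unit_id, start_addr, count):
    return struct.pack(
        ">HHHBBHH",
        transaction_id,  # MBAP header: echoed back by the device
        0,               # MBAP header: protocol id, always 0 for Modbus
        6,               # MBAP header: byte count of the remaining fields
        unit_id,         # target device on the bus
        0x03,            # function code: read holding registers
        start_addr,      # first register to read
        count,           # how many registers
    )

frame = read_holding_registers(1, 1, 0, 10)
assert len(frame) == 12  # the entire "security model" fits in 12 bytes
```

Whatever security exists has to come from the network around the device, which is exactly why exposure to the internet is so damaging.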


Agreed. These protocols are difficult to secure. However, it shouldn't be difficult to isolate devices from the internet. Isolation doesn't protect against inside attackers, or against an external user causing trouble after getting into the network, but it should be obvious these devices shouldn't have access to the internet.


Right. Even internally they should ideally be on their own subnet.
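Even a basic segmentation audit is cheap. Python's stdlib `ipaddress` module can flag devices that have drifted outside a dedicated control subnet (the subnet and the inventory addresses below are made up for illustration):

```python
import ipaddress

# Sketch: verify that known ICS device addresses actually live inside the
# dedicated control-network subnet. Subnet and inventory are illustrative.
CONTROL_NET = ipaddress.ip_network("10.20.0.0/24")

def misplaced(devices):
    """Return the device IPs that fall outside the control subnet."""
    return [ip for ip in devices
            if ipaddress.ip_address(ip) not in CONTROL_NET]

inventory = ["10.20.0.5", "10.20.0.17", "192.168.1.44"]  # made-up inventory
print(misplaced(inventory))  # the 192.168.1.44 device needs investigating
```

Running something like this against an asset inventory is the kind of low-effort check that catches the "someone plugged the PLC into the office LAN" class of mistake.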


“Everyone knows how fragile these systems are”

How about not connecting your ICSs directly to the Internet?


The number of ICS directly connected to the Internet has grown 10% every year since we started tracking them at Shodan (https://exposure.shodan.io), so even worse, this is an increasing problem. This is a known issue in the security industry and has been for a while, but fixing it is a hard problem.

The other thing we've noticed is that people are putting the ICS devices on non-standard ports in an attempt to hide them from Internet crawlers. This means that there are people that know this is a bad idea and instead of putting it behind a VPN or something more secure they just decide to change ports and leave it at that.


> This is a known issue in the security industry and has been for a while but fixing it is a hard problem.

I've never heard of Shodan, it seems like a valuable service and seems like you care. I'm not in the 'industrial control systems space', but am in an industry which is 'sensitive'.

The 'last line of defence' is often audit. Are you able to reach out to auditors (Big 4) and regulators and educate them on this service? Auditors often have a financial background (CPA etc.), and it's rare to find one with a deep understanding of technology; MBA programs, which a lot of company heads have been through, also tend to lack anything technically deep on information technology -- they're basically finance-rooted. I'm thinking this could be a business-development route for a valuable service; make it a win-win for them too.


Isn't it the role of State Security/Defense Services to conduct these sort of scans, and notify companies of their vulnerability?


No. In North America, we have cybersecurity compliance auditing for large power plants and other bulk electrical system facilities done under the auspices of NERC.

https://www.nerc.com/Pages/default.aspx


It's also very difficult to identify the end-user of these control systems:

https://blog.shodan.io/taking-things-offline-is-hard/


You can notify a bunch of agencies in Spain, but AFAIK they have no way to enforce anything. Just what I heard from a friend who works in netsec, not that I have any direct knowledge.


It is not. What made you think it was?


People keep saying here and elsewhere that this is what the NSA was supposed to be about in the US.

In Poland, every couple months I see news that our government/military is supposedly creating some sort of "cyber force". If one day they actually create it, I hope this kind of stuff will be its focus.


The NSA has a dual role: break other people's stuff and secure the US gov's stuff (in theory).

They can (and have) acted in an advisory capacity, but they have no regulatory authority to force companies to secure their shit.

Now whether such an organisation should exist that’s an interesting question I guess.

Over here in the UK we have a similar body.

https://en.m.wikipedia.org/wiki/National_Cyber_Security_Cent...

As someone in the UK tech industry I’m not sure what they actually do on the ground.


One thing those are good for is being a call center for white-hat hackers. I.e. if you find some holes, you can report them to those agencies and they'll take it from there. I know that's what people who speak about their findings at CCC do.


It's a myth:

https://news.ycombinator.com/item?id=17216853

Their main requirement was and is SIGINT. That drives just about all their budget and power. Their secondary requirement was to protect the government and/or military (not sure which) with communications security (COMSEC); they may have expanded that to computer security. They were also supposed to protect defense contractors, since those are an extension of the military. That's why their most secure stuff is unavailable to the average American but defense companies can buy it. Also, the penalties for failing to stop the next 9/11 are astounding compared to failing to prevent... (checks today's articles)... a Fortune 500 company from leaking 264GB of client and payment data.

So, they deny us good stuff and weaken what we have wherever possible in general case. Some tiny number of them in Information Assurance give us tools and guides to help us. NSA can't be trusted to protect us. I do think the people in IA who gave us the best tools should be hired by the organization that will protect us. :)


I have started thinking this is a major systemic weakness the US has vs. China. Companies in America operate more or less as individual entities, vs. the top-down model in China. Every company I have worked with in China had a group of government agents around; it just seems to be standard operating procedure there. Maybe they weren't there for day-to-day operations, but they were definitely around whenever Americans were. It's apparent that China has vast cyber and intel efforts intertwined with its major corporations. Contrast this with our model: I don't even know how to alert the US government if I see something suspicious related to cybersecurity.


It's both a strength and a weakness. For innovation, the U.S. was among the strongest in the world during the Strategic Computing Initiative, when DARPA funded all kinds of industry work that led to many of today's innovations. The weakness comes in when companies care only about profit (security is a cost center), short-term gains tied to executives' bonuses, and so on. That's where state involvement can help. We did have that under the TCSEC, with the DOD making security standards, incentivizing the private sector to build to them, and evaluating their security. Multiple agencies also offer security advice and testing. The middle ground looks to be regulations ensuring the basics are in place, on top of continual improvement.

If China wants a model, the TCSEC is a decent start. It was made for military requirements, though, like MLS; the next approach should focus on commercial needs. Also, both TCSEC and Common Criteria were paper-heavy, with long evaluations after product development was done. The next one should focus on actual code, with reviewers getting into the process early and reviewing deliverable by deliverable, so they have better insight into what's going on and faster time to market. There's lots of room for improvement over the current model.

TCSEC

https://en.wikipedia.org/wiki/Trusted_Computer_System_Evalua...

Example of what industry was doing under TCSEC

https://csrc.nist.gov/csrc/media/publications/conference-pap...

Modern example from that lineage:

https://os.inf.tu-dresden.de/papers_ps/nizza.pdf


> In Poland, every couple months I see news that our government/military is supposedly creating some sort of "cyber force".

All talk, nothing's done.


It should be, but usually they’re trying to accomplish the opposite (to have exploits everywhere)



