Hacker News
Hacker increased chemical level at Oldsmar's city water system, sheriff says (wtsp.com)
579 points by bschne 69 days ago | 290 comments

Internet-accessible industrial control systems have been a problem for many years now. It's a documented issue, but it's difficult to fix for a variety of reasons:

1. Difficult to identify the owner: a lot of the devices are on mobile networks that don't point to an obvious owner.

2. Unknown criticality: is it a demo system or something used in production?

3. Security budget: lots of smaller utilities don't have a budget for buying cyber security products.

4. Uneducated vendor: sometimes the vendors of the device give very bad advice (https://blog.shodan.io/why-control-systems-are-on-the-intern...)

That being said, based on the numbers in Shodan the situation has improved over the past decade. And there's been a large resurgence of startups in the ICS space (ex https://www.dragos.com, https://www.gravwell.io). Here's a current view of exposed industrial devices on the Internet:


I've written and presented on the issue a few times:




"Internet-accessible industrial control systems have been a problem for many years now ..."

They are a problem the way drunk driving is a problem.

You just don't ever do it. Ever.

No cyber security products are needed. No budget required.

These "startups in the ICS space" are like turbotax/HRBlock: only continued idiocy allows their business model to exist.

In a perfect world, maybe there would be unlimited budgets for small rural water districts to have 24/7 onsite staff and run highly secured networks.

I regularly work with these sorts of water districts (larger, better funded ones as well). In reality, some of these small districts may only have 2 or 3 SCADA operators on staff. Sending them home with a pager, a tablet, a VPN password, and some overtime pay is a lot easier to get past the city council than taking on another two employees to cover the night shift for those rare events that need to be handled ASAP.

I could share some real horror stories, but it wouldn't be professionally appropriate. Suffice it to say, this story did not surprise me at all.

"In a perfect world, maybe there would be unlimited budgets for small rural water districts to have 24/7 onsite staff and run highly secured networks."

I reject this line of thought.

A small rural water district can run with looser tolerances and looser guarantees - and have done so for decades.

They should spend half the time (and a quarter of the money) setting up systems that fail safely and revert to known states and operate with looser tolerances.
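To make "fail safely and revert to known states" concrete, here's a toy sketch in Python. The safe band and default value are invented for illustration; a real plant would burn this into the PLC logic or a hard-wired interlock, not a script:

```python
# Toy fail-safe setpoint guard: any commanded value outside hard-wired
# bounds is rejected and the last known-good value is kept instead.
# SAFE_RANGE and KNOWN_GOOD_DEFAULT are made-up, plant-specific numbers.

SAFE_RANGE = (90.0, 110.0)     # permissible dosing band, illustration only
KNOWN_GOOD_DEFAULT = 100.0

def guarded_setpoint(requested: float, last_good: float = KNOWN_GOOD_DEFAULT) -> float:
    """Accept the requested setpoint only if it is inside the safe band;
    otherwise revert to the last known-good value."""
    lo, hi = SAFE_RANGE
    if lo <= requested <= hi:
        return requested
    return last_good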

As for telemetry ...

I am not joking at all when I say that a green light on the building that turns red and everyone in the county knows to call either Jed or Billy if that light is red is a completely reasonable system. It's a small rural water district (your words) after all, right?

re: alerting telemetry - I'm not finding photos to cite right now, but I've absolutely seen "infrastructure" buildings (here in rural Ohio) with warning annunciators (lights and bells) on their exteriors along with signs reading "If this light is flashing call xxx." It's definitely a viable system for alerting.

I like your thinking and I try to espouse it myself (keeping things as simple as they can be-- keeping "technology" out of voting, not connecting things to networks that have no business being connected, etc). Short of a Battlestar Galactica-type "our machines rise up and try to kill us" event, though, I don't think the average person will ever understand the vulnerability inherent in networked computers or the risk/benefit tradeoff of connected vs. disconnected systems.

Even down at the level of local politics in a rural setting the "optics" of bringing technological solutions to bear on problems is seen as forward-thinking-- particularly when it "saves" the taxpayer money. I can't imagine trying to convince a local water board that moving away from a PLC-based system with a remote support vendor would fly, even citing this example.

This event will be another opportunity for more security vendors to cite in case studies justifying their products. More layers of garbage will build up on a foundation of protocols and design philosophies that grew up in an era of disconnected systems with lower stakes and a less complex threat model.

I don't see big money to be made in providing sensible levels of connectivity and security to this kind of infrastructure. I don't see industry stepping up because of that. Maybe regulation is the answer, though I'd just expect regulatory capture to take over, and have it become another "PCI". Maybe a lot of people have to die before society takes it seriously, as has been the case with so many other safety codes over human history.

It makes me really sad, embarrassed for our industry, and more disappointed in human nature.

> re: alerting telemetry - I'm not finding photos to cite right now, but I've absolutely seen "infrastructure" buildings (here in rural Ohio) with warning annunciators (lights and bells) on their exteriors along with signs reading "If this light is flashing call xxx." It's definitely a viable system for alerting.

I can confirm these exist, as growing up my father's phone number was on a some of those signs. Fancy systems would automatically call an answering service, but for smaller ones, he'd get a call (or page, given the era), usually from a neighbor or something.

It wasn't limited to rural areas either. I distinctly remember him taking service calls in ultra-wealthy neighborhoods on the weekends. One time, the gentleman who called in let my sister and me watch movies in his personal movie theater while my dad fixed the system.

There is little problem with pushing telemetry onto the internet.

The largest issues are that you must gather the data from sensors that can't interfere with the thing you are measuring, and that you must process it with computers that don't connect to the ones controlling the process. The first is really just good engineering practice, and the second is already cheap and getting cheaper by the day.

Also, whatever you do at your process control, you should have some emergency overrides that kick in when conditions get too abnormal. Those should be simple (i.e., no computers if possible) and stand-alone. Looks like they got this one right.
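A minimal sketch of that separation, assuming a Python box on the sensor side (the collector address is a placeholder): the sender only ever emits UDP datagrams and never listens, so compromising the receiving end yields no network path back into process control.

```python
import json
import socket
import time

COLLECTOR = ("203.0.113.10", 9000)  # placeholder address for the off-site collector

def push_reading(sensor_id: str, value: float, collector=COLLECTOR) -> bytes:
    """Serialize one reading and send it over UDP, fire-and-forget.
    The socket is send-only: nothing here ever listens or receives, so
    the collector side has no inbound path to the control system."""
    payload = json.dumps({
        "sensor": sensor_id,
        "value": value,
        "ts": time.time(),
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, collector)
    return payload
```

The hardware version of the same idea is a data diode, which enforces one-way flow physically rather than by convention.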

Plus, the benefit of setting something up to get telemetry as you described is that someone won't later be tempted to use teamviewer (as in the article example) to open up the whole control system just to view some of that telemetry.

I invite you to become a pentester for a year and see if you still reject this line of thought.

You misunderstand - I am saying they need neither the computers nor the networks.

By running with loose tolerances and loose guarantees (and keeping systems as simple as possible) they remove the need for these tools - and their attack surface.

Pen testing often involves walking into a building looking like you should be there. If no one is around to let you in, it can also involve half a minute of getting the door open with some primitive everyday tools. No computers or software required.

You just switched the attacker model from "script kiddie somewhere in the world, changing stuff for the fun of it" to "physical presence with specific malicious intent". Those are not in the same ballpark, not even the same country.

You underestimate the stupid things local kids can get up to for fun if they thought they wouldn't get caught. So you can at least throw out the "specific malicious intent".

Ok, but how many stupid local kids are there and how many worldwide script kiddies?

Kids who are trespassing are not stupid. You have to take the risk, observe the behaviour of the security and then behave reasonably so that people would forgive you if you get caught.

The action is stupid, but trespassing idiots get caught quickly - that's just "survival of the fittest" mechanics.

If you never trespassed in your life then you were probably not smart enough to get away with it?

Security is a big word; we are talking about large facilities with like 1 or 2 people on site.

As a kid I've trespassed plenty; it often takes 'security' hours to even spot you.

I’m not particularly worried about a group of kids contaminating a single rural water supply, I’m more worried about a malicious nation state accumulating security cracks and then using them in a coordinated effort during a wartime scenario.

I think it would be quite possible to have the control systems completely offline, while installing a reputable alarm system that is connected to the internet. If those two things have no network connection, then you could monitor the premises, but even a remote hacking of that system would not enable changing of chemical levels.

But you have this problem as well, even if you use networked systems. Now you need to mitigate both.

You must be a pentester, because the commenter above you wasn't talking about pentesting at all.

His comment says that small rural town water treatment plants don't have to run 24/7. Not sure what you thought he was saying.

>You must be a pentester, because the commenter above you wasn't talking about pentesting at all.

This sounds like it should be a punchline to one of those funny programming jokes https://news.ycombinator.com/item?id=25850739

It would make way more sense for the entire state / country to buy software for their water systems all at once and set up a department of a few dozen people that travel the country and make sure it's set up properly. Why would you not have all water treatment in the country on the same software platform?

A lot of these problems with inefficiency come down to "muh state's rights" and less federal involvement. I think my stance shines through, but I understand it's debatable per case.

Your parent is saying: "just never put your stuff on the Internet". I suppose it really is "that hard" to tell clients this, and we can expect more problems down the line.

How did they operate before computing became a mainstream thing? Utilities as a concept are significantly older than computing itself, and certainly than mainstream, internet-connected computation.

They had to pay more for employees who could be on site at unreasonable hours.

Perfect, unemployment is a major problem because of Covid19 anyway.

COVID19 is also decimating budgets

If all it takes to prevent the poisoning of an entire city's water supply is two employees, I certainly hope my governments are choosing to hire those two employees.

I will make an assumption here that you don’t live in a rural city...

Or, let's face it, in the United States.

Of all the commentary I’ve read on this issue, this might be the scariest anecdote so far.

I work in industrial automation, and I agree. There’s constant rhetoric about buzzwords like “Industry 4.0,” which, if it means anything specifically, means “connect all the things.”

There doesn’t seem to be a whole lot of thought around “is it even necessary for this three-ton industrial robot to be dynamically reprogrammed from a service center in Stockholm,” and it seems like everyone just assumes that everyone else will do a perfect job implementing and configuring security. I fear the tune will only change after the first multi-million dollar lawsuit, and I hope all that costs is the money.

> There doesn’t seem to be a whole lot of thought around “is it even necessary for this three-ton industrial robot to be dynamically reprogrammed from a service center in Stockholm,”

That's because the value proposition is only obvious when you substitute for "Stockholm" a city from one of the countries with cheaper labor.

In my limited experience with Industry 4.0, it smells like a combination of forcing a goldrush to sell shovels (so many players that want to be the platform which connects everything) on one side, and ongoing search to turn capex into opex on the other (that latter thing is a trend in pretty much all industries, though). I think there's enough companies that would happily replace their control systems (and control engineers) with prepackaged control-as-a-service which they don't have to know anything about, supplied by the lowest bidder, to which they can shift any liability if anything happens. This kind of setup does require remote access.

The thing is, there is no "one size fits all" option most of the time.

There are some packaged solutions but they all involve lots and lots of expert design, setup and management.

So quite often it actually is for interaction with a possibly expensive engineer from some random place in the world who has specific knowledge of the system involved, as well as for enabling remote operations when facilities are in a less accessible location.

As for IIoT 4.0 - truth is a lot of industry was already heavily connected, and many functions I've seen so far are about getting deeper integration between ERP, MES, and individual work cells and workpiece tracking.

Even when the workpiece is fried chicken waiting to be put in a bun, or a cut of pipe that will next need to be appropriately cleaned, bent, welded, painted and finally become part of a ship assembly.

You hit the nail on the head. Once (legally) viable, I guarantee a large portion of our public sector infrastructure maintenance will be outsourced to the lowest bidder in the guise of saving tax money. All without any due diligence as to the safety ramifications of said actions.

> “is it even necessary for this three-ton industrial robot to be dynamically reprogrammed from a service center in Stockholm,”

The answer is Yes. Very very yes. Especially when said programmer can't travel across borders due to Covid restrictions.

But even without Covid, it's a lot cheaper and more time-effective to let people look at stuff and fix things from Stockholm, or Antwerp, or Warsaw or whatnot. Else every time your robot sneezes, you have to book plane tickets and a hotel. But worst of all, you risk losing many hours of production due to travel time.

In contrast, with remote operation, you can log in, fix problems in well under 30 minutes, and Get Production Running Again.

In a situation where any kind of stoppage basically means the factory is Not Making Money, you can see the very strong value proposition here.

I don't think it's an all-or-nothing proposition. You can share telemetry and get patches from a remote team without having the equipment connected to the internet to reduce the risk of destroying expensive equipment.

VPN or jump host, sufficiently firewalled in all directions.

Putting a defenseless PLC or robot controller on the open internet is clearly not the best of plans.

(though the amount of people using teamviewer is telling)

I think these days it is becoming a business need though. These systems are made by vendors who probably need remote access. Also if the plant has a relatively unsophisticated IT department then someone is just going to shove an AP in the ceiling so they can check things when they get called at 1AM.

Several ICS vendors like Tosibox and EWON make devices to accomplish this. I think Tosi has the more secure model, though I hate their proprietary dongles.

VPNs are also used pretty successfully here. Several large companies also don't let you directly connect to anything. You VPN in and connect to a machine with Citrix, and then you can use whatever was set up for you there. Usually whatever version of Logix/Studio 5000 the plant is on. You have to talk to someone in IT to get your files moved in/out.

I think Amazon went a different direction and uses Versiondog to monitor their automation systems and check for changes. I don't work there or know anyone on their automation team so I'm not aware of the details.

Still, I think you can have external access and be secure. You just need to balance things out with your business needs.

> VPNs are also used pretty successfully here. Several large companies also don't let you directly connect to anything. You VPN in and connect to a machine with Citrix, and then you can use whatever was set up for you there. Usually whatever version of Logix/Studio 5000 the plant is on.

I support an environment almost exactly like this (albeit in a small manufacturing company). I don't love having one of the controls networks attached, in any way, to the LAN, but I understand the business requirements justify it.

It happens that there's a controls system running devices that could cause massive environmental impact in some malfunction scenarios. I am happy to report the plant, being held to account for things like public evacuation plans and hazmat filings with local first responders, has never asked about connecting that network to anything. That would be a walk-out-the-door type scenario for me. I worry that they'd just find somebody who wouldn't have those scruples, though.

> I worry that they'd just find somebody who wouldn't have those scruples, though.

If I may speak slightly out of turn to a stranger: the possibility of some imagined, less scrupulous future person does not modify in any way your obligation, however you perceive it, to act ethically.

I've warned them about my concerns. If I stop working with them, and I have no further knowledge of their situation, I don't see what else there would be for me to do.


Nobody was suggesting that it does. The person you're replying to specifically said the opposite.

What are your thoughts on something like this in your line of work? https://cloud.google.com/beyondcorp

I may be too much of a simple "IT guy" to grok the deep meaning of BeyondCorp. I read thru some of the various papers when they came out and always came back to the thought "Yeah, that's nice if you have the resources to exert control over that much of your technology stack."

I don't have those resources, nor do my Customers. I've got the various mix of Windows, Linux, and embedded devices that the Customer has purchased to serve their business applications. They (and I) don't have the clout or purchasing power to demand application vendors bend to our desires, so I'm left with making the best out of sub-optimal architecture, protocols, etc.

Google says, in the BeyondCorp III paper under the heading "Third-Party Software"[1]:

Third-party software has frequently proved troublesome, as sometimes it can’t present TLS certificates, and sometimes it assumes direct connectivity. In order to support these tools, we developed a solution to automatically establish encrypted point-to-point tunnels (using a TUN device). The software is unaware of the tunnel, and behaves as if it’s directly connected to the server.

So, they just do what I do and throw a VPN at it, albeit a client-to-server VPN serving an individual application rather than a client-to-network VPN like I might.

I do my best to segment the networks at my Customer sites, to use default-deny policies between security zones, to authenticate traffic flows to users and devices where possible, and when unable (because of limitations of client software/devices, usually) restrict access by source address. Within each security zone I try to make a worst-case assumption of an attacker getting complete access to the zone (compromising a host within the zone and getting arbitrary network access, for example) with things like private VLANs and host-based firewalls. I have to declare "bankruptcy" in some security zones (usually where there are embedded devices) where I have to rely only on network segmentation because the devices (or vendors) are too "stupid" to have host-based firewall functionality, authentication, encryption, etc. (These are the devices that fall over and die when they get port-scanned, yet somehow end up in mission-critical roles.)
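The default-deny zoning described above can be sketched as a tiny policy table (zone and service names are invented for illustration). Real firewalls express this as ordered rules ending in a drop, but the property is the same: no (office, controls) pair ever appears.

```python
# Illustrative default-deny zone policy: traffic is dropped unless the
# (source zone, destination zone, service) triple is explicitly allowed.
# Zone and service names are made up for this example.

ALLOWED_FLOWS = {
    ("office", "historian", "https"),
    ("historian", "controls", "opc-ua"),  # data path only; no office->controls flow exists
}

def permit(src: str, dst: str, service: str) -> bool:
    """Default deny: only explicitly whitelisted flows pass."""
    return (src, dst, service) in ALLOWED_FLOWS
```

The point of writing it as a whitelist is that the dangerous path has to be added deliberately; forgetting a rule fails closed, not open.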

I think the harsh reality is that, operating at the scale of small to mid-sized companies, IT and infosec are forced into a lot of bad places by vendors who don't care, and management who are focused on the bottom-line and who don't see security as anything other than something to purchase insurance for.

To put it another way: I have to make all this crap work. If I make it too difficult for the end users to work or for the vendors to support I'll be kicked to the curb and they'll find somebody else who will be less "difficult".

[1] https://storage.googleapis.com/pub-tools-public-publication-...

That's an understandable scenario and good on you guys for balancing the risks.

It is not a business "need". These systems have functioned without remote access perfectly well for decades. It is a business "want" and thus must be balanced against any new risks relative to historical risks.

The risk of adding remote access to critical systems is the introduction of globally accessible single points of failure. Given the nature of software, such an attack has an unlimited amount of time to be perfected before deployment, and when finished can be deployed at effectively zero cost and completed in effectively zero time, which provides no meaningful way to respond except with already deployed automated systems. So, the risk added with remote access is the risk of malicious catastrophic total system failure.

In this case, the water treatment facility treated the water for ~15,000 residents. In a similar case many years ago [1], a similar event occurred to a water treatment facility that treated the water for ~12,000 residents which resulted in 100 affected individuals before the effects were detected. So, we can reasonably assume that undetected water treatment tampering on a facility serving ~10,000 individuals will result in about ~100 affected individuals before the effects are detected. If there exists a way to tamper with a water treatment facility that would result in deaths for the affected individuals, which is quite likely, then that means the risk of remote access to the water treatment facility is ~100 deaths. So, as a society, we should ask the question: What is the standard of care that should be applied to a system where failure may result in the deaths of 100 people? And any business that wishes to add remote access to such a system must demonstrate to the satisfaction of society that they are taking that degree of care. It is not the role of society or the people to suffer for the convenience of business.

And in this case, I am certain that they are not taking an appropriate amount of care. The fact that you honestly suggested that an IT department would shove an AP in the ceiling for their convenience shows just how low our expectations are. In any other industry, such an act would be, in no uncertain terms, criminal negligence. That our standard assumption about the standard of care taken is criminal negligence shows just how far any of these companies is from actually deploying systems that have external access and have adequate security.

[1] https://www.spencerma.gov/sites/g/files/vyhlif1246/f/uploads...

Oh, you misunderstand. IT is an impediment to many control engineers. It's the automation techs and engineers that will work around the IT department if IT can't supply solutions. One of the more common ones being hide an AP or like in the article, use teamviewer or other remote access software. Then just share a common credential because nobody wants to actually pay for teamviewer.

Businesses need lower cost because they are under price pressure. Especially with small utilities. Remote access is one of those ways to lower their costs on personnel or vendor support.

There is still a whole lot of low hanging fruit in automation for improving security and access control. We're not going to get it from Rockwell for sure though.

I understood perfectly. I am just saying that such actions should be criminal and any reasonable lay person who was properly made aware of what is occurring would agree. Lowering costs is no excuse for engaging in criminal negligence and any tradeoff that has an outcome that would qualify as criminal negligence is socially unacceptable. That is not a proper balancing of business needs, that is pawning off immense risk to society for the convenience of a business.

Just so I am clear, doing what you say they are doing should be so unacceptable that it is not even viewed as an option. Anybody attempting to do so should incur costs so great that there would be no competitive advantage to offloading risk to society to the detriment of the people as the costs of doing so outweigh the benefits. If that prevents businesses from making certain profitable decisions due to the collateral damage they will cause then that seems like their problem.

Maybe we will get there someday, but we are not even close to that right now. Hell, we are not even in the same galaxy.

So right now the things the OP posted are pretty much standard practice in most industries. I mostly work in the EU; I have worked with construction companies, medical companies, hospitals and telcos, and practice like this is standard.

They will have some ungodly expensive security product that makes them change passwords every 14 days and makes the intranet barely usable, but will have holes the size of mountains in their infrastructure, because of this vendor or that cost saving, etc.

Rockwell definitely has some questionable security on individual products, but they partnered with Cisco for their Converged Plantwide Ethernet Design [0] which is actually pretty well thought out, and if implemented properly covers off most of the biggest risks. The problem is either that people don't know about it, don't bother to read it, or can't get organizational buy-in to implement it.

When downtime is expensive, the pressure from the business is to err on the side of being able to get experts in to troubleshoot the system as easily as possible, vs guaranteeing that bad guys can't get in. The first they see all the time, and the second seems unreal until it actually happens...

[0] https://literature.rockwellautomation.com/idc/groups/literat...

Remote Desktop was exactly the mechanism here, the attacker used TeamViewer to work the UI on a plant operator’s desktop and he happened to be watching.

Seeing reports now that it was indeed Teamviewer!

They also have a satellite office over in Clearwater, Florida (which is trying to be like a little bay area copy, v2/3)

Interesting, but Teamviewer has also been exploited and leaked creds, and took three years to confirm it: https://www.bleepingcomputer.com/news/security/teamviewer-co...

Or, if the client computer browsed a site, it'd actually open an SMB share on the perp's computer: https://www.bleepingcomputer.com/news/security/teamviewer-fi...

and a few other interesting vulnerabilities, hmm.

Citrix is a little more than teamviewer. And can be encapsulated in a VPN as well.

Teamviewer on a desktop, probably with a shared credential isn't very secure. Knowing this though, I doubt that it was a teamviewer exploit. My guess would be a disgruntled employee since they knew what to get into to change chemical set points.

They both expose Remote Desktop to the internet given the proper credentials, and I’m guessing here but I think it’s pretty likely that the attacker had credentials. Whether it was a disgruntled insider, a dumb password, or (most likely) a reused password from a leak somewhere.

I would be interested to know if the TeamViewer account in question had 2FA... probably not.
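For what it's worth, the TOTP flavor of 2FA is small enough to sketch directly from RFC 6238. This is the generic HMAC-SHA1 construction, not a claim about TeamViewer's actual implementation:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter, dynamic
    truncation, then the last `digits` decimal digits, zero-padded."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: the 20-byte ASCII secret
# "12345678901234567890" at T=59 yields "94287082" (8 digits, SHA1).
```

Both sides derive the same code from a shared secret and the clock, so stolen static credentials alone stop being enough to log in.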

Ideally, we should align our incentives such that having an internet-connected automation system is far more expensive than having one disconnected from the network. You should be forced by law to have a certain number of security experts on call for any such system, periodic audits and pen tests at your own expense, etc.

It's OK for a huge city operating many water treatment plants to decide that it is more efficient to automate and centralize and secure the network. It is horrendous that this is seen as the cheap solution for a small town.

I agree with your comment but want to ask a couple of questions to see how you see it working in practice:

What will stop the local city council from being compliant on paper only, i.e. doing a tick-box exercise and saying that their summer IT intern is the security department?

I'm not a policy design expert by any means, and it's not like I've given this thorough thought. I expect some amount of red tape and controls from a government agency would be the proper way to enforce it.

It would of course require significant political will to create these institutions and system of laws and regulations, but it could be similar in spirit to the kinds of controls the military has for software vendors that want to work with it.

Yeah, but it's sadly common. I have personal experience with two such situations in my work over the last 16 years and I'm just some two-bit general IT contractor in Ohio, US.

Until the decision makers who demand the interconnection of these networks are held accountable, it isn't going to stop.

The cases I've seen have been to facilitate 24x7 off-site PLC vendor support access. I certainly see the business argument for the economics of off-site support for infrequent/improbable failure scenarios. At the very least, though, some type of physical interlock could have been employed (at the expense of some response time).

Edit: I think controls can exist to make this kind of situation tenable for at least some types of industrial controls applications. When you start getting to things like municipal water and power I start getting more antsy.

Totally agreed. The reason why these systems are network connected is to save a few pennies on periodic drive-bys, but they open up a whole can of worms in terms of risk that those entities are very ill equipped to deal with. The same was happening with SCADA systems for building management. Systems that were quite literally wide open were given an IPv4 address based on the assumption that since all they did was run HVAC controllers on obscure UDP ports, they were safe and nobody would bother with them.

Sure, now you're paying 4x the price because you need two more operators so you can staff this place 100% of the time. And it's not even just this guy. He probably covers like 4 different plants. Now you need each of those places to have a guy like this in driving distance. You're not going to make it.

Honestly, I think this is a fine outcome. There is a dollar value per life. I don't think we're undervaluing the life yet.

Odd to pick HRBlock/Intuit, as they are used for convenience and because we don't have a central system like Sweden; in the US we have 50 different states and territories with 50+ different tax rules, and so many edge cases that it's almost the rule rather than the exception that each individual has one.

Likewise these consultants are not just coming in and pointing their fingers at the obvious ICS on the internet. They are also providing services to understand why they were attached in the first place and where that process broke down, how to keep the current ease with which to operate the system, and implement the transitions.

Yet on the internet there are astronomical levels of 'griefers', people who just want to see the world burn. The internet magnifies this tremendously. The internet offers the appearance of anonymity. This is a dangerous combination. Drunk drivers are in the accident too. They take damage. Not so for a hack.

In real life, the internals of a water plant are behind locked doors. Not everybody from Nairobi to Nantucket can get in and do as they like.

I'm afraid that trust in the public is definitely not the way to go with infrastructure and networked control systems.

> No cyber security products are needed.

I'm fairly sure the Iranian ultracentrifuges were not connected, and were hacked anyway. Stuxnet was complicated, but being disconnected is not a 100% protection.

Not 100%, but it takes orders of magnitude more motivation (and a nuclear program that would threaten your country if it were successful undoubtedly provides that) to accomplish...

Sure, but apart from low to medium effort hacking, cyber-warfare is still a possibility. Disabling infrastructure would be a high priority.

That is a slightly different threat model, though, as are (though I could be wrong on this one) the capabilities of the attackers.

Except everyone already does it and that toothpaste is never going back in the tube.

We're fighting to keep these people from using unlicensed copies of TeamViewer for their primary access.

>These "startups in the ICS space" are like turbotax/HRBlock: only continued idiocy allows their business model to exist.

Nah, Dragos knows their shit. They'd be around even if ICS had good security.

We work with Dragos fairly regularly, they're solid. The main problem is that people who even consider the security or integrity of these systems are brought in years after they were specced, built, and more or less abandoned as built.

The contractors and integrators then move on to the next project and copy what they did last time. Rinse, repeat.

We've been actively pressing for realistic security and access control planning in the contract stage, but that's slow going in and of itself and still only affects new or upgraded installations -- on facilities with an expected lifetime of 10-30 years.

Open your modbus port on your server and see how often you get hit!

Then look for a list of open modbus ports on the Internet and be wowed at all the industrial machinery that is just sitting on the Internet...
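The probe involved is trivial, which is part of the point. A minimal sketch of checking whether the standard Modbus/TCP port is reachable on a host (essentially the first pass a scanner like Shodan makes; the function name is mine):

```python
import socket

def modbus_port_open(host, port=502, timeout=3.0):
    """Return True if the standard Modbus/TCP port (502) accepts a
    TCP connection. An open port doesn't prove a Modbus device is
    behind it, but it's the usual tell that scanners key on."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Anything answering here with the actual Modbus protocol is, by design, unauthenticated: the protocol predates the idea of hostile networks and has no login step at all.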

best comment i've read in a while.


From personal experience. I was working in a factory producing food (sorry not saying what type). The "machine" producing and packaging the food was a huge 20m by 3m by 3m metal box (imagine a bus). One end - raw material & packaging goes in, far end, packaged food comes out nice and neat.

That machine's interface was a Windows 95 (YES!) German language version. I am not German. I do not speak/read/write German. I was that factory's IT admin & support. Nobody in that factory's operations staff could read German. So the rule was: "we never touch this machine - never EVER. Anything that goes wrong (sounds, visuals, etc.), we ring the bell, escalate, get the vendor in."

Sidenote: For the youngsters, W95 was an OS by Microsoft, from before you were born, and it did not have a multi-user environment with access control (admin = god, user = cannot install software, etc.)

The machine had two 'terminals'. One ON the machine (physically - on the front of the 'bus') and one 'remote' (50m away) in an office with a huge window where you could observe the machine. Both screens displayed the exact same desktop (Win95, German) (basically a single computer with two monitors 'duplicating').

Genius operations staff got bored looking at a machine with no errors/faults (German built!) and installed software that came along with adult video CDs (we're talking early 00's). The geniuses were watching porn on the production machine, a machine worth many millions. When they were watching porn, it was displayed on BOTH screens: factory floor AND office 50m away. Sound and all...

So.. adding to your points:

5. System limitations and customisation/hardening (no need for extra software - just basic security hardening/configuration)(win95)

6. Uneducated users (employees installing a video player from an adult video CD)

This explanation has a lot of good reasons, but is missing an important one: the value proposition of cyber security. Decision makers (assuming they are informed) will make an assessment of risk vs. cost. Absolute cyber security is rarely a relevant consideration. The assessment is always going to be (at best) an evaluation of investment in cyber security vs. the risk of greater costs (in the form of compromised security, organisational changes, etc.). We need to understand that these decisions are not made from a purely technical perspective. Real costs exist, and decision-makers will (rightly) always compare those costs against the estimated benefits.

And because it's an expenditure that only hypothetically might decrease a larger expenditure in the future, many managers will decide to do only the minimum necessary to check the compliance boxes.

Another problem is that when the security systems get in the way of expediency, there's always somebody around who can disable or severely cripple the security to make it easy for people to e.g. work from home during a pandemic.

I think the economics of cyber security are poorly modelled/understood at present. I’m of the opinion that building a slightly higher wall than a similar target is generally sufficient (as an economic deterrent) vs most enemies. However, this is a simplistic model and doesn’t account for targeted attacks. It’s a complex problem space and has a lot of room to mature. I expect great changes in this space over the coming years.

Even non-connected systems can be a problem. Stuxnet was an example. But I think the main point is that owners of those systems think they are protected just by being disconnected.

> Stuxnet

Yeah, and even the best security practices aren't going to work too well if someone drops a nuke on your facility.

Stuxnet was an extraordinarily sophisticated attack well beyond what a typical industrial system will need to protect against, or even be able to protect against. It's not really in the same league as anyone being able to just remote in and change settings, and while it's realistic to expect a bloke called Steve who runs the computers at the water processing plant to prevent someone just remoting in willy-nilly, it's not as realistic to expect him to defend against two nation-states working together explicitly targeting that facility.

"...well beyond what a typical industrial system will need to protect against..."

And the target system also had security systems well beyond the capability of your local water treatment plant. Let's not forget that these assets deemed as critical infrastructure could be the target of nation states.

All I'm saying is that not being connected is only a small part of security for industrial systems, and that some people wrongly rely on it being enough.

The target system probably fared better than you think - as a whole, it certainly wasn't destroyed.

As with this incident, operators were physically present. That seems to be the real lesson (even if - see other war stories on this thread - operators tend to themselves have a creative approach to network security).

The issue was not identified for months, and from what I understood, a significant fraction of the centrifuges were destroyed.

The attack worked exactly as designed - wasting time and destroying equipment while staying stealthy. Had the attack tried to destroy all equipment at once, it would have been spotted immediately.

Yes, a significant fraction.

My point is, physically-present operators did spot AN issue immediately. It wasn't properly attributed to malice for months, sure, but they could still mitigate during that time.

Without their presence, would you (as attacker) really bother with all the stealthiness? It certainly hasn't seemed to avoid long-term attribution. Set the controller to +INF RPM and let whoever pores over the logs in the morning spit out their coffee.

Would have been better with “homer” than “steve”!

The Stuxnet attack had a significantly higher level of sophistication than this. If your threat is a competent nation state, the bar is much, much higher.

Is the threat not from a competent nation-state or supranational entity? Is that not the intention of designating power & water & electricity systems as "critical infrastructure?"

I don’t think the water treatment plant of Oldsmar, Florida falls under the same threat model as a uranium enrichment facility.

I don't think the threat is much different, but an attacker doesn't have the patience for a stuxnet level attack on one of many water treatment facilities.

If it were air-gapped, it wouldn't be readily available for a nation state to use to attack infrastructure during other simultaneous attacks.

Remember the concerns during the first gulf war about Iranians potentially planning to contaminate drinking water in the US?

> 1. Difficult to identify the owner

That's not necessary: just make the board of directors of the companies that operate it, have it on their premises, or use it personally responsible. That should give them more than enough incentive.

Wrt uneducated vendors: Industrial Control systems tend to be built by people with an electrical background rather than an IT background, and they have their own culture, and strong Not Invented Here effect.

A strong cross-disciplinary startup could make a killing in industrial automation. (And extant companies that remotely meet that criterion already seem to do so.)

5. Plausible deniability.

I wanted to add this point, because a lack of security measures and the convenient existence of hackers gives a company plausible deniability.

As long as companies are not legally forced to take precautionary security measures, they won't.

And it plays to their advantage, because insurance policies rarely have clauses regarding minimum security measures.

There’s probably a dumb reason I’m not thinking of, but why does the US have such a higher count than other large, industrialized nations?

The US is a much larger country than countries like Germany or France. If you add up a roughly equal-sized amount of the European Union for comparison, you get a number of hosts around 30k-ish, which is somewhat lower than the US's 34-35k, but not by all that much.

I’m not talking about European countries. I’m looking at countries like China, Russia, Brazil, India, etc.

The BRIC countries (Brazil, Russia, India, China) are still considered developing nations. In that context the level of industrialization is probably lower even though they are much more populous.

As for China it's not impossible that they are already monitoring for that and blocking Shodan from accessing their Internet.

why does the US have such a higher count than other large, industrialized nations?

Maybe because it's comparing the entirety of the United States with much smaller countries like Italy and Spain.

A comparison of the United States with the European Union would make more sense.

There are other industrialized nations besides those in Europe, some with populations much bigger than the US's. Hence my use of the word large. I was thinking more of China, India, Russia, Brazil.

All four of those are considered developing economies, not developed economies (as the US and western Europe are). There's a reason they're often grouped together as the BRIC economies (sometimes with South Africa as BRICS).

Some mobile networks in the US will give you a public IP, whereas most other countries use carrier-grade NAT. You can get a better sense of it when looking at the IP space owners for the devices:


Underpaid IT/infosec. People conflate IT and infosec. Once it's on a government payroll for billing purposes, no one touches the system if it's on a network provider and not internal. If not internal, it won't show up on audits; most IT departments deal with a Windows domain/network, and that's mostly locked down, but if a device doesn't share a true physical connection with it, it's exempted from most audits.

The question is, why are the telecom providers allowing this? There's also a lot of legacy stuff they don't want to touch, as it may violate the terms/contract, and bandwidth isn't the issue, so telecoms largely ignore it as they're just a bridge.

They don't even need to be internet-accessible, physical security is often weak as well. Surprisingly relevant:


There was this website a while back called "vnc roulette". It would randomly connect you to an open VNC host. Many of those were control systems all over the world.

from https://twitter.com/zackwhittaker/status/1358868187656388611:

    I can't immediately verify the veracity of the claims made by the sheriff but,
    the fact that the authorities *set up* a public-facing and/or remotely
    accessible system that allowed someone to change the water chemical levels is by
    far the bigger issue here.

I worked at a water treatment facility for a few summers, and the SCADA system there was on a physically separated network. Actually, there were two SCADA networks, one for each of the plants, with the distribution system (the water towers and pumping stations randomly scattered throughout the service area) attached to one of those networks. I don't know how secure those remote links were, but I suspect they were the easiest ingress into the network.

A couple computers did bridge the two networks, but (IIRC) they were simple embedded systems doing read-only access (for compiling reports). I know when they did a pen-test, the pen-tester could compromise most of the corporate network (including service accounts), but they couldn't punch through to the SCADA systems.

I'm familiar with the systems you outline, and yes, those are more difficult to penetrate. However, those systems are significantly more expensive and more complex than the simpler ICS systems. Oldsmar Fl doesn't sound like a place that could afford such a system. Of course, can they afford not to have higher security systems is an open question?

The biggest cost of having physically separate networks (or at least network separated) is the HR cost of increased staffing and on-call requirements due to not being able to support the system remotely.

For a small city, it's non-trivial.

There has been for quite a while a big concern that industrial control systems are accessible, often poorly hardened (and by that I mean to the extent of having default passwords), and quite vulnerable to attack.

The only thing surprising about this is that we don't hear about it tenfold more.

> The only thing surprising about this is that we don't hear about it tenfold more.

If you're someone who stands to gain from disrupting a nation's infrastructure... you don't tip your hand until it most benefits you.

If it really is the case that large parts of the infrastructure are very unsecure, expect to hear about it all at once, instead of little by little.

Water seems like a really weird system to sabotage though - power can bring businesses offline in a serious way, but a city reservoir likely isn't supplying businesses that have a real industrial need for water... It's more of an inconvenience. Messing with chemical balances in particular seems like a prank, or someone really twisted trying to give a bunch of folks long-term health complications.

You can't cause a rash of serious short term problems that increase the load on your health care system? That would be pretty compelling from a terrorist perspective or for nation states trying to demoralize/reduce trust in the current government.

Sometimes attacks are probes and discoveries meant to determine or validate efficacy of a set of attack vectors including but not limited to human assets. Other uses are for distractions from other efforts. And yeah, sometimes they’re pranks. It’s not clear with the given facts what’s really going on.

Depends how long you can shut it down for. Even if only a few hours, an _unexpected_ shutdown of water across an entire city would certainly cause a panic with people descending on any stores open to buy water.


Meanwhile, we live in a world where VPNs are sold to the casual user while critical systems are left on internet facing networks.

I've never understood why, if these critical systems need remote access, it's not all done through a VPN of some sort. VPNs are not infallible, but they significantly raise the bar for entry from script kiddie to nation state real quick (depending on choice of crypto), while choosing a well-supported implementation ensures long-term bug fixes and security patches.
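To make that concrete, remote access behind a modern VPN can be as small as one config file per operator. A sketch assuming WireGuard, with placeholder keys, hostnames, and subnets (none of these correspond to any real deployment):

```ini
# /etc/wireguard/wg0.conf on the operator's laptop/tablet
# (all keys, hostnames, and addresses below are illustrative)
[Interface]
PrivateKey = <operator-private-key>
Address = 10.77.0.2/32

[Peer]
PublicKey = <plant-gateway-public-key>
Endpoint = vpn.example-utility.gov:51820
AllowedIPs = 10.77.0.0/24   # route only the SCADA subnet through the tunnel
PersistentKeepalive = 25
```

The important property is that nothing on the SCADA network answers unauthenticated packets from the open Internet; the gateway silently drops anything that isn't signed by a known key.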

In all honesty, why is a system so critical on the internet at all? People say, ease of administration, but there are other methods of achieving the same thing. Up to and including running your own network. On the one hand your engineers and chemists won't be able to fiddle with the aeration stage using their brand new, whiz bang, iPhone. On the other, the people in your community won't be put in harms way.

No idea why this wasn't. Oftentimes they're not for this reason _and_ because the hardware itself is too difficult or impossible to get online and can't be upgraded. Forget networks for a moment - a system running Windows XP is way less risk than an upgrade to Windows 7. Plenty of companies have older systems running vital hardware that, if it went offline, could cause massive outages, revenue loss being only one of the impacts. So air-gapped networks are pretty common in ICS environments as a result.

> On the one hand your engineers and chemists won't be able to fiddle with the aeration stage using their brand new, whiz bang, iPhone

Maybe not at home, but couldn’t they have a local 802.11 network set up for this?

To be honest, these days I just take it as a given that all critical US infrastructure (the power grid, hospitals, and now apparently water treatment plants) is riddled with time bombs, and that if we were to ever get in a shooting war with the countries which put them there, they'd all go off at once and we'd be in a world of hurt. I hope government/military planners are making the same assumption.

It would be sort of darkly amusing if we've done the same thing to other countries, and so time bombs in infrastructure essentially replace nuclear weapons as the guarantors of Mutually Assured Destruction.

After meeting enough SAP consultants in the ICS space, all I can say is I'm shocked it doesn't happen every day.

Seems to me that the real issue is lack of security, not the fact this system exists at all. Eg Every cell tower has remote access protocol and we rarely hear about those being hacked.

There's probably 100x more cell towers than there are water plants. The impact of hacking a cell tower isn't direct loss of human life (granted, knocking out a large number of cell towers would be very disruptive). The answer to the questions "should it be online" and "how much $$$ should we spend securing it" is going to be different in these two cases.

I think there's also a fair question of "ownership of damages" here - cities get sold water treatment management systems and want them online as cheaply as possible - city councils end up owning the mistakes in misconfiguration but companies selling the systems are incentivized to make those default bad configurations possible - even while, in bold lettering, mentioning that you should not use the default authentication.

Cell towers are a really integral part of a carrier's business - I'm not certain whether most are owned by providers or other companies, but either way the folks that put the tower up owe the customer (be it a phone user, a phone provider, or some subcontractor of the provider) an explanation and pay the costs of bad configuration... I'd also assume that making sure these towers stay up is someone's full-time job (likely multiple people's) - while there won't be an employee constantly monitoring city water systems, since it would take so little of a single person's time.

I'm not sure I agree that this is /wrong/ per se - the issue arises from the city council's disinterest / lack of expertise (which itself comes from disinterest) in these systems. If the issues are disclosed clearly, and the city council continues to sign off on the implementation (due to disinterest, cost pressure, whatever) without consulting knowledgeable third parties, then it's only realistic that the blame falls on the ultimate decision-maker (in this case, the city council).

The issue is that that strikes me as being incredibly socially inefficient. This town is probably going to be suuuper careful with water system security from here on out but the next town over might hit the same issue a few years down the line. There probably aren't more than a few dozen vendors of this type of service nationally and it'd be easier to learn the lesson at that consolidated level.

impact of hacking a cell tower isn't direct loss of human life

Not a direct loss, but plenty of opportunity for indirect loss. Disrupting emergency systems is the first that comes to mind. Covert hacking and surveillance could also be used for assassination plots.

Cell towers are generally going to be actively defended though - they tend to connect to private backhaul circuits, or link by IPsec to the security gateway in the mobile network.

The difference here is that nobody takes responsibility for a water treatment works in the same way a mobile operator looks after base stations - most operators aren't putting their base stations anywhere near the public internet. When they do it's under very careful control, like with femtocells.

As someone who has some experience with hydroxide and water-treatment systems (as well as other potentially-dangerous industrial controls): always design your system such that even if your feed pump runs full-bore continuously, the system cannot harm anyone.

The dilution of the solution stored in the hydroxide tank generally allows you to make this so.

Sleeping well at night is a great side-effect.
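That design constraint can be checked with back-of-the-envelope arithmetic at spec time: bound the dose the pump can possibly deliver, then size the day-tank dilution so the bound is harmless. A sketch with made-up numbers (the function and all figures are illustrative, not from any real plant):

```python
def worst_case_dose_mg_per_l(pump_lph, stock_mg_per_l, plant_flow_lph):
    """Upper bound on chemical concentration in finished water if the
    dosing pump is stuck at 100% output: mass added per hour divided
    by total flow per hour."""
    return (pump_lph * stock_mg_per_l) / (pump_lph + plant_flow_lph)

# Example: 10 L/h pump, 5% NaOH day tank (~50,000 mg/L), and
# 100,000 L/h plant flow. The worst the pump can physically do is
# about 5 mg/L, regardless of what any setpoint says.
worst = worst_case_dose_mg_per_l(10, 50_000, 100_000)
```

The point of the exercise is that the safety property holds even if the controller is fully compromised: no software setting can push the concentration past the physical bound.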

* The motors are out to destroy the machine

* The PLC is out to destroy the motors

* SCADA/IPC is out to destroy the PLC

Assuming these things in your design definitely helps with sound sleep. Especially when the company is running 3 shifts and you are on-call.

From the article: "Thanks to a vigilant operator and several redundancies, the heightened level of sodium hydroxide never caused a public threat."

I have a problem with the language here. This was absolutely a public threat. The attacker demonstrated intent and capability to inflict public harm. That's the definition of a threat.

But the language downplaying the severity will mean this all blows over in a couple of months, without the actual mobilization/funds to properly secure not just this one site, but any similarly affected plants.

I've come to the conclusion that humans in general aren't very good at preventing catastrophic events we haven't seen before (see climate change). We'll need to see n=1 disasters with this first, before there's public outcry to fix it.

I am so glad to see this is the top comment. Hacks on public infrastructure feel to me like one very small step away from actual military actions. I don’t understand why they never seem to be reported with the gravity they deserve.

For all we know, this is the 100th such attack on US infrastructure and this is just the first one reported in recent memory.

A successful attack is much less likely to be made public, for obvious reasons. We may have suffered from successful attacks and not know it (small enough concentrations of contaminants can’t be tasted)

Install water filters, HN. Use them. We have AquaSana under-the-sink units in several locations throughout the house... no pitchers. Whole-house filters do not filter nearly the same variety of crap that under-the-sink and PUR pitcher filters do. Say no to Brita. Learn your NSF ratings and choose wisely.

You’re only paranoid if you’re wrong.

A chemist may be able to correct me, but I'm pretty sure an AquaSana filter will do nothing to remove excess sodium hydroxide.

It will if it's reverse osmosis (RO), but not all filters do that. That particular brand sells both RO and non-RO units. If it's a vast excess of NaOH, you'll have other problems besides your water filter failing, like chemical burns.

Most US tapwater is fantastically clean and drinkable, and doesn't generally need a filter. The Safe Drinking Water Act is pretty powerful stuff.


US tap water is generally so high in chlorine that to people from Western Europe it smells like pool water, even in places that are proud of their tap water like NYC. Having lived here for ten years now I can no longer smell it when I turn on the sink, but visitors still can.

I live in Seattle, WA, which apparently has some of the cleanest water in the country.

I use one of those under-the-sink inline charcoal cartridge filters on the sink we use to make tea or cook with. If I grab some water from a different tap, you can tell immediately by the smell (chlorine) and the taste.

I'm surprised the filter takes out the chlorine honestly, but it's clearly taking out a bunch of stuff from what is otherwise considered very clean.

That said, having travelled extensively through places like India, South America, East Asia, etc., I'm certainly grateful for the water we have "on tap" in the house. It's easy to take for granted.

You can blame the EPA for mandating chlorination in Seattle's water supply. The watersheds that feed into Seattle drinking water are "Surface Water" and considered high risk by the EPA. This risk assessment is probably more accurate in the rest of the country; our protected watersheds are fairly uncommon. But we don't get any special exemption.

EPA mandates a floor of 0.2 mg/L chlorine for all Surface Water based drinking water supplies at all times[0]. There are additional chlorine requirements depending on what sort of filtration you perform, if any, and how far the first service connection is from the chlorine insertion, in minutes. (They also mandate a safety ceiling of 4 mg/L for all drinking water.) This level is continuously monitored.

Seattle does about 1 mg/L to meet these EPA-imposed requirements.[1]

Chlorine evaporates out of water, so if you don't like the taste, you can just let tap water sit a while. Sunlight helps. Boiling water (e.g., for tea) also removes most of the chlorine.

[0]: (PDF) https://www.epa.gov/dwreginfo/swtr-plain-english-guide

[1]: (PDF, p. 8) https://www.seattle.gov/Documents/Departments/SPU/Services/W...

Weird. Here in Germany we have some protected watershed areas on smaller rivers that directly feed a surface reservoir, created in the river valley through a dam.

They get filtered, but there is no chlorine directly. Some chlorine dioxide is used at the end, though. Here's the official description of the utility, translated to english:

- Via a raw water pumping station, the dam water first reaches the micro-screening plant. It removes coarse contaminants over 35 µm in diameter through stainless steel mesh filters. This provides special safety in times of mass algae growth or during floods.

- Subsequently, the raw water is de-stabilized with a flocculant; and turbid matter accumulates to form large flocs.

- In filter stage 1, two filter materials of different coarseness are used to remove the flocs.

- Ozone is then added to disinfect the raw water.

- Filter stage 2 is equipped with activated carbon and frees the raw water from the reaction products of ozonation. Excess ozone reacts to form oxygen and is thus removed from the raw water.

- The further filter stage 3 uses natural limestone material over which the water flows. Here the excess carbonic acid in the water is removed. Finally, a small protective disinfection with chlorine dioxide takes place before the drinking water leaves the clean water tank in the direction of <city>.

The steps are similar in Seattle, although I think we filter less. I'm having a hard time finding a concise but also technical description of water treatment steps. We definitely do:

- Ozone disinfection, and removal

- Tolt river supply only: water conditioning by filtering through "granular media." (Cedar river supply is clear enough without this step.)

- UV disinfection

- pH adjustment to avoid corroding pipes

- Fluoridation for public health

- Chlorination as a final step as water leaves the treatment plant, and also at some downstream facilities (like a networking repeater; just to maintain chlorine levels that would otherwise have fallen due to distance from the upstream chlorination site)

Tolt: http://www.seattle.gov/utilities/your-services/water/water-s...

Cedar: http://www.seattle.gov/utilities/your-services/water/water-s...

Small tidbit: ascorbic acid neutralizes chlorine and chloramine.

You can also just let the water sit and the chlorine will evaporate out.

True with chlorine; chloramine, not as much.

Sure; I did not say otherwise. Seattle (where OP lives) does not use chloramine.

> You might hear about different forms of chlorine. Seattle's water system uses "free chlorine" (not chloramines).


Is there a health risk to consuming chlorine in water?

I had some tap water in Scotland about 20 years ago and I still remember how amazing it tasted. This was in Aberdeen area if that makes a difference. It was like the finest artesian spring water I’ve ever had.

I remember being able to taste the chlorine through the soda machine in Phoenix.

Yup. Thats one of the things I remember most vividly from my trip to NY as a kid in the 90s.

> Most US tapwater is fantastically clean and drinkable, and doesn't generally need a filter.

It is my understanding that most municipal water utilities only test water quality every 3 months. A problem can come and go between testing cycles.

Even with weekly testing, I’d expect the same risk (there’s still a window between tests). Basically you’re only going to know about a problem when it’s too late.

Your understanding is incorrect.

It depends on what contaminant you are measuring, but the testing frequency can vary from "every several years" to "continuously monitored and sets off a SCADA alarm if it exceeds a given threshold." The biggies--IIRC, turbidity, pH, and dosages of coagulant and treatment chemicals--are logged every 15 minutes, with more tests happening at hourly, 6-hourly, and daily frequencies, followed by yet more contaminants assessed largely on monthly or quarterly bases. The issue in question would have shown up in a pH measurement, so there's no reason it shouldn't have been caught within minutes.
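The continuous check itself is simple in principle. A hedged sketch of the kind of threshold alarm described above (the band used here is the common 6.5-8.5 secondary-standard pH range, not any specific plant's alarm configuration):

```python
# Typical acceptable pH band for finished drinking water (EPA
# secondary standard); real plants tune their own alarm setpoints.
PH_LOW, PH_HIGH = 6.5, 8.5

def ph_alarms(samples):
    """Given (minute, pH) samples from a 15-minute log, return the
    ones that fall outside the acceptable band and should trip an
    operator alarm."""
    return [(t, ph) for t, ph in samples if not (PH_LOW <= ph <= PH_HIGH)]

# A lye (NaOH) overdose drives pH sharply up, so it surfaces within
# a single logging interval:
readings = [(0, 7.2), (15, 7.3), (30, 11.8)]
```

With `readings` above, only the third sample is flagged, which is why an overdose like Oldsmar's should be visible in minutes rather than months.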

You also have to look at the success and failure rates of those tests. Most tests reveal no problems, which implies periodic sampling is plenty to handle the rare problems that crop up. If we found more problems, we would demand more testing, but increased testing is pointless if there is no problem to be found. Conversely, if the tests are not specific enough, over-testing can cause issues due to false positives.

Indeed, you'll see that if a water test comes back positive, there will be multiple retests and a much greater rate of testing until the problem is abated, at least at my local drinking water board.

> It is my understanding that most municipal water utilities only test water quality every 3 months. A problem can come and go between testing cycles.

This isn't remotely true.

Which part?

The whole of it. You stated that most municipal water supplies aren't monitored for months at a time. This is extremely incorrect. The EPA-mandated quarterly report is a summary, not the entirety of samples collected. It would be dangerous and reckless not to monitor drinking water for months at a time.

E.g., Seattle explicitly states:

> We monitor your water 24 hours a day, 365 days a year. We test samples from the region between 10 and 100 times per day.

https://www.seattle.gov/Documents/Departments/SPU/Services/W... (PDF)

> To ensure the safety of our drinking water, SPU's water quality laboratory analyzes over 20,000 microbiological samples each year (more than 50 a day) and conducts chemical and physical monitoring daily, 365 days per year.


Someone asked a couple of days ago if they should go into security.

Yes - they should. Because there is going to be a lot more of this happening in the not so distant future.

They've been saying that for ~30yr.

While you can definitely make a respectable living in the cybersecurity industry the fact of the matter is that over that same time period the people vomiting JavaScript trackers all over the internet made the same or more money with less effort invested.

This is all “Do as I say, not as I do” advice.

Sure, go into security, help make the world more secure... meanwhile I’ll be here writing some JavaScript making twice what you make and working probably half the hours you do.

Yeah we're still in kiddie shit days playing with firecrackers and poprockets.

Wait till we get our M2 Browning.

I suspect that the first iterations of the M2-equivalent already exist, we just haven't seen them put to use against visible targets.

It's been established that security itself does not increase revenue nor make the quarterly returns look good. Unless there's an incentive for key stakeholders to spend more resources to strengthen the security of their deliverables, it is unlikely for things to change in the near future.

Perhaps a change in KPI or regulation requirements may create such incentive to ensure appropriate actions are taken.

This is over-reported in my opinion.

Because this is most likely "teenager broke into a poorly secured shack and turned a random valve to be naughty", not "state actor sabotaged critical infrastructure".

How about "state actor could have easily sabotaged critical infrastructure but teenager got there first" ?

Sounds like we need more teenagers.

Teenagers are the OG chaos monkey.

(Still, the problem remains: if a naughty teenager can turn a valve for shits and cause a threat to public health, then perhaps that valve needs some access control.)

They specifically said "the heightened level of sodium hydroxide never caused a public threat," not the attack. The most important thing he had to do was inform the public that had been drinking their water all day that they were not in any danger.

This isn't downplaying the potential risk, basically every other thing said highlighted the risk.

> The attacker demonstrated intent and capability to inflict public harm.

This is if you take the facts in this story at face value. In my mind, if someone can raise the level of a chemical to become dangerous, you already have a problem. 11000 ppm sounds huge to me (1.1%). What if instead of an external hacker you had an internal disgruntled employee. What if you had a leaky gasket. The system should have some multiple redundancies to not allow a dangerous level of a chemical to end up in the water supply.

The reports on this incident have all stated that there were indeed multiple redundancies that helped prevent the high level from being actualized.

It's easy to say that, of course, but the fact of the matter is that designing systems with multiple redundancies is difficult and expensive.

Yeah, and it’s not like a city water supply is the type of thing where such expenditure would be justified!

(I recognize you also mentioned the difficulty, I just wanted to poke some fun :P)

Even once isn't enough, or we learn the wrong lesson. Ask Flint, MI how well we've done fixing that catastrophic event.

For Americans who stopped following the Flint water crisis after its first few gritty chapters, it might come as a surprise how far the city has come: Today, after nearly $400 million in state and federal spending, Flint has secured a clean water source, distributed filters to all residents who want them, and laid modern, safe copper pipes to nearly every home in the city that needed them. Its water is as good as any city’s in Michigan.

Sounds like they did a great job fixing it.


Only took, what, 6 years? And still nobody has been held accountable?

6 years does feel too long, but it does seem like a case is still making its way through the courts. The former governor (and other officials) just had charges against them announced a few weeks ago: https://www.cnn.com/2021/01/14/us/michigan-flint-water-forme...

Are you moving the goalposts here? You seemed to be bemoaning that things don’t get fixed after a single crisis (n=1), not that fixes are good but move slowly. Flint got fixed.

Also other cities won't upgrade their infrastructure until they have an equally public event, at which point it'd be too late.

Currently the case is ongoing and up to 9 face charges: https://www.npr.org/2021/01/14/956924155/ex-michigan-gov-ric...

To be fair, public works projects (especially replacing pipes and water treatment facilities) take a long time.

The government drags its feet as much as possible when it comes to holding the government accountable.

I think it's surprising it's even going to trial.

former governor goes to trial this year

A “great job fixing it” is an exceedingly generous characterization. And the link you’ve provided makes it clear the issue hasn’t been “fixed” just by replacing the pipes. The people of Flint don’t trust their drinking water. (With good reason!) As long as these (predominantly low-income) people feel the need to spend money on bottled drinking water, the issue isn’t fixed just because a lab has determined the water is safe to drink again.

So with the weight of contrary scientific evidence, the people still don't believe the water is safe? This sounds like the same arguments about climate change and election fraud that get dismissed out of hand.

> So with the weight of contrary scientific evidence, the people still don't believe the water is safe?

That is what a reasonable human would expect "the people" to do when multiple agencies overseeing the utilities were initially so irresponsible/incompetent/negligent that water superheroes from three states over had to swoop in to warn residents their fucking pipes are poisoned[1].

[1] https://en.wikipedia.org/wiki/Flint_water_crisis#Virginia_Te...

Edit: clarification

That's not at all equivalent. The issue is they were lead (heh) to believe the water was safe before. So when you're told the water is safe, but then it isn't why would you believe them next time?

safe -copper- pipes?

LOL, copper is poisonous in the presence of corrosives. They are just replacing one poison with another.

Not even in the same ballpark. Humans actually need trace amounts of copper. Regulations for safe copper levels are almost 100 times that of lead.

All pipe materials are poisonous if enough is ingested; lead, however, is toxic in extremely low amounts, while copper is actually needed in low amounts. You think PVC is better?

Lead is the worst by a mile, but if you expect to have water in the very low or very high pH range, or water with a lot of chemical activity, or water that's too hot, copper is not totally safe either.

We mammals are relatively well equipped to deal with it, but the real problem here is long-term exposure. It can produce several forms of internal bleeding in the gut and permanently harm the liver and kidneys. For some reason there is also a lot of copper floating around in Alzheimer's patients.

Moreover, copper is particularly toxic to aquatic life and invertebrates, causing acute poisoning. I would not use that water in an aquarium, for example. I have seen the stuff in action and it is devastating to fish.

Fish tanks require copper to maintain proper levels of nitrifying bacteria; the first two nitrifying stages require copper to convert stuff to the "safe" nitrogen that requires flushing to remove (unless you have real plants in the aquarium, then you rarely need water changes). Furthermore, copper is used to cure several fish diseases[0], so it's impossible for it to be as bad as you claim.

[0]https://smile.amazon.com/Seachem-67105650-Cupramine-Copper-1... for example

This is a myth. Fish tanks definitely don't require a surplus of copper. Not unless they are hospital tanks. I have experience using copper to cure fish diseases and can guarantee you that it is notoriously treacherous stuff to work with.

What is, what I'll call from my layman perspective, the leaching factor of copper vs lead pipes though? As in, how much copper vs lead ends up in the water being transported?

The choice is not between copper and lead; it's between copper and PVC, steel... or even ceramics. Lead is unsuitable for drinking water.

Stainless steel would be better, excluding exotic materials. You can't really get iron poisoning.

Turns out you can, though probably not from water pipes (if I skim the article correctly, it'll need to be ferrous iron).


Sodium hydroxide, also known as lye and caustic soda,[1][2] is an inorganic compound with the formula NaOH. It is a white solid ionic compound consisting of sodium cations Na+ and hydroxide anions OH− .

Sodium hydroxide is a highly caustic base and alkali that decomposes proteins at ordinary ambient temperatures and may cause severe chemical burns. It is highly soluble in water, and readily absorbs moisture and carbon dioxide from the air. https://en.wikipedia.org/wiki/Sodium_hydroxide

Lye is actually a very common household chemical. It's used in cooking and even grooming products--quality shaving creams have sodium hydroxide or potassium hydroxide. You can easily buy large containers of the stuff everywhere, including your local hardware store and, traditionally, grocery store. (These days you may need to go to a speciality grocer to find food-grade lye in bulk.)

The nice thing about lye is that it's typically sold in solid form, except in one of the most common household products, drain cleaner. Drano is sodium hydroxide in solution with aluminum, which reacts in the presence of water, presumably to help mechanically break up clogs. Solid lye tends to be safer as there's less chance of ingestion, and less chance of it lingering on your skin--it turns your skin to soap.

Much more dangerous is stuff like sulphuric acid, which you can buy (at least in California) in concentrations of over 95% at the hardware store as Rooto and similar drain cleaners. That stuff is nasty as it's in liquid form, easy to spill and even inhale as an aerosol. Also not a good idea for pipes despite how they're sold, because such acids are hell on cast iron--i.e. what main sewage drains are made out of in older buildings and in jurisdictions that aren't favorable to PVC.

There are so many ways for evil people to do evil things it's amazing (and, frankly, fascinating and even instructive) that it doesn't happen more often. I'm curious to see how the situation will change as it becomes easier to be evil while remaining anonymous and remote. Still, I imagine it would be extremely difficult if not impossible to actually cause significant harm by changing the concentration of lye in the water supply. For example, I'm skeptical that there would be enough lye in the dispenser at the treatment facility to cause serious harm. The worst effect would probably be disrupting the pH of the water system and possibly causing other ill effects, such as by leaching lead or rendering antimicrobials less effective.

>That stuff is nasty as it's in liquid form and easy to spill and even inhale. Also not a good idea for pipes despite how they're sold because such acids are hell on cast iron--i.e. what main sewage drains are made out of in older buildings and in jurisdictions that aren't favorable to PVC.

Eh, at the end of the day it's just acid. You can always throw something basic at it to neutralize it. It's not like it's a heavy metal.

I agree with your level of alarm in principle. I'm curious whether the "several redundancies" are generally sufficient, and pervasive across plants.

Some systems have redundancies and some don't.

Here is an article where plant operators accidentally left a sodium hydroxide pump in manual mode, dumping way too much of it in one go and causing chemical burns to customers. There were pH alarms, but nobody heard them. https://www.google.co.uk/amp/s/www.telegram.com/article/2007...

This is the website of another water treatment company explaining what processes they have in place to prevent an issue like the above: https://www.mwra.com/01news/2007/042507nosodiumhydroxide.htm

It’s incredible that anyone thought it was a good idea to connect this kind of infrastructure to the internet.

It’s new public management. Why have a hundred people maintaining these things when you can just have 1 person do it remotely.

I don’t think it’s a good idea either, but it’s exactly why it happens.

While I said it’s new public management, it’s also a common management style in any form of private sector enterprise.

I mean, you can still have one person control it without connecting to the outside world...

Why have one person onsite controlling it, when you can have one person offsite remotely controlling MULTIPLE sites!

... and have that person be a contractor from another country, which "saves taxpayers money".

I think you’re misinterpreting that remark? Probably their main point is that the water was always okay. That’s what the people who live there are going to want to know about first.

> But the language downplaying the severity will mean this all blows over in a couple of months, without the actual mobilization/funds to properly secure not just this one site, but any similarly affected plants.

Our society’s inability to prioritize the solving of obvious problems is pervasive enough that it’s probably due to more than some badly chosen verbiage.

> We'll need to see n=1 disasters with this first, before there's public outcry to fix it.

It’s worse than that. It would have to be a really awful disaster, people would need to understand the causes and effects, and the prevention of future disasters would need to not threaten established businesses and political interests.

Even n=1 is not enough if Covid is any evidence of a pattern.

Seveso - Bhopal - Chernobyl - Fukushima is n > 1, but most people forgot Seveso and Bhopal. n=1 needs to occur every ~10 years.

There's no downplaying. A vulnerability was identified and system with multiple levels of protection worked.

Call them. Someone may be willing to listen.

Yeah I totally agree here. This is a public official saying "Alright folks nothing to see here", and the reporters walking away writing down "nothing to see here" in their notebooks. They got their quote.

While horrifying, this is just the tip of the iceberg.

Huge amounts of important infrastructure sits internet connected due to individual laziness, coupled with a lack of willingness to understand and think about cyber security. Often it seems simply from a lack of willingness to spend money on an ongoing basis to maintain anything.

There's a culture and mindset in ICS that you don't change what isn't broken. And stability and reliability are important - this is an industry where you don't install patches for fear of breakage or regression.

When the world shifted towards "code fast and break things", the ICS world didn't accept this change. They can't have Windows (yes, Windows) reboot unexpectedly to do an update. That pretty much rules out the supported versions like Windows 10. I mention consumer versions because a culture of multi-layer outsourcing means nobody wants to pay for a server version - an OEM Pro version of Windows saves a subcontractor some money.

That OS won't be patched, unless the SCADA software vendor has validated the patch with the software being run. Expect crazy things like Windows XP SP2 (not 3) to be requirements. Everything is about stability and using tested configurations.

You could be forgiven for thinking this is less scary, as you can airgap this, and treat it like a fixed appliance. Often that doesn't last, and (if you're lucky) an unpatched VPN box gets thrown in front of it with a weak password. More commonly, some consumer grade remote access software gets installed, so a bean counter can count how many beans they're making or spending. Airgap eliminated.

The fix isn't single step - there's a need for more understanding about safety critical engineering in the IT world - the lack of testing and regression validation isn't acceptable to this industry. The ICS industry needs to be willing to pay for software maintenance and assured development processes. Simpler code that isn't running on full consumer operating systems is needed. And ultimately we need to go and replace systems that "ain't broke", but are insecure. And that's going to be expensive. No security appliances are needed here, just some basic common sense.

Expect to see versions of Windows you didn't even know existed in use in very important places... Seeing pre-NT or very early NT wasn't a huge surprise...

This is kind of horrifying. I find myself often coming to the conclusion that our increasing tangle of software dependencies is a massive liability to society. Suddenly code that was never designed to be "load-bearing"--intended to be part of a critical system, written in a hurry to glue together two components, or written for maximum performance and not maximum security, or code without proper tests, without quality control, code review, code written by hobbyists--finds itself into critical systems. And our amazing ability to constantly zoom out and plug and reuse software in huge amounts leads to this giant tangle of dependencies where everything is needed to get everything off the ground. And suddenly a JPEG vulnerability [1] leads to remote code execution.

I don't know what the answer is, but I don't feel super great about the future, given the giant sinkhole of dependencies that everything seems to be getting sucked into.

[1] https://www.techspot.com/community/topics/critical-vulnerabi...

This is one of the main challenges in information security (and probably in IT in general) - code is completely invisible to a lay person, and even for a professional it takes a lot of time to tell the difference between good and bad code.

A hardware device, say a water pump made by an industry leader, would look very different from one made by a hobbyist in a garage. It's obvious even to a lay person that a door made from thick steel is more break-resistant than one made from hardboard. But code written by a summer intern without any review or tests looks not much different from code written by seasoned professionals and carefully tested. Even if some software is full of RCEs, it takes a lot of time and motivation to find them.

A huge number of dependencies and many layers of abstraction make it even harder to see the software system as a whole.

This article specifically calls out Teamviewer as the vector:


It's baffling to me that TeamViewer is still used in a corporate setting, after all the vulnerabilities it's had over the years.

This was nothing but gross negligence from whoever is in charge of their IT infrastructure.

Not to mention TeamViewer is the go-to software for scammers posing as tech support.

Let's not blame the knife company for the actions of a couple murderers.

To put it a different way: "Teamviewer: so reliable and easy to use that even your grandparents can install it."

Deviant Ollam, well known presenter at various hacker conferences and professional penetration tester, has alluded to this a number of times in his presentations featuring photos of him in such water facilities:

Featured in a few of these videos, hotlink to the slide: https://youtu.be/Rctzi66kCX4?t=2438

> 100 -> 11100

I find it slightly amusing that it looks like the hacker just added two 1's to the front of a text box, rather than chose a specific value

How can they have remote access without any access controls and logging?

an operator noticed someone had remotely entered the computer system that he was monitoring

How could they not know immediately who it was (or at least, whose credentials were used)?

Do they even know that it was a hacker, and not someone trying to type "111.00" ppm where either they or the software dropped a decimal and it became "11100"?

They released a little more information -- they are apparently using some sort of remote desktop software for remote access:

The intruder broke into the system at least twice on Friday, taking control of a plant operator's computer through the same methods a supervisor or specialist might use. The hack didn't initially set off red flags, because remote access is sometimes used to monitor the system or trouble-shoot problems, Gualtieri said

So it almost certainly doesn't have enough auditing to know who made a change.

The other question besides "why is this even on the Internet" is "why does this even need to be adjustable remotely from a computer?"

Maybe I'm just really old-school, but it sounds like this sort of thing should really be something that's set once to the right values, and then if it ever needs to be changed, someone has to physically access a building and adjust a physical control --- likely alongside doing various other maintenance tasks on the system.

This is different from remotely viewable, which is a much better idea, and I dare say should even be public.

As long as whoever decided to connect critical infrastructure to the Internet is not held accountable, we will hear more and more such stories. People are driven by incentives. There is a weak incentive for connecting systems and practically none against.

The press conference can be found here:


If my vague memories of high school chemistry serve me correctly, then 11,100 ppm is 0.2775M, which would have a pH of about 13.4. That's definitely not something I'd want to drink.
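The arithmetic, as a quick Python check (assuming the usual dilute-solution approximations: ppm ≈ mg/L, NaOH molar mass ≈ 40 g/mol, full dissociation):

```python
import math

ppm = 11_100                       # the attempted NaOH concentration
grams_per_liter = ppm / 1000       # in dilute water, 1 ppm ~ 1 mg/L
molarity = grams_per_liter / 40.0  # mol/L of NaOH, hence of OH-
ph = 14 + math.log10(molarity)     # pH = 14 - pOH, pOH = -log10[OH-]

print(round(molarity, 4), round(ph, 1))  # 0.2775 13.4
```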

11K ppm is 1% of the entire water supply. I doubt the plant had that much lye in stock.

But they probably had enough lye in stock to start producing water at that concentration.

I hope they followed industry standards, and used a pump that was simply not able to supply harmful amounts of the chemical even when running at maximum speed.

Surely a water treatment plant that opens up controllers to the public Internet follows all industry standards!

Why would 11,100 be recognized as a valid value to begin with?

Versus: why was this remote-facing, and why don't they have a definitive answer on whether it's a USA or non-USA IP address?

Sounds like no logs; it probably showed up on Shodan and someone wanted to have fun/many people did.
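The "why would 11,100 be recognized as a valid value" part has a boring engineering answer: validate setpoints against a hard limit before they ever reach the pump. A hypothetical Python sketch (the 160 ppm ceiling is made up for illustration, not a real plant limit):

```python
MAX_SAFE_PPM = 160  # hypothetical hard ceiling, well above the normal 100

def set_dosage(requested_ppm):
    """Reject any setpoint outside the engineering limits."""
    if not 0 <= requested_ppm <= MAX_SAFE_PPM:
        raise ValueError(f"setpoint {requested_ppm} outside 0..{MAX_SAFE_PPM}")
    return requested_ppm

set_dosage(100)  # normal operation: accepted
try:
    set_dosage(11_100)  # the attacker's value: rejected, never dosed
except ValueError as e:
    print(e)
```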

The geolocation of the IP isn't all that useful, it could be a VPN or an owned machine.

Disagree. If it was in the USA, it is easily possible to enforce a warrant, and maybe you're lucky and it's residential.

If it was a VPN, you know it's a more competent person or org, and most VPNs also keep logs.

> Disagree, if it was USA, it is easily possible to enforce a warrant and maybe you're lucky it's residential.

parent mentioned "owned machine" (as in, "hacked" not "ownership"), which means you might be able to find the source if you can seize the computer and analyze it in time. If the attacker wiped all traces from the computer then at best the trail ends there and at worst an innocent person gets blamed for it.

>If it was an VPN, you know it's a more competent person, org, and most VPN's also, keep logs.

"no log" is a commonly sought after feature in VPNs, and if you're planning to do shady stuff I doubt you'll go with a logged vpn.

>"no log" is a commonly sought after feature in VPNs, and if you're planning to do shady stuff I doubt you'll go with a logged vpn.

It's marketing puffery: they all log, they all keep it, and they will comply. Many VPNs say "no logs," and then logs leak. You don't have control over that system/service, you cannot fully verify it, and there is much mistrust around it for nefarious deeds.

>parent mentioned "owned machine" (as in, "hacked" not "ownership"), which means you might be able to find the source if you can seize the computer and analyze it in time. If the attacker wiped all traces from the computer then at best the trail ends there and at worst an innocent person gets blamed for it.

So, yes and no. The IP address will determine location and possible people of interest. It could also lead to a chain of further documentation/possible past interest/threats.

The wiping/forensics are imo hard to ensure for chain of custody, but if an IP address is traced to a residential connection, it's easy to grab a DNS log from that ISP, see what requests they made, and judge whether it was targeted, random Shodan scanning, or a possible hijacked/RAT machine.

More info never hurts, but "tracing" an IP address is the first step.

Even if it is a residential IP, it could be a residential proxy: a proxy on an end-user machine that allows an anonymous user to connect through its ISP for online activities.

Maybe to run a cleaning cycle once a year? But yeah, there is a lack of security there.

What evidence do we actually have that this was a hack and not maybe an accident or an inside job?

It sounds really suspicious that the hack took the form of some sort of remote control which was evident to the operator who was present there. At the same time, that operator wasn’t even suspicious the first time, because apparently remote control by supervisors was common.

I think there’s a good chance we’re gonna find that either the operator, or one of the remote controllers accidentally, or maliciously, made this change, and blamed it on a “hack”.

My first thought. While there is definitely major hype in infosec around ICS and other physical hacks, this sounds like either a disgruntled former employee or a current employee going postal. The Feds probably went along with this press conference for additional funding.

The whole town got really lucky. There was absolute intent to harm here: the attacker changed it from 100 to 11100, to fool a casual observer looking for the 100 pattern.

We have been told since around 9/11 that our industrial control systems are in really bad shape; I'm not sure whether anything has been done to strengthen them at all. Maybe someone who's knowledgeable can chime in with information. I see a lot of scope for controls and operational procedures that could be streamlined and standardized across the whole country, if we have the will.

I have an under-sink Reverse Osmosis water filter. If this wasn't caught I wonder if the RO filter would have removed the sodium hydroxide or not.

Efficacy of RO systems is highly dependent on incoming water pressure and temperature. I got a professional water test done on our RO system and was surprised at how much remained in our (hard well) water. It turned out that our incoming water temp was far too low for the system to reach peak efficiency.

Another aspect is TDS creep. The membrane only reaches the specified rejection rate after a few minutes of use. If the under-sink RO system is used frequently for small amounts of water, the frequent cycling will reduce the water quality produced.

My RO system can take the 450 TDS tap water down to about 30 under my normal use. If I close off the tank and run the water for about 10 minutes it will get down to about 20.
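Those numbers imply a rejection rate you can compute directly; a quick sketch (TDS figures are the ones from this comment, not a general spec):

```python
def rejection_rate(tds_in, tds_out):
    """Fraction of incoming dissolved solids the membrane removes."""
    return 1 - tds_out / tds_in

normal = rejection_rate(450, 30)   # ~0.933 under typical short-draw use
flushed = rejection_rate(450, 20)  # ~0.956 after a long continuous run
```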

Mine's tankless (Waterdrop brand), is TDS creep still an issue?

Yes, it's a problem with the technology, so anything using RO will have the issue to some extent. Fancier systems will automatically flush the filter to help prevent this, but it's rare on consumer-grade equipment.

The built-in TDS sensor shows 004 after filtering, but the unfiltered tap water is good here so that probably helps.

In the 90s there was a very campy TV Show called Superhuman Samurai. It was an Americanized Super Sentai show--think off-brand Power Rangers.

This very scenario was the plot of one of the episodes. Here's a link, if anyone remembers the show and wants a dose of nostalgia.


NaOH raised from 0.01% to 1.11%

That seems to be a pH 13.4 result. There probably isn't much in the water to buffer that.

It's baffling that the facility is even able to reach that number. There's absolutely no reason for a water treatment facility to put that much sodium hydroxide into water, so there is no reason to create hardware that can handle it.

How it could happen:

Pump sizes can do 10, 20, 40, 80, 160, 320, 640, or 1280 units.

Sometimes the water needs more chemicals. Maybe today it needs 10, but typical is 30, and it could go to 100.

That's just the water rate for winter though. In summer, people wash cars and water lawns. Triple the number, so 100 becomes 300.

The town is growing fast, at least when the equipment is installed. (thus the upgrade) In case the fast growth might continue, triple the number. The 300 becomes 900.

Well, the smallest pump that will work is the one that does 1280 units. Also there is a federal grant from the EPA for big upgrades, and that one qualifies. Buy it.
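The sizing walk-through above, as arithmetic (pump sizes and multipliers are the hypothetical ones from this comment):

```python
PUMP_SIZES = [10, 20, 40, 80, 160, 320, 640, 1280]

required = 100   # worst-case winter dose
required *= 3    # summer demand (cars, lawns) -> 300
required *= 3    # headroom for fast growth -> 900

# Smallest pump that covers the padded requirement
pump = min(size for size in PUMP_SIZES if size >= required)
print(required, pump)  # 900 1280
```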

One thought that comes to mind: how do we know this same individual hasn't successfully gotten away with this elsewhere? How long would it take for people to report symptoms?

There are QA checks on the "finished" water. Automated, and manual (i.e. a chemist manually sampling and analyzing the water). In this particular case, a high pH would have indicated that something was wrong and would have quickly been investigated before severe system-wide effects occurred.

Can confirm (former water treatment plant employee), but as far as manual checks go, these are usually carried out first thing in the morning when the QA lab people get in.

Individual? I don't see any evidence that points to whether it is a single, collective, state, private, or any other actor.
