1. Difficult to identify the owner: a lot of the devices are on mobile networks that don't point to an obvious owner.
2. Unknown criticality: is it a demo system or something used in production?
3. Security budget: lots of smaller utilities don't have a budget for buying cyber security products.
4. Uneducated vendor: sometimes the vendors of the device give very bad advice (https://blog.shodan.io/why-control-systems-are-on-the-intern...)
That being said, based on the numbers in Shodan the situation has improved over the past decade. And there's been a large resurgence of startups in the ICS space (ex https://www.dragos.com, https://www.gravwell.io). Here's a current view of exposed industrial devices on the Internet:
I've written/presented on the issue a few times:
They are a problem the way drunk driving is a problem.
You just don't ever do it. Ever.
No cyber security products are needed. No budget required.
These "startups in the ICS space" are like turbotax/HRBlock: only continued idiocy allows their business model to exist.
I regularly work with these sorts of water districts (larger, better funded ones as well). In reality, some of these small districts may only have 2 or 3 SCADA operators on staff. Sending them home with a pager, a tablet, a VPN password, and some overtime pay is a lot easier to get past the city council than taking on another two employees to cover the night shift for those rare events that need to be handled ASAP.
I could share some real horror stories, but it wouldn't be professionally appropriate. Suffice it to say, this story did not surprise me at all.
I reject this line of thought.
A small rural water district can run with looser tolerances and looser guarantees - and have done so for decades.
They should spend half the time (and a quarter of the money) setting up systems that fail safely and revert to known states and operate with looser tolerances.
As for telemetry ...
I am not joking at all when I say that a green light on the building that turns red and everyone in the county knows to call either Jed or Billy if that light is red is a completely reasonable system. It's a small rural water district (your words) after all, right?
I like your thinking and I try to espouse it myself (keeping things as simple as they can be-- keeping "technology" out of voting, not connecting things to networks that have no business being connected, etc). Short of a Battlestar Galactica-type "our machines rise up and try to kill us" event, though, I don't think the average person will ever understand the vulnerability inherent in networked computers or the risk/benefit tradeoff of connected vs. disconnected systems.
Even down at the level of local politics in a rural setting, the "optics" of bringing technological solutions to bear on problems is seen as forward-thinking-- particularly when it "saves" the taxpayer money. I can't imagine trying to convince a local water board that moving away from a PLC-based system with a remote support vendor would fly, even citing this example.
This event will be another opportunity for more security vendors to cite in case studies justifying their products. More layers of garbage will build up on a foundation of protocols and design philosophies that grew up in an era of disconnected systems with lower stakes and a less complex threat model.
I don't see big money to be made in providing sensible levels of connectivity and security to this kind of infrastructure. I don't see industry stepping-up because of that. Maybe regulation is the answer, though I'd just expect regulatory capture to take over, and have it become another "PCI". Maybe a lot of people have to die before society takes it seriously, as has been the case with so many other safety codes over human history.
It makes me really sad, embarrassed for our industry, and more disappointed in human nature.
I can confirm these exist, as growing up my father's phone number was on some of those signs. Fancy systems would automatically call an answering service, but for smaller ones, he'd get a call (or page, given the era), usually from a neighbor or something.
It wasn't limited to rural areas either. I distinctly remember him taking service calls in ultra-wealthy neighborhoods on the weekends. One time, the gentleman who called in let my sister and I watch movies in his personal movie theater while my dad fixed the system.
The largest issue is that you must gather the data from sensors that can't interfere with the thing you are measuring, and that you must process it with computers that don't connect to the ones controlling the process. The first one is really just good engineering practice, and the second is already cheap and getting cheaper by the day.
Also, whatever you do at your process control, you should have some emergency overrides that kick in when conditions get too abnormal. Those should be simple (AKA, no computers if possible) and stand-alone. Looks like they got this one right.
By running with loose tolerances and loose guarantees (and keeping systems as simple as possible) they remove the need for these tools - and their attack surface.
The action is stupid, but trespassing idiots get caught quickly - that's just "survival of the fittest" mechanics.
If you never trespassed in your life then you were probably not smart enough to get away with it?
As a kid I trespassed plenty; it often takes 'security' hours to even spot you.
His comment says that small rural town water treatment plants don't have to run 24/7. Not sure what you thought he was saying.
This sounds like it should be a punchline to one of those funny programming jokes
There doesn’t seem to be a whole lot of thought around “is it even necessary for this three-ton industrial robot to be dynamically reprogrammed from a service center in Stockholm,” and it seems like everyone just assumes that everyone else will do a perfect job implementing and configuring security. I fear the tune will only change after the first multi-million dollar lawsuit, and I hope all that costs is the money.
That's because the value proposition is only obvious when you substitute for "Stockholm" a city from one of the countries with cheaper labor.
In my limited experience with Industry 4.0, it smells like a combination of forcing a goldrush to sell shovels (so many players that want to be the platform which connects everything) on one side, and ongoing search to turn capex into opex on the other (that latter thing is a trend in pretty much all industries, though). I think there's enough companies that would happily replace their control systems (and control engineers) with prepackaged control-as-a-service which they don't have to know anything about, supplied by the lowest bidder, to which they can shift any liability if anything happens. This kind of setup does require remote access.
There are some packaged solutions but they all involve lots and lots of expert design, setup and management.
So quite often it actually is for interaction with a possibly expensive engineer from some random place in the world who has specific knowledge of the system involved, as well as for enabling remote operations when facilities are in a less accessible location.
As for IIoT 4.0 - truth is a lot of industry was already heavily connected, and many functions I've seen so far are about getting deeper integration between ERP, MES, and individual work cells and workpiece tracking.
Even when the workpiece is fried chicken waiting to be put in a bun, or a cut of pipe that will next need to be appropriately cleaned, bent, welded, painted and finally become part of a ship assembly.
The answer is Yes. Very very yes. Especially when said programmer can't travel across borders due to Covid restrictions.
But even without Covid, it's a lot cheaper and more time-effective to let people look at stuff and fix things from Stockholm, or Antwerp, or Warsaw, or whatnot. Else every time your robot sneezes, you have to book plane tickets and a hotel. But worst of all, you risk losing many hours of production due to travel time.
In contrast, with remote operation, you can log in, fix problems in well under 30 minutes, and Get Production Running Again.
In a situation where any kind of stoppage basically means the factory is Not Making Money, you can see the very strong value proposition here.
Putting a defenseless PLC or robot controller on the open internet is clearly not the best of plans.
(though the amount of people using teamviewer is telling)
Several ICS vendors like Tosibox and EWON make devices to accomplish this. I think Tosi has the more secure model, though I hate their proprietary dongles.
VPNs are also used pretty successfully here. Several large companies also don't let you directly connect to anything. You VPN in and connect to a machine with Citrix, and then you can use whatever was set up for you there. Usually whatever version of Logix/Studio 5000 the plant is on. You have to talk to someone in IT to get your files moved in/out.
I think Amazon went a different direction and uses Versiondog to monitor their automation systems and check for changes. I don't work there or know anyone on their automation team so I'm not aware of the details.
Still, I think you can have external access and be secure. You just need to balance things out with your business needs.
I support an environment almost exactly like this (albeit in a small manufacturing company). I don't love having one of the controls networks attached, in any way, to the LAN, but I understand the business requirements justify it.
It happens that there's a controls system running devices that could cause massive environmental impact in some malfunction scenarios. I am happy to report the plant, being held to account for things like public evacuation plans and hazmat filings with local first responders, has never asked about connecting that network to anything. That would be a walk-out-the-door type scenario for me. I worry that they'd just find somebody who wouldn't have those scruples, though.
If I may speak slightly out of turn to a stranger, using a possible & currently imagined future person less scrupulous does not modify in any way your obligation, however you perceive it, to act ethically.
Nobody was suggesting that it does. The person you're replying to specifically said the opposite.
I don't have those resources, nor do my Customers. I've got the various mix of Windows, Linux, and embedded devices that the Customer has purchased to serve their business applications. They (and I) don't have the clout or purchasing power to demand application vendors bend to our desires, so I'm left with making the best out of sub-optimal architecture, protocols, etc.
Google says, in the BeyondCorp III paper under the heading "Third-Party Software":
Third-party software has frequently proved troublesome, as sometimes it can’t present TLS certificates, and sometimes it assumes direct connectivity. In order to support these tools, we developed a solution to automatically establish encrypted point-to-point tunnels (using a TUN device). The software is unaware of the tunnel, and behaves as if it’s directly connected to the server.
So, they just do what I do and throw a VPN at it, albeit a client-to-server VPN serving an individual application rather than a client-to-network VPN like I might.
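For the curious, the core of that trick is just a TUN device: a virtual interface whose packets get handed to a user-space process, which can then encrypt and forward them however it likes while the application thinks it has a direct connection. A minimal, hypothetical sketch of the general technique on Linux (needs root) -- this is the common pattern, not Google's actual implementation:

```python
# Minimal sketch (assumption: Linux, run as root) of creating a TUN device,
# the same kind of user-space tunnel endpoint described in the quote above.
import fcntl
import os
import struct

TUNSETIFF = 0x400454ca   # ioctl request to configure the device
IFF_TUN = 0x0001         # layer-3 (IP) tunnel
IFF_NO_PI = 0x1000       # no extra packet-info header

# Open the clone device and ask the kernel for a TUN interface named "tun0".
tun = os.open("/dev/net/tun", os.O_RDWR)
ifr = struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI)
fcntl.ioctl(tun, TUNSETIFF, ifr)

# Applications route traffic to tun0 as if it were a normal interface;
# whatever we read here is a raw IP packet we could encrypt and forward.
while True:
    packet = os.read(tun, 2048)
    # ... encrypt and send `packet` over an authenticated transport ...
```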
I do my best to segment the networks at my Customer sites, to use default-deny policies between security zones, to authenticate traffic flows to users and devices where possible, and when unable (because of limitations of client software/devices, usually) restrict access by source address. Within each security zone I try to make a worst-case assumption of an attacker getting complete access to the zone (compromising a host within the zone and getting arbitrary network access, for example) with things like private VLANs and host-based firewalls. I have to declare "bankruptcy" in some security zones (usually where there are embedded devices) where I have to rely only on network segmentation because the devices (or vendors) are too "stupid" to have host-based firewall functionality, authentication, encryption, etc. (These are the devices that fall over and die when they get port-scanned, yet somehow end up in mission-critical roles.)
I think the harsh reality is that, operating at the scale of small to mid-sized companies, IT and infosec are forced into a lot of bad places by vendors who don't care, and management who are focused on the bottom-line and who don't see security as anything other than something to purchase insurance for.
To put it another way: I have to make all this crap work. If I make it too difficult for the end users to work or for the vendors to support I'll be kicked to the curb and they'll find somebody else who will be less "difficult".
The risk of adding remote access to critical systems is the introduction of globally accessible single-point-of-failures. Given the nature of software, such an attack has an unlimited amount of time to be perfected before deployment and when finished can be deployed at effectively zero cost and complete in effectively zero time which provides no meaningful way to respond except with already deployed automated systems. So, the risk added with remote access is the risk of malicious catastrophic total system failure.
In this case, the water treatment facility treated the water for ~15,000 residents. In a comparable case many years ago, a similar event occurred at a water treatment facility that treated the water for ~12,000 residents, which resulted in 100 affected individuals before the effects were detected. So, we can reasonably assume that undetected water treatment tampering on a facility serving ~10,000 individuals will result in about ~100 affected individuals before the effects are detected. If there exists a way to tamper with a water treatment facility that would result in deaths for the affected individuals, which is quite likely, then that means the risk of remote access to the water treatment facility is ~100 deaths. So, as a society, we should ask the question: What is the standard of care that should be applied to a system where failure may result in the deaths of 100 people? And any business that wishes to add remote access to such a system must demonstrate to the satisfaction of society that they are taking that degree of care. It is not the role of society or the people to suffer for the convenience of business.
And in this case, I am certain that they are not taking an appropriate amount of care. The fact that you honestly suggested that an IT department would shove an AP in the ceiling for their convenience shows just how low our expectations are. In any other industry, such an act would be, in no uncertain terms, criminal negligence. That our standard assumption about the standard of care taken is criminal negligence shows just how far any of these companies is from actually deploying systems that have external access and have adequate security.
Businesses need lower cost because they are under price pressure. Especially with small utilities. Remote access is one of those ways to lower their costs on personnel or vendor support.
There is still a whole lot of low hanging fruit in automation for improving security and access control. We're not going to get it from Rockwell for sure though.
Just so I am clear, doing what you say they are doing should be so unacceptable that it is not even viewed as an option. Anybody attempting to do so should incur costs so great that there would be no competitive advantage to offloading risk to society to the detriment of the people as the costs of doing so outweigh the benefits. If that prevents businesses from making certain profitable decisions due to the collateral damage they will cause then that seems like their problem.
So right now the things the OP posted are pretty much standard practice in most industries. I mostly work in the EU; I have worked with construction companies, medical companies, hospitals, and telcos, and practice like this is standard.
They will have some ungodly expensive security product that makes them change passwords every 14 days and makes the intranet barely usable, but they will have holes the size of mountains in their infrastructure, because of this vendor or that cost saving, etc.
When downtime is expensive, the pressure from the business is to err on the side of being able to get experts in to troubleshoot the system as easily as possible, vs guaranteeing that bad guys can't get in. The first they see all the time, and the second seems unreal until it actually happens...
They also have a satellite office over in Clearwater, Florida (which is trying to be like a little bay area copy, v2/3)
Interesting, but TeamViewer has also been exploited and leaked creds, and it took three years to confirm it: https://www.bleepingcomputer.com/news/security/teamviewer-co...
Or, if the client computer browsed a site, it'd actually open an SMB share on the perp's computer: https://www.bleepingcomputer.com/news/security/teamviewer-fi...
and a few other interesting vulnerabilities, hmm.
Teamviewer on a desktop, probably with a shared credential isn't very secure. Knowing this though, I doubt that it was a teamviewer exploit. My guess would be a disgruntled employee since they knew what to get into to change chemical set points.
I would be interested to know if the TeamViewer account in question had 2FA... probably not.
It's OK for a huge city operating many water treatment plants to decide that it is more efficient to automate and centralize and secure the network. It is horrendous that this is seen as the cheap solution for a small town.
What will stop the local city council from being compliant only on paper, i.e. doing a tick-box exercise and saying that their summer IT intern is the security department?
It would of course require significant political will to create these institutions and system of laws and regulations, but it could be similar in spirit to the kinds of controls the military has for software vendors that want to work with it.
Until the decision makers who demand the interconnection of these networks are held accountable, it isn't going to stop.
The cases I've seen have been to facilitate 24x7 off-site PLC vendor support access. I certainly see the business argument for the economics of off-site support for infrequent/improbable failure scenarios. At the very least, though, some type of physical interlock could have been employed (at the expense of some response time).
Edit: I think controls can exist to make this kind of situation tenable for at least some types of industrial controls applications. When you start getting to things like municipal water and power I start getting more antsy.
Honestly, I think this is a fine outcome. There is a dollar value per life. I don't think we're undervaluing the life yet.
Likewise these consultants are not just coming in and pointing their fingers at the obvious ICS on the internet. They are also providing services to understand why they were attached in the first place and where that process broke down, how to keep the current ease with which to operate the system, and implement the transitions.
In real life, the internals of a water plant are behind locked doors. Not everybody from Nairobi to Nantucket can get in and do as they like.
I'm afraid that trust in the public is definitely not the way to go with infrastructure and networked control systems.
I'm fairly sure the Iranian ultracentrifuges were not connected, and were hacked anyway. Stuxnet was complicated, but being disconnected is not a 100% protection.
We're fighting to keep these people from using unlicensed copies of TeamViewer for their primary access.
Nah, Dragos knows their shit. They'd be around even if ICS had good security.
The contractor and integrators then move on to the next project and copy what they did last time. Rinse, repeat.
We've been actively pressing for realistic security and access control planning in the contract stage, but that's slow going in and of itself and still only affects new or upgraded installations -- on facilities with an expected lifetime of 10-30 years.
Then look for a list of open modbus ports on the Internet and be wowed at all the industrial machinery that is just sat on the Internet...
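If you want to see the scale for yourself, here's a minimal sketch using the official `shodan` Python library (assumes you have an API key; Modbus typically listens on TCP port 502):

```python
# Sketch: count and sample hosts on the public Internet exposing Modbus (port 502).
import shodan

api = shodan.Shodan("YOUR_API_KEY")    # placeholder key
results = api.search("port:502")       # devices answering on the usual Modbus port

print(f"Devices exposing port 502: {results['total']}")
for match in results["matches"][:10]:
    print(match["ip_str"], match.get("org"), match["location"].get("country_name"))
```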
That machine's interface was a Windows 95 (YES!) German language version. I am not German. I do not speak/read/write German. I was in that factory's IT admin & support. Nobody in that factory's operations staff could read German. So the rule was "we never touch this machine - never EVER. Anything that goes wrong (sounds, visuals, etc.) we ring the bell, escalate, get the vendor in."
Sidenote: For the youngsters, W95 was an OS by Microsoft, from before you were born, and it did not have a multi-user/access-control environment (admin = god, user = cannot install software, etc.)
The machine had two 'terminals'. One ON the machine (physically - on the front of the 'bus') and one 'remote' (50m away) in an office with a huge window where you could observe the machine. Both screens displayed the exact same desktop (Win95, German) (basically a single computer with two monitors 'duplicating').
Genius operations staff got bored looking at a machine with no errors/faults (German built!) and installed software that came along with adult video CDs (we're talking early 00's). The geniuses were watching porn on a machine that was worth many millions and was the production machine. When they were watching porn, it was being displayed on BOTH screens. Factory floor, AND office 50m away. Sound and all...
So.. adding to your points:
5. System limitations and customisation/hardening (no need for extra software - just basic security hardening/configuration)(win95)
6. Uneducated users (employees installing a video player from an adult video CD)
Another problem is that when the security systems get in the way of expediency, there's always somebody around who can disable or severely cripple the security to make it easy for people to e.g. work from home during a pandemic.
Yeah, and even the best security practices aren't going to work too well if someone drops a nuke on your facility.
Stuxnet was an extraordinarily sophisticated attack well beyond what a typical industrial system will need to protect against, or even be able to protect against. It's not really in the same league as anyone being able to just remote in and change settings, and while it's realistic to expect a bloke called Steve who runs the computers at the water processing plant to prevent someone just remoting in willy-nilly, it's not as realistic to expect him to defend against two nation-states working together explicitly targeting that facility.
And the target system also had security systems well beyond the capability of your local water treatment plant. Let's not forget that these assets deemed as critical infrastructure could be the target of nation states.
All I'm saying is that not being connected is only a small part of security for industrial systems, and that some people wrongly rely on it being enough.
As with this incident, operators were physically present. That seems to be the real lesson (even if - see other war stories on this thread - operators tend to themselves have a creative approach to network security).
The attack worked exactly as designed - wasting time, destroying equipment while being stealthy. Had the attack tried to destroy all equipment at once, it would have been spotted immediately.
My point is, physically-present operators did spot AN issue immediately. It wasn't properly attributed to malice for months, sure, but they could still mitigate during that time.
Without their presence, would you (as attacker) really bother with all the stealthiness? It certainly hasn't seemed to avoid long-term attribution. Set the controller to +INF RPM and let whoever pores over the logs in the morning spit out their coffee.
If it was airgapped it wouldn't be available to easily be used by a nation state to attack infrastructure in case of other simultaneous attacks.
That's not necessary: just make the board of directors of the companies that operate it, have it on their premises, or use it personally responsible. That should give them more than enough incentive.
A strong cross-disciplinary startup could make a killing in industrial automation. (And extant companies that remotely meet that criterion already seem to do so.)
I wanted to add this point, because the lack of security measures and the convenient existence of hackers allows a company plausible deniability.
As long as companies are not legally forced to take precautionary security measures, they won't.
And it plays into their advantage, because insurance policies rarely have clauses regarding minimum security measures.
As for China it's not impossible that they are already monitoring for that and blocking Shodan from accessing their Internet.
Maybe because it's comparing the entirety of the United States with much smaller countries like Italy and Spain.
A comparison of the United States with the European Union would make more sense.
The question is, why are the telecom providers allowing this? There's also a lot of legacy stuff they don't want to touch, as it may violate the terms/contract, and bandwidth isn't the issue, so telecoms largely ignore it as they're just a bridge.
I can't immediately verify the veracity of the claims made by the sheriff but, the fact that the authorities *set up* a public-facing and/or remotely accessible system that allowed someone to change the water chemical levels is by far the bigger issue here.
A couple computers did bridge the two networks, but (IIRC) they were simple embedded systems doing read-only access (for compiling reports). I know when they did a pen-test, the pen-tester could compromise most of the corporate network (including service accounts), but they couldn't punch through to the SCADA systems.
For a small city, it's non-trivial.
The only thing surprising about this is that we don't hear about it tenfold more.
If you're someone who stands to gain from disrupting a nation's infrastructure... you don't tip your hand until it most benefits you.
If it really is the case that large parts of the infrastructure are very unsecure, expect to hear about it all at once, instead of little by little.
Meanwhile, we live in a world where VPNs are sold to the casual user while critical systems are left on internet facing networks.
I've never understood why, if these critical systems need remote access, it's not all done through a VPN of some sort. VPNs are not infallible, but they significantly raise the bar for entry from script kiddie to nation state real quick (depending on the choice of crypto), while choosing a well supported implementation ensures long term bug fixes and security patches.
Maybe not at home, but couldn’t they have a local 802.11 network set up for this?
It would be sort of darkly amusing if we've done the same thing to other countries, and so time bombs in infrastructure essentially replace nuclear weapons as the guarantors of Mutually Assured Destruction.
Cell towers are a really integral part of a carrier's business - I'm not certain whether most are owned by providers or other companies, but either way the folks that put the tower up owe the customer (be it a phone user, a phone provider, or some subcontractor of the provider) an explanation and pay the costs of bad configuration... I'd also assume that making sure these towers stay up is someone's full-time job (likely multiple people) - while there won't be an employee constantly monitoring city water systems, since it would take so little of a single person's time.
Not a direct loss, but plenty of opportunity for indirect loss. Disrupting emergency systems is the first that comes to mind. Covert hacking and surveillance could also be used for assassination plots.
The difference here is that nobody takes responsibility for a water treatment works in the same way a mobile operator looks after base stations - most operators aren't putting their base stations anywhere near the public internet. When they do it's under very careful control, like with femtocells.
The dilution of the solution stored in the hydroxide tank generally allows you to make this so.
Sleeping well at night is a great side-effect.
* The PLC is out to destroy the motors
* SCADA/IPC is out to destroy the PLC
Assuming these things in your design definitely helps with sound sleep. Especially when the company is running 3 shifts and you are on-call.
I have a problem with the language here. This was absolutely a public threat. The attacker demonstrated intent and capability to inflict public harm. That's the definition of a threat.
But the language downplaying the severity will mean this all blows over in a couple of months, without the actual mobilization/funds to properly secure not just this one site, but any similarly affected plants.
I've come to the conclusion that humans in general aren't very good at preventing catastrophic events we haven't seen before (see climate change). We'll need to see n=1 disasters with this first, before there's public outcry to fix it.
A successful attack is much less likely to be made public, for obvious reasons. We may have suffered from successful attacks and not know it (small enough concentrations of contaminants can’t be tasted)
Install water filters, HN. Use them. We have AquaSana under-the-sink in several locations through the house... no pitchers. Whole-house filters do not filter nearly the same variety of crap that under-the-sink and PUR pitchers do. Say no to Brita. Learn your NSF ratings and choose wisely.
You’re only paranoid if you’re wrong.
Most US tapwater is fantastically clean and drinkable, and doesn't generally need a filter. The Safe Drinking Water Act is pretty powerful stuff.
I use one of those under-the-sink inline charcoal cartridge filters on the sink we use to make tea or cook with. If I grab some water from a different tap, you can tell immediately by the smell (chlorine) and the taste.
I'm surprised the filter takes out the chlorine honestly, but it's clearly taking out a bunch of stuff from what is otherwise considered very clean.
That said, having travelled extensively through places like India, South America, East Asia, etc., I'm certainly grateful for the water we have "on tap" in the house. It's easy to take for granted.
EPA mandates a floor of 0.2 mg/L chlorine for all Surface Water based drinking water supplies at all times. There are additional chlorine requirements depending on what sort of filtration you perform, if any, and how far the first service connection is from the chlorine insertion, in minutes. (They also mandate a safety ceiling of 4 mg/L for all drinking water.) This level is continuously monitored.
Seattle does about 1 mg/L to meet these EPA-imposed requirements.
Chlorine evaporates out of water, so if you don't like the taste, you can just let tap water sit a while. Sunlight helps. Boiling water (e.g., for tea) also removes most of the chlorine.
: (PDF) https://www.epa.gov/dwreginfo/swtr-plain-english-guide
: (PDF, p. 8) https://www.seattle.gov/Documents/Departments/SPU/Services/W...
They get filtered, but there is no chlorine directly. Some chlorine dioxide is used at the end, though. Here's the official description of the utility, translated to English:
- Via a raw water pumping station, the dam water first reaches the micro-screening plant. It removes coarse contaminants over 35 µm in diameter through stainless steel mesh filters. This provides special safety in times of mass algae growth or during floods.
- Subsequently, the raw water is destabilized with a flocculant, and turbid matter accumulates to form large flocs.
- In filter stage 1, two filter materials of different coarseness are used to remove the flocs.
- Ozone is then added to disinfect the raw water.
- Filter stage 2 is equipped with activated carbon and frees the raw water from the reaction products of ozonation. Excess ozone reacts to form oxygen and is thus removed from the raw water.
- Filter stage 3 uses natural limestone material over which the water flows. Here the excess carbonic acid in the water is removed.
Finally, a small protective disinfection with chlorine dioxide takes place before the drinking water leaves the clean water tank in the direction of <city>.
- Ozone disinfection, and removal
- Tolt river supply only: water conditioning by filtering through "granular media." (Cedar river supply is clear enough without this step.)
- UV disinfection
- pH adjustment to avoid corroding pipes
- Fluoridation for public health
- Chlorination as a final step as water leaves the treatment plant, and also at some downstream facilities (like a networking repeater; just to maintain chlorine levels that would otherwise have fallen due to distance from the upstream chlorination site)
> You might hear about different forms of chlorine. Seattle's water system uses "free chlorine" (not chloramines).
It is my understanding that most municipal water utilities only test water quality every 3 months. A problem can come and go between testing cycles.
Even with weekly testing, I’d expect the same risk (there’s still a window between tests). Basically you’re only going to know about a problem when it’s too late.
It depends on what contaminant you are measuring, but the testing frequency can vary from "every several years" to "continuously monitored and sets off a SCADA alarm if it exceeds a given threshold." The biggies--IIRC, turbidity, pH, and dosages of coagulant and treatment chemicals--are logged every 15 minutes, with more tests happening on hourly, 6-hourly, and daily frequencies, followed by yet more contaminants assessed largely on monthly or quarterly bases. The issue in question would have shown up in a pH measurement, so there's no reason it shouldn't have been caught within minutes.
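For illustration, here's a hypothetical sketch of what that continuous check amounts to - the function names and thresholds are made up, not anyone's real SCADA logic or a regulatory limit:

```python
# Hypothetical sketch: sample pH on a fixed interval and raise an alarm the
# moment it leaves a configured band, roughly what "continuously monitored
# and sets off a SCADA alarm" means in practice.
import random
import time

PH_LOW, PH_HIGH = 6.5, 8.5        # illustrative alarm band

def read_ph() -> float:
    # Placeholder for reading the plant's pH sensor; simulated here.
    return random.gauss(7.4, 0.2)

def trigger_alarm(ph: float) -> None:
    # Placeholder: latch the alarm in SCADA and page the on-call operator.
    print(f"ALARM: pH {ph:.2f} outside {PH_LOW}-{PH_HIGH}")

def monitor(poll_seconds: int = 900) -> None:
    # 900 s = one sample every 15 minutes, matching the logging cadence above.
    while True:
        ph = read_ph()
        print(f"pH sample: {ph:.2f}")
        if not PH_LOW <= ph <= PH_HIGH:
            trigger_alarm(ph)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor()
```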
Indeed, you'll see that if a water test comes back positive, there will be multiple retests and a much greater rate of testing until the problem is abated, at least at my local drinking water board.
This isn't remotely true.
E.g., Seattle explicitly states:
> We monitor your water 24 hours a day, 365 days a year. We test samples from the region between 10 and 100 times per day.
> To ensure the safety of our drinking water, SPU's water quality laboratory analyzes over 20,000 microbiological samples each year (more than 50 a day) and conducts chemical and physical monitoring daily, 365 days per year.
Yes - they should. Because there is going to be a lot more of this happening in the not so distant future.
Wait till we get our M2 Browning.
Perhaps a change in KPI or regulation requirements may create such incentive to ensure appropriate actions are taken.
Because this is most likely "teenager broke into a poorly secured shack and turned a random valve to be naughty", not "state actor sabotaged critical infrastructure".
(Still, the problem remains: if a naughty teenager can turn a valve for shits and cause a threat to public health, then perhaps that valve needs some access control.)
This isn't downplaying the potential risk; basically everything else that was said highlighted the risk.
This is if you take the facts in this story at face value. In my mind, if someone can raise the level of a chemical to become dangerous, you already have a problem. 11000 ppm sounds huge to me (1.1%). What if instead of an external hacker you had an internal disgruntled employee. What if you had a leaky gasket. The system should have some multiple redundancies to not allow a dangerous level of a chemical to end up in the water supply.
(I recognize you also mentioned the difficulty, I just wanted to poke some fun :P)
Sounds like they did a great job fixing it.
I think it's surprising it's even going to trial.
That is what a reasonable human would expect "the people" to do when multiple agencies overseeing the utilities were initially so irresponsible/incompetent/negligent that water superheroes from three states over had to swoop in to warn residents their fucking pipes are poisoned.
LOL, copper is poisonous in the presence of corrosives. They are just replacing one poison with another.
All pipe materials are poison if enough is ingested; lead, however, is toxic in extremely low amounts, while copper is actually needed in low amounts. You think PVC is better?
We mammals are relatively well equipped to deal with it, but the real problem here is long-term exposure. It can produce several forms of internal bleeding in the gut and permanently harm the liver and kidneys. There is also a lot of copper messing around, for some reason, in Alzheimer's patients.
Moreover, copper is particularly toxic to all aquatic life and invertebrates, causing acute poisoning. I would not use that water in an aquarium, for example. I have seen the stuff in action and it is devastating for fish.
https://smile.amazon.com/Seachem-67105650-Cupramine-Copper-1... for example
Sodium hydroxide is a highly caustic base and alkali that decomposes proteins at ordinary ambient temperatures and may cause severe chemical burns. It is highly soluble in water, and readily absorbs moisture and carbon dioxide from the air. https://en.wikipedia.org/wiki/Sodium_hydroxide
The nice thing about lye is that it's typically sold in solid form, except in one of the most common household products, drain cleaner. Drano is sodium hydroxide in solution with aluminum, with which it reacts in the presence of water, presumably to help mechanically break up clogs. Solid lye tends to be safer as there's less chance of ingestion, and less chance of it lingering on your skin--it turns your skin to soap.
Much more dangerous is stuff like sulphuric acid, which you can buy (at least in California) in concentrations of over 95% at the hardware store as Rooto and similar drain cleaners. That stuff is nasty as it's in liquid form, easy to spill and even inhale as an aerosol. It's also not a good idea for pipes, despite how they're sold, because such acids are hell on cast iron--i.e. what main sewage drains are made out of in older buildings and in jurisdictions that aren't favorable to PVC.
There are so many ways for evil people to do evil things it's amazing (and, frankly, fascinating and even instructive) that it doesn't happen more often. I'm curious to see how the situation will change as it becomes easier to be evil while remaining anonymous and remote. Still, I imagine it would be extremely difficult if not impossible to actually cause significant harm by changing the concentration of lye in the water supply. For example, I'm skeptical that there would be enough lye in the dispenser at the treatment facility to cause serious harm. The worst effect would probably be disrupting the pH of the water system and possibly causing other ill effects, such as by leaching lead or rendering antimicrobials less effective.
Eh, at the end of the day it's just acid. You can always throw something basic at it to neutralize it. It's not like it's a heavy metal.
Here is an article where plant operators accidentally left a sodium hydroxide pump in manual mode, dumping way too much of it in one go and causing chemical burns to the customers. There were pH alarms, but nobody heard them.
This is the website of another water treatment company explaining what processes they have in place to prevent an issue like the above: https://www.mwra.com/01news/2007/042507nosodiumhydroxide.htm
I don’t think it’s a good idea either, but it’s exactly why it happens.
While I said it’s new public management, it’s also a common management style in any form of private sector enterprise.
Our society’s inability to prioritize the solving of obvious problems is pervasive enough that it’s probably due to more than some badly chosen verbiage.
> We'll need to see n=1 disasters with this first, before there's public outcry to fix it.
It’s worse than that. It would have to be a really awful disaster, people would need to understand the causes and effects, and the prevention of future disasters would need to not threaten established businesses and political interests.
Huge amounts of important infrastructure sits internet connected due to individual laziness, coupled with a lack of willingness to understand and think about cyber security. Often it seems simply from a lack of willingness to spend money on an ongoing basis to maintain anything.
There's a culture and mindset in ICS that you don't change what isn't broken. And stability and reliability is important - this is an industry where you don't install patches due to the fear of breakage or regression.
When the world shifted towards "code fast and break things", the ICS world didn't accept this change. They can't have Windows (yes, Windows) reboot unexpectedly to do an update. That pretty much rules out the supported versions like Windows 10. I mention consumer versions, because a culture of multi layer outsourcing means nobody wants to pay for a server version - an OEM Pro version of Windows saves a subcontractor some money.
That OS won't be patched, unless the SCADA software vendor has validated the patch with the software being run. Expect crazy things like Windows XP SP2 (not 3) to be requirements. Everything is about stability and using tested configurations.
You could be forgiven for thinking this is less scary, as you can airgap this, and treat it like a fixed appliance. Often that doesn't last, and (if you're lucky) an unpatched VPN box gets thrown in front of it with a weak password. More commonly, some consumer grade remote access software gets installed, so a bean counter can count how many beans they're making or spending. Airgap eliminated.
The fix isn't single step - there's a need for more understanding about safety critical engineering in the IT world - the lack of testing and regression validation isn't acceptable to this industry. The ICS industry needs to be willing to pay for software maintenance and assured development processes. Simpler code that isn't running on full consumer operating systems is needed. And ultimately we need to go and replace systems that "ain't broke", but are insecure. And that's going to be expensive. No security appliances are needed here, just some basic common sense.
Expect to see versions of Windows you didn't even know existed in use in very important places... Seeing pre-NT or very early NT wasn't a huge surprise...
I don't know what the answer is, but I don't feel super great about the future, given the giant sinkhole of dependencies that everything seems to be getting sucked into.
A hardware device, say a water pump made by an industry leader, would look very different from one made by a hobbyist in a garage. It's obvious even to a lay person that a door made from thick steel is more break-resistant than one made from hardboard. But code written by a summer intern without any review or tests looks not much different from code written by seasoned professionals and carefully tested. Even if some software is full of RCEs, it takes a lot of time and motivation to find them.
A huge number of dependencies and many layers of abstraction make it even harder to see the software system as a whole.
This was nothing but gross negligence from whoever is in charge of their IT infrastructure.
To put it in a different way: "TeamViewer: so reliable and easy to use that even your grandparents can install it."
Featured in a few of these videos, hotlink to the slide:
I find it slightly amusing that it looks like the hacker just added two 1's to the front of a text box, rather than choosing a specific value.
an operator noticed someone had remotely entered the computer system that he was monitoring
How could they not know immediately who it was (or at least, whose credentials were used)?
Do they even know that it was a hacker and not someone trying to type "111.00" ppm, but either they or the software dropped a decimal and typed "11100"?
The intruder broke into the system at least twice on Friday, taking control of a plant operator's computer through the same methods a supervisor or specialist might use. The hack didn't initially set off red flags, because remote access is sometimes used to monitor the system or trouble-shoot problems, Gualtieri said
So it almost certainly doesn't have enough auditing to know who made a change.
Maybe I'm just really old-school, but it sounds like this sort of thing should really be something that's set once to the right values, and then if it ever needs to be changed, someone has to physically access a building and adjust a physical control --- likely alongside doing various other maintenance tasks on the system.
This is different from remotely viewable, which is a much better idea, and I dare say should even be public.
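A read-only view really can be that simple. Here's a hypothetical sketch (the endpoint and values are invented for illustration) that publishes the current setpoints over HTTP without implementing any way to change them:

```python
# Hypothetical sketch: expose plant setpoints read-only over HTTP, so values
# can be viewed remotely (even publicly) with no write path at all.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative values; in a real plant these would be read from the PLC.
SETPOINTS = {"sodium_hydroxide_ppm": 100, "ph": 7.4}

class ReadOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(SETPOINTS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    # No do_POST / do_PUT handlers: writes simply do not exist here.

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReadOnlyHandler).serve_forever()
```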
Sounds like no logs; it probably showed up on Shodan and someone wanted to have fun (or many people did).
If it was a VPN, you know it's a more competent person or org, and most VPNs also keep logs.
parent mentioned "owned machine" (as in, "hacked" not "ownership"), which means you might be able to find the source if you can seize the computer and analyze it in time. If the attacker wiped all traces from the computer then at best the trail ends there and at worst an innocent person gets blamed for it.
>If it was a VPN, you know it's a more competent person or org, and most VPNs also keep logs.
"no log" is a commonly sought after feature in VPNs, and if you're planning to do shady stuff I doubt you'll go with a logged vpn.
It's marketing puffery; they all log, they all keep it, and they will comply. Many VPNs say "no logs", and then logs leak. You don't have control over that system/service, you cannot fully verify it, and there is much mistrust around them for nefarious deeds.
>parent mentioned "owned machine" (as in, "hacked" not "ownership"), which means you might be able to find the source if you can seize the computer and analyze it in time. If the attacker wiped all traces from the computer then at best the trail ends there and at worst an innocent person gets blamed for it.
So, yes, and no. The IP address will determine location and possible people of interest. It could also lead to a chain or more documentation/possible past interest/threat.
The wiping/forensics are IMO hard to ensure for chain of custody, but if an IP address is traced to a residential connection, it's easy to grab a DNS log from that ISP and see what requests they made, and whether it looks targeted, a random Shodan find, or a possibly hijacked/RAT machine.
More info never hurts, but "tracing" an IP address is the first step.
It sounds really suspicious that the hack took the form of some sort of remote control which was evident to the actual operator who was present there. At the same time there was an actual operator, who wasn’t even suspicious the first time because apparently remote control was common by the supervisors.
I think there’s a good chance we’re gonna find that either the operator, or one of the remote controllers accidentally, or maliciously, made this change, and blamed it on a “hack”.
We have been told since around 9/11 that our industrial control systems are in really bad shape; I'm not sure if anything has been done to strengthen them at all. Maybe someone knowledgeable can chime in with information. I see a lot of scope for controls and operational procedures that could be streamlined and standardized across the whole country, if we have the will.
My RO system can take the 450 TDS tap water down to about 30 under my normal use. If I close off the tank and run the water for about 10 minutes it will get down to about 20.
The very scenario was the plot of one of the episodes. Here's a link, if anyone remembers the show and wants a dose of nostalgia.
That seems to be a pH 13.4 result. There probably isn't much in the water to buffer that.
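A quick back-of-the-envelope check of that figure, assuming the reported ~11,100 ppm is pure NaOH, fully dissociated, in otherwise unbuffered water:

```python
# Sanity check of the "pH 13.4" estimate for ~11,100 ppm sodium hydroxide.
from math import log10

ppm = 11_100                                    # mg of NaOH per litre of water
molar_mass_naoh = 40.0                          # g/mol
oh_molarity = (ppm / 1000) / molar_mass_naoh    # mol/L of OH- (full dissociation)

pOH = -log10(oh_molarity)
pH = 14 - pOH
print(f"[OH-] = {oh_molarity:.3f} M, pH = {pH:.1f}")   # prints pH = 13.4
```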
Pump sizes can do 10, 20, 40, 80, 160, 320, 640, or 1280 units.
Sometimes the water needs more chemicals. Maybe today it needs 10, but typical is 30, and it could go to 100.
That's just the water rate for winter though. In summer, people wash cars and water lawns. Triple the number, so 100 becomes 300.
The town is growing fast, at least when the equipment is installed. (thus the upgrade) In case the fast growth might continue, triple the number. The 300 becomes 900.
Well, the smallest pump that will work is the one that does 1280 units. Also there is a federal grant from the EPA for big upgrades, and that one qualifies. Buy it.
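The arithmetic above, spelled out (the numbers are the illustrative ones from this comment, not any real plant's):

```python
# Sketch of the pump-sizing reasoning described above.
worst_case_dose = 100            # typical is 30, but some days need 100
summer_factor = 3                # cars washed, lawns watered
growth_factor = 3                # town growing fast, plan for it

required = worst_case_dose * summer_factor * growth_factor   # 900
available_pumps = [10, 20, 40, 80, 160, 320, 640, 1280]
chosen = next(size for size in available_pumps if size >= required)
print(f"required capacity: {required}, smallest pump that works: {chosen}")  # 1280
```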