> In the Netherlands alone, these solar panels generate a power output equivalent to at least 25 medium sized nuclear power plants.
Since this didn't pass the smell test, I looked into it: the author is looking at nameplate capacity, which is a completely useless metric for variable electricity production sources (a solar panel in my sunless basement has the same nameplate capacity as the same panel installed in the Sahara desert).
Looking at actual yearly energy generation data, this is more like 1.5 times the generation of an average nuclear power plant (NL solar production in 2023: 21TWh, US nuclear production in 2021: 778TWh by 54 plants).
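For anyone who wants to check the arithmetic, a quick back-of-the-envelope using the figures above:

    # Sanity check of the ratio above, using the numbers cited in this comment.
    nl_solar_twh = 21            # NL solar generation, 2023
    us_nuclear_twh = 778         # US nuclear generation, 2021
    us_plants = 54

    per_plant_twh = us_nuclear_twh / us_plants     # ~14.4 TWh per average US plant
    print(nl_solar_twh / per_plant_twh)            # ~1.46, i.e. roughly 1.5 plants' worth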
Which maybe puts the actual risks involved here more into perspective. I'm not saying there shouldn't be more regulation and significantly better security practices, but otoh you could likely drive a big truck into the right power poles and cause a similarly sized outage.
For the purposes of information security, the nameplate capacity is the correct number to consider, for a very simple reason: we must defend as if hackers will pick the absolute worst moment to attack the grid. That is the moment when the sun is shining and it's absolutely cloudless across the Netherlands, California, Germany, or wherever their target grid is.
At that moment, the attacker will not only blast the grid with the full output of the solar panels, but will also put any attached batteries into full discharge mode, bypassing any safeties built into the firmware with new firmware. We must consider the worst case, which is that the attacker is trying not only to physically break the inverters, but also to break the batteries and solar panels, blow fuses, and burn out substations. (Consider that if the inverters burn out and start fires, that's a feature for the attacker rather than a bug!)
So yes, not only is it 25 medium sized nuclear power plants, it's probably much higher than that! And worse, that number is growing exponentially with each year of the renewable transition.
This was probably the scariest security exposé in a long time. It's much, much worse than some zero-day for iPhones.
A bad iPhone bug might kill a few people who can't call emergency services, and cause a couple billion of diffuse economic damage across the world. This set of bugs might kill tens of thousands by blowing up substations and causing outages at thousands to millions of homes, businesses, and factories during a heat wave. And the economic damage will not only be much higher, it will be concentrated.
> Getting the grid back online is a laborious manual process which will take (a lot of) time. Think...
It would be even more laborious and take more time to bring things back online if the attacker manages to damage or destroy equipment with an overload like the GP describes.
The "turning the grid up to 11" attack isn't really possible. I know it seems like it is, but the inverters will only advance frequency so much before they back off, the inverters will only increase voltage so much. Etc. Sounds scary, isn't practical.
Turning everything off when the panels are at peak output? That lets frequency sag enough that plants start tripping offline to protect themselves and the grid and it'll cascade across the continent in just a few minutes. Then you have a black start which might take months.
Would love to know more about this. How would that happen? What's the process to bring it back up so fast?
The video has a lot of good info and seems compelling. During the Texas freeze, many power company officials said the exact same thing: if the Texas grid went down, it would have taken weeks to bring everything back online.
It's called a black start (https://en.wikipedia.org/wiki/Black_start), and power companies plan for it, and the necessary components are regularly tested. It's not a fast process; it can take many hours to bring most of the grid back up. We had a large-scale blackout here in Brazil last year, and an area larger than Texas lost power; most of it was back in less than a day.
> if the Texas grid went down it would have taken weeks to bring everything back online.
The trick word here is "everything". Every time there's a large-scale blackout, there are some small parts of the grid which fail to come back and need repairs. What actually matters is how long it takes for most of the grid to come back online.
Inverters may be protected against changing settings, but if you can replace the firmware it can likely cause permanent hardware damage. Which the manufacturer, perhaps under pressure from its government, can do.
The risk is not turning all solar installations "on maximum". That happens nearly every summer day between 1 and 2pm. Automatic shutoff when the grid voltage is rising can be disabled, but more than 9 out of 10 consumer solar installations in the Netherlands deliver their maximum output on such a day for most of the summer, not running into the maximum voltage protections.
The big risk is turning them all off at the same time, while under maximum load. That will cause a brown-out that no other power generator can pick up that quickly. If the grid frequency drops far enough big parts of the grid will disconnect and cause blackouts to industry or whole areas.
It will take a lot of time to recover from that situation. Especially if it's done to the neighbouring grids as well so they can't step in to pick up some of the load.
Not if we have grid scale batteries. Solar shuts off, oh no. Sometime in the next four hours we need to get that fixed or something else up. Also flattens out the demand curve and allows arbitrage between the peak and valley.
What makes you think we're in the later stages of the battery S-curve currently? More generally technology-wise, what makes you think energy technology in general is an S-curve? Many situations are stacked S-curves. Global energy consumption, for example, looks like an exponential (no flattening yet) [1] if you plot it starting 1800.
Also, the start of an S-curve can be described by an exponential function, right? So it is an exponential.
My point more generally was to not underestimate processes which increase exponentially. Even if they flatten at some point, they can drastically change the world and fast. For example, iPhones and computer chips took off slowly, but once they started moving they took over the world. (Or do you not have multiple smartphones and computer chips in your house right now?)
And yes your point that it's all an s-curve is theoretically correct. But I think it's a semantic discussion. Next time I'll say "never underestimate the first half of an S-curve."
You're taking exponential improvements for granted. Only one human endeavour (ever) has made exponential gains for a long period, that of silicon lithography. And even that ended in the last decade.
The reason I say "always underestimate" is the obvious one. The easy problems are solved first, then the harder ones take longer because they're harder.
So that's fantastic that battery production scaled up. Only a fool would expect - would plan on - it continuing like that.
That’s not true. Exponentials are everywhere. They can often go for a long time. Even silicon is still going strong if you focus on FLOPS per dollar.
Other examples are cruise ship sizes, US GDP per capita, or Microsoft stock price.
The point really is this: go back in time to some of these things a few decades ago. You would say: “How is this even possible? This is crazy. This will probably plateau soon. This can’t continue.”
But it did. Cruise ships went from 20, to 100, to 200 and now 365 meters in length.
And the same will probably happen for batteries. People say “ah well this is crazy. It will probably plateau soon”. My point is maybe it won’t. Once these exponentials (starts of s-curves) go, they go. Standing at the bottom of an s-curve and predicting the plateau soon can lead to a massive misprediction. Like the IBM CEO who said there will never be a market for more than 10 computers. He was off by about a billion.
Fyi, monetary values of things, like US GDP or stock prices, can be exponential forever, if we wanted, because they're socially constructed.
I am talking about real things. Cruise ship sizes improved dramatically but... actually linearly? There won't be 1,000,000 GT cruise ships in 2050. Or 60.
You have to use specific measurements like FLOPS/$ to keep Moore's law alive, because the focus has been only on a certain kind of FLOP (the fp32 MAC for graphics, or perhaps INT8/FP8 in recent years). Because in general, it's dead. It's more performance, for more money. Because lithography is really hard, harder now than ever (and water is wet).
> We must consider the worst case, which is that the attacker is trying to not only physically break the inverters, but the batteries, solar panels, blow fuses, and burn out substations.
Power transformers have a loooooooot of thermal wiggle room before they fail in such a way and usually have non-computerized triggers for associated breakers, and (at least if done to code, which is not a given I'll admit) so do inverters and every other part. If you try to burn them out, the fuses will fail physically before they'll be a fire hazard.
This is true, especially for low frequency (high mass) inverters. The inverters that are covered here are overwhelmingly high frequency (low mass) inverters. We hope that they practiced great electrical engineering and added multiple layers of physical safeguards on top of the software-based controls built into the firmware.
Of course a company that skimped to the point of total neglect on software security would never skimp anywhere else, right? Right?
:crossed-fingers: <- This is what we are relying on here.
And even if they did all the right things with their physical safety, the attackers can still brick the inverters with bad firmware and make them require a high-skill firmware restore at a minimum, and turn them into e-waste and require a re-install from a licensed electrician at a maximum.
> Of course a company that skimped to the point of total neglect on software security would never skimp anywhere else, right? Right?
At least in Europe, product safety organizations and regulatory agencies have taken up work to identify issues with stuff violating electrical codes (e.g. [1] [2]) and getting it recalled/pulled off the market.
Sadly there is no equivalent on the software side - it's easy enough to verify if a product meets electrical codes, but almost impossible to check firmware even if you have the full source code.
> high-skill firmware restore at a minimum, and turn them into e-waste and require a re-install from a licensed electrician at a maximum.
Well, not even high skill - for "security" reasons, to prevent support issues, and to skimp on testing, the needed information is often only accessible to a chosen few.
Paradoxically, the effect of these "security" concerns is often that there are plenty of easily exploited methods in devices like that. And the only people that have them are the ones you need to worry about, instead of some 16 year old finding it, playing blinkenlights with his friend's parents' house, causing trouble for himself, but also getting the hard-coded backdoor taken out after the media got wind of it.
If I were dictator of infrastructure I would ban any non-local two-way communication and would mandate that all small grid storage solutions run off a curve-flattening model that's uniform and predictable. Basically they would store first and only be allowed to emit a fraction of their storage capacity to the grid afterwards. Maybe regulated by time of day.
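Roughly, something like this toy sketch (the SOC threshold, export window, and cap are all made-up numbers for illustration):

    # Toy sketch of a "store first, then export only a capped fraction" rule.
    # The SOC threshold, evening window, and 10% cap are illustrative assumptions.
    def allowed_export_w(battery_soc, capacity_wh, hour):
        if battery_soc < 0.8:              # store first: no export until mostly full
            return 0.0
        if not (17 <= hour <= 21):         # only feed in during the assumed evening peak
            return 0.0
        return 0.10 * capacity_wh          # cap export at 10% of capacity per hour

    print(allowed_export_w(0.9, 8000, 19))   # -> 800.0 W allowed inside the window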
This is wildly overstating the issue. Hackers are not going to break into hundreds of separate sites, compromise inverters, compromise relay protection, compromise SCADA systems, and execute a perfectly timed attack. Even if they did, these are distributed resources, they don't all go through a single substation and I doubt any one site could cause any major harm to any one substation.
Instead, they're going to get a few guys with guns, shoot some step-up transformers, and drive away.
The problem with infosec people is they tend to wildly overestimate cyber attack potential and wildly underestimate the equivalent of the 5 dollar wrench attack.
They don't need to break into separate sites though - the issue at hand is that the centralised "control plane" from the vendor (i.e. the API server that talks to consumers' apps) is a single point of failure and can be incredibly vulnerable.
Here's a recent example where a 512-bit RSA signing key was being used to sign JWTs, allowing a "master" JWT to be forged and minted, giving control of every system connected to that vendor's control plane.
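(512-bit RSA has been cheaply factorable for years. For anyone consuming a vendor's JWTs, here's a minimal sketch of the kind of sanity check that would catch this, using PyJWT and the cryptography library; the 2048-bit floor and RS256 pinning are the usual conservative defaults, not anything from that incident:)

    # Sketch: refuse to trust tokens signed with an undersized RSA key.
    import jwt                                            # PyJWT
    from cryptography.hazmat.primitives import serialization

    def verify_vendor_token(token, vendor_pubkey_pem, min_bits=2048):
        pub = serialization.load_pem_public_key(vendor_pubkey_pem)
        if pub.key_size < min_bits:
            raise ValueError(f"vendor signing key is only {pub.key_size} bits")
        # pin the algorithm so a forged token can't downgrade to HS256 or "none"
        return jwt.decode(token, vendor_pubkey_pem, algorithms=["RS256"])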
A lot of utilities have their own fibre, since they own the poles/towers and need it for teleprotection anyway, so they can have a real, secure private network between the control room and significant power plants.
I’d expect the opposite. All companies controlling equipment that is part of the “Bulk Electric System” have to be NERC CIP compliant and are audited regularly, with large fines for non-compliance. That doesn't guarantee perfect (or even good) security, but it’s more likely to be a priority.
It also perverts incentives such that no utility will communicate perhaps helpful information to other utilities or government when said information can leave them liable for fines.
Until there's some kind of hold-harmless agreement, the various industry & government security information sharing groups can only be of limited effectiveness.
The management at the utility doesn’t want to be recognized for being a deficient operator that doesn’t meet standards, so they hire employees to ensure they are compliant
A fine is a black eye for a utility where people pride themselves on the reliability of the service they provide
Hurray! I have experience that may offer some insight. I worked on SCADA software (3 different ones) for about 15 years. I started off as a Systems Engineer for an industrial power metering company (but writing software), built drivers for various circuit breakers and other power protection devices, and wrote drivers and other software for IEC 61850 (the substation modelling and connectivity standard). I’ve been the technical director of one of these SCADA systems, and in charge of bringing its security to “zero trust”. I’ve been on the phone with the FBI (despite not being an American or in America), and these days I design and lead security development at a large software company.
I’ve been out of the power industry/SCADA game for about 6 years now, and never had huge involvement with solar farms, so please take this with a large grain of salt, but here is my take. 15 years ago, all anyone would say about industrial networks was “air gap!”. Security within SCADA products was designed solely to prevent bad operators from doing bad things. Security on devices was essentially non-existent, and firmware could often be updated via the same connectivity that the SCADA system had to the devices (although SCADA rarely supported this, it was still possible). In addition, SCADA systems completely trusted communication coming back from the devices themselves, making it relatively simple for a rogue device to exploit a buffer overrun in the SCADA. After Stuxnet + a significant push from the US government, SCADA systems moved from “defensive boundary, trust internally” to “zero trust”. However, devices have a long, long service life. Typically they would be deployed and left alone for 10+ years, and generally had little to no security. Security researchers left this space alone, because the cost of entry was too high, but anytime they did investigate a device, there were always trivial exploits.
Although SCADA (and other industrial control software), will be run on an isolated network, it will still be bridged in multiple places. This is in order to get data out of the system, but also to get data into the system (via devices, and off-site optimisation software). The other trend that happened over time was to centralize operations in order to have fewer operators controlling multiple sites. That means that compromising one network gives you access to a lot of real world hardware.
Engineers never trusted SCADA (wisely), and all of these systems would be well built with multiple fail-safes and redundancies and so on. However, if I were to be a state-actor, I’d target the SCADA. If you compromise that system, you have direct access to all devices and can potentially modify the firmware to do whatever you want. If there is security, the SCADA will be authorized.
I don’t think the security risks are overblown (they are overblown in what they think the real problems are). I think that as the systems have gotten ever more complex, we have developed such complicated interdependencies that it is impossible to deterministically analyse them completely. The “Northeast blackout of 2003” (where a SCADA bug led to a cascade failure) was used as a case study in this industry for many years, but if anything, I think the potential for intentional destruction is much higher.
I’m in this space, but PLC I/O networks from Schneider and Rockwell are still “trust internally”, and some HMI or SCADA has to have read/write access to them. At least with Rockwell you could specify which variables were externally writeable, whereas Schneider was essentially DMA from the network.
This isn't hundreds of separate sites that have to be hacked individually. This is fewer than 10 clouds with no security to speak of and the ability to push evil firmware to millions of inverters worldwide, where in a few years at the current rate of manufacturing growth, it will be 10s, and then 100s of millions of inverters.
Yeah, the potato cannon filled with aluminum chaff or medium caliber semi-automatic rifle can take down a substation. But this is millions of homes and businesses, which can all have an evil firmware that triggers within seconds of each other. (There will inevitably be some internal clocks that are off by days/months/years, so it's not like it will happen without warning, but noticing the warning might be difficult.)
Would the switch on the transformer possibly be software controlled? (By software, I am wondering about firmware on a device reading a sensor, as opposed to a physical mechanism). I don’t know enough about the internals of these things, but I wonder if you could maliciously overwrite firmware, whether certain protections could be made to fail.
I’m going to assume this kind of thing is likely covered in FMEA and such, so is unlikely.
While I agree that the important metric to consider is peak output and not average output, I would still guess that in a country like the Netherlands that peak output is nowhere near nameplate capacity.
You can get close to peak output just about anywhere, assuming the panels are angled rather than laying flat. You just can’t get it for very long in most locations.
The new method this past year that appears to be highly beneficial is to use various compass orientations of _vertically_ mounted panels. The solar cells got so cheap that every penny we spend on mounting hardware and rigid paneling now stings, and posts driven vertically into the ground with cables strung tight between them are cheaper than triangles, way easier to maintain (especially in places with winter), and trade a lower peak (or even a bimodal peak) for a much wider production curve.
> the author is looking at nameplate capacity, which is a completely useless metric for variable electricity production sources
For solar panels, the nameplate capacity is usually also the power generated at the peak production time, which is the moment when an attacker turning off all inverters at the same time would have the most impact.
That is: for an attack (or any other failure), the most important metric is not the total power produced, but the instantaneous power production, which is the amount which has to be absorbed by the "spinning reserve" of other power plants when one power plant suddenly goes offline.
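To put numbers on that distinction, using figures cited elsewhere in this thread (21 TWh/year of generation, roughly 20 GW on a clear summer noon):

    # Average power over the year vs the instantaneous step the grid would have to absorb.
    annual_twh = 21
    average_gw = annual_twh * 1000 / 8760      # ~2.4 GW averaged over the year
    clear_noon_gw = 20                         # approximate peak cited elsewhere in the thread
    print(average_gw, clear_noon_gw / average_gw)   # the instantaneous loss is ~8x the average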
No, the nameplate capacity is what a solar panel will produce under perfect lighting, independent of the site where it's installed.
The peak theoretical power output of a solar panel depends on where it's installed, inclination, temperature, elevation, and so on. The actual peak power is going to take weather and dirty panels into account.
1kw nameplate in Ireland (or the Netherlands) is never going to give you an instantaneous 1kw output -- you're going to be lucky to see 60% of that.
But 60% of 25GW is still 15GW, way more than 3GW. You only need to take down more capacity than the buffer and power plants will disconnect from the grid as a fail-safe.
Bringing the grid back up alone may take a few days, but all the appliances out there will be down for weeks...
ClownStrike brought us pen-written boarding passes; glad we don’t install crapware on hospital hardware.
No. You will definitely not get peak capacity even in the sahara. They got those numbers under perfect conditions in a laboratory, not under real circumstances.
It's the power output that is relevant for the failure mode described in the article, not the yearly production. And in terms of power output, 20GW is an incredibly common number for peak solar production (see e.g. https://energieopwek.nl/ at the end of Jul this year) in summer. Borssele (the medium-sized power plant named in the article) has a 485MWe net output. So yes, we _are_ talking about >25 mid-sized nuclear power plants!
If memory serves, and I’ll admit it’s pretty fuzzy, the US tends to make ridiculously large nuclear reactors and Europe has an easier regulatory situation so they make more of them and smaller.
So in addition to the other stuff people mentioned, you might be off by another factor of 2 there. They also said “medium sized” so let’s call it 3.
This might have been true back in the 1970s, but at least as far as current development goes, is not.
The only new (non-Russian) European design built in the past 15 years is the EPR at 1600 MW. The only new American design built in the past 15 years is the AP1000 which as the name suggests is 1000 MW (technically 1100). AP1000 uses a massively simplified design to try and be much safer than other designs (NRC calculations say something like an order of magnitude) but is not cost competitive against most other forms of power generation. Which is why after Vogtle 3 and 4 there are no plans for more of them in the US.
It's not that the EPR is any better - they are actually doing worse in terms of cost and schedule slippage than Vogtle did. Flamanville 3 had its first concrete poured in 2007 and still hasn't generated a single net watt!
It turns out that the pause in building nuclear reactors in the West from about 1995-2005 - both in the US (where it was actually longer, from the early 1980s: after Three Mile Island, things still under construction were finished but nothing new was built) and in Western Europe (after Chernobyl, following a similar path) - basically gutted the nuclear construction industries in both, and they haven't been built back up. The Russians kept at it, and the South Koreans have moved into the market (and China is building a huge number domestically, though I don't think they've built any internationally), but Western Europe and the US are far behind, and after Fukushima Daiichi I strongly suspect the Japanese are in the same boat. Without the trained workers you can't build these in any predictable way, and when you pause construction for a decade you lose all of the trained workers and it's really hard to build that workforce back up again.
> (a solar panel in my sunless basement has the same nameplate capacity as the same panel installed in the Sahara desert).
Isn't latitude taken into account by grid operators for determining the expected peak output? The owners would otherwise be installing bigger (more expensive) converters than needed, so they'd know this value at least roughly. Even smarter would be to include the angle etc., but I'm not sure what level of detail it goes into compared to latitude, which is a very well-known and exceedingly easy value to look up for an area.
I certainly see your point about it not being apples to apples, but on a cloudless summer day, the output afaik genuinely would be the stated figure (less degradation and capacity issues). The country is small enough that it's also not unlikely that we all have a cloudless day at the same time
One might well expect some sun in summer and put some of the used-in-winter gas works into maintenance or, in the future, count on summer production of hydrogen— although hacks are likely a transient issue so I wouldn't foresee significant problems there
You are talking about energy, which is not the same thing as power. TWh == energy, GW == power.
The distinction is important, especially in the Netherlands, which has a capacity factor of only about 10%-15%, whereas most of the US will be at least 20%-25%, which is twice as high.
I'm not sure of the typical number of reactors in the Netherlands, but using the US average of 1.6/power plant may not be the most representative comparison.
The point is about instantaneous power injected, not energy: keeping an AC grid at the right frequency is a tricky business, because energy production and consumption must match at all times.
Too much production and the frequency skyrockets; too little production and the frequency plunges.
Classic grids are designed over large areas to average out the load for big power plants, so that those plants only see small instantaneous changes in their output demand; say, a 50MW power plant sees a 100-300kW instantaneous change, which is something it can handle quickly enough. With massive PV, wind, etc., grid demand might change MUCH more from a big power plant's point of view; a 50MW plant might suddenly need to scale back or power up by 10MW, and that's way too much to sustain. When this happens and demand is too high, the frequency plunges, and grid dispatch operators have to cut off large areas to lower the demand (so-called rolling blackouts); when demand drops too quickly, the frequency skyrockets and large plants can't scale back fast enough, so they simply disconnect. Once they disconnect, generation falls and the frequency stabilizes. Unfortunately, most PV is grid-tied: if a power plant disconnects, most PV inverters that have seen the frequency spike disconnect as well, creating a cascading effect of quickly alternating too-low and too-high frequency, causing blackouts over vast areas.
Long story short, a potential attack is simply planting a command: "at solar noon on 26 June stop injecting into the grid, and keep not injecting until solar noon + 5 minutes". Within "1 second or so" (allowing for time sync issues), all inverters of a certain brand might stop injecting: generation falls, there are a few rolling blackouts, and the large power plants compensate quickly. Then the 5-minute timer expires, all inverters restart injecting en masse while the large plants are at full power as well, the frequency skyrockets, large plants disconnect, causing most grid-tied inverters to follow them, and there is a large chance an entire geographic segment of the grid falls. Interconnection operators can't know what to do in so little time, and the blackout might quickly become even larger, with almost all interconnections going down to protect the still-active parts of the grid, causing more frequency instability and so more blackouts.
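To get a feel for the magnitudes, here's a back-of-the-envelope swing-equation sketch. Every number (100 GW of load, 20 GW of PV, an aggregate inertia constant of 4 s) is an assumption for illustration, not a real grid parameter:

    # Rough rate of change of frequency (RoCoF) when a block of generation steps out or back in.
    F_NOM = 50.0    # Hz
    H = 4.0         # s, assumed aggregate inertia constant of the remaining synchronous plants
    LOAD = 100.0    # GW, assumed demand at that moment
    PV = 20.0       # GW, the PV infeed the attacker toggles

    def rocof(power_step_gw):
        # simplified aggregated swing equation: df/dt = f_nom * dP / (2 * H * S)
        return F_NOM * power_step_gw / (2 * H * LOAD)

    drop = rocof(-PV)                # all inverters stop injecting at once
    print(drop, abs(1.0 / drop))     # -1.25 Hz/s, i.e. under a second to leave a +/-1 Hz band
    # Five minutes later the same 20 GW slams back in, now pushing the frequency up
    # while the plants that ramped up to cover the gap are still at full output.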
You are using both in your energy-generation numbers. That's where they come from.
Your solar TWh comes from 25GW at ~15% capacity factor, and to get your nuclear numbers you're looking at 1.6GW for each of the nuclear "plants", when each reactor is usually about 1GW or less. There are ~90 reactors in the US, at 54 plants. The article is assuming 1 reactor per plant for the Netherlands.
> The article is assuming 1 reactor per plant for the Netherlands.
Small addition that isn't mentioned in the English version of the article, but only in the original Dutch version: the article talks specifically about the Borssele power station [0] (which has a power output of 485MW).
> but otoh you could likely drive a big truck into the right power poles and cause a similar sized outage
I get your comparison and the following is probably obvious to most people, but I feel like it really needs to be said: this requires being there, having a truck and the willingness to risk almost certain jail time, whereas taking down all SolarEdge installations on the planet could probably be done by an anonymous teenager in a foreign country with nothing but a computer and TOR client.
(I mention SolarEdge because I just had to deal with one of their systems and it pissed me off, I don't actually know of any vulns)
I dunno. I lived next to a small inland sea most of my adult life. The number of times someone on the other side of town asserted it was raining when in fact it was not was quite high.
Every adult in Seattle eventually has to learn that if you have an activity planned on the other side of town, if you cancel it because it’s raining at your house you’re not going to get anything done. You have to phone a friend or just show up and then decide if you’re going to cancel due to weather.
Now to be fair, in the case of Seattle, there’s a mountain that multiplies this effect north versus south. NL doesn’t have that, but if you look at the weather satellite at the time of my writing, there are long narrow strips of precip over England that are taller but much narrower than NL.
"Sometimes it rained in a part of town only" does not disprove the person saying "it can be sunny virtually everywhere at the same time in a small country"
For a simple demonstration, https://www.buienradar.nl/nederland/zon-en-wolken/wolkenrada... has been showing cloudless hours pretty regularly in the last month. Someone meaning malice can certainly keep an eye on that for a few days to find a good moment
Being sunny everywhere at the same time is not the problem with solar panels, and I think this was already covered up thread so this feels like going in circles.
The problem is what percentage are generally in full sun, and how low that percentage goes. My comment was about assuming that all of the panels are not in sun at the same time. Not whether sunny days exist.
If you’re not talking about day and night, summer winter cycles, then the sun’s behavior is the inverse of the union of clouds and pollution.
Do you know why microinverters exist? They exist because panels next to each other can see different light, and without the microinverter the entire chain produces no power at all if one of the panels in series is generating no power. So we use microinverters to rescue power stranded by trees, debris, or partial cloud cover.
> When it is sunny in the netherlands, it is likely sunny everywhere in NL because of how small the country is.
Often friends of mine who live in my city report rain when I see none, or no rain when it's raining outside my window. That's to say nothing of a location 30km away, where basically anything can happen. Do we live on the same planet?
On which planet does the regular occurrence of one phenomenon disprove the regular occurrence of another?
It can both be true that weather is locally different on most days but coincides to be universally cloudless on a fair number of hours every late-summer month (easily within a reasonable waiting time for an attacker)
The Netherlands covers 41,850 km2. I don't agree this size is small enough to cause the weather to be likely the same everywhere. Whatever qualifier and quantifier juggling you're trying to do is beside the point.
> When it is sunny in the netherlands, it is likely sunny everywhere in NL because of how small the country is.
Nowhere there is the qualifier "a fair number of hours every late summer month". If you add arbitrary conditions of course you can get a different meaning.
Not only that, solar is entirely misaligned with power requirements. Over the year it may be 1.5x nuclear, but in winter, when demand is highest, the amount of energy provided will be far less, on account of short days and low light - typically you get 1/10 of the energy on a winter day compared to a summer day. So overproduction when it's not required, underproduction when it is.
>The owner of the panels and inverters can meanwhile establish a connection with that manufacturer using an app or website, and via the manufacturer see how their own panels are doing
> It wasn’t necessary from a technical standpoint to let everything run through the manufacturer’s servers, but it was chosen to do it this way.
(emphasis from article)
I'm working on an IoT cloud system. It was chosen to be done this way because neither consumers nor installers have any expertise whatsoever to set up their own network or make any devices accessible from outside (and they want their panels to be accessible when they are outside their home). I can do it, most readers of HN could do it, but the typical consumer or installer can't. Sad but true.
The cloud can operate as a dumb TURN relay passing along E2E-encrypted traffic. Then the worst the cloud can do is deny service to remote management (and even then, local management would still work), but it wouldn't be able to send direct control commands to the equipment, since it doesn't have the authentication or encryption keys.
This also makes it simpler from a programming point of view - instead of having separate cloud sync & local control protocols, you just have one local protocol and you merely tunnel it through the (dumb) cloud if you can't connect directly.
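A minimal sketch of what that looks like with a NaCl-style box per device pair (library: PyNaCl; the command payload and key provisioning are made up, the point is only that the relay forwards opaque bytes it can neither read nor forge):

    # E2E between phone and inverter; the cloud/TURN relay never sees plaintext or keys.
    from nacl.public import PrivateKey, Box

    phone_sk = PrivateKey.generate()       # stays on the owner's phone
    inverter_sk = PrivateKey.generate()    # provisioned on the inverter at install time

    phone_box = Box(phone_sk, inverter_sk.public_key)
    inverter_box = Box(inverter_sk, phone_sk.public_key)

    command = phone_box.encrypt(b'{"op": "set_export_limit", "watts": 2000}')

    # The relay just shovels `command` along as an opaque blob; any tampering breaks the MAC.
    print(inverter_box.decrypt(command))   # only the inverter can read and authenticate it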
It could, but this requires storing historical usage data on the devices. If you store that encrypted data in the cloud, then getting it to your mobile phone is super slow. If you store it in the cloud in a form the server can read, you can get historical data even if your device is dead or has 256 BYTES of memory and 1 megabit of flash storage. We have such devices, very effective at managing a local municipal heating network and controlling several thermal controllers, each via RS232 or RS485. Fortunately we preemptively moved everything onto a VPN'ed mobile network; we need special approval to touch anything on that network and can't connect without them granting access, so when the EU started moving on cybersecurity this year, we were covered.
> This also makes it simpler from a programming point of view - instead of having separate cloud sync & local control protocols, you just have one local protocol and you merely tunnel it through the (dumb) cloud if you can't connect directly.
Having only a cloud protocol is even simpler; I've done all of the above (I do the backend and our firmware).
We consider "they hacked the mobile network VPN's AND had time to reverse our protocol before being booted out of network" as too high a level to be resolved by us. If someone has enough resources to do this, he will also just hack into standard-level secured server at municipal office and there will probably be no one there to stop him or discover what went wrong.
I don't think E2E is simpler to program if you want to get it right. There are entire companies whose raison d'être is actually managing keys properly (e.g. Signal, Tailscale).
This should be the basic model. A fully third party TURN service. You pay $20/mo to keep your home connected, and all devices and providers can use a standard protocol, and users remain fully in control of their data.
These plants or farms are usually built around and on top of industrial IEC protocols and SCADA controllers, which is a lot more low-level than what any cloud IoT provider offers.
I have done a controller for a 40-foot container battery, and it wasn't like we received any API from Hitachi (the battery manufacturer). We had to write everything ourselves.
Often there are two control paths. Sometimes more! Plenty of inverters will quite happily give you an RS232 port specification and you can create your own dongle!
However, for purpose of the security of the nation's power grid, I don't just need my inverter to be secure, I need pretty much everyone's inverter to be secure. If an attack bricks 95% of solar inverters, the fact the nerdiest 5% of users have their inverters airgapped won't stop the grid having a lot of problems.
> RS232 port specification and you can create your own dongle!
This is just a way of pretending to give access while making it as hard as possible. We are talking about a device that is already connected to the network. The local path is not some REST services, but a serial port for which I need to fabricate some hardware? Don't piss on me and tell me it's raining.
Perhaps I wasn't clear - when I say "Sometimes more!" I mean many cheap chinese inverters actually support four options:
1. Cloud management with their app.
2. Wifi management without the cloud (when you're on your home wifi).
3. Unplug the wifi dongle from the inverter for a fully offline system. You don't really need your inverter on the internet anyway.
4. Unplug the wifi dongle and DIY whatever you want, the dongle's just a serial-to-wifi converter.
That's not to say the security of any of this stuff is good, of course. In fact the security is pretty bad! But you can for sure get inverters with multiple options for non-cloud operation.
The real answer is it's more than twice the work to have both paths, and there's not enough demand for it.
That said, Apple Homekit integration is local network based, so products that do that and the typical manufacturer cloud system have done both paths.
Homekit is a pain to use without Apple hardware/software, but there you go. (There's a plugin for HomeAssistant, but I'm still classifying that as a pain)
Cheapness. It would need to be at least semi-secure, the application on the phone would need to find those devices locally, and it should be synchronized with the cloud anyway. Synchronization is error-prone, and we had problems with devices sometimes responding twice or very slowly through the local interface (through the cloud it was much faster, no idea why, not our firmware). Also, not enough people request that feature; most don't care and think that losing internet is not frequent enough to warrant worrying about this.
Why not offer an either/or rather than both? Some people (I am one of them) actively do not want these kinds of things to be managed through the cloud servers. I don't want it to sync, I want to fully turn that off. I want to locally host, and I'm willing to take responsibility for that feature, including when it breaks. All I want is access to whatever the data reporting and control APIs are.
I get that I'm a tiny minority, and that very few customers want what I want. But A) it seems like giving me what I want should be very cheap (i.e should not entail ongoing customer support costs beyond normal, and in fact would get rid of the small cloud hosting cost) and B) I'd be willing to pay a premium to get it.
In some areas like cameras there are a decent number of cloud-free alternatives. Hopefully as the IOT market grows we'll get cloud-free versions of everything.
I think you're too optimistic about costs though. Providing any support at all, even one-time during the install, is expensive and cloud-free IOT is going to require support due to home networks being broken.
Yes, support is expensive, but what I am proposing will, if anything, reduce support. I'm imagining something where, if I opt into local control, I am giving up all rights to any support that is not related to the core functionality of the device. For example, the solar panels/inverters in the article: if I opt in to local control, then the only support I am entitled to is if the solar panels stop generating power or the inverter stops inverting. Anything that is network related is no longer the company's problem, because I have assumed complete responsibility for that. I'd even be willing to agree that, in the case that I ever decide I don't want local control and I want to switch to the cloud hosting, I will pay for the support required to switch me back over.
So if my home network breaks, that is not their problem. And they don't need to set it up, they just need to make it possible for me to set up, including figuring out how to make it work with my potentially broken home network. If it requires a new router because mine doesn't provide some necessary functionality? Not their problem. Etc. Etc.
Consumer electronics doesn't work that way. If people can't get a product to work they will return it to the retailer and when the retailer gets a lot of returns they will penalize the company or drop them completely.
> So if my home network breaks, that is not their problem.
Differentiating between people like you, who can take the blame for misconfiguring a device, and the 99% of other consumers is not viable for most companies. Also, if you bought our device and wanted to do that, I would probably make a firmware version for you that connects to your endpoint and give you some docs. But:
- Just talking and coordinating that possibility for one user would cost my company more than the final price of device, when considering time spent on this.
- You would have to spend a lot of time to implement a lot of functionality to glue our protocol to your desired endpoint.
I have some Shelly devices which manage to do all that, and cost next to nothing. They work with local REST services or the cloud, with password protection and TLS. Sure, it costs more than zero, but not much.
In the end, freedom goes away because we could not be arsed to ask for it at least, let alone fight.
OK, so key question: why is there a control plane in there at all?
I can understand people wanting to be able to see the metering live, but remote control of the panels just seems like a security incident waiting to happen. I'm quite glad I have a non-internet-connected inverter.
For IoT stuff in general; I can do it, and I don't want to because I'd rather spend my time doing other things (although yeah, I totally did learn everything I could about my solar array, because it is a source of power, after all. But for the other stuff...)
I agree, and I wish it were otherwise. Why is it so difficult for me to have a home network where things can just work? Why is it a mess of configuration and self signed certificates? It seems like nobody is incentivised to provide this, because nobody providing me with devices, services, and so on lives in my house with me. They need my data and my control pathways to go via them, not to stay in my house.
Also administering a bunch of IOT systems is a pain. If something is an open source community project, ok, I’ll play. If somebody is selling a product they are responsible for making sure it works.
You could put an SQL database on a local device and just access it remotely like anything else. But you are correct, you're stuck with administering each and every one of them.
The standard go-to Raspberry Pi solution will up and die every few months. And half the time you'll need physical access to get it back. It takes a lot of work to develop an embedded system that has enough reliability.
If we've learned anything from the security cam and baby cam scandals, it's that convenience is king and we as a society would rather risk everything than be arsed to take a few additional steps to set up/learn something to prevent such basic breaches. We (the society) don't even want to change the default password on most things.
People gonna be people. It's up to engineers and product designers to make things user friendly but also safe-by-default. If something needs to be configured, then provide instructions on how to configure it. Instead of pretending that it's society's fault (can't be arsed), maybe ask why the IT industry can't make instructions that are written out--explicit, fairly standard, and easy to follow--like the manual for putting together a piece of furniture. Or why the stupid device doesn't come with a randomly-generated strong password taped to it.
> fairly standard, and easy to follow--like the manual for putting together a piece of furniture.
And people still have problems following those instructions. We gave a lot of instructions, but some people just don't read them. Or can't understand and follow them. Example from this week:
Yeah, the main support guy for our client is on holiday this week and only a girl from sales is available, and she doesn't know why our device doesn't work. She tried resetting her phone's wifi but still can't pair our zigbee smart hub connected via a USB modem stick (the problem: no one told her she needs to message us to actually activate the SIM card before installing the device for the end client).
Yes, you can solve this one problem, but there are many more we didn't see yet. Consumer support does not scale and you can't write tests for something you don't know will be a problem.
Did you have that written down somewhere? In instructions, which was my point.
Sorry for the snark, but I think "no one knows how to do anything" coupled with "Oh, the idiot didn't know to discombulate the canooter valve before inserting into the tinklerater--it's so obvious we didn't write it down" supports the point I'm making.
This was a custom configuration process tailored specifically for that one client, and it was written up in a detailed instruction with each step tested and with screenshots. Apparently that instruction was not distributed internally at our client's office. So the mere existence of instructions is often not enough; you will still get very confusing calls about how to attach a toaster to 36V to charge a wallet (when you did not send any toasters or wallets and the client just wants your premade pizza).
So yes I sympathize that people didn't read the instructions, but I feel that that is partly out of habit that said instructions don't exist. We could work towards a better status quo where the configuration steps are simple, easy to follow, written out, and it's standard practice to just do them and they work.
The IT world seems to be an endless swamp of special cases and things breaking deep in the stack that require very specific, arcane skills from unwritten textbooks.
> Instead of pretending that it's society's fault (can't be arsed), maybe ask why the IT industry can't make instructions that are written out
Because the competitor that doesn't have this installation "hassle"¹ will sell more units. It ends back up with society choosing to behave this way in aggregate
Society is obviously good (wouldn't want to live without it) and capitalism held up for hundreds of years now (not sure it's the best solution but with the tweaks that are in place in many well-faring countries it seems to work okay for them), but I do believe "society" does have a tendency to go for easy and cheap more than complicated and thorough when the need is not self-evident and has not been tangibly pointed out in living memory
¹ Example of hassle that HN users may not think of as exceedingly difficult: iOS setup. My dad asked me to help him set up his new tablet. I've fallen for Apple's motto and was annoyed he didn't just do it himself. So we sit there and... lo and behold, it's good that he asked me. I've forgotten the details by now, but things like what the heck Screen Time is, whether the Find My network is something to enable, whether to enable automatic updates, what to do about Siri, whether Apple Pay is something he needs to use banking apps, etc. are not things he'd have any idea about, and I can give the necessary context from my IT background even if I don't have iOS myself. However, we still needed to contact their support because we both couldn't figure out how to get an Apple account set up. The continue button was disabled. I tried scrolling: nothing. Tapping the button: nothing. No error message, nothing. Turns out, the flow was only tested in portrait mode, but my dad wanted landscape mode, which is easier for typing, so we never tried portrait. A part of the screen was in fact scrollable (I've used Apple devices before and my dad used the old tablet weekly; it's not like we're unfamiliar with their design patterns or don't know how to use a touchscreen) and revealed a required input field that would have been visible by default in portrait. Next, even setting up a standard Microsoft 365 email server in Apple Mail wasn't self-evident, iirc because he had never heard of the word Exchange, and the oauth flow worked across multiple devices with, again, different bugs in Apple's own UIs (one needed landscape instead of portrait, another required some trick with zooming beyond intended bounds iirc)
"Hassle" is already perceived in setting up what people proclaim the most user-friendly of systems. Units that don't need it will absolutely sell better, even if that leads to a potential national energy supply risk
> We (the society) don't even want to change the default password on most things.
Like you wouldn't believe.
My most memorable case of insecure IoT devices: a wifi socket was sending the wifi SSID and password of the network in cleartext in every ping packet to Chinese servers.
> and we as a society would rather risk everything than be arsed to take few additional steps
Large manufacturers would like you to think this. It would provide them a convenient excuse for not even trying to differentiate the market along these lines.
> We (the society) don't even want to change the default password on most things.
Actually.. I just want to use my device _first_ and not go through some manufacturer controlled song and dance of dark patterns.
In my experience, if you don't pre load the user with this garbage, and then wait for them to have an actual _need_ that depends on the feature, they're FAR more compliant with following even lengthy instructions to get it done.
It's more a problem of aligned benefits and timing than anything else.
> In my experience, if you don't pre load the user with this garbage, and then wait for them to have an actual _need_ that depends on the feature, they're FAR more compliant with following even lengthy instructions to get it done.
Nope, they want to have it working out of the box like with any other manufacturer out there. If you enable functions only after the user wants them, they will comment "this sh*t doesn't work" on your app store page. Then you have to respond to each comment with "what doesn't work, could you specify please?", and after several days that user has enabled the functionality, but in the meantime several others have left such comments.
> they will comment "this sh*t doesn't work" on your app store page.
What if I told you those 3% of people will say this no matter what you do. These comments and the reality of your product are entirely disconnected. We had a small userbase and added voice uploads into the app; unsurprisingly, about 3% of them are clearly impaired in some way when they leave a message. [x-files theme].
In any case, a simple "Fast / Slow Setup?" question to start is all you need, and a "Do More Setup?" after they finish one item has, again, in my experience, been entirely sufficient.
Reasonable people understand, "oh.. this needs the cloud.. and I didn't do that part yet.. so I'll go ahead and click the 'social media provider' button." If you also decide this is a good time to ask about a newsletter, well, you got what you bargained for.
Victron (to cite an NL vendor) can actually operate perfectly LAN-only via MQTT and Modbus, also offering a (bad) local WebUI for setting pretty much anything, including a display for said WebUI in a framebuffer with an embedded mini-keyboard. It's up to the installer to decide whether to go with their cloud offering or not.
The sole remark I have against them (besides the not-so-good software quality) is the impossibility for individual owners to do offline updates: we can upgrade via the VRM portal, but not download the firmware and flash it locally, even if the needed device is on sale, because they offer firmware files only to registered vendors.
Fronius (to remain in the EU) has a local WebUI which needs a connection only for firmware updates. Unlike Victron it's not a Debian-based system with sources available but a closed-source one, and they unfortunately offer only a very limited REST API and a very slow Modbus, but still, anything can be done locally.
I'm not sure, since I don't have any myself, but SMA (Germany) and Enphase (USA) seem to be able to operate offline as well.
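For a feel of what LAN-only monitoring looks like in practice, here's a sketch of polling an inverter's local REST API directly (the endpoint path and JSON fields follow the Fronius-style Solar API from memory; treat them as assumptions and check your unit's docs):

    # Poll the inverter over the LAN, no vendor cloud in the path.
    import time
    import requests

    INVERTER = "http://192.168.1.50"   # assumed LAN address of the inverter

    while True:
        r = requests.get(INVERTER + "/solar_api/v1/GetPowerFlowRealtimeData.fcgi", timeout=5)
        site = r.json()["Body"]["Data"]["Site"]
        print("PV:", site.get("P_PV"), "W   grid:", site.get("P_Grid"), "W")
        time.sleep(30)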
That stated, yes, you are damn right in saying most installers have no competence. Thankfully where I live self-installation is allowed (at least so far), but that simply demands better UIs and better training for them, perhaps avoiding the current state of the industry: an immense amount of CRAP at the OEM level, with most "state of the art" systems not at all designed to be used in good ways (see below), and absurdly high prices to the customer, to the point where installing PV isn't interesting... 4 years ago I paid 11.500€ for my system (5kWp/8kWh LFP); the smallest offer to have it designed and built by someone else was ~30.000€, the most expensive ~50.000€, and all 6 offers I tried showed some unpleasant issues and incompetence.
About OEMs, just observe how ABSURD it is that there is no damn DC-to-DC direct car charger. Most EVs now have 400V batteries, the same as stationary batteries, with the same BMS comms. Why the hell not sell an MPPT-to-CCS combo direct solution? OK, we do not ONLY charge from the Sun, but then it's perfectly possible to have a combo charging station with DC for PV and AC for the grid, switching from one to the other as needed. It's ~30% of the energy lost in double conversion.
Why no DC-to-DC high-power appliances that still run DC internally (A/C, hot-water heat-pump heaters, etc.)?
Why not a modern standard protocol for integration of anything instead of building walled gardens?
Long story short, OEMs have chosen the cloud model partially because most installers are electricians who can only use a desktop by holding the mouse with one hand and clicking with the other, but also because they have no intention of making user-interesting solutions in an open market...
I'm not a pro in these systems (yet, I hope), but my understanding is that, besides lack of demand, HVDC is a safety nightmare compared to AC, and inverters are getting more efficient each year. So, even given the choice, I'd keep AC home-wide distribution and set up inverters in key places, with exactly the highest required voltage.
Well, yes, DC is unsafe because if you get electrocuted, instead of being "pushed off" you get "stuck" to the conductors. But if you have a PV inverter at home you already have a 400V DC line from the modules to the inverter; how different can it be to have the same run to the garage? Or the same to an exterior heat-pump unit (where the DC compressor is typically located), which is also insulated by the mere fact that you drive it via a remote?
As an aside, I'm not much of an expert either, but from DC to AC we are pretty efficient: around 98% of the energy gets converted. The other way around is pretty inefficient, meaning a loss of around 30%. So while your PV is pretty efficient at generating AC, your car is pretty inefficient at generating DC from AC to recharge. Your heat pump would gain as well.
To be clear, I do not advocate DC for every DC home appliance like washing machines or dishwashers, just for energy-intensive appliances like big heat pumps and car chargers. Nothing more.
Valid points, though for safety I thought less about shock and more about fire: the DC arc is much more stable, thus the switches, breakers and relays have to be more robust. The rated DC voltage of magnetic relays is about 1/10 of AC.
Well... I'm ready to pay 10 times the price of an AC breaker to regularly recharge DC-to-DC, and since that's the very same link as the stationary 400V LFP batteries, to be able to use the car battery as a home battery when I'm at home.
Try to imagine how well you could run if you had a small battery as an expensive UPS for the home when you are not there (ventilation, home servers, fridge(s)/freezer(s), etc.), and when you are at home, thanks to a, let's say, 70kWh car battery, you could also run your heating in winter in case of a blackout...
In the end it's the very same power circuit, so why not? 300€ extra in breakers to run my heat pumps more efficiently? No issue, it'll pay itself back countless times over.
>they want their panels to be accessible when they are outside their home
I call bullshit. They've been conditioned to think that they want it, because all product brochures have it.
What kind of tangible benefit could there be to know how bright the sun is at your home while you're not there? A cool party trick to virtue signal or a break between doomscrolling, I suppose, but it's not like you're gonna jump up and drive back to... what even could you do if you knew?
> I call bullshit. They've been conditioned to think that they want it, because all product brochures have it.
You gave the reason WHY they want it. Maybe consumers were conditioned to want access, but they still want access. If you give them similar devices, they will choose the one which has an app or webpage to see how their big investment is actually performing. It's not about the current state of the device, it's all about historical data and month-by-month savings presented as a nice graph. They will check this maybe every week or month (later every several months), but buyers still want to know what their installation did for them.
To be fair, I can do it only if I have time and physical access to the network. Home routers have different gateway IPs, different web interfaces, different password policies (e.g. there might be an admin password and an additional password for changing anything), etc.
It reminds me of <https://xkcd.com/627/>, but when you're launching a product that isn't good enough.
It's hard enough to open up a port even with UPnP (typically disabled) and other made-for-purpose tech. Torrent clients end up trying to poke holes and such. Service discovery might work via local UDP broadcast, or it might not. LAN clients might live at 10.* or 192.* or be isolated by default. It's easier to just go onto the public internet and contact some mysterious server. Botnet by design.
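For comparison, vendor-free local discovery is not much code; a minimal sketch of a UDP broadcast probe (the port number and probe message are made up for illustration - every real vendor invents their own):

    import socket

    DISCOVERY_PORT = 30303          # hypothetical port, purely for illustration
    PROBE = b"WHO_IS_THERE"         # hypothetical probe message

    # Broadcast a probe on the local subnet and collect replies for a few seconds.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(3.0)
    sock.sendto(PROBE, ("255.255.255.255", DISCOVERY_PORT))

    try:
        while True:
            data, addr = sock.recvfrom(1024)
            print(f"device at {addr[0]} answered: {data!r}")
    except socket.timeout:
        pass
    finally:
        sock.close()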
You mention IPv4. We're in 2024, this is getting ridiculous.
Governments should have done the same thing as with digital TV transition(s) : first ban selling devices that can't do IPv6, then ban selling (most) devices advertising they can do IPv4.
Here comes the Matter protocol to the rescue: it supports IPv6 natively. It's even more complicated than Zigbee, of course doesn't cover all the devices out there (though a quarter of the protocol specification is dedicated to smart-fridge functionality, because one fridge producer actually had someone collaborate with the protocol makers), and it allows for "manufacturer specific fields", which means all manufacturers will have incompatible implementations of some fields anyway and you can't control them universally.
I live off-grid, power and water wise, and it really irked me that the monitoring coming with my inverter is only available online. Even when there is a network available the app will not work.
I fixed this by getting a Raspberry Pi connected and reading it from there (a sketch of that local-polling approach is below), but if I disconnect the inverter from the internet it creates a new Wi-Fi network, so now there is always an open network in the middle of nowhere with no option to disable it.
I'm thinking about screwing it open and desoldering the wifi module but honestly I'll replace it in the next couple of years so I'd rather not kill myself by making a mistake.
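For anyone trying the same Raspberry Pi approach: many inverters expose Modbus TCP (often with SunSpec-style register maps) on the LAN, and polling it needs nothing vendor-specific. A minimal sketch, assuming Modbus TCP is enabled on your unit; the IP address and register number are placeholders to be replaced with your model's documented values:

    import socket
    import struct

    INVERTER_IP = "192.168.1.50"   # placeholder: the inverter's LAN address
    MODBUS_PORT = 502
    UNIT_ID = 1
    REGISTER = 40083               # placeholder: consult your inverter's register map
    COUNT = 1

    def read_holding_registers(ip, register, count, unit=1):
        """Minimal Modbus TCP 'read holding registers' request (function code 0x03)."""
        # MBAP header: transaction id, protocol id (0), length of remaining bytes, unit id
        request = struct.pack(">HHHB", 1, 0, 6, unit)
        # PDU: function code, starting address, register count
        request += struct.pack(">BHH", 0x03, register, count)
        with socket.create_connection((ip, MODBUS_PORT), timeout=5) as s:
            s.sendall(request)
            header = s.recv(9)            # MBAP (7 bytes) + function code + byte count
            byte_count = header[8]        # a real client would loop until all bytes arrive
            payload = s.recv(byte_count)
        return struct.unpack(f">{count}H", payload)

    if __name__ == "__main__":
        values = read_holding_registers(INVERTER_IP, REGISTER, COUNT, UNIT_ID)
        print("raw register values:", values)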
Disconnecting the antenna would still have leakage at close range. Grounding the antenna might be a better option. But in practice, the dangers highlighted by the article only surface when an attacker has control of many solar plants at scale.
Compromising an individual one by getting close-range physical access will be a local annoyance but wouldn't scale to a level where it can threaten the grid, so it limits the pool of potential attackers to local vandals (which can achieve their goals easier by just throwing rocks at your panels).
Because humans are an ongoing cost and no one has figured out how to sell non-consumable, slowly depreciating goods as one-off purchases and keep paying your employees once you saturate your market.
Option 1: Artificially sell the thing as an ongoing cost.
Option 2: Artificially make the depreciation cycle faster. Get consumers to regularly replace it anyway with upgrades or trend changes.
Option 3: Make ongoing money from the item via a side channel (TVs are great at this one)
Option 4: Manufacture and sell a huge number of different goods across market segments and weather the slow depreciation cycle (Oxo does this).
Option 5: Sell some consumable good you can get recurring revenue from alongside the item (coffee pods, printer ink)
Option 6: Make up the money on maintenance, repairs, and financing. Become a bank.
Option 7: Make your money in some other sustainable profitable business and drop the product once you've gotten what you can for it.
All of these kinda suck and option 1 is easy to implement.
> 0.002 MW - Small set of technical standards, no diplomas or certificates required
Be careful with this language, especially when you're involving politicians and the non-technical.
The current atrocity of criminally negligent IT infrastructure right now is mostly created and driven by people with diplomas, including from the most prestigious schools. (And a top HN story over the weekend was one of the most famous tech company execs, turned government advisor, advising students at Stanford to behave unethically, and then get enough money to pay lawyers to make the consequences go away.)
And most of the certificates we do have are individual certifications that are largely nonsense vendor training and lock-in, and these same people are then assembling and operating systems from the criminally negligent vendors. And our IT practices certifications are largely inadequate compliance theatre, to let people off the hook for actual, sufficient competence.
My best guess for how to start to fix this is to hold companies accountable. For example, CrowdStrike (not the worst offender, but recent example): treat it as negligence, hold them liable for all costs, which I'd guess might destroy the stock, and make C-suite and upper parts of the org chart fear prison time as a very serious investigation proceeds. I'd guess seeing that the game has changed would start to align investors and executives at other companies. What could follow next (with growing pains) is a big shakeup of the rest of the org chart and practices -- as companies figure out that they have to kill off all the culture of job-hopping, resume-driven-development, Leetcode fratbro culture, IT vendor shop fiefdoms, etc. I'd guess some companies will be wiped out as they flail around, since they'll still have too many people wired to play the old game, who will see no career option other than to try to fake it till they make it at the new, responsible game (ironically, and self-defeatingly, taking the company down with them).
Punishment is not the answer: you'll just drive lots of competent people out of the industry. Punishment also means that nobody will admit to mistakes, mistakes won't get fixed (because fixing them implies guilt), and mistakes will get covered up.
Punishment for mistakes is what led to the Chernobyl disaster.
Flight safety works so well because the personnel are aligned with safety and professionalism, and the FAA has an important program in place to protect people from being punished for behaving professionally. And IIRC you're familiar with aircraft manufacturer alignment with safety.
But I'm concerned about the entire field of software, which doesn't have that sense of responsibility, and I don't see how it would get it. The software industry -- both companies and workers -- is guided almost entirely by money. To the point that it's often hard to explain to many people in HN discussions why it would be good to behave in any other way than complete mercenary self-interest. So I don't see any way to get alignment other than to link money to it. If people see that as punishment, so be it.
in your later comment you mention alignment, but the reason is that there's an enormous market discontinuity between doing the "super-duper right thing" and doing the profitable thing ... due to network effect(s).
we see competition in cloud/IaaS providers because they actually need to build datacenters and networks and so there's some price floor, but when it comes to "antivirus" CrowdStrike was able to basically corner the market, and downstream from them not a lot of organizations/clients/customers can justify having actual independent hot-spare backups (or having special procedures for updating CS signatures by only allowing it to phone home on a test env first)
the cultural symptoms you describe in so much detail are basically the froth (the economic inefficiencies afforded) on top of all the actual economic activity that's sloshing around various cost-benefit optimum points.
and it's very hard to move away from this, because in general IT is standardized enough that any business that needs some kind of IT-as-a-service will basically be forced to pick based on cost, and will basically pick whatever others in their sector pick -- and even if there are multiple providers they will usually converge on the same technology (because it's software) -- thus this minimizes the financial risk for clients/customers/downstream, even if the actual global/systemic risk increases.
Put another way: it’s far too easy and common for certification to encourage rote memorization. And only rote memorization. No higher order reasoning is imparted.
Knowledge without reasoning is how you get mired in bureaucracy.
BS gatekeeping rituals and compliance-for-sale theatre are arguably just symptoms -- of companies and individuals not being aligned with developing trustworthy systems.
Eye opening for me. One of the arguments for renewable energy (besides emissions) has always been its potential for decentralizing power generation. Makes it more resilient, democratizes the means of production etc.
This article shows that we inadvertently introduced new choke points. And of course the global security environment makes it more worrisome.
Hmm, almost like what happened to the internet... the idea being "everything is decentralized", but now 80%+ of traffic passes through Cloudflare and over 90% of mail comes from 2 providers!
decentralized solar will never be able to provide power at scale. even the scale of 1 household. only homes with lots of land could afford the amount of panels needed. the average home will always need to consume power generated offsite
Yes, p.v. has opened the way to semi-autonomy, depending on where you live, BUT the ruling class really dislikes this: they want slaves, not Citizens, and tying people to a service is a very good way of making slaves who can't revolt.
That's why, instead of pushing self-consumption and semi-autonomous systems, we push grid-tied and cloud-tied crap, to be tied to someone else's service, a slave to it. The "in 2030 you'll own nothing" is already a reality in modern cars, connected to their OEM with much greater access than the formal owner has, and in much modern IoT and cloud+mobile crap. People do not even understand that they do not own it, until it's too late.
Another simple example: in most of the world, banks have open standards among themselves for automatically exchanging transactions; in the EU that's the Open Banking APIs, with signed XML and JSON feeds. There is NO REASON to block customers from using those APIs directly from a personal desktop client. All the banks I know block such usage. So you do not have all your transactions, signed by the bank, on your own iron; you have NOTHING in hand. In case of "serious issues" you have nothing to prove what you hold at your bank or what you have done with your money. In the past we had paper to prove it; we now have signed XML/JSON, which is even better than paper, being much harder to falsify, but no, we miss out, because the 99% must own nothing.
We have connected cars with a SIM inside, but instead of the car offering APIs and a client, or perhaps even a WebUI, directly to its formal owner, we have to pass through the OEM, the real, substantial owner. And we can't even disconnect the car. In the EU it's even illegal for a new car to be disconnected, since the emergency eCall service must be active on all new cars.
This article repeatedly cites the need for personnel to have diplomas, certificates, and other ceremonial bits of paper.
This focus on paper qualification to mitigate risk seems a very European approach. Not saying it is wrong - it is just not emphasized as strongly elsewhere. And while it seems like a good fit for a slow-moving industry with high expectations of safety, the solar/wind world is not a slow-moving industry.
A good point - perhaps the focus is too heavy on paperwork or "measurable compliance".
From experience in this sector though, I think the real issue is a lack of technical awareness and competency with enough breadth to extend into the "digital" domain - often products like these are developed by people from the "power" domain (who don't necessarily recognise off the top of their head that 512-bit RSA is a #badthing and not enough to use to protect aggregated energy systems that are controllable from a single location).
Clearly formal diplomas/certificates are not needed for that - some practical hands-on knowledge and experience would help a lot there.
When a product gets a network interface on it, or runs programmable firmware, we should hear discussions about A/B boot, signatures, key revocation, crypto agility to enable post quantum cryptography algorithms, etc. Instead, the focus will be on low-cost development of a mobile app, controlled via the lowest-possible-cost vendor server back-end API that gets the product shipped to market quickly.
Let's not even go near the "embedded system" mindset of not patching and staying up to date - embedded systems are a good place to meet Linux 2.4 or 2.6, even today... Vendors ship whatever their CPU chipset vendor gives them as a board support package, generally as a "tossed over the wall" lump of code.
I doubt many of these issues (which seem to be commercial/price driven) will be resolved through paperwork, as you say.
In the rest of the tech industry, what you did to get your diploma gives you about 18 months of momentum. If you haven’t learned multiple new technologies by that point, you’re in trouble. Success in this industry means perpetually redeveloping your own skills, and liking it.
How someone would wave a 20 year old piece of paper as evidence that they know how to use solar tech that was developed last year, I don’t know.
I mean, electrical engineering teaches you a lot of the math, physics, control systems theory, and power systems knowledge that guides the design and operating characteristics of power systems devices like inverters. Sure, EE doesn't help with cybersecurity per se, but inverters and solar panels existed 20 years ago, so I feel like my 20-year-old electrical engineering degree is pretty darn relevant.
It certainly does - if you remain current then not a lot has really changed.
If you understand the principles of control systems and how an electrical grid works, this is broadly "just" a grid stability concern.
To some extent this feels like an issue of IoT-ification of things that we otherwise understood just fine! Maybe the real issue is how we blend cyber security knowledge into other sectors, and quantify and ensure it is present?
Fair enough, the parent comment mentioned "solar tech" and old pieces of paper, and silly me didn't realize that the power systems side of the work is a given and the problem is essentially hooking it up to computers and the internet to gain a modicum of convenience.
How would one practically verify and certify the cybersecurity of a product? Even payment smartcards sometimes come with non-malicious maintenance backdoors. There seems to be little to no academic, theoretical basis to this whole software security thing.
Given the challenges of techniques like TLS interception (i.e. through pinning and other good security features), about the only measure I can see left is network isolation.
You can set up a local network that has no WAN connectivity on it. Beyond that, it is difficult to verify even the most basic security properties. Certifying is another step up (although you could argue certifying is just a third party saying something passed a finite list of tests) - the real challenge is defining a meaningful certification scheme.
The trouble is that these set out principles, but it's hard to validate those principles without having about the same amount of knowledge as required to build an equivalent system in the first place.
If you at least know the system is not connected to a WAN, you can limit the assurance required (look for WiFi functionality, new SSIDs, and attempts to connect to open networks), but at a certain point you need to be able to trust the vendor (else they could put a hard-coded "time bomb" into the code for the solutions they develop).
I don't see much value in the academic/theoretical approaches to verification (for a consumer or stakeholder concerned by issues like these), as they tend to operate on an unrealistic set of assumptions (i.e. source code or similar levels of unrealistic access) - the reality is it could take a few days for a good embedded device hacker to even get binary firmware extracted from a device, and source code is likely a dream for products built to the lowest price overseas and imported.
We're not just talking about random consumer hardware: with security issues like these, I don't see why closed source software would not simply be banned.
Are you suggesting that using smart phones should count as "not allowing it in"? Then yes, I try to where possible. I do not depend on a smart phone. All functionality that is operationally necessary can be done elsewhere without major delays or impact.
My installer put in a SolarEdge inverter; it took some real effort to keep it off the cloud while still injecting the data into my Grafana. I could do it because I am a network engineer, but it should be easier.
Anyway, I agree that there should be a regulation that forbids remote management, so that remotely you can only consult data in a read-only manner (you could air-gap the inverter from the internet gateway using a one-way RS-232 connection on which the inverter just writes continuously). And if grid operators need to be able to turn solar off, they should install relays controlled by their own infrastructure.
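A minimal sketch of the read-only gateway side of such a one-way serial link (the device path, baud rate, and log destination are placeholders; the point is that this process never writes anything back towards the inverter):

    import time
    import serial  # pyserial

    PORT = "/dev/ttyUSB0"     # placeholder: serial adapter on the gateway machine
    BAUD = 9600               # placeholder baud rate

    # The gateway only listens on the RS-232 link and appends measurements to a
    # local file (or a local InfluxDB/Grafana stack); nothing flows the other way.
    with serial.Serial(PORT, BAUD, timeout=5) as link, open("inverter.log", "a") as log:
        while True:
            line = link.readline().decode(errors="replace").strip()
            if line:
                log.write(f"{int(time.time())} {line}\n")
                log.flush()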
> It’s also possible to install new software (firmware) on the inverters via the manufacturer, either automatically or manually.
As always, the vulnerability of enabling remote updates. When will people learn? Updates should only be possible if there's a physical switch (not a software switch) on the device. If it's "off", no updates are possible.
Isn't the most devastating attack vector remotely installing malware? With a hardware switch, none of that malware will survive a reboot of the device.
I remember when hard disk drives came with a write-enable jumper. Then, once you've made a backup, the jumper is removed. Then it is impossible to accidentally or maliciously write over your precious backup.
Neither does remote updating. But you'll still need physical access to the supply chain to compromise it, and that's not possible for some hacker in a basement.
> But you'll still need physical access to the supply chain to compromise it, and that's not possible for some hacker in a basement.
I forgot to respond to this sentence in the sibling response.
Supply chain attacks can be executed by intermediaries of the supply chain, or by manufacturers themselves: develop the capability to deny a foreign nation its energy infrastructure. The manufacturer is not a hacker in a basement. Manufacturers can be pressured by their local governments, militaries, three-letter agencies, ...
A precautionary principle would induce potential target nations to surreptitiously catalogue the inverter boards, sort them by how many GW each type serves, and work out which control traces to cut in order to control the internal energy transfers in the inductors, capacitors, ... from a trusted parasite board. Just develop and test a few parasite boards for the most common inverters, and preferably keep a critical stock ready.
The main value in inverters is the power switches, inductors, capacitors, ... it would be cheaper to reroute the control to a trusted controller in the event of a calamity. We would survive fine, but it will be a painful few days.
The short version: most consumer and business solar panels are centrally managed by a handful of companies, mostly from countries outside of Europe. In the Netherlands alone, these solar panels generate an output equivalent to at least 25 medium sized nuclear power plants. There are almost no rules or laws in Europe governing these central administrators. . . . The same thing goes for heat pumps, home batteries, and EV charging points.
Seems to me that this is very similar to the situation with IoT only with higher stakes. I appreciate this article's presentation of inverter and grid trust.
Beyond trusting customer inverters to do the right thing, I wonder if there is a method for safing a grid at the hardware level. Naive question: could there be a grid provider device that prevents overcurrent or incorrectly clocked cycles?
The utility company fuse between the property and the 240 V distribution system should prevent overcurrent. If the frequency or phase of the inverter is wrong the inverter might die first unless the network is already down.
There isn't really any practical way to prevent overvoltage though. So a rogue controller in charge of all the solar systems in a street might be able to do quite a lot of damage to consumer devices.
A problem from the utility point of view is that they can no longer guarantee that the 240 V side of the distribution system is safe to work on just by tripping a breaker on either side of the distribution transformer. So all work on the 240 V distribution system has to be done with the assumption that the system is live.
Eventually regulations will be updated, if necessary, to deal with large numbers of solar installations on domestic buildings.
> The utility company fuse between the property and the 240 V distribution system should prevent overcurrent. If the frequency or phase of the inverter is wrong the inverter might die first unless the network is already down.
To put it more simply: if the phase is wrong, the effect is the same as a short circuit, which fuses and circuit breakers protect against. If the frequency is wrong, the phase will become wrong after a number of cycles.
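To put numbers on the "frequency wrong becomes phase wrong" point, a quick back-of-the-envelope calculation (the offset values are arbitrary examples):

    # How fast a frequency offset turns into a phase error against a 50 Hz grid.
    GRID_HZ = 50.0

    for offset_hz in (0.1, 0.5, 1.0):        # arbitrary example offsets
        # Phase error accumulates at 360 degrees per second per Hz of offset.
        t_to_180_deg = 0.5 / offset_hz       # seconds until fully out of phase (180 deg)
        cycles = t_to_180_deg * GRID_HZ
        print(f"{offset_hz:>4} Hz off: 180 deg out of phase after "
              f"{t_to_180_deg:.2f} s (~{cycles:.0f} grid cycles)")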
> There isn't really any practical way to prevent overvoltage though. So a rogue controller in charge of all the solar systems in a street might be able to do quite a lot of damage to consumer devices.
There is, it's called a surge protector or surge protective device (SPD). It converts any overvoltage above a certain level into a short to ground, which then trips the fuse or circuit breaker. It's often used as a protection against lightning-induced currents.
> A problem from the utility point of view is that they can no longer guarantee that the 240 V side of the distribution system is safe to work on just by tripping a breaker on either side of the distribution transformer. So all work on the 240 V distribution system has to be done with the assumption that the system is live.
From what I've seen, the utility workers usually ground the wiring when working on it (they have a special-purpose device for that). Once it's safely connected to ground, it's no longer live.
A surge protector dimensioned for protection against lightning induced overvoltages will probably not trip at the level of overvoltage that could be produced by an ordinary inverter.
That's why my system (Victron + Fronius, BYD battery) is offline and monitored with HA; barring a secret in-hardware backdoor, it can't reach the internet, and neither can my home server. HA can, via WireGuard, act/monitor when I'm outside my home, which might be a serious threat, but it's pretty easy to cut that off if needed.
There is a more important part: while with p.v. we can still go offline, with cars we can't. My car is connected and I can do NOTHING to manage that; it's managed by its OEM behind my back, and that's a much bigger threat, since cars can paralyze a nation if properly blocked at critical points of the road network.
At the largest scale, that's the reason we can't have one national smart grid but only individual smart microgrids, meaning p.v. should be used only for self-consumption, NOT grid-tied like in California.
It irks me endlessly that we live in the worst timeline, where the computer equivalent of fuses and circuit breakers are almost completely unknown. Instead we trust code blindly.
This results in almost all of the situations threads here address.
In a better timeline, everyone has stable and secure OSs on all their devices, and the default is for everything to be locally networked, with optional monitoring from the outside via a data diode.
it's incredibly hard to implement a data diode for PV systems, enemy satellites can modulate light (like a TV remote, but lower baudrate to stay below the noise floor) and an inverter could decode it and respond accordingly.
Sunlight is about 1000 watts/meter^2 on the ground. Do you have any idea how much power you'd have to send to get through a Jewish Space Laser(tm) to even have a -40 db s/(s+n) signal to pull out of the noise through a side-channel attack?
It would be interesting to use a solar panel as a sensor for moonbounce of an IR laser, though. You'd have to try at the almost new moon, with a window of a few hours/month ;-)
You're describing two very different concepts at the start.
A data diode applies to a specific connection. It's easy to have a serial port that goes one way.
Preventing any possible input to an already compromised device is much harder. But if your device isn't already compromised then it won't be looking at the input light levels for commands.
It's quite trivial really: what is needed to capture a weak signal with known modulation out of the background is integration time. Think of how deep-space light can be captured with digital cameras using "long" exposure times; it reveals light the human eye can't see, because the integration time of light on the retina is too short.
Now, other light sources will also integrate over time; this is where the modulation scheme comes in. First consider the amount of time you'd have to integrate the noisy signal to raise it above the noise floor: that's the total on-time you need. How do we remove background light variations from other sources? Consider a discrete-time, pre-agreed pseudorandom sequence that has "0" periods as often as "1" periods. To remove a constant background you calculate the sum of measured light intensities over all "1" periods and the sum over all "0" periods, then subtract the "0"-sum from the "1"-sum: a constant signal removes itself, while the satellite signal is summed N times. Since your pseudorandom sequence was kept secret, random variations in light (think of a bird passing by) will not conspire to selectively block light during the "1" periods, so such noise is uncorrelated with your pseudorandom signal. Adding N uncorrelated noise terms grows only as sqrt(N), so the S/N ratio grows as sqrt(N). These are widely understood methods; an engineer might call it lock-in amplification, a physicist might call it correlation. This is very basic engineering/science knowledge. It's baffling that people consider this "hard" to execute - sure, if you're the milkman in a village it's hard to execute.
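A toy numerical check of that correlation argument, for anyone who wants to convince themselves of the sqrt(N) behaviour (all amplitudes and lengths are made-up illustrative numbers):

    import numpy as np

    # Weak on/off-keyed signal buried far below broadband noise, recovered by
    # correlating against the pre-agreed pseudorandom chip sequence.
    rng = np.random.default_rng(0)

    n_chips = 1_000_000                     # length of the secret pseudorandom sequence
    chips = rng.integers(0, 2, n_chips)     # the "0"/"1" periods described above
    signal_amplitude = 0.01                 # weak modulation riding on the panel output
    noise_sigma = 1.0                       # background fluctuations (clouds, birds, ...)

    measured = signal_amplitude * chips + rng.normal(0.0, noise_sigma, n_chips)

    # Sum over "1" periods minus sum over "0" periods: constant background cancels,
    # the signal adds up ~N/2 times, uncorrelated noise only grows like sqrt(N).
    score = measured[chips == 1].sum() - measured[chips == 0].sum()
    noise_only_scale = noise_sigma * np.sqrt(n_chips)

    print(f"correlation score: {score:.0f}  (noise-only scale ~ {noise_only_scale:.0f})")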
Isn’t the right place to fix this at the junction between the plants and the grid? Regulate the grid utilities into a gateway role, and require all inverter control & telemetry traffic to pass through them.
This seems likely to be more fruitful than attempting to regulate 400 Chinese panel manufacturers.
If the west for some reason starts to vigorously argue with China over something, we're all completely fucked. They'll just tell our cheap EVs to forget how to brake, melt the firmware in our cellular towers/chips, and toggle our PV inverters off and on at a shitty time.
If solar panels can be turned off, why are utility companies having to sell excess power at a loss? Why can’t they tell the solar farms to reduce their output by the required amount?
It's worse, actually; at least in France, if you inject into the grid you have to pay an "energy transport fee", even if you inject for free (only recently have self-built systems been allowed to sell energy; before, they could only donate or not inject at all), and the injected energy is now paid less than the cheapest price charged to customers (6 cents/kWh for ground-based p.v., 10 cents for on-roof p.v.). So no, we are not harming the large utility business.
What does harm at scale is the variable output, especially from small p.v. installations built out of incentives rather than as personal power plants. The grid is sized around some large power plants serving a large set of customers; their demand varies, but if the grid is vast (though not too vast), variation tends to be slow on average - say a 50MW plant sees 100-200kW of demand variation over a very short time. That it can compensate for easily, keeping the grid frequency stable. With a significant amount of grid-injecting p.v., the variation can be MUCH bigger, creating real stability issues: injection ramps up too quickly, the frequency shoots up, and the large plants can't decrease their output fast enough, risking disconnection, which in turn might suddenly put large p.v. plants offline, creating a cascading effect of large blackouts.
That's the real issue with grid-tied renewables, and another reason why we need to move toward self-consumption, NOT injection.
As I understand it: because the incentives are wrong.
Owners of small-scale solar panel installations are paid a fixed price per kWh in many EU countries, regardless of the market price. The taxpayers pick up the tab, I guess.
In theory someone, somewhere should be incentivised to spend money on building storage systems, so that they then have to pay out less money on future days of excess.
Solar power does get curtailed pretty often, but there isn't one uniform solution to the problem, different utilities / markets / grids have chosen different solutions to this.
as far as I understand there's a market based solution. producers bid prices for time slots (consumers too, but that's less important from the perspective of a solar power plant) and if they win, the contract is live and they need to deliver for that slot. if they miss (go over or under) they get paid less (and of course a penalty is possible too, theoretically it's the same)
this incentivizes better capacity and availability forecasting for solar installations, and preserves the usual dynamics of the open energy market.
..
the problem is with these super small ones, where initially states just let people connect it, because it's green, yey. (but now DSOs started to make connecting waay harder. and regulators are investigating, eg. in Spain. [0])
of course the non-residential installations already usually need aFRR capability. (eg. this is the case in Hungary.)
and there's already a market for "reserves" in the EU. (but the interconnection rate is below the target 15% as far as I know. but still, there are intra-state markets, etc.) and we can see that when solar is high the reserve prices are surging. [1]
Most newer solar inverters can't even be set up without internet and most functions are only available with an always-on internet connection. This is also true for EU companies like SMA for example.
I just installed a Fronius inverter (made in Austria) and 6.8kW of panels.
The inverter itself functions perfectly fine without an internet connection, and will display instantaneous power output on the screen. I could just be content with that and look at my monthly power bill to see how much I generated and how much I used each month and never connect it to the internet.
To get any kind of data logging & history from the inverter, it must be internet connected (wifi or ethernet). And all of that is through the manufacturer's website, which constantly nags me to "upgrade to pro" for some obscure feature that I'll never use.
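For what it's worth, many Fronius models also expose a local JSON interface ("Solar API") on the LAN, so local logging without the vendor cloud may be possible. A minimal polling sketch, assuming that API is enabled on your model and firmware (the IP address is a placeholder, and endpoint availability varies by model):

    import json
    import urllib.request

    INVERTER = "http://192.168.1.60"   # placeholder: inverter's LAN address
    URL = INVERTER + "/solar_api/v1/GetInverterRealtimeData.cgi?Scope=System"

    # Fetch and dump the whole response; the realtime values live under Body/Data
    # on firmwares that support this endpoint.
    with urllib.request.urlopen(URL, timeout=5) as resp:
        data = json.load(resp)

    print(json.dumps(data, indent=2))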
At greater scale - meaning power plants, not the PV installed on houses - these things are taken more seriously, and after purchase of the equipment the control and automation of the plant are in your hands. For example, Woodward and ABB have products with capacities up to 0.5 MW per single inverter.
A micro inverter for each panel would be very costly. In a 1 MW plant you will have around 4,000 panels; communicating with that many electronic devices would be a headache.
The author seems to imply, as if it were generally understood and accepted, that the reason nuclear reactors are heavily regulated is because they produce a lot of energy.
Perhaps that's a component, but one really doesn't need to think about it too hard to identify better explanations for why this particular energy source is held to unusually high regulatory standards.
I don't have an opinion as to whether other large-scale sources of energy should be held to similar standards, but to suggest that solar energy's failure modes are comparable to nuclear energy seems intentionally misleading.
> In the Netherlands alone, these solar panels generate a power output equivalent to at least 25 medium sized nuclear power plants.
> Because everything runs through the manufacturer, they are able to turn all panels on and off. Or install software on the inverters so that the wrong current flows into the grid. Now, a manufacturer won’t do this intentionally, but it is easy enough to mess this up.
> As an interim step, we might need to demand that control panels stick to providing pretty graphs, and make it impossible to remotely switch panels/loaders/batteries on or off.
Basically, if a hacker were to make all batteries (or panels) suddenly switch between full discharge and full charge every second or so, it would tear down the electric grid. Voltage and frequency would swing rapidly, and whatever plants are riding load would struggle.
Not only could this create a massive power outage; there is also a huge risk that it would damage power plants and other infrastructure (a toy calculation of the scale follows after the quotes below).
> It’s also possible that the manufacturer gets hacked, and subsequently sends out attacker controlled and wrong software updates to the inverters, with possibly dire consequences.
> There are also people that claim that the many Chinese companies managing our power panels for us might intentionally want to harm us. Who knows.
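To get a feel for the scale of that charge/discharge-flipping scenario, here is a toy swing-equation sketch of what a few GW changing sign every second would do to system frequency. Every parameter is an illustrative assumption (not Dutch or ENTSO-E data), and governor response and load damping are ignored:

    # Toy model: rate of change of frequency (RoCoF) ~ dP * f0 / (2 * H * S_sys).
    F0 = 50.0            # nominal frequency, Hz
    S_SYS = 100e9        # online synchronous capacity, VA (assumed)
    H = 4.0              # aggregate inertia constant, s (assumed)
    DP = 5e9             # attacker-controlled power swing, W (assumed)
    DT = 0.05            # simulation step, s

    freq = F0
    worst = 0.0
    for step in range(int(10 / DT)):                 # simulate 10 seconds
        t = step * DT
        imbalance = DP if int(t) % 2 == 0 else -DP   # flip sign every second
        rocof = imbalance * F0 / (2 * H * S_SYS)     # Hz per second
        freq += rocof * DT
        worst = max(worst, abs(freq - F0))

    print(f"peak frequency deviation ~ {worst:.2f} Hz "
          f"(the grid normally stays within a few tens of mHz)")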
Wait, seriously? The European power system relies on Chinese companies not messing it up remotely? And the debate is over whether the companies will stay nice? For heaven's sake, isn't it obvious that during a war the Chinese government could force them to just destroy the continent's power system remotely? How is this not seen as an extreme continental security risk?
> It’s also possible that the manufacturer gets hacked, and subsequently sends out attacker controlled and wrong software updates to the inverters, with possibly dire consequences.
Idaho National Lab is one of those places that researches this. https://inl.gov - their domains are energy (primarily nuclear and integrated) and national security ... and securing the grid is the intersection of that.
From the wired article the key part of how it broke:
> A protective relay attached to that generator was designed to prevent it from connecting to the rest of the power system without first syncing to that exact rhythm: 60 hertz. But Assante’s hacker in Idaho Falls had just reprogrammed that safeguard device, flipping its logic on its head.
> At 11:33 am and 23 seconds, the protective relay observed that the generator was perfectly synced. But then its corrupted brain did the opposite of what it was meant to do: It opened a circuit breaker to disconnect the machine.
> When the generator was detached from the larger circuit of Idaho National Laboratory’s electrical grid and relieved of the burden of sharing its energy with that vast system, it instantly began to accelerate, spinning faster, like a pack of horses that had been let loose from its carriage. As soon as the protective relay observed that the generator’s rotation had sped up to be fully out of sync with the rest of the grid, its maliciously flipped logic immediately reconnected it to the grid’s machinery.
Another security issue is all these cheap, always-connected IP cameras from China. Meanwhile, the most recent achievement of EU lawmakers is the cap permanently attached to the bottle. No wonder: at least in the case of my country, we are sending the most corrupt, sleazy individuals to the EU parliament and commission.
I'm not sure everyone is really thinking clearly here.
Don't get me wrong, they should get rid of this practice of cloud monitoring. A consumer should be able to access monitoring over the internet without an intermediary. They should, of course, be allowed to contract with an intermediary if that is their desire.
But the security argument?
Yeah, that ship has sailed. Total war, means total war. Your power grid, your internet, your communications, and your fossil fuel deliveries will all see material disruption. I wouldn't count on being able to stop those disruptions by banning a few web sites. (And frankly, during total war, those disruptions would be the least of your problems in any case.)
Best bet for places like Europe, China, the US and Russia is, just don't do total war with each other. If you choose to do it anyway, then you can see what you can expect from that in the documents filed under "Play stupid games, win stupid prizes."
You're turning war into a black-and-white "total war" situation. Total war is rare, and no -- no ships have sailed.
It's easy to imagine a scenario where something happens between China and Taiwan, Europe gets involved in a way that majorly pisses off China, and China decides to sabotage Europe's grid in response.
Nothing about that is "total war" with Europe, and it's not like Europe is going to escalate with nukes either because that would be wildly disproportionate.
But it's a major vulnerability that should be fixed as quickly as possible. It's negligent for that to even be an option for China, because it certainly doesn't seem like Europe can do anything similar to the grid in China.
Your idea that security vulnerabilities don't matter, that "that ship has sailed", is false and irresponsible.
No one advocated ignoring the vulnerability. I, myself, specifically stated that monitoring should be direct. Consumers should unilaterally decide where, when and how their assets are monitored.
The material point on security is that there are many, many methods of disrupting a power grid. Even when you are looking for plausible deniability, shutting down solar panels from cloud website doesn't make a list of your top 10 options. (In fact, it won't make the list in those scenarios precisely because you are looking for plausible deniability.)
Let's imagine a power grid as modern societies know them today, except all consumers monitor their solar panels themselves, and none of those consumers outsource this monitoring function to any third party foreign or domestic. Power grids can still be materially disrupted in this scenario. Especially in the case of total war. Obviously in the case of open war. And particularly in the case of cold war.
As I said, I advocate consumers disconnecting any power generation functions from networks. But if I'm in the seat coming up with post-conflict, or even simply emergency recovery, operating assumptions, I'm not counting on those panels generating power. It's just irresponsible to do so. In total war, EMP will knock most of that generation offline where you're lucky enough not to have it eliminated entirely. In cold or open war, disruptions to distribution can and will render that generation useless. (Just ask Ukraine.)
Consumer cloud, or even personal, monitoring of solar panels does not enhance, nor does it degrade, your adversary's ability to disrupt your power grid when your adversary is at that super power level. If you believe it does, you're either not looking at the full spectrum of what you're calling "vulnerabilities" extant in the infrastructure of modern societies. Or you're underestimating the full spectrum of capabilities of modern military powers. Both, frankly, are fatal mistakes in the types of crises we're postulating.
> But the security argument? Yeah, that ship has sailed. Total war, means total war.
Those are your words.
I'm saying, focusing on total war is irresponsible and leads you to draw false conclusions. In the real world, limited conflicts are what we're dealing with 99.9+% of the time, thank goodness.
And now in your new comment, for some reason you're focusing on "plausible deniability", which is another red herring. If China wants to disrupt Europe's grid, it doesn't care about plausible deniability -- the entire point is to publicly retaliate. It just needs to do it, as easily as possible. The idea that relying on a cloud vulnerability "doesn't make a list of your top 10 options" doesn't make any sense at all. It might very well be the #1 option, or one of three tactics employed simultaneously.
The security argument against cloud based monitoring has sailed.
With or without cloud based monitoring, our power grids can be disrupted.
That's the commonly accepted meaning of "that ship has sailed" as a colloquialism with respect to cloud based monitoring.
Also, you, yourself, brought up the idea of cold war style confrontation. The basis of most actions against proxy supporters in cold war style conflicts is plausible deniability. It's not a red herring, it's a widely adhered to tenet of cold war style conflict planning when targeting said proxy supporters.
I tried to cover total war, open war, and cold war to address the full spectrum of likely super power on super power active confrontations. In each scenario, the existence, or non-existence, of cloud based monitoring of solar panels, has no effect on the ability or inability of your adversary to disrupt your power grid.
Which disruption was the central thesis of your assertion. I was simply explaining why it was false.
You are being willfully argumentative at this point. If you didn't want to address cold war scenarios, why did you bring them up? You have a nice day sir or ma'am.
> Also, you, yourself, brought up the idea of cold war style confrontation.
No, I didn't.
I think it's clear that China shutting down Europe's power grid would not be a cold war scenario. That would be quite hot. But also clearly not total war either.
Since you don't want to converse any more I won't make any further points, but please don't claim I said things that I clearly didn't.
Reading through the comments I saw a lot of people confusing cloud security with the electrical safety of a system. Electrical protections are completely separate from the communication line/internet; they have to be hard-wired. As the size of a plant/substation increases, the automation and control system (again, a completely different thing from electrical protection) gets its own network. Burning down a substation or exploding a transformer through solar panels is very, very unrealistic.
On top of that, PVs installed at homes are too insignificant to cause such troubles. As the size of an installation increases, you get a different connection agreement and additional requirements. You can't install 15 MW and connect it through the inverters used at homes, which top out around 100 kW. Even 15 MW is an insignificant change for a grid.
You're the one that seems to be missing something: we're not talking about local electrical safety, but about global grid stability, with a hacker potentially hijacking software controlling tens of GW, spread out over many personal home installations of solar panels.
My point is that hacking into home PV inverters doesn't affect grid stability; you can't penetrate the grid that way. At worst we're talking about losing power for a short time at those homes. When the demand is planned for a region, you specifically exclude PV from the load flow studies.
Grid and PV installation engineering acknowledges that the generation may be lost, so you have a contingency plan by means of transferring or picking up the load. You're only going to lose power for a short time if you didn't do this properly.
Actually, due to the nature of PV generation - no sun, no generation - it is reckless to just rely on PV. If the sun is shining, great, there'll be generation. However, daily peak consumption coincides with less daylight. So during planning the target is the extreme cases (statistically estimating demand); in other words, you do load flow studies for the extreme cases. This shows you your capacity limits.
In parallel to this, you should consider the electric grid as a layered system. PV generation at a house is the lowest level, so it has the least impact. When it is lost - or when a neighborhood or town loses PV generation - it affects the nearby station, which is a couple of MW if not kW. So when you lose PV generation and you planned your system for the extreme case, a higher level of generation or substation will pick it up.
Losing a GW of solar does not mean you're losing that amount of power in one small geographical region. You have to divide it into many small parts; GW-level PV generation is far too much for a small region anyway.
Hope this explanation helps. It comes down to how power flows, governed by the rules of physics. Bottom line: if hackers want to affect a grid they should target higher levels of the grid; PV panels on rooftops will not help their cause, they are the end of the line.
Not to be that guy, but the DOE is arguably one of the most important federal agencies in the US, and they treat the problem with the correct amount of focus, research and dedication. It’s just a very hard problem.
The grid is no less secure or less resilient than it was 50 years ago, the main problem is that people are more dependent on it.
Almost no one buys a personal generator before an outage happens anymore, despite it being one of the cheapest ways to get resiliency.
I can make my computer wildly vary the amount of power it is drawing by performing different things in software. Max out the CPU and GPU load and it will instantly change from drawing ~100 watts to 500 or more.
There have been plenty of botnets in the past. Some even in the millions of computers. If such a botnet decided to make every node's power draw fluctuate per above, wouldn't this cause the same type of problem? Is there a reason we've never seen this happen despite large enough networks of hacked machines existing?
The Netherlands (which the article is mainly about) has 8.4 million households; let's presume they own an average of one such PC each. A delta of 400W would mean a total consumption delta of 3.36 gigawatts. That's "peanuts" to cover.
And that presumes an attacker can switch all 8.4 million computers on/off in a small timeframe. 100% of them would need to be on, online, and hacked.
I don't think this is a realistic problem.
Tesla F-ing up an OTA update that suddenly switches all charging Teslas off is probably a worse theoretical scenario.
I don't doubt that that many watts is easy to cover - eventually. The problem is that it can be instantly turned on and off, whereas the grid takes time to shed load or add capacity.
I found a figure on Wikipedia saying that the NL's 4.7GW worth of offshore wind capacity is 16% of their total electricity demand nationwide. 4.7/.16 = 30GW total, so this theorized computer load attack would represent about 10% of their grid's total capacity. Can their grid add and shed that much load that quickly? That's the part I doubt.
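Writing out the arithmetic in this sub-thread (all figures as quoted above):

    # Re-running the numbers quoted in this sub-thread (figures as cited above).
    households = 8.4e6          # Dutch households
    delta_per_pc_w = 400        # claimed power swing per PC, watts
    total_swing_gw = households * delta_per_pc_w / 1e9

    offshore_wind_gw = 4.7      # quoted offshore wind capacity
    share_of_demand = 0.16      # quoted share of national electricity demand
    grid_total_gw = offshore_wind_gw / share_of_demand

    print(f"every PC at once:  {total_swing_gw:.2f} GW")
    print(f"implied grid size: {grid_total_gw:.0f} GW")
    print(f"swing as share:    {100 * total_swing_gw / grid_total_gw:.0f}%")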
You skipped over the part where I point out that my assumptions are completely off. These numbers presume that all computers in all Dutch households are hacked, running and connected to the internet. 5% of that would be on the high side even.
So a more realistic "attack" would be able to move demand by about 0.5% of the total grid capacity. Switching one smelter in an aluminium factory on or off is probably more than that. Hacking a major charging-station company and switching off their chargers is probably even more than that.
I understand the direction of your thinking, and I agree that the combined power usage of "consumer devices" is big. But the larger power system is rather well protected against an attack on these devices by the diversity of the devices and the diversity of their setups (consumer firewalls, routers, individual protection, in-house fuses, local load kill-switches, etc.).
The solar devices lack this diversity, as the article mentions. There are few brands, and every device of a brand needs to connect to the one cloud service in the exact same way. So this does have a single point of attack, whereas "switching on/off all personal computers in a country" is of an entirely different level.
According to World Population Review and corroborated by other sources, the Netherlands has 91 PCs per 100 people. So your assumptions aren't that far off, actually. If anything, your assumption is lower than reality because a household is defined as 2 or more people.
> Incidentally, why are all those panels centrally connected anyway? I’d like to know what my panels are doing, but you don’t need the internet for that.
This is because of the market for carbon credits. When you installed your PV panels, someone estimated how much electricity they would generate over the next 10-15 years. Tradable carbon credits were created based on that estimate and went into the marketplace. And for the next 10-15 years they have to verify that the electricity was actually generated, or else someone has to pay back some money. Did you read the fine print on your contract? It is probably you that has to pay it back. You didn't know that one of the "rebates" you got was actually a pre-payment for those credits?!? Should have read the fine print ;)
Oh yeah, BTW: that "rebate" was only your portion of the credits. The installer got some of it (and doesn't have to pay back anything), the person that filled out the paperwork you didn't know existed got some (and doesn't have to pay back anything)...
I don't remember when and where exactly (and couldn't find it in a quick search), but there has already been an incident where an automatic update failed. I think it was something with the country code, so it was somewhat isolated and not worldwide.
There's a reason why I took my inverter offline after making sure that it was installed correctly. A cheap power meter now serves to measure my power generation instead.
Taking it offline doesn't protect against supply chain attacks in the form of built-in kill switches. A satellite could transmit signed instructions by modulating light below the noise floor, inverters must sense the voltage/current state of the PV panels anyway for MPPT to work.
Only deep inspection of the silicon and code can improve the situation.
Perhaps Western blocs could develop provably secure silicon IP and code, formally verified, and perform continuous random sampling on imported goods, including full multilayer silicon inspection; publish it for free and refuse to import products that don't cooperate.
I'm curious about the feasibility of modulating light onto a solar panel. I feel it would not be feasible, except possibly onto a single panel at a time over a long time period. Just a gut feeling based off radio stuff (GPS).
GPS can provide the coherent reference. If you mean transmitting a signal (say, sound) while the panel is illuminated by the sun, there are YouTube videos of people doing that with a laser pointer, in sunlight and without an information-theoretically justified modulation scheme.
Nothing prevents the satellite from transmitting the commands at night, if that feels more convincing to you.
Ask yourself what is the active area of a photodiode in your TV/... ? What is the active area of your light-bucket on a roof?
I'm not at all claiming this is happening, I'm claiming this is very feasible to do. Consider the aperture (area) of a camera or telescope, how many times can you fit this area into a domestic solar panel installation?
If we are discussing the solar panel that powers your outdoor garden light, I don't suggest applying the precautionary principle, but we are talking about products that add up to a significant fraction of grid power generation.
Wait what. I don’t know if my inverter does what they say. For one thing the vendor went bankrupt so there is no cloud dashboard anymore. For another there are hundreds of inverter vendors not one single one. And I am highly sceptical the basic dashboard showing solar generation has some sinister inverter backdoor killswitch when the article seems to provide no evidence of such? Seriously?
Edit: did some research and apparently it varies - many modern inverters can be remotely controlled by manufacturers - if they’re setup to allow it and are internet connected.
The article is still sensationalist about the risks though
I posted this question to HN 7 months ago, more around DataCenters:
>In the increasingly interconnected global economy, the reliance on Cloud Services raises questions about the national security implications of data centers. As these critical economic infrastructure sites, often strategically located underground, underwater, or in remote-cold locales, play a pivotal role, considerations arise regarding the role of military forces in safeguarding their security. While physical security measures and location obscurity provide some protection, the integration of AI into various aspects of daily life and the pervasive influence of cloud-based technologies on devices, as evident in CES GPT-enabled products, further accentuates the importance of these infrastructure sites.
>Notably, instances such as the seizure of a college thesis mapping communication lines in the U.S. underscore the sensitivity of disclosing key communications infrastructure.
>Companies like AWS, running data centers for the Department of Defense (DoD) and Intelligence Community (IC), demonstrate close collaboration between private entities and defense agencies. The question remains: are major cloud service providers actively involved in a national security strategy to protect the private internet infrastructure that underpins the global economy, or does the responsibility solely rest with individual companies?
---
And then I posted this, based on an HNer's post about mapping out nuclear power plants:
And you look at the suppliers that are coming from Taiwan, such as the water coolers and power cables, to suss out where they may be shipping to, https://i.imgur.com/B5iWFQ1.png -- but instead, it would be better to find shipping labels for datacenters that are receiving containers from Taiwan, and from the same suppliers as NVIDIA for things such as power cables. While the free data on ImportYeti is out of date, it gives a good idea of NVIDIA's supply lines... with the goal of finding out which datacenters are getting such shipments, you can begin to measure the footprint of AI as it grows, and which nuke plants they are likely powered from.
Then, looking into whatever reporting one may access for the consumption/utilization of those nuclear plants' capacity in various regions, we can estimate the power footprint of growing global compute.
DataCenterNews and all sorts of datasets are available - and now the ability to create this crawler/tracker is likely fully implementable.