Weather intelligence and cutting-edge tech is boosting grid capacity by 30% (electrek.co)
91 points by toomuchtodo on Jan 13, 2024 | hide | past | favorite | 25 comments



Somewhat related to this are the regulations for interstate natural gas pipelines. The regs define four “classes” which dictate the required thickness and burial depth of a pipeline based on its diameter and operating pressure.

Those “classes” are determined by population density near the pipe, e.g. high density means thicker pipe.

This means city dwellers get the biggest safety margin, while those out in rural regions get pipelines that are around 50% thinner. And such pipelines can be within a few feet of an occupied dwelling.


Does pipe thickness affect the efficiency of natural gas delivery?


Not directly, but it does increase cost. Thicker pipe costs more.


The article doesn't explain very well...

But this is all about getting more electricity down the same wires. On a cold windy day the wires are cooled more. Which means you can pump more amps through them without risk of failure - sometimes 100% more!

However, grid operators typically can't exploit that headroom, because grid operation is scheduled hours in advance - conventional power stations take hours to start or stop.

But if a reliable weather forecast a few hours out says it will be cold and windy, then you can safely schedule combinations of power producers and consumers that would otherwise put a particular link over its static capacity.
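A rough sketch of why cold, windy weather raises a line's rating: in steady state the Joule heating I²R must balance convective cooling, which grows with wind speed and with the gap between the conductor's temperature limit and the ambient temperature. All constants below are invented for illustration (real ratings use IEEE 738-style calculations on real line data):

```python
import math

T_MAX = 75.0      # max allowed conductor temperature, deg C (assumed)
R_PER_M = 7e-5    # AC resistance, ohms per metre (assumed)

def convective_coeff(wind_ms: float) -> float:
    """Very rough convective heat-loss coefficient, W per metre per deg C."""
    return 2.0 + 4.0 * math.sqrt(wind_ms)  # assumed fit, not a real correlation

def ampacity(ambient_c: float, wind_ms: float) -> float:
    """Max current (A) at which Joule heating balances convective cooling."""
    h = convective_coeff(wind_ms)
    return math.sqrt(h * (T_MAX - ambient_c) / R_PER_M)

static = ampacity(40.0, 0.6)   # static rating: hot day, nearly still air
dynamic = ampacity(5.0, 8.0)   # cold, windy day from the forecast
print(f"static {static:.0f} A, dynamic {dynamic:.0f} A, "
      f"uplift {100 * (dynamic / static - 1):.0f}%")
```

With these made-up constants the cold, windy case carries more than double the static rating, consistent with the "sometimes 100% more" claim above.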


I am not sure what this means.

Grid capacity can mean that every segment is now being fully utilized.

The Grid has X number of inputs and Y outputs, (some if not all can be both?)

Boosting capacity would seem to me to be most interesting as it pertains to routing.

If San Diego is experiencing an uncommon need for electricity, then paths are routed to increase supply(?). But San Diego probably won't have that many inputs (?) and each of them probably has some max capacity (?).

Ok so you need to route as much power there as can be used.

How would this system make this better?

Would it allow the grid operators to route more power over existing lines if weather permitted, and the opposite if it did not?


I think it means on average they can transmit 30% more current since they know the temperature of the lines for a given load, ambient temp, and air speed.

It doesn’t help on a hot day with no wind.

The transmission systems I have seen mimic geography - mostly transmitting power from generators to loads such as big cities. It is more of a radial or hub and spoke architecture than a matrix or grid.


Other companies have been doing this for a while now, not sure if this is that new: https://www.linevisioninc.com/


Can someone elaborate on how this works? Is that like optimizing power delivery based on the fact that power lines will have different power losses based on weather/temperature/humidity?


There is a good Volts[1] episode[2] on it. TL;DR the thing that limits how much power a line can carry is heat dissipation, which depends on weather. We used simple heuristic models to back out what we considered safe operating currents historically, but if you use sensors to enable a more data-intensive approach, you can operate closer to full capacity.

[1] https://www.volts.wtf/

[2] https://open.spotify.com/episode/4sqNcVcXdLF3QFDR6f5Vgd


So an alternative title is " ... is reducing safety margins by 50%".

(Sensors sense what they are designed to sense, not other things, and sometimes lose calibration.)


> So an alternative title is " ... is reducing safety margins by 50%".

Not necessarily. The _actual_ current limit depends on the climate conditions, and with static line rating the safety margin actually varies. So while dynamic line rating might reduce the safety margins when the conditions allow running more current, it can also increase the safety margin in case of a particularly hot summer.


Operating with unnecessary margin is wasteful and does not necessarily increase safety.


Said no OHS advisor ever.

Engineering in buffer for safety is one of the most common practices.

I mean, hey, if you wanna roll the dice standing next to some high-pressure hydraulic gear that's running a few psi away from failure because it's super efficient, be my guest.

But I'm not gonna enjoy the one day in 10,000 when something goes a little awry and you get cut in half because old mate ordered hoses without steel sheathing - the unsheathed ones were rated for the same pressure but were cheaper and more cost efficient.

That's some 1950s old world business ideology there.

Another example: lifting straps for cranes and lifting hardware are generally capable of 3x to 4x their safe working load. Buffers are everywhere and they save lives.


Accuracy and safety margins are two different things.

Suppose you have to spec the capacity of a line regardless of what temperature it is, because you're not going to measure it in realtime at all. You estimate the highest temperature will be 105 degrees F, calculate the capacity at that temperature, add e.g. a 20% safety margin, and call that the capacity of the line.

That means when it's 40 degrees F, you could be operating with a 200% safety margin, which is unnecessarily conservative and wasteful. Conversely, because you're not measuring the temperature at all, your high temperature estimate could be wrong and there could be a day that it's 115 degrees F, your safety margin is completely gone and the line burns out. Whereas if you were monitoring the temperature, you'd lower the capacity of the line that day to still have a 20% safety margin and not have problems.


This is a timely debate in engineering. Historically, it was customary to do the most conservative reliability analysis possible, right up to the point of accounting for physically impossible parameter combinations, just to ensure enough margin to trade at some point in the future when things inevitably don’t go as planned.

Now, technology has advanced to the point where all the low hanging fruit are gone in terms of performance optimization, and we’re better able to determine how close to the “cliff” we are, with tighter uncertainty bounds, than ever before with probabilistic design methods. By this logic, we should be able to squeeze out margin that was never needed (or never there) in the first place.

No safety professional would trade away margin, you say, but the engineer is often between a rock and a hard place when a few percent of margin traded away translates to millions of dollars in cost reduction or profit. Is it unethical to remove margin if your Bayesian UQ calculations say you’re still safe, if not safer than you were under the other methodology?

This tension is going to keep on building as AI-enabled solutions start penetrating more and more into traditional engineering fields, with as-yet unknown consequences.


Trading margin is perfectly fine if there are compensating mechanisms.

For example, automatic circuit breakers and other sensors and actuators that can reliably respond to the critical conditions.

If we can double the line current by trading margin and only need to add a few sensors and automatic breakers here and there, that's perfectly fine - you basically double your capacity without building a new line.
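A sketch of what such a compensating mechanism might look like: a toy overtemperature relay that trips the line only when the measured conductor temperature has stayed above a threshold for a set time (all thresholds here are assumed, not taken from any real relay):

```python
from dataclasses import dataclass

@dataclass
class OvertempRelay:
    """Trips when temperature stays above trip_temp_c for max_overtemp_s."""
    trip_temp_c: float = 75.0    # assumed conductor limit, deg C
    max_overtemp_s: float = 30.0 # assumed allowed overtemp duration
    _elapsed: float = 0.0
    tripped: bool = False

    def sample(self, temp_c: float, dt_s: float) -> bool:
        """Feed one temperature sample; returns True once the relay trips."""
        if self.tripped:
            return True
        if temp_c > self.trip_temp_c:
            self._elapsed += dt_s
            if self._elapsed >= self.max_overtemp_s:
                self.tripped = True
        else:
            self._elapsed = 0.0   # condition cleared; reset the timer
        return self.tripped

relay = OvertempRelay()
# One 10 s sample per reading; the line runs hot from the second reading on.
readings = [70, 78, 79, 80, 81]
states = [relay.sample(t, dt_s=10.0) for t in readings]
print(states)  # → [False, False, False, True, True]
```

The time delay is the interesting design choice: conductors can ride through brief excursions, so tripping instantly would sacrifice exactly the headroom dynamic rating is trying to recover.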


In the end, all engineering is about economy and compromises. The idea is to keep the margin for safety but improve utilization. That's a tricky problem, and indeed, if the measurements are inaccurate then riding closer to capacity will result in safety issues. But that's not the intention here (though if gear is marginal then it may well be a possible outcome in some cases).


The point of a safety margin is to account for the unknown. If you learn more, you can safely reduce the margins. Cranes have 3x margins because they're operated by construction workers who aren't carefully measuring everything; rockets, on the other hand, often have only ~10% margins. The more uncertainty you remove, the smaller you can push the margins.


One of my closest friends was killed in a workplace accident due to the implementation of the safety switch in the device he was operating.

An elevated work platform (EWP) - he was working underground. They use a small joystick with a safety button on top to control the up and down action of the machines. His offsider was directing movement from the ground.

The EWP hit an obstacle and bounced; my mate ended up falling over the side rail and in the process landed on the lift control, pushing it up and depressing the safety switch because it was located on top. He got crushed and killed between the EWP and the mine roof.

This was a safety device that was specifically put in to increase the safety of the machine. The engineers overlooked this aspect of its implementation during design. Now they have cages over the lift control. That's a lesson learnt in blood.

Needless to say, from this experience I never assume an engineer has thought of all dangers, let alone engineered a device to be as safe as it can be. It's a downside of designing from the office and not the field. It's also why field monkeys like me get called in to talk to engineers and provide consult. There's a lot to be learnt working on the tools that you don't get in textbooks.


The safety margins are still the same -- just using a lookup table based on temperature vs. assuming worst case.
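A static temperature-indexed rating might look like the lookup table below (the ratings are invented for illustration; a real table would come from the line's thermal study):

```python
import bisect

# Toy ambient-temperature rating table: hotter air -> lower allowed current.
# Values are illustrative, not from any real utility.
TEMPS_C = [-10, 0, 10, 20, 30, 40]
RATING_A = [1400, 1300, 1200, 1100, 1000, 900]

def rated_current(ambient_c: float) -> float:
    """Pick the rating for the next-highest table temperature (conservative)."""
    i = bisect.bisect_left(TEMPS_C, ambient_c)
    return RATING_A[min(i, len(RATING_A) - 1)]

print(rated_current(5))    # falls in the 10 degC bucket → 1200
print(rated_current(35))   # falls in the 40 degC bucket → 900
```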


Not really sure how this is different from a meter at the POI at every substation?


They are measuring the temperature out on the line.


This submission seems a dupe of https://news.ycombinator.com/item?id=38983687


My post is the original source, per HN guidelines, vs a reblog.

https://news.ycombinator.com/newsguidelines.html


How so? The submission you linked is a different article?



