Anyone trying to secure workloads running on any EC2 instance should know about this; there's nothing special about it being an ECS instance. You could do the same thing with EKS.
I find these kinds of articles very helpful for professional communication, not because they're scientifically accurate, but because they make me pause and think about what I've heard and how I'll respond. As someone with ADHD, just that small moment lets me take in the information and compose my thoughts so I can communicate better.
One additional one I would add is: don't be an ass. I have seen a fair share of people in this role belittle QA, Devs, and interns because they're not able to follow along or understand how to use DevOps tooling.
I like to think of the role as hospitality; you don't particularly have to like the guest but try to create a good working atmosphere.
A specific example: if a particular dev runs to you every time a build fails and tries to blame the environment, make them check their own work first. If another branch builds, ask whether one of the changes in this branch could be the cause. Ask if the code builds locally. Ask what they've tried so far to debug the issue. These questions help anyone who doesn't know how to troubleshoot build failures, and they discourage anyone who is simply trying to pawn off their work.
I try to think about developer experience in my day-to-day. K8s might make deployment easier for the DevOps team but more painful for the devs, and that's not good, because they are the ones who make the company money. So when devs complain, work with them to make it possible to ship code as fast as possible. If they don't like hand-editing settings in YAML files, find a way to abstract that away: use standardized naming so there are fewer values they have to care about, and provide sane defaults that won't need to be overridden 90% of the time.
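To make the "sane defaults" idea concrete, here's a minimal sketch of the pattern in Python. The keys (`replicas`, `resources`, `healthcheck`) are hypothetical examples, not from any real platform; the point is that a service's config file shrinks to only the values that differ from the platform team's defaults:

```python
from copy import deepcopy

# Hypothetical defaults owned by the platform team; devs override only
# what actually differs for their service.
DEFAULTS = {
    "replicas": 2,
    "resources": {"cpu": "250m", "memory": "256Mi"},
    "healthcheck": {"path": "/healthz", "interval_seconds": 10},
}

def render_config(overrides: dict) -> dict:
    """Deep-merge a service's minimal overrides onto the platform defaults."""
    def merge(base: dict, extra: dict) -> dict:
        out = deepcopy(base)
        for key, value in extra.items():
            if isinstance(value, dict) and isinstance(out.get(key), dict):
                out[key] = merge(out[key], value)  # recurse into nested sections
            else:
                out[key] = value  # scalar or new key: override wins
        return out
    return merge(DEFAULTS, overrides)

# A dev's entire config can be just the parts that differ:
cfg = render_config({"replicas": 4, "resources": {"memory": "512Mi"}})
```

Helm, Kustomize, and similar tools do essentially this merge for you; the sketch just shows why "defaults plus a small override file" keeps the YAML the devs see small.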
The DevOps team exists solely to enable developers to ship better code faster. If DevOps practices are preventing this, that needs to be addressed.
I've always wondered if being forced to live near the plants would change regulators' minds. There are plants throughout the country, so if you want any say in safety regulations, you'd have to live within 1 km of the start of the safety zone.
It probably wouldn't change anything, because regulators likely understand nuclear power plants, and what particular failures mean for a plant's overall safety, far better than the average layperson does.
I'm an advocate for nuclear power, but it seems ridiculous that this warning came 20 years after the initial finding.
I understand that nuclear facilities are not like regular buildings and repairs involve more logistics, but there's no way they should take 20 years.
> The NRC, which rarely issues yellow findings, said nuclear plant operators did not resolve cracking problems from 2003 to 2022 in V.C. Summer’s diesel generator system, one of the most important backup safety systems at an atomic power plant.
> Federal nuclear safety officials made a discovery that was perhaps more unsettling than the problem from 2022. They identified a pattern of cracks and leaks in the plant’s emergency generator system going back 20 years. On five different occasions since 2003, the power company has been forced to repair cracks in the emergency diesel generator system, according to an agency inspection report released in August. Diesel oil leaks have focused attention on why VC Summer plant operators did not resolve the cracking problems — and how that might have affected the company’s ability to prevent a radiation leak if an emergency occurred. Officials with the Nuclear Regulatory Commission say they are concerned because the problems keep recurring. Few other nuclear plants in the Southeast have had the same number of cracking problems in diesel generator systems, say officials in the agency’s Atlanta office.
"In this case, officials at the V.C. Summer plant learned about cracks in fuel pipes in the facility’s diesel generator system in 2003. Utility workers fixed the initial crack, as well as other cracks four different times in the years after the initial work was done. But the NRC says the utility never adequately assessed what could be done to make sure the diesel piping system did not experience more cracking. The most recent cracks were identified in November 2022 during a 24-hour test of the system. Workers found a small leak on one of two diesel generator systems. The leak increased over time and workers discovered a 140-degree crack around a pipe, records show."
So they found cracks first in 2003, and fixed them. They found more cracks over the years and fixed them... what they have failed to do is to stop the cracking from occurring in the first place. That failure led to an even bigger crack happening during a test run of the system. According to Dominion, they plan to build a new pipe, which should fix this. The NRC seems to think they should have done this sooner, and the NRC is likely right, but I am no expert in these things.
It probably should also result in a much more thorough inspection of the plant to ensure that there are no other issues. Again, I have no expertise here.
Agreed overall. That specifically contradicts GP's upthread understanding that "The original cracks were repaired. New ones did not show up until 2022."
We would have to assume those events were spread evenly across the entire 20 years. All five occasions could have happened between 2003 and 2007, with fifteen years passing before the next. Each time, the cracks were repaired.
Without a true timeline of events, it's hard to say for sure.
> not like regular buildings and require more logistics for repairs but there's no way it would require 20 years.
tl;dr: regulatory processes prevent improvements.
I work in a highly regulated industry, though not nuclear. There are obvious things to change that were approved in the site plans decades ago. Those nonsensical systems must be maintained, because if they stop working for even a small number of hours, everything must be stopped and there will be fines. They won't be changed, because doing so requires asking the regulator for approval, and then everybody and their uncle gets to comment and even sue to stop it. The regulatory process hinders obviously good changes and improvements.
BTW, this is similar to how Los Angeles squeezed out good-paying manufacturing jobs three decades ago: make it near-impossible to get an electrical permit to change anything.
While you can't explicitly pin a process to the E-cores, I wonder if you could write a program that floods low-QoS busywork to saturate the E-cores until they're full, then launches the desired application so it lands on a P-core.
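The lever that does exist on macOS is the QoS hint itself: a thread can declare itself background QoS and the scheduler will tend to keep it on the E-cores. Here's a rough ctypes sketch; the constant value is taken from `<sys/qos.h>`, this is an illustration of the system call rather than a supported Python API, and it's a hint to the scheduler, not a guarantee:

```python
import ctypes
import ctypes.util
import sys

QOS_CLASS_BACKGROUND = 0x09  # from macOS <sys/qos.h>

def demote_to_background() -> bool:
    """Ask the macOS scheduler to treat the calling thread as background
    QoS, which steers it toward the E-cores. Returns False on other OSes
    (the symbol only exists on Darwin)."""
    if sys.platform != "darwin":
        return False
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    # int pthread_set_qos_class_self_np(qos_class_t qos, int relative_priority)
    return libc.pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0) == 0
```

There's also the `taskpolicy -b` wrapper for whole processes, which is how utilities like backup daemons keep themselves off the P-cores.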
This is similar to a hack AMD uses on its X3D-series processors with two CCDs (or clusters, in this article's terminology). One CCD clocks higher, but the other has much more L3 cache. Since their target is gamers (at least for the consumer chips), they ship a driver that detects when you launch a game and "parks" the low-cache CCD so tasks aren't assigned to it, which effectively dispatches your processes to the high-cache CCD.
I'm not sure I see the point? Applications generally run on P cores by default; there's no need to go through those sorts of gymnastics to make that happen.
On my M2 MacBook Air, I see everything generally executes on the E-cores. The P-cores only see load when there's clearly something that needs lots of processing power.
I presume there's logic in the scheduler to prioritize E-cores for battery life purposes.
I don't think it's guaranteed that you can choose such things anymore. Some scheduling work has to happen at the hardware level, and more of it surely will be taken over there as heterogeneous CPU microarchitectures become more pervasive and the gap in feature sets between efficiency and performance cores widens.