Read a biography of Rickover. ("Rickover: Controversy and Genius" by Polmar and Allen is a good one.) He was a weird boss. Huge ego. Into screaming at subordinates, put-downs, and micromanagement. He did get results, most of the time. Hated within the Navy, liked by Congress.
Highly controversial for sure. “The Rickover Effect” is also a good choice. Dry and long, but full of great anecdotes. All that said, it’s tough to argue with his accomplishments. First to a productive/working nuclear reactor, in the hardest application possible (a submarine), in government (not the private sector). And starting significantly behind… It’s hard not to be inspired by what he accomplished.
"When I came to Washington before World War II to head the electrical section of the Bureau of Ships, I found that one man was in charge of design, another of production, a third handled maintenance, while a fourth dealt with fiscal matters. The entire bureau operated that way. It didn’t make sense to me. Design problems showed up in production, production errors showed up in maintenance, and financial matters reached into all areas. I changed the system. I made one man responsible for his entire area of equipment—for design, production, maintenance, and contracting. If anything went wrong, I knew exactly at whom to point. I run my present organization on the same principle."
> A major flaw in our system of government, and even in industry, is the latitude allowed to do less than is necessary. Too often officials are willing to accept and adapt to situations they know to be wrong.
I feel like this has become the norm just about everywhere.
I reread this essay once every few months. It is a good reminder that there is no shortcut to being an effective engineering manager.
A common mistake many new and seasoned managers make is to delegate and then disconnect from the details. Once so disconnected, it is a one-way street where the manager progressively operates at a higher and higher level. Now they are no longer in a position to evaluate the reliability of estimates from their engineers or the output of their work. Worse, they lose awareness of the technical debt being accumulated and tradeoffs being made.
This works out fine if they are lucky enough to have a highly competent and conscientious team, and it is a disaster otherwise. The latter is more likely in the real world, especially if you are not working for a company that can afford to hire top-notch engineering talent.
Rickover's approach, outlined in the above article, keeps managers grounded in reality - delegate, but do not disconnect from the low-level details. The flip side is that this may come across as micromanagement (and Rickover was a notorious micromanager). Effective management is anything but easy.
> Further, important issues should be presented in writing. Nothing so sharpens the thought process as writing down one’s arguments. Weaknesses overlooked in oral discussion become painfully obvious on the written page.
I knew Rickover's leadership and management style still sounded very, very familiar, way before I read this towards the end of his speech.
This applies widely in software engineering: for example, the yawning gulf between the merits of microservices with Kubernetes and Istio on paper and in practice.
This quote has always seemed cute, and coming from Berra it would be, but hardly true.
Theory (at least in physical disciplines) is always taught with up-front statements of assumptions - what non-idealities are assumed away in order to tractably develop a theory in the first place. It is not taught as though theory transcends whatever simplifications were made to arrive at it. Physical theoreticians do not contend that theory has no differences from reality.
It's not a boolean matter of the assumptions being "not valid" - sometimes, the non-idealities are of significant enough magnitude to make a meaningful difference. Other times not.
If one is taught ideal spring-damper theory, they are told friction in the damper is neglected in the mathematical model. If I use this theory to size the spring and hydraulic damper for some application where the forces involved vastly outweigh the damper seal friction, it's likely this non-ideality doesn't impact the answer enough to affect my sizing.
If I'm trying to eke out every last bit of performance from the spring-damper system and minimize damper lag, then the seal friction probably does matter.
Either way, when the theory was taught, it was done by stating what assumptions were used to derive the theory. How much those assumptions affect the validity of the model for a given purpose is an "it depends" matter.
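The sizing check described above can be sketched in a few lines. All the numbers here are hypothetical (damping coefficient, peak velocity, and seal friction are made up for illustration); the point is just comparing the magnitude of the neglected non-ideality against the forces the ideal model predicts:

```python
# Hypothetical numbers: does neglecting damper seal friction matter
# for a given spring-damper sizing exercise?

def damping_force(c, velocity):
    """Ideal viscous damper model: F = c * v (seal friction neglected)."""
    return c * velocity

c = 2000.0        # damping coefficient, N*s/m (assumed)
v_peak = 1.5      # peak piston velocity, m/s (assumed)
f_seal = 30.0     # estimated seal friction force, N (assumed)

f_damper = damping_force(c, v_peak)
friction_ratio = f_seal / f_damper

# If seal friction is ~1% of the viscous force, the ideal model is
# probably fine for sizing. If the ratio were 10-20%, the neglected
# non-ideality would start to affect the answer.
print(f"damper force: {f_damper:.0f} N, friction ratio: {friction_ratio:.1%}")
```

With these assumed numbers the ratio comes out around 1%, so the ideal model is adequate; in the eke-out-every-last-bit-of-performance case, the same arithmetic would tell you the friction term can't be dropped.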
Applied to practical PV, wind power, and batteries, this has some merit. But the point is that those lab-bench designs HAVE led to structural improvements and engineering at scale. And they did in nuclear engineering too. All he was really doing was drawing a line through the "simple, cheap and easy" language coming out of benchtop work.
It's like "... in mice" for medical advances. Taking the next step into humans demands rigour.
But is it the same at all? You design a solar panel and try one and make prototypes and then install thousands and it’s about the same and it scales. A nuclear reactor on the other hand…
A tonne of perovskite lab demos have failed to launch at scale in industrial deployment. It is really hard to make stable, high-yield, high-power PV. Small increments are all we're left with.
You're saying the risk:consequences equation differs. I don't disagree.
Modern silicon PV cells are cheap, non-toxic, stable, and their efficiency is more than half of the theoretical maximum. The big challenge is storage, PV generation already outperforms any other power source.
For those of us who have read it, do you think reading his book, The Never-Ending Challenge of Engineering, is a good investment for a software manager?
I had it recommended to me after a blog post I’d written on engineering management hit the front page. I bought it and read it based on that comment.
He basically has some very strong opinions about the seriousness of knowing your domain and systems well. Given the risks inherent in submarine nuclear reactors, he really focuses on deeply knowing your shit, so you can model problems holistically.
How practical this advice is in a modern environment of doing more with less and selecting tradeoffs with less-than-perfect information is an open question.
That being said, if you want to reflect on your own standards for yourself and your work, it’s good.
I read the DoE biography of Rickover while I was doing my undergrad in Physics at the Naval Academy in the 90s. At that time there were still Navy Captains who had interviewed with Rickover. And I've read pretty much everything I could since. I would say the lessons left behind are more for managers; unfortunately, most managers will never read them. They're too busy showing how they can do more with less. Cutting corners is their method.
> At that time there were still Navy Captains who had interviewed with Rickover.
After nearly 50 years I can still recite my interview (both sides of the conversation) pretty much verbatim from memory — it was, let's just say, unfriendly (but he let me into the program, much to my surprise).
I think a lot about how the modern software solution is to automate all of the things.
Rickover believed in that AND demanded technical excellence from all of the operators. Nowadays, it seems like we automate things so that we don’t have to train technical excellence.
Rickover believed that automated systems will fail in unpredictable ways, leaving the human operators as the last line of defense. The people in the organization need to understand how -everything- works so that they can respond to these failures.
But modern orgs want to automate the things so that they can cut costs. Not Rickoverian.
I'm a former fast-attack sub nuke (Electronics Technician). Note that I got out in 2016, and was last on a sub in 2013, so cultures may have changed.
Nukes are expected to understand their systems (and everything that interacts with them) at a granular level. A common final board certification question is, "How does a neutron turn my rack light on?" Depending on your rate (i.e. your specific job - reactor, electrical, mechanical, chemical), you may be expected to do a deeper dive into a specific area, but everyone will be required to trace the path from fission --> heat --> heat exchange --> steam generation --> turbine --> power buses --> light. By trace the path, I mean literally sketch the systems including pumps, valves, circuit breakers, etc.
An analogous question for tech is "What happens when you type foobar.com into your browser?" This question has enormous breadth and depth: keyboard debounce circuits, CPU interrupts, TCP congestion control algorithms, DNS, LCDs, and a thousand other things I didn't discuss. You can argue that this is all trivia for most SWEs, but I counter that if you have at least a passing familiarity with the actual full stack, you're in a much better position to understand how your day-to-day work affects and can be affected by the rest of it.
For example, IOPS seem to be a mystery to a lot of devs. I'll grant you that EBS (and presumably other cloud offerings as well) makes it a fun adventure between provisioned, burstable, implicit striping, max performance/24 hours limits, etc., but the root concept remains the same - you can perform N actions/sec of block size M. Don't forget to take fsync() calls into account, as well as block-size mismatches.
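The root concept is simple enough to write down. A rough sketch, with made-up volume numbers (not any specific EBS tier), of how IOPS, fsync, and block-size mismatch interact:

```python
# Back-of-the-envelope IOPS math. All numbers are hypothetical.

def max_throughput_bytes(iops, block_size):
    """Upper bound on throughput: N ops/sec, each of block size M."""
    return iops * block_size

def effective_write_ops(app_writes_per_sec, fsyncs_per_write=1):
    """A flush to the device is itself I/O work, so a write+fsync
    pattern consumes roughly double the op budget of writes alone."""
    return app_writes_per_sec * (1 + fsyncs_per_write)

def device_ops_per_app_write(app_block, device_block):
    """A mismatched application block size costs ceil(app/device)
    operations at the device level."""
    return -(-app_block // device_block)  # ceiling division

iops = 3000                # provisioned IOPS (assumed)
device_block = 16 * 1024   # device block size, bytes (assumed)

print(max_throughput_bytes(iops, device_block))        # bytes/sec ceiling
print(effective_write_ops(1000))                       # ops/sec consumed
print(device_ops_per_app_write(64 * 1024, device_block))
```

With these assumptions, 1000 app writes/sec that each fsync() already eat two thirds of the 3000-IOPS budget, and a 64 KiB app write on a 16 KiB device block quadruples the op count - which is usually where the "mystery" slowdowns come from.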
This stuff is so abstracted away that most people never have to see it or think about it, until they do. The magic that's occurring when you add a Persistent Volume to a Pod is incredible. There's the cloud provider abstracting away the disk, controller, redundancy, failover, etc. The CSI driver abstracting away filesystem creation, expansion, etc. Kubernetes abstracting away mount points, access controls, read/write, etc. The amount of things in the path between your app issuing a write and it landing on persistent storage is breathtaking.
Not having to deeply understand lower level activities allows people to concentrate on higher level activities, enabling the building of more complex systems. I suppose it is good to have a few people who know everything but it suffices to have collective coverage with redundancy. Unless you have severe constraints on the size of the crew, as with a plane or a submarine. The software world is not like that.
I didn't say it's necessary to deeply understand everything.
> if you have at least a passing familiarity with the actual full stack, you're in a much better position to understand how your day-to-day work affects and can be affected by the rest of it.
One of the places I worked at was so good at cutting corners, they had to resort to cutting corners off a perfect sphere after no corners were left anymore. In a sense, the culture / leadership principle of "no corner cutting" was 100% accurate... Horrible place, and all engineering driven...
The true problem is not technical, and it is right there in the article. Non-academics "speak less and worry more", so their knowledge is likely scattered and not distilled. Building these devices is probably quite simple in the end; it is just that the story is not being told.
I'm not sure why this was posted. Presumably u/jameshart thinks it is relevant to some current events.
If these events have anything to do with nuclear fission, then I don't find Admiral Rickover's remarks relevant at all. Those remarks were right on the money back in 1953, but we are in 2023 now. Nobody is suggesting we can build reactors easily, or with "off-the-shelf" components, or that they'd be cheap, or could be built quickly. We know reactors are complex, expensive, and take a long time to build.
But reactors are not theoretical anymore. We've built hundreds of them. Of many types. Pressurized water reactors the most; they've turned out to be quite good from a number of points of view. But we've built boiling water reactors, Candu reactors, even sodium cooled reactors, reactors cooled with CO2, etc.
When people talk about building new reactors, it's not paper reactors we're talking about. It's reactors where the world has thousands of reactor-years experience. They are complex, they have to go through an arduous regulatory approval process, but they are definitely not vaporware.
"An academic reactor or reactor plant almost always has the following basic characteristics:
It is simple.
It is small.
It is cheap.
It is light.
It can be built very quickly.
It is very flexible in purpose (“omnibus reactor”).
Very little development is required. It will use mostly “off-the-shelf” components.
The reactor is in the study phase. It is not being built now.
On the other hand, a practical reactor plant can be distinguished by the following characteristics:
It is being built now.
It is behind schedule.
It is requiring an immense amount of development on apparently trivial items. Corrosion, in particular, is a problem.
It is very expensive.
It takes a long time to build because of the engineering development problems.
It is large.
It is heavy.
It is complicated."
The above still seems relevant: it spells out the differences between paper exercises and the finished product. On paper, the many thousands of subtleties of real things are rarely taken into account - i.e. frictionless, motionless, perfectly rigid, etc., spherical cows are the norm.
Precisely this. In theory, there will never be unexpected latency to the database, so you don't need to worry about overwhelming it and can just immediately retry. In theory, your instance's CPU won't throttle due to a shared tenant on the host running an AVX-512 instruction, so the profiling you did in staging is valid and can be trusted.
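The fix for the immediate-retry assumption is well known. A minimal sketch of exponential backoff with full jitter (the parameter values here are illustrative, not recommendations):

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry a flaky operation with exponential backoff and full jitter,
    instead of hammering an already-struggling database with
    back-to-back retries."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the failure
            # Cap the exponential, then sleep a random fraction of it.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

The jitter is the part theory tends to omit: without it, every client that saw the same latency spike retries at the same instant, and the "unexpected" load repeats itself on schedule.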
Nothing to do with current events. Nothing to do with (specifically) anything happening in nuclear engineering.
Just thought it captured a timeless truth of engineering - that there are two kinds of project: beautiful clever designs that will solve all your problems (but that haven’t yet been built); and messy, expensive, late designs that are actually working.
Related to the nirvana fallacy discussed here the other day, where user acidburnNSA mentioned it - I felt it deserved a discussion of its own: https://news.ycombinator.com/item?id=36078781
I think it’s been posted on HN because it’s relevant to the way software development can be performed in ignorance of production environments - or can be done in a production-aware (more SRE focused) way.
It is much more widely applicable than just to nuclear reactor design. There is a steady drumbeat of articles here (and elsewhere) of developments which (on paper) are allegedly game-changing. This is prevalent in medicine of course, but also (off the top of my head) space travel, batteries, solar cells, everything AI, programming languages and frameworks, self-driving cars, flying cars, even the perennial favorite but now-obsolescent internal-combustion car engine, and, yes, new nuclear reactor designs.
I took it as a metaphor for other types of engineering. With software engineering we teach people how to walk a tree, schedule threads, do matrix transformations, etc. in school, then throw them into a job where the big problems are more like "How do we make the web page fast enough for the vendors in Mumbai without violating GDPR?"
"Doing a Job" https://govleaders.org/rickover.htm