Hacker News | nickevante's comments

That distinction between Industrial Production and Knowledge Work feels like the root cause.

It seems like modern Agile has mutated into a tool for Manufacturing Predictability rather than Software Discovery. We are so obsessed with making the velocity graph look like a straight line that we stopped asking if we are even building the right thing.

Do you think that shift happened because non-technical management needed a metric they could understand (tickets closed), or did we do this to ourselves?


This is the exact mental model I was looking for.

It reminds me of Kingman's Formula in queueing theory: As server utilization approaches 100%, the wait time approaches infinity.

We intuitively understand this for servers (you never run a CPU at 99% if you want responsiveness), yet for some reason, we decided that a human brain—which is infinitely more complex—should run at 99% capacity and still be expected to handle urgent interruptions without crashing.
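To put rough numbers on it, here's a back-of-the-envelope sketch of Kingman's approximation in Python (the coefficients of variation and the unit service time are assumptions, not measurements from any real system):

    # Kingman's approximation for mean wait time in a G/G/1 queue:
    #   W ~ (rho / (1 - rho)) * ((ca^2 + cs^2) / 2) * service_time
    # rho = utilization, ca / cs = coefficients of variation of arrivals / service.
    def kingman_wait(rho, ca=1.0, cs=1.0, service_time=1.0):
        return (rho / (1.0 - rho)) * ((ca ** 2 + cs ** 2) / 2.0) * service_time

    for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
        print(f"{rho:.0%} utilization -> wait ~{kingman_wait(rho):.0f}x the service time")

At 50% utilization you wait about one service time; at 99% you wait about a hundred. Same machine, same work, just no slack left to absorb variability.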


This discussion of software estimation reminds me of an interaction I had with an engineer who optimized Black & Decker assembly lines in 1981 using an Apple II.

They didn't estimate in 'Story Points'. They used atomic physical constraints.

He described it like this:

There was a standardized metric for all manual operations like "reach, one hand, 18-24 inches" or "pick item 10-100g." Each step had a time in decimal seconds... The objective was to minimize the greatest difference in station time so that no line worker was left waiting.

The most interesting part was his conclusion: Modern supply management is a miracle, but manual labor today is much harsher... The goal back then was flow; the goal now is 100% utilization.

It feels like in software, we are moving toward that "100% utilization" model (ticket after ticket) and losing the slack that made the line work.
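To make the contrast concrete, here's a toy Python sketch of that balancing objective; the task times and the greedy heuristic are my own stand-ins, not what his Apple II program actually did:

    # Toy assembly-line balancing: spread task times (decimal seconds) across
    # stations so the slowest station stays close to the others.
    # Longest-task-first greedy heuristic; illustrative, not optimal.
    def balance(task_times, n_stations):
        loads = [0.0] * n_stations
        stations = [[] for _ in range(n_stations)]
        for t in sorted(task_times, reverse=True):
            i = loads.index(min(loads))  # give the task to the least-loaded station
            stations[i].append(t)
            loads[i] += t
        return stations, loads

    # Hypothetical MTM-style step times ("reach", "pick", "assemble", ...).
    tasks = [1.8, 0.9, 2.4, 1.1, 0.7, 1.5, 2.0, 0.6]
    _, loads = balance(tasks, 3)
    print("station loads:", [round(x, 1) for x in loads],
          "spread:", round(max(loads) - min(loads), 1))

The heuristic isn't the point; the objective function is. You level the stations so nobody stands around waiting, which is the opposite of loading every station (or engineer) to 100%.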


For anyone interested in this, Jamis Buck's book 'Mazes for Programmers' is a masterpiece of the genre.

My personal favorite distinction is between the Recursive Backtracker (long, winding corridors with few dead ends, great for tower defense games) and Prim's Algorithm (lots of short cul-de-sacs, better for roguelikes). The bias of the algorithm dictates the feel of the game more than the graphics do.
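If anyone wants to play with the difference, a bare-bones recursive backtracker fits in a screenful of Python (the grid size and the dict-of-sets representation are my own choices, not from the book):

    import random

    # Minimal recursive-backtracker maze: carve passages between grid cells.
    # passages[cell] holds the neighbouring cells you can walk to from it.
    def carve_maze(width, height):
        passages = {(x, y): set() for x in range(width) for y in range(height)}
        stack, visited = [(0, 0)], {(0, 0)}
        while stack:
            x, y = stack[-1]
            unvisited = [(x + dx, y + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if (x + dx, y + dy) in passages
                         and (x + dx, y + dy) not in visited]
            if unvisited:
                nxt = random.choice(unvisited)  # keep tunnelling -> long corridors
                passages[(x, y)].add(nxt)
                passages[nxt].add((x, y))
                visited.add(nxt)
                stack.append(nxt)
            else:
                stack.pop()  # dead end reached: backtrack
        return passages

    maze = carve_maze(10, 10)
    print(sum(1 for exits in maze.values() if len(exits) == 1),
          "dead ends in a 10x10 maze")

The long corridors come from always extending the current cell; randomized Prim's instead carves from a random frontier cell each step, which is exactly what produces all those short cul-de-sacs.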


I've noticed a similar pattern. AI assistants are incredible at kinetic coding, i.e. generating boilerplate, refactoring, and writing tests. But they are detrimental to potential coding, especially the architectural thinking that happens before you touch the keyboard.

Writing by hand (or whiteboarding) forces you to load the entire context into your working memory. AI allows you to be lazy with your working memory. The code gets written faster, but the mental model of the system in my head is significantly weaker.


Definitely. It's great at producing a template, but it's hard to understand or build on what it has generated.

I agree that writing by hand sets up that mental model, and because it's written by you, it's easier to branch out from there.


The headline is slightly misleading. Microsoft can only provide the key if you are using a Microsoft Account, which automatically escrows the BitLocker recovery key to OneDrive.

If you use a Local Account (which requires bypassing the OOBE internet check during setup) or explicitly disable key backup, the recovery key never leaves your machine. The issue isn't the encryption algorithm; it's the convenience-driven defaults.
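If you want to check your own machine, listing the key protectors shows whether a recovery password even exists to be escrowed. A rough sketch; it assumes an elevated prompt and that manage-bde's output wording matches your Windows build:

    import subprocess

    # List BitLocker key protectors for C:. If no "Numerical Password"
    # (recovery password) protector exists, there is nothing to escrow to OneDrive.
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", "C:"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    print("Recovery password protector present:",
          "Numerical Password" in result.stdout)

On a Microsoft Account setup you'll normally see a Numerical Password protector alongside the TPM one; that recovery password is the value that gets synced.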


I love how Luke has his video setup dialed in... while some of us struggle to even get our basic backpack dialed in for such adventures. Curious what selfie stick and tripod he uses to get such high-quality angles and footage without spending too much effort on setup while camping. We know he used a GoPro 13 Black, but what about the other accessories and the tech gear for charging in the wilderness?


Vibe-based coding is sloppy and has given the tech a bad name. Used smartly by devs, LLMs can still spark innovation and solid builds, provided you know what you're doing with them and understand their limitations.


TL;DR - Learn from each other. Pay it forward. It's not a zero sum game! :)


Application of AI could be a game-changer for microgrids, especially to a) boost efficiency, b) manage demand, and c) strengthen resilience.

Perhaps there are strong parallels with the application of AI to microservices architectures.

Learnings could be shared between the two domains. Any thoughts?

