
Reminds me of that 70s IBM presentation quote that surfaced recently, "A computer can never be held accountable, therefore a computer must never make a management decision".

It's one thing to have a computer flag issues, another to make it responsible for taking action and in this case, making a decision final. Google continues to set poor examples with irresponsible implementations of machine learning. With no accountability, no recourse, no humans to talk to.




I think that quote should be flipped.

"A computer can never be held accountable, therefore a computer must always make management decisions."


Or..

"A computer can never be held accountable for decisions, therefore all computer decisions are management decisions."


That misunderstanding could explain the last several decades of corporate management.


I thought the same, since that's already our current reality: "It wasn't us, it was the computer!" Things are only going to get worse.


"Computer says No!"


No? Why is this computer talking Norwegian?



A program can never be held accountable, but the person (or the whole company, or both) who decided that it should make management decisions can.


It's possible to read the original quote as a warning of exactly that. We should all be very wary of letting our code do the walking, because the fault should realistically lie with us; but it's as if the software industry feels invincible, and it's not hard to see why.


Yeah, that's basically how I read it. Or, more specifically: if the computer can't be held accountable, then those doing the holding get to decide who they think is responsible, and that might not end in your favor. In pithy terms, computers make poor scapegoats.

Given the context IBM operates in, that's probably what they are getting at.

Bill Gates tried this argument and lost.

Lawyer during deposition: "So who sent that email?"

BG: "A computer."


Yes! This is one thing humans are still much better at than AI: taking blame.


The best of both worlds is when you have a human supervise a fully automated process. You can pay them peanuts and still use them as liability sponges.

(The term "liability sponge" is shamelessly stolen from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236 .)


Computer: "It's not my fault". The sunspots made me do it.

Sunspots are computers' devil.


AI-aided management decisions can't come soon enough. Decision Model and Notation (DMN) made big headway on this, but I don't see it discussed. DMN models are essentially fancy decision trees, but they can handle complex factors and are designed to be intuitive to reason about.
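For the curious: the core DMN construct is the decision table, a set of rules mapping input conditions to an output under an explicit hit policy. Here's a minimal sketch of the idea in Python (the rules, field names, and thresholds are invented for illustration; this is not a real DMN engine):

    # Minimal sketch of a DMN-style decision table with a "first" hit
    # policy: rules are checked in order and the first match wins.
    # All rule conditions and thresholds here are hypothetical.

    RULES = [
        (lambda d: d["risk_score"] > 0.8, "escalate_to_human"),
        (lambda d: d["amount"] > 10_000, "escalate_to_human"),
        (lambda d: d["risk_score"] > 0.5 and d["repeat_customer"], "approve_with_review"),
        (lambda d: True, "approve"),  # catch-all default rule
    ]

    def decide(inputs: dict) -> str:
        """Return the first matching rule's output ("first" hit policy)."""
        for condition, decision in RULES:
            if condition(inputs):
                return decision
        raise ValueError("no rule matched")  # unreachable given the default

    print(decide({"risk_score": 0.9, "amount": 500, "repeat_customer": False}))
    # -> escalate_to_human
    print(decide({"risk_score": 0.2, "amount": 200, "repeat_customer": True}))
    # -> approve

The point of the notation is that the whole table is inspectable: every decision traces back to a named rule a human signed off on.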


That’s begging for some leaky abstraction to absolutely ruin your business.


Management is made of leaky abstractions. I think humans should of course still be the accountable party, but AI can guide the more routine decisions so that more time can be spent on the hard problems.

Imagine how many thousands of hours managers have wasted at this point discussing WFH. They should have been figuring out their supply chain and labor strategy.


This argument would be more compelling if companies were in general better at driving management decisions using the many data-analysis techniques already available. Making something lower-effort but also harder to understand and more error-prone doesn't seem like an obvious win.


I agree! I think management processes need more open experimentation, though. Documenting decisions and encoding them as decision trees makes them more transparent overall. It also aids knowledge transfer, something which, from first-hand experience, is lacking on the ground. There's a lot of management knowledge that lives only in people's heads, which increases bus-factor risk. That can be mitigated, whether through my proposed method or otherwise.


What do you mean with "leaky abstraction" here?


Abstractions get leaky when underlying assumptions fail to hold. As an example, comparing insurance rates and coverage from different companies via AI is of limited utility if you ignore counterparty risk: the risk that a company fails to keep its end of the contract. Of course you can add that, or any specific thing I bring up, to the model, but you can't include everything, because the model is always simpler than reality.

The desire for AI decisions to be explainable forces them to use even simpler models, which makes this worse.
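To make the leak concrete, here's a toy sketch (all figures invented) where one omitted factor, counterparty risk, flips the model's answer:

    # A naive comparison model ranks insurance offers by premium alone.
    # Adding the omitted factor (will the insurer actually pay?) reverses
    # the ranking. All numbers are hypothetical.

    offers = [
        # (insurer, annual premium, estimated probability a claim is honored)
        ("CheapCo", 800, 0.70),
        ("SolidCo", 1100, 0.98),
    ]

    def naive_pick(offers):
        """The simple model: lowest premium wins."""
        return min(offers, key=lambda o: o[1])

    def risk_adjusted_pick(offers, expected_claim=20_000):
        """Same model plus the expected loss from the insurer not paying."""
        def total_cost(o):
            _, premium, p_honored = o
            return premium + (1 - p_honored) * expected_claim
        return min(offers, key=total_cost)

    print(naive_pick(offers)[0])          # -> CheapCo
    print(risk_adjusted_pick(offers)[0])  # -> SolidCo

And of course the adjusted model is still leaky; it just leaks somewhere else.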


Now, imagine letting a computer decide how to drive a 15 ton truck down the freeway at 75 mph.



