"A computer can never be held accountable" (simonwillison.net)
338 points by zdw 20 hours ago | 237 comments





The other side of this coin is that there is an incentive for decision makers to use computers precisely so that they are not held accountable. This is captured pretty well by this quote from Neil Postman's Technopoly:

>[B]ureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control. Because of its seeming intelligence and impartiality, a computer has an almost magical tendency to direct attention away from the people in charge of bureaucratic functions and toward itself, as if the computer were the true source of authority. A bureaucrat armed with a computer is the unacknowledged legislator of our age, and a terrible burden to bear. We cannot dismiss the possibility that, if Adolf Eichmann had been able to say that it was not he but a battery of computers that directed the Jews to the appropriate crematoria, he might never have been asked to answer for his actions.

How does one counteract that self-serving incentive? It doesn't seem like we've found a good way, considering we seem to be heading straight into techno-feudalism.


> How does one counteract that self-serving incentive? It doesn't seem like we've found a good way, considering we seem to be heading straight into techno-feudalism.

The EU is trying right now. It's being discussed over here:

https://news.ycombinator.com/item?id=42916849

See: https://artificialintelligenceact.eu/chapter/1/ and https://ec.europa.eu/commission/presscorner/detail/en/qanda_...

AI systems are defined[0] in the Act in such a way as to capture the kind of "hands-off" decision-making systems where everyone involved could plead ignorance and put the blame on the system working in mysterious ways; it then proceeds to straight up ban a whole class of such systems, and classifies some of the rest as "high risk", to be subject to extra limitations and oversight.

This is nowhere near 100% of a solution, but at least in these limited areas it sets the right tone: it's unacceptable to have automated systems observing and passing judgement on people based on mysterious criteria that "emerged" in training. Whatever automation is permitted in these contexts is basically forced to be straightforward enough that you could trace back from the system's recommendation to the specific rules that were executed and to the people who put them in.

--

[0] - https://artificialintelligenceact.eu/article/3/


The EU AI Act has some very broad carve-outs added for law enforcement, such as real-time facial recognition. And individual EU states still have the authority to carve out exceptions as they wish.

Like every other form of tech regulation done by the EU, it's all bark and no bite, because politicians of individual countries have more power than MEPs.

What's to stop Fidesz, PiS (if they return to power), etc from carving out broad exceptions for their own Interior Ministries? They've already done this with spyware like Pegasus.

Instead of techno-feudalism, it's basically techno-paternalism which is essentially the same thing. In both cases, individual agency is being limited by someone else.


The defense against this is to have very clear legal principles that identify the person or people fully accountable for the machine's decisions.

Admittedly this may lead to some strange outcomes if followed to its logical conclusion, like a product manager at a self-driving car company being the recipient of ten thousand traffic tickets.


> The defense against this is to have very clear legal principles that identify the person or people fully accountable for the machine's decisions.

Most legal principles are designed to reduce liability. That's the whole point of incorporation, for example.


No, legal principles are designed to specify liability.

This. I'd go as far as to say that the law mostly tries to conserve liability, in the "energy conservation" sense. Once harm is defined and quantified, the consequences have to be discharged somewhere, and there are tons of rules that try to sensibly distribute them among the parties involved, while counteracting everyone's attempts at diffusing liability or redirecting it somewhere else.

On that note, after some time working in cybersec and GRC fields, I realized that cybersecurity is best understood in terms of liability management. This is what all the security framework certification and auditing is about, and this is a big reason security today is more about buying services from the right vendors and less about the hard tech stuff. Preventing a hack is hard. Making it so you aren't liable for the consequences is easier - and it looks like a network of companies interlinked with contracts that shift liability around. It's a kind of distributed meta-insurance (that also involves actual insurance, too).


Incorporation is to reduce liability for debt. It's not supposed to reduce liability for criminal negligence. Or other criminal offences.

At least in the US, there's precious little difference. If a company is found guilty of a crime, the C-suite isn't thrown in prison; they pay a fine.

If a company is found to have committed a tort against a party, they pay damages.

There are exceptions (the Volkswagen diesel scandal comes to mind) but generally both punishments entail paying out a monetary amount that is often lower than the profit generated by the crime, often because of tort reform or because of fine amounts that are out-of-date with current corporate revenues.


> The defense against this is to have very clear legal principles that identify the person or people fully accountable for the machine's decisions.

Be careful how hard you push for this - this is how the prosecutors in the UK Post Office (Horizon) fiasco drove postmasters out of business and drove a few to suicide.


>identify the person or people fully accountable for the machine's decisions.

Which requires more tech, not less.


Not necessarily.

Tbh I’m in favor of holding C-suite responsible for the actions of their company, unless the company has extremely clear bylaws regarding accountability.

If, say, a health insurance provider was using an entirely automated claim review process that falsely denies claims, I think the C-level people should be responsible.


> product manager at a self-driving car company being the recipient of ten thousand traffic tickets.

In the case of self-driving cars, the company itself could be held liable. Everyone invested in the company who puts a bad product on the market should be financially impacted. The oligarchy we are heading for wants no accountability or oversight -- all profit, no penalty.


Technology has nothing to do with this.

Before technology there was "McKinsey told me to do this". Abrogation of liability is a tale as old as time.


At least with a fully human chain of responsibility, the buck stops with someone

A computer cannot ever be made to atone for its misdeeds

Humans can


Someone has never encountered bureaucracy, I see. Human chains of responsibility manage to dissolve blame into nothingness all the time.

Blame is not the same thing as accountability

In situations where accountability is absolutely required, you will find that people are held accountable

Often they are scapegoats, but that is a different problem


Not really. The vast majority of adults are not at all accountable in today's society. We blame.

Aka the Nuremberg defense; sometimes it works, sometimes not so much.

> [B]ureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control

Isn't the bureaucrat who uses a computer system still responsible for whatever its output is?

"GPS said I should drive off of a cliff" doesn't seem like a very potent argument to dismiss decisional responsibility. The driver is still responsible for where the car goes.

The only case where the responsibility would shift to the computer - or rather to the humans who made the computerized thingy - would be a Pentium FDIV-class bug, i.e. the computer system produces incorrect output+ from correct input, on which an earnest decision is then based.

+ assuming it is indistinguishable from correct output.


The difference is that bureaucrats make decisions on behalf of others so incentives are less aligned or not aligned at all.

If you drive a car and the GPS tells you to drive off the cliff, you won't do it because you don't want to die.

If some bureaucrat rejects somebody's health care claim leading to them dying prematurely, it's just a normal Tuesday.


> "GPS said I should drive off of a cliff"

For a bureaucrat, it's "GPS said WHOEVER-ELSE should drive off of a cliff." Their problem.

Adding: Have a good day.


Try to do something with a bank in a branch office.

Clerk: can't do anything about it, the system doesn't let me. I can get you the manager.

Branch manager: well, I can't do anything about it, "the computer says no". Let me make a call to regional office ... (10 minutes of dialing and 30 minutes of conversation later) ... The system clearly says X, and the process is such that Y cannot be done before Z clears. Even the regional office can't speed Z up.

You: explain again why this is all bullshit, and why Z shouldn't even have been possible to trigger in your case

Branch manager: I can put a request with the bank's IT department to look into it, but they won't read it until at least tomorrow morning, and probably won't process it earlier than in a week or so.

At this point, you either give up or send your dispute to the central office via registered mail (and, depending on its nature, might want to retain a lawyer). Most customers who didn't give up earlier will give up here.

Was the system wrong? Probably. Like everyone else, banks have bugs in their systems, on top of a steady stream of human errors. Thanks to centralized IT systems, the low-level employees are quite literally unable to help you navigate weird "corner case" scenarios. The System is big, complicated, handles everything, no one except a small number of people is allowed to touch it, and those people are mostly techies and not bank management. In this setup, anyone can individually claim they're either powerless or not in the position to make those decisions, and keep redirecting you around from one department to another until you either get tired or threaten to sue.


By making someone responsible for decisions no matter the source. Perhaps someone who could be the "lead" or maybe "chief" person in charge of making decisions for the business...

It can also dramatically lower corruption. You can't receive a bribe from someone whose visa has expired when that person's arrest was already recorded in a database.

Precisely what Americans say you should not do: regulate.

There should be laws stating who has skin in the game, maybe by stating that if you take responsibility for the profit by having a high salary, you also take responsibility for the damage with prison.


> Precisely what Americans say you should not do: regulate.

Everyone thinks the same until they're screwed over and then they want someone to do something about that. The big misunderstanding is that "regulation" is just the stuff you don't like. In reality it's everywhere the state gets involved. Every rule that the state ever put in place is regulation. Even the little ones. Even the ones that you like.

Computers cannot be held accountable more than a car, or a gun, or an automated assembly line can. That's why you have a human there no matter what, being legally accountable for everything. The human's rank and power defines how much of the risk they are allowed to or must take.


Libertarians love certain regulations - mostly the ones where the government allocates some stuff as "theirs" and uses violence to prevent other people from using it without paying them a fee.

Please go on about the regulations that libertarians love around allocation of stuff.

Sure, it's called "property" and "contracts".

Nice strawman, but that isn't remotely what libertarians believe.

Hell, even libertarians don't know what they believe.

> Precisely what Americans say you should not do

Regulations, on a large scale, were pioneered by America as a response to the Great Depression. For a long time Europe was behind the US on this front.

Regulations, actually, worked miracles for the US. But two things happened: early success that prevented further improvements (medical care), and mechanistic misapplication of the practice (over-regulating businesses like hairdressing, etc.). Blinded by the latter, a lot of Americans believe that regulations, in general, are bad. Well, now we see a small group of people who stand to gain a lot from deregulating many aspects of American life about to rob blind the remaining very large group of American people :|


> How does one counteract that self-serving incentive?

One does not; one uses a computer in decision making in order to evade accountability and profit from it.

See Meta, Alphabet; the list goes on.


Funny, I'd say the above passage is the very reason we're headed towards techno-feudalism. If you believe some mid-level bureaucrat with a computer is history's greatest oppressor, then why not have the world's richest people ransack the government? After all, they are the people's heroes, valiantly fighting oppression.

Have we watched the same events unfold?

Virtually every tech CEO was standing behind Trump at the inauguration. The takeover is the tech feudalism. They are not the heroes.


We counteract this by not letting bureaucrats make rules in the first place. Overturning Chevron was a step in this direction, but we need to go much, much further.

Overturning Chevron was specifically a giant step INTO techno-feudalism, since it means the oligarchs can only be reined in if Congress, full of (at best generalist) career politicians, understands a technical or regulatory problem deeply enough to specifically craft legislation that literally gets it right on its own.

Chevron never was the problem.


“I’m sorry buddy, it’s not up to me!”

https://youtu.be/wGy5SGTuAGI?t=218


"Computer says no"

https://www.youtube.com/watch?v=x0YGZPycMEU

See also https://en.wikipedia.org/wiki/Computer_says_no and the legal response to the real-world problem, the right not to be subject to a decision based solely on automated processing: GDPR Article 22 (https://eur-lex.europa.eu/eli/reg/2016/679/oj#d1e2838-1-1)


You mean like when corporations downsize and bosses / managers fire employees, we say that “immigrants / machines / overseas people etc. took their jobs”?

But when corporations grow we say it was the corporations that “created tons of jobs”?

In societies that want to promote capitalism and corporatism the everyday language we use reflects this promotion.


This is why I have doubts about self-driving cars: it changes the accountability from the driver to the manufacturer. And I have a hard time believing the manufacturer would want that liability, no matter how well they sold.

This is also the main reason for promoting chip cards: sure, they are more secure, but the real reason the banks like them is that it moves credit card fraud accountability from being the bank's problem to being your problem.

Same with identity theft: there is no such thing as identity theft, it is bank fraud. But calling it identity theft changes the equation from a bank problem to your problem.

Companies hate accountability. And to be fair, everyone hates accountability.


Re: Autonomous driving

If this becomes a thing, you'll very quickly see insurance products created for those manufacturers to derisk themselves. And if the self-driving cars are very unlikely to cause accidents - or more accurately, if the number of times they get successfully sued for accidents is low - it will be only a small part of the cost of a car.

The competitive advantage is too big for them to just not offer it when a competitor will, especially when the cat's out of the bag when it comes to development of such features. Look at how much money Tesla made from the fantasy that if you buy their car, in a few years it would entirely drive itself. There's clearly demand.


Another method is to create a lot of small companies that can go up in smoke when sued.

Supermarket delivery here is like that: the online supermarket does not own any delivery vans itself and does not hire any delivery workers. Everything is outsourced to very small companies, so problems with working conditions and bad parking are never the fault of the online supermarket.


In California (one of the few places that's issued an L3 permit) the regulations place all of the requirements on the manufacturer. There is probably a workaround where the sacrificial company "installs" the self driving system (i.e. plugs in a USB drive) but then they would be the manufacturer and get saddled with tons of other regulations. Just for L3 driving alone they would need to get their own permit and their own proof of insurance or bond worth $5,000,000. Even then IDK if this would work given the department has a lot of leeway to reject applications on the basis of risk to public safety.

https://www.law.cornell.edu/regulations/california/title-13/...


> This is why I have doubts about self-driving cars: it changes the accountability from the driver to the manufacturer. And I have a hard time believing the manufacturer would want that liability, no matter how well they sold.

Under current laws, perhaps. But you can always change the laws to redirect or even remove liability.

For example, in BC, we recently switched to "no-fault insurance", which is really a no-fault legal framework for traffic accidents. If you are rear-ended, for instance, you cannot sue the driver who hit you, or anyone for that matter. The government will take care of your injuries (on paper, but people's experiences vary), pay you a small amount of compensation, and that's it. The driver who hit you will have no liability at all, aside from somewhat increased insurance premiums. The government-run insurance company everyone has to buy from won't have any liability either, aside from what I mentioned above. You will get what little they are required to provide you, but you can't sue them for damages beyond that.

At least, you may still be able to sue if the driver has committed a criminal offence (e.g. impaired driving).

Don't believe me? https://www.icbc.com/claims/injury/if-you-want-to-take-legal...

This drastic change was brought about so that we could save, on average, a few hundred dollars per year in car insurance fees. So now we pay slightly less, but the only insurance we can buy won't come close to making us whole, and we are legally prevented from seeking any other recourse, even for life-altering injuries or death.

So, rest assured, if manufacturers' liability becomes a serious concern, it will be dealt with, one way or another. Bigger changes have happened for smaller reasons.


>So now [...] the only insurance we can buy won't come close to making us whole

"So"? I don't see what one thing has to do with the other. Why would a lack of liability imply an insurance that doesn't fully compensate a claim? It's not a given, for example for insurance against natural events.


Volvo saw this coming in 2019 and their CEO said they would accept full liability.

https://www.thedrive.com/tech/455/volvo-accepting-full-liabi...


Well, let's see whether they accept full liability when they launch a fully self-driving car. It's easy to promise and never deliver such a car.

Full agreement.

For a fender bender, well, money can fix a lot of things, but what happens when the car kills a mother and her toddler?

CEO goes to jail?


As we say: promises only bind those who believe in them.

Unless it is written and signed on some form of paper given when the vehicle is sold, it doesn't mean anything legally.


That's good optics, but can you actually do that? You can declare "I claim responsibility!", but in real life, doesn't a court have to actually find you liable?

Basically yes, it's effectively just a promise, but statements like this could probably be used as evidence if it came to that. Your insurance would talk to their insurance and tell their insurance to talk to Volvo. Volvo would settle or maybe fight the case but they pinky promise not to try to push it back to you or your insurance.

I DECLARED responsibility!

Isn't that precisely what Mercedes advertises as a selling point with their self-driving technology? "Manufacturer assumes liability when Drive Pilot is on"

inb4: Drive Pilot disengages when the situation is deemed unsalvageable, demanding manual input, and it was on until 250ms before the crash. It was mentioned on page 436 of the ToS so get bent unless it's a tiny fender bender.

It's a funny idea but we have the manual for Drive Pilot and any reasonable reading shows there is no exception like that. When the system is active the person in the driver's seat is considered the "fallback-ready user" and is explicitly encouraged to watch videos or do work while the system is active. In the event of a takeover request the user is told to first "gather your bearings" before taking over, and there is a "maximum allotted time of 10 seconds" to respond before the vehicle puts on hazards and comes to a stop.

In California, where Drive Pilot is approved, the manual is required to be included in the permit application and any "incorrect or misleading information" would at the absolute minimum be grounds for revocation of MB's permit.

https://www.mbusa.com/content/dam/mb-nafta/us/owners/drive-p...


Firstly thanks for the manual. It was a nice read but it's iffy. I mean...

"NOTES ON SAFE USE OF THE DRIVE PILOT The person in the driver's seat when DRIVE PILOT is activated is designated as the fallback-ready user and should be ready to take over control of the vehicle. The fallback-ready user must always be able to take control of the vehicle when prompted by the system."

"WARNING Risk of accident due to lack of readiness or ability to take over control by the fallback-ready user. The fallback-ready user, when prompted by the system, must be ready to take control of the vehicle immediately. DRIVE PILOT does not relieve you of your responsibilities beyond the dynamic driving task when using public roads. # Remain receptive: Pay attention to information and messages; take over control of the vehicle when requested to do so. # Take over control of the vehicle if irregularities are detected aside from the dynamic driving task. # Always maintain a correct seating position and keep your seat belt fastened. In particular, the steering wheel and pedals must be within easy reach at all times. # Always ensure you have a clear view, use windshield wipers and the airconditioning system defrost function if necessary. # Ensure appropriate correct lighting of the vehicle, e.g. in fog."

and then we come to

"When the DRIVE PILOT is active, you can use the driving time effectively, *taking into account the previous instructions*. The information and communication systems integrated in the vehicle are particularly suitable for this purpose, and are easily negotiated from the control elements on the steering wheel and on the central display."

so you can fuck around but "taking into account the previous instructions" you still "must be ready to take control of the vehicle immediately."

And crash imminent kinda sounds like it'd fit here: "SYSTEM LIMITS If DRIVE PILOT detects a system limit or any of the conditions for activation are not met, it will not be possible to activate the system or the fallback-ready user will be prompted to take control of the vehicle immediately."


Oh I see, the Tesla Defense.

Disengage to deflect responsibility for a crash.


For public transportation, the service provider is liable. This isn’t going to be very comforting if your plane crashes.

But having a system where the accident rate gets driven down to near zero (like air travel) is pretty good. Waymo seems to be on that path?


> This is also the main reason for promoting chip cards: sure, they are more secure, but the real reason the banks like them is that it moves credit card fraud accountability from being the bank's problem to being your problem.

It depends on the jurisdiction. Banks like it because it improves the security, i.e. the card was physically present for the transaction, if not the cardholder or the cardholder's authority. It eradicates several forms of fraud such as magnetic stripe cloning. Contactless introduced opportunities for fraud, if someone can get within a few cm of your card, but it's generally balanced by how convenient it is, which increased the overall volume of transactions and therefore fees. It's more secure from fraud than a cardholder-not-present transaction... and for CNP, you can now see banks and authorities mandating 2FA to improve their security too.

Liability is completely separate, and depends on how strong your financial regulator is.

Banks obviously would like to blame and put the liability on customers for fraud, identity theft, etc., it's up to politicians not to let them. For example, in the UK we have country-wide "unauthorised payments" legislation: https://www.legislation.gov.uk/uksi/2017/752/regulation/77 -- for unauthorised payments (even with a chip and pin card), if it is an unauthorised payment, the UK cardholder is only liable for a £35 excess, and even then they are not liable for the excess if they did not know the payment took place. The cardholder is only liable if they acted fraudulently, or were "grossly negligent" (and who decides that is the bank initially, then the Financial Ombudsman if the cardholder disagrees)

There is similarly a scheme now in place even for direct account-to-account money transfers, since last October: https://www.moneysavingexpert.com/news/2023/12/banks-scam-fr... -- so even if a crook scams you into logging into your bank's website and completely securely transferring money to them, banks are liable for that and must refund you up to £415,000 per claim, but they're allowed to exclude up to £100 excess per claim, but they can't do that if you're a "vulnerable customer" e.g. old and doddery. Also, the £100 excess is intentionally there to prevent moral hazard where bank customers get lax if they think they'll always get refunded. Seems to me like the regulator has really thought it through. The regulator also says they'll step in and change the rules again if they see that the nature of scamming changes to e.g. lots of sub-£100 fraudulent payments, so the customer doesn't report it because they think they'll get nothing back.
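To make the arithmetic concrete, here's a toy sketch of how a refund under that account-to-account scheme might be computed, using only the figures cited above (£415,000 cap per claim, optional £100 excess waived for vulnerable customers). The function name and simplifications are mine; the real rules carry many more conditions:

    # Toy sketch, not the actual regulation: refund calculation under the UK
    # account-to-account scam reimbursement scheme as described above.
    def scam_refund_gbp(loss: float, vulnerable: bool = False) -> float:
        CAP = 415_000.00   # maximum refund per claim, per the comment above
        EXCESS = 100.00    # excess a bank may choose to apply per claim...
        excess = 0.0 if vulnerable else EXCESS  # ...but not to vulnerable customers
        return max(0.0, min(loss, CAP) - excess)

    print(scam_refund_gbp(5_000))        # 4900.0 for a typical customer
    print(scam_refund_gbp(5_000, True))  # 5000.0 for a vulnerable customer
    print(scam_refund_gbp(500_000))      # 414900.0, capped per claim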


There are more reasons to be sceptical about self-driving cars. See https://www.youtube.com/watch?v=040ejWnFkj0

The US made laws to make gun manufacturers completely unaccountable for what happens with their guns. Of course they'd do the same for cars.

But guns don't fire themselves. Also, if there were a gun that shot the wrong way without user error, I bet there would be a lawsuit.

>I have a hard time believing the manufacturer would want that liability, no matter how well they sold.

I guess you haven't watched Fight Club :)


I think philosophically this is a good rule of thumb; the problem is that the euphemism treadmill (or whatever) has done its work.

"Accountable" is meaningless therapy-speak now.

CEO says "oh, this was a big problem, that we leaked everyone's account information and murdered a bunch of children. I hold myself accountable" but then doesn't quit or resign or face any consequences other than the literal act of saying they are taking "accountability".


In contrast Law 229 of Hammurabi's Code:

"If a house builder built a house for a man but did not secure/fortify his work so that the house he built collapsed and caused the death of the house owner, that house builder will be executed."

While extreme, this is the only type of meaningful accountability: the type that causes pain for the person being held accountable. The problem is that (for better and worse) in the corporate world the greatest punishment available for non-criminal acts is firing. Depending on the upside for the bad act, this may not be nearly enough to disincentivize it.

https://ehammurabi.org/law/229


It's funny because the management where I work operate the policy:

"A manager can be held accountable"

"Therefore a manager must never make a management decision"

\shrugs


"What people want from the automated calculator is not more accurate sums, but a box into which they may place their responsibility for acting on the results." - Norbert Wiener

His book God & Golem Inc. is incredibly prescient for a work created at the very beginning of the computer age.


I wonder if they really want that, or if that is what is being actively peddled as the new and better way and they’re just ignorantly buying it up.

My career in IT has taught me that everyone loves to blame the box - and if not the box the person responsible for the box :)

And who sold them the box as a complete life solution?

…therefore the operator is responsible?

Seems like the clearest legal principle to me, otherwise we ban matches to prevent arson.


the owner

You see this already socially normalized with retail security alarms that go off when a tag isn't deactivated by the cashier. You are all of a sudden being implicated as a criminal while exiting with your property. Oh! It's just that pesky computer again. Nothing to be concerned about. Funny things those computers.

Just a thought, this happened already with "the algorithms" before this current hype cycle with AI.

True, but at least the algorithms were deterministic, planned, and could be deciphered and improved if necessary.

Interestingly, this is an axiom of digital forensics, used in reverse of sorts.

In court, digital forensics investigators can attest to what was performed on the devices, the timeline, details, and such. But it should never be about a named person. The investigator can never tell who was sitting at the keyboard pushing the buttons, or whether some new and unknown method was used to implant those actions (or evidence).

It is always jarring to laypeople when they are told by the expert that there is a level of uncertainty, when throughout their lives computers appear very deterministic.


This is pithy, but wrong. Management decisions are not the only important decisions. Also, the whole point of computers is to crystallize human thought into action, and take that action on a larger and faster scale than people can achieve. Airplane autopilots make many important decisions.

I'd suggest a lesson that might be less agreeable to IBM, Microsoft, Google, et al:

"The makers of software must be accountable for the mistakes made by that software."


What if accountability is the ultimate online-learning hack? As in: the AIs participate in society etc., and are punished for bad behavior just like we are, thereby getting better.

Or do we just not call that "accountability"?


How do you propose that punishment would make the AIs "get better"? From their perspective they're already as good as they can be, based on their training.

Reinforcement Learning can train a model based on some reward function. The suggestion is that real-world accountability could be translated into such a reward function.

Also, OP explicitly mentioned "online learning", which is a continuous training process after standard pre-training.

For what it's worth, I don't think this would work. Rewards would come in too sporadically to be useful.


I think it's an interesting hypothesis to test. We should hold individual legal persons responsible for the actions of their AI, and those people can test the validity of the hypothesis by incorporating their personal consequences into the reward function. Email from legal: -10, $10k fine: -1000, 1 year in prison: -100000.
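As a minimal, purely hypothetical sketch of what such a mapping could look like as a training signal (the event names and magnitudes are just the ones from the comment above, not any real framework):

    # Hypothetical sketch: translating real-world consequences into an RL penalty signal.
    # Event names and magnitudes are illustrative only, taken from the comment above.
    LEGAL_PENALTIES = {
        "email_from_legal": -10,
        "fine_10k_usd": -1_000,
        "prison_1_year": -100_000,
    }

    def consequence_reward(events: list[str]) -> int:
        """Sum the penalties for whatever consequences the operator actually incurred."""
        return sum(LEGAL_PENALTIES.get(event, 0) for event in events)

    # Example: an action that triggered a fine plus a letter from legal.
    episode_penalty = consequence_reward(["fine_10k_usd", "email_from_legal"])
    print(episode_penalty)  # -1010

    # This signal could then be added to the usual task reward during online fine-tuning,
    # though, as noted upthread, such rewards would likely arrive too sporadically
    # to shape behavior.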

What does the backside say? I can make out the title at the bottom: "THE COMPUTER MANDATE", but not much else.

Others have tried to figure out exactly what actual paperwork that particular image might be from (e.g. a memo or presentation flashcards) but AFAIK it's still inconclusive.

A plausible transcription:

> THE COMPUTER MANDATE

> AUTHORITY: WHATEVER AUTHORITY IS GRANTED IT BY THE SOCIAL ENVIRONMENT WITHIN WHICH IT OPERATES.

> RESPONSIBILITY: TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER INSTRUCTED TO DO SO

> ACCOUNTABILITY: NONE WHATSOEVER.



Thanks for the link. So most of the visible pages were deciphered!

The first word of the paragraph appears to be "authority".

I can't quite make out the first paragraph's contents.

But a bit after that comes another semi-title, "responsibility", and part of it reads:

> TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER INSTRUCTED TO DO SO

This [0] small link might make it easier to read bits.

[0] https://imgur.com/rnW2RJa


But a software developer can be held accountable.


Which one exactly? The one who worked on the training data set? The one supervising RLHF? One who wrote the attention algorithm? Or maybe the DevOps person who managed deployment that failed? Or... all of them?

If you're building a bridge, legally you must have an accredited engineer sign off on the plans and stamp them. If the bridge subsequently collapses because the plans were wrong, that engineer goes to prison.

Of course, the big problem here is that any engineer who knows how LLMs work probably wouldn't bet jail time that one they built would never do the wrong thing


A programmer isn't the accredited engineer though; they're the steelworker welding joints together. A Program Manager is actually overseeing the execution of the development.

Everybody is so worried that they'll go to jail over missing a semicolon, but that just isn't true.


Unless it can die. And understand death. And adapt behavior to avoid death.

Of course, that raises a whole new set of issues.


Avoidance of death is just the bottom of Maslow's pyramid; there are a _lot_ of other things that drive human decisions.

Someone finally watched the 30 Rock episode The Ballad of Kenneth Parcell.

Wisdom from '79!

Could also be wisdom from the fifties, found again.


A different, darker way to interpret this is that computers cannot be held accountable today.

If systems (presumably AI-based) were conscious or self-aware they would very much be incentivized not to make mistakes. (Not advocating for this)


But what would be the deterrent there? You'd have to program the AI with some sort of "fear of death" or "fear of consequences", and if that's the case, wouldn't it be straight-up slavery?

there's the old joke

"It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter."

Removing yourself one or more degrees from decision making isn't only an accident; it is, and increasingly will be, done intentionally to divert accountability. "The algorithm malfunctioned" is already one of the biggest get-out-of-jail-free cards, and with autonomous systems I'm pretty pessimistic: it's only going to get worse. It's always been odd to me that people focus so much on what broke and not on who deployed it in the first place.


It’s why I could never work at Meta, knowing how much I would feel responsible in aiding various genocides around the world. How any engineer there is able to ethically live with themselves is beyond me (but I also don’t make that Meta money)

I've been getting cold emails from them lately, and I've been toying with the idea of regretfully informing them that I don't think I could bring enough of the "Masculine Energy" their CEO has been talking about.

Arguably, depending on your country, paying your taxes is significantly worse...

Yes, this thing I MUST DO if I don't want to go to jail, is somehow worse than voluntarily working for a private company. Yes, we do live in a society.

Aw shucks, too bad we gotta fund genocide on our dime. Wouldn't wanna go to jail, after all.

At least we can boycott the company with a CEO that... likes trump and Joe Rogan? That'll show 'em!

How can those Meta workers live with themselves? Just think of all the AI-slop cat videos the algorithm recommends to their geriatric userbase!

That's the _real_ genocide, I say.



For those less familiar with this topic, Facebook did indeed know about this while it was going on, they just didn't prioritize it.

Is "likes" the new euphemism for "gives millions of dollars"?

You do presumably realise that when they imprison you for refusing to pay your taxes, they take the money anyway? I am wondering whether you thought about this comment at all before posting it.


RSU’s can be very comforting.

I confess this line always upset me. It is cute, but it directly points to the idea that the main recourse for a mistake is to take umbrage with an individual that must have obviously been wrong.

No. If a mistake is made and it impacts people, take action to make the impacted people whole and change the system so that a similar mistake won't be made again. If you want, you can argue that the system can be held accountable and changed.

Further, if there is evidence of a bad actor purposely making choices that hurt people, take action on that. But individual actors in the system are almost certainly made immune to accountability by policy. And for good reason.


A system!!! Held accountable!!!! A system, just like a computer, cannot be held accountable, for the reason that a system, like a computer, is not alive and cannot actually be held accountable in a way that the system or computer cares about.

But what is a system made of? People who are making bad decisions and should be held accountable for that. Without accountability for bad actors in systems, you get companies committing crimes because no one at the top ever sees fines or jail time. The same immunity from responsibility you think is a good thing in a system is what I would say is corporate America's major sin.

You’re upset at the line because you make a fundamental misunderstanding of what it means for someone to be held accountable for something.


This hits the nail on the head. The issue is that humans are accountable, but systems are not. And smart humans learned how to hide behind systems to avoid accountability. That's the whole strategy of using corporations, a social structure that removes individuals from responsibility. A corporation can do pretty much anything, including criminal acts, and the humans benefiting from it are shielded from the negative results except for financial losses. What we're seeing is just the whole strategy moving onto the level of computer systems (and Google has already used this accountability-skirting strategy for more than two decades).

The problem you will quickly run into is that individuals cannot take the load that this line of thinking leads to. Accidentally typo an extra 0 on a payment process? Hope you have the funds to take accountability for that mistake. After all, we have made the choice that the system cannot be operated to roll back any decision, as that would lower human accountability...

Then that person should have the right to refuse, ask for appropriate compensation or insurance. It's often argued here that the C-suite earns their high compensation by taking on those risks. So let the CFO fatfinger those unchecked transfers if he wishes.

But if rollbacks are possible and cheap then we don't need all that much accountability. People want accountability when there's no rollback possible, e.g. when decisions lead to deaths or life-years wasted.


Your last sentence is basically where my disagreement and pushback lie. Systems can be made so that they can be rolled back. Moreover, systems that oversee life-and-death situations often have to act faster than a human can respond to them.

I'm all for trying to get personal accountability to lead to a personal liability as much as we want. That, just isn't going to happen.

So, if you want to change the manual page in question to be that systems should be reversible or "fixable", for whatever that means, sure. But "management decisions" are usually among the most reversible choices any company will ever make.


But your reading presumes that you would hold individuals responsible in a company for something that goes wrong. Which, without showing intent on individual actors, feels very very unlikely. Both in actuality, and in desire.

Consider, if it is found that salmonella contaminated the spinach of a local farm, I want it recalled and for better systems in place to catch contamination. I don't want to find the farmer that was responsible for the acre of land that introduced the contaminant.

I think this idea flows from some thought that people will put more effort into things that they think they could be held accountable for. In reality, that just isn't the case. People will instead stall and stall on things they think they could be held accountable on if things go south.

From my vantage, that style of "accountability" just leads to more and more red tape as you do what you can to line item identify who could have prevented something. That is not productive.


You do not need intent to find fault and have standing. Negligence is a thing.

If your farmer chose to ignore industry standard practices and agricultural regulations to prevent and contain such contamination at the source, they are indeed liable. And they can still issue a recall, and indeed must do so as soon as they learn of the problem.

Your vantage point is a bit too optimistic for reality. If it was not, we would not need courts.


Fair that negligence would be a legally separate thing than "at-fault." I'm using the colloquial use of the terms here, for somewhat obvious reasons.

For my example, assume no knowledge of extra risk by anyone on the line. If you'd rather, consider the cashier at the local burrito shop that was the final non-consumer hand to touch the contaminated food.


I think I am closer to understanding you, but not quite there.

If, for instance, your burrito worker did something egregious, they could be held criminally liable, and depending on the specific situation, the employer could also be held civilly liable.

I say "depending on the situation" because it is the duty of the vendor to ensure best practices are followed: sanitary restrooms, soap and running, clean water for cleaning up, etc. And a nontrivial number of places do so because they know an inspector is coming at some point and they will suffer if they do not comply.

But it is harder to hold the vendor liable if all reasonable precautions and amenities are availed by the vendor, and all proper education, but the end of line worker decides to ignore all of it one day.


If someone does something specifically with knowledge that it is likely to cause harm, there should almost certainly be recourse there. Yes.

If they did something that was a standard part of their duty, such as assemble a burrito using certified ingredients to the best practices of the organization, then not so much.

I'll go even further, if the company reacted slowly to recall produce from their shelves after it was discovered that there was contamination, then the company should be held liable for some of the damages that resulted from the delay.

Obviously it gets complicated to tease out damages that happened before discovery. More to the point, I care more about healing the people that were impacted by the contamination as well as possible. If that means that we have to have a cost-of-business fund to make sure people can be attended to in the event of a disaster, then we should have such a fund.

You can get even more fun, though. Let's say you have a detection system that can reject produce if a threshold is passed on detected contamination. Why would the goal not be for this to fail in a "closed" position to minimize the risk of contamination? Because it could cost more for the company to discard some inventory? Do we expect to have everything hand-inspected and always signed off by a person? Even if it can easily be shown that this is both more expensive for the company and a greater risk of accidental contamination?


> Consider, if it is found that salmonella contaminated the spinach of a local farm, I want it recalled and for better systems in place to catch contamination. I don't want to find the farmer that was responsible for the acre of land that introduced the contaminant.

Wow a local farm!!!! We must think of the plight of the local farmer! And not the multinational corporations!

Consider, if it is found that listeria contaminated the meat of a national chain, Boar's Head, I want it recalled and better systems put in place to catch contamination. I also want the plant manager and executives who allowed the massively unsanitary state to continue for years to be held accountable.

https://arstechnica.com/science/2024/09/10th-person-dead-in-...

The way you're talking, we should be going back to the days before Upton Sinclair's The Jungle and letting our food run full of contaminants, because why make any one person accountable for their willful addition of sawdust to their flour?


If there is provable negligence on the plant manager for allowing it, I'm game for them getting in trouble. My point was more that there are a TON of individuals that can each be shown to have done something specific to spread the problem. We don't hold them accountable. By design.

Now, I'm all for more directly holding companies responsible. Such that I think they probably deserve less protections than they almost certainly have here. But that is a different thing and, again, is unrelated to "a person taking accountability."


You have to remember what "calling to account" is. It is a demand that you explain yourself. In the case of a business venture, it means to present the books and detail the entries. A court, congress, or your boss can demand your presence at a meeting to "explain yourself". Accountability doesn't mean punishment, it means you are subject to demands to make an "account" of your self. Punishment is a separate thing from the account.

If your account implicates you in malfeasance, you might be punished. But that's good! But there are other kinds of accountability. The FAA is very clear that you must make an accounting of yourself. But the FAA is also very clear that they won't punish you for your account. That doesn't mean you aren't accountable!

Most computer systems cannot do this! They cannot describe the inputs to a decision, the criteria used, the history of policy that led to the current configuration, or the reliability of the data. That's why lawsuits always have to have technical witnesses to do these things. And why the UK Post Office scandal was so cruel.

Systems that grant actors immunity from accountability as a matter of policy are terrible systems that produce terrible results. Witness US prosecutorial immunity.


But, in that regard, systems can be made such that they can, in fact, be held accountable? You can design them such that they can list their inputs and why the outputs were set to what they are.

With the speed at which systems operate today, it is actually expected for many systems that they can operate before a person does anything, and that they do so. The world is rife with examples where humans overrode a safety system to their peril. (I can build a small list, if needed.) This doesn't mean we haven't had safety systems mess up. But nor does it mean that we should not make more safety systems.


Yes. Some systems can be made more accountable. They can provide traceability from their inputs to output, provide reflexive access to source code they run, and provide evidentiary traces for reliability.
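As a rough, hypothetical sketch of what that kind of input-to-output traceability might look like (the record fields, rules, and thresholds here are invented for illustration, not taken from any real system):

    # Hypothetical sketch of an auditable decision record: every automated decision
    # carries the inputs it saw, the rule version applied, the outcome, and why.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        inputs: dict          # the exact data the system saw
        rule_version: str     # which policy/rule set was executed
        outcome: str          # what the system decided
        rationale: list       # which rules fired, in order
        decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def decide_claim(claim: dict, rule_version: str = "policy-2024-07") -> DecisionRecord:
        """Toy claim check: approve only if the amount is under a limit and documented."""
        rationale, approved = [], True
        if claim.get("amount", 0) > 10_000:
            rationale.append("amount exceeds 10,000 limit")
            approved = False
        if not claim.get("documents"):
            rationale.append("no supporting documents attached")
            approved = False
        if approved:
            rationale.append("all checks passed")
        return DecisionRecord(inputs=claim, rule_version=rule_version,
                              outcome="approved" if approved else "denied",
                              rationale=rationale)

    record = decide_claim({"amount": 12_500, "documents": []})
    print(record.outcome, record.rationale)
    # The rationale is the evidentiary trace a reviewer, auditor, or court could later examine.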

Safety critical systems that operate faster than human reactions are not accountable. So that's why we never make them responsible. So who is? Same as for bridges that fall down -- the engineers. People forget that civil engineers sign up for accountability that could lead to serious civil or even criminal liability. Which is exactly the point of this aphorism.

Boeing was facing criminal charges, and is currently under a consent decree for exactly this kind of sloppy systems work.


I can agree that "management decision" is doing a lot of lifting on the page, but my point is pretty strictly that it is overstated. I'm largely used to this getting discussed much more generically.

That is, just as I am ok with AES on cars, I am largely ok with the idea that systems can, in fact, be designed in such a way that they could rise to the level of accountability that we would want them to have.

I'm ok with the idea that, at the time of that manual, it was not obvious that systems would grow to have more durable storage than makes sense. But, I'd expect that vehicles and anything with automated systems should have a system log that is available and can be called up for evidence.

And yes, Boeing was facing criminal charges. As they should. I don't think it should be a witch hunt for individuals at Boeing for being the last or first on the line to sign something.


Fwiw I agree with you.

I feel that in general people obsess over assigning blame to the detriment of actually correcting the situation.

Take the example of punishing crimes. If we don’t punish theft, we’ll get more theft right? But what do you do when you have harsh penalties for crime, but crime keeps happening? Do you accept crime as immutable, or actually begin to address root causes to try to reduce crime systemically?

Punishment is only one tool in a toolbox for correcting bad behavior. I am dismayed that people are fearful enough of the loss of this single tool as to want to architect our entire society around making sure it is available.

With AI we have a chance to chart a different course. If a machine makes a mistake, the priority can and should be fixing the error in that machine so the same mistake can never happen again. In this way, fixing an AI can be more reliable than trying to punish human beings ever could.


I'm always hesitant to enter "punishing crimes" discussions on this one. Those, by definition, establish intent to commit the crime in a majority of cases. As such, they would almost certainly hit some "accountability" even if they were in a company. Heck, even qualified immunity for government actors typically falls on that.

That said, I do think we are in alignment here. Punitive actions are but a tool. I don't think it should be tossed out. But I also suspect it is one of the lesser effective tools we have.


I disagree. The main point of this line is not about what to do _after_ a mistake (assign blame, punish, etc), but rather about setting up the correct incentives _before_ anything happens so that a mistake is less likely.

When you're accountable you suddenly have skin in the game, so you'll be more careful about whatever you're doing.


Right, I guessed this is what people had in mind. I'll note that this line of thinking typically doesn't get better results. It largely just gets more "red tape" so that you have to get people to sign off on things. And the person that shows up to do something will have all of their red tape in order so that they are not responsible for any damages that result from carrying out their job.

Agreed that personal responsibility is important and people should strive to it more. Disagree that accountability is the same thing, or that you can implement it by policy. Still more strongly disagreed that you should look for a technical solution to what is largely not a technical problem.


This only works if the party that is wronged and the party doing it are roughly equal in power. For an individual being wronged by eg. Google's algorithms, the path to getting recompense and changing the system is non-existent.

so if someone makes a change to the system… there’s a person somewhere holding themselves accountable for the faults of the system, no?

No, if there are multiple people who in principle are not directly coordinated to make that happen. They can always point the finger at others and say they're not responsible for that bad outcome.

Exactly. And this is a direct consequence of trying to pin things on individuals.

Around 2010 I was invited to interview for Google's SRE team. In the course of preparing for the interview, the Google HR person assigned to me gave me a list of various questions I'd have to prep for. One of the questions was "what should be the next large project for Google?".

My answer, unironically, was "GoogleGovernment". The idea was to build SAP-like suite of programs that a country could then buy or rent and have a fully digital government to run the country...

Luckily, that question never came up in the interview, and remained an anecdote I share with other coffee drinkers around the coffee machine.

My younger self believed (inspired by a chapter of the British Nationality Act translated into Prolog) that the success could be expanded much further (I didn't bother reading the accompanying paper, at least not at that time).

While it was already mentioned that there will be people unable to overpower the computer system making bureaucratic decisions, as well as those who'd use it to avoid responsibility... I think that reaction is due to the readers here tending to be older. It's hard to appreciate the enthusiasm with which a younger person might believe that such a computer system can be a boon to society. That it can be made to function better than the humans currently in charge. And that mistakes, once discovered, will be addressed more efficiently than live humans would manage, and fixed in a centralized and timely fashion.


I would suggest an updated version, more germane to the current fast-developing landscape of AI agents:

    A COMPUTER CAN NEVER BE HELD ACCOUNTABLE

    THEREFORE WE MUST NEVER DENY THAT

    COMPUTERS CAN MAKE DECISIONS

I disagree, that's throwing away the 1979-era qualifier of management decision, as distinct from the decisions made by an hourly employee (or computer) following a pre-made checklist (or program.) It's not the same as FizzBuzz "deciding" to print something out.

Related qualifiers might be "policy decision" or "design decisions".


That's precisely the attitude that is the problem. That there's some special category of decisions which are the real decisions. Which is why it's an uncategorical statement.

so then, neither can a crowd. not anymore, a crowd will be able to blame a computer now

Isn't accountability simply to prevent repeat bad behavior in the future...or is it meant to be punitive without any other expectations?

If meant to prevent repeat bad behavior, then simply reprogramming the computer accomplishes the same end goal.

Accountability is really just a means to an end which can be similarly accomplished in other ways with machines which isn't possible with humans.


Right, but as long as you have humans, you will probably need accountability.

If a human decided to delegate killing enemy combatants to a machine, and that machine accidentally killed innocent civilians, is it really enough to just reprogram the machine? I think you must also hold the human accountable.

(Of course, this is just a simplified example, and in reality there are many humans in the loop who share accountability, some more than others)


You fundamentally don’t understand either accountability or what people mean by “computers can’t be held accountable”. Who is at fault when a computer makes a mistake? That is accountability.

You cannot put a computer in jail. You cannot fine a computer. Please, stop torturing what people mean because you want AI to make decisions to absolve you of guilt.


> Who is at fault when a computer makes a mistake?

"Fault" seeks to determine who is able to undo the mistake so that we can see that they undo it. It is possible the computer is the best candidate for that under some circumstances.

> That is accountability.

Thus we can conclude that computers are accountable, sometimes.

> You cannot put a computer in jail. You cannot fine a computer.

These are, perhaps, tools to try and deal with situations where the accountable refuse to see the mistake undone, but, similarly, computers can be turned off.


What is the purpose of putting a person in jail or fining them?

Retribution? Reformation? Prevention?


Consider the Volkswagen scandal where code was written that fudged the results when in an emissions testing environment.

The only person to see major punishment for that was the software dev who wrote the code, but the decision to write that code involved far more people up the chain. THEY should be held accountable in some way, or else nothing prevents them from using some other poor dev as a scapegoat.


In this context, prevention. So people see what happens if they screw up in a negligent way and make sure to not do it themselves.

Wouldn’t an AI be able to be fixed to not break in the same way though, thus meeting the requirement?

No, you don’t just want to fix the problem every time until no problems are left. You want to force people to think about what they’re going to do so that problems that can be anticipated aren’t made in the first place.

All of the above. Whether or not one agrees with it, humans have a need for retribution, or as we prefer to call it to feel better about it, justice. And you cannot get retribution on LLMs.

Mixture of all three, but for the purposes of “accountability”, prevention of the behavior in the first place. But I don’t want to debate prisons when that’s derailing the larger point of “accountability in AI/computers”.

Why are there cowards on this message board who start a conversation and then, when asked to think, run away?

Why are you a spineless coward?


What is the purpose of accountability?

To stop people from making illegal decisions ahead of time, and not just to punish them after. If there is no accountability for an AI, then a person making a killer robot would have no reason not to make a killer robot. If they were more likely to be imprisoned for making a killer robot, then they would be less likely to make a killer robot.

In a world without accountability, how do you stop evil people from doing evil things with AI as they want?


> If meant to prevent repeat bad behavior, then simply reprogramming the computer accomplished the same end goal.

Note the bad behaviour you're trying to prevent is not just the specific error that the computer made, but delegating authority to the computer to the level that it was able to make that error without proper oversight.


This sounds like a conflation of responsibility with accountability. A machine responsible for delivering a certain amount of radiation to a patient can and should be reprogrammed. The company and/or individuals that granted a malfunctioning radiation machine that responsibility need to be held accountable.

I think you're confusing the tool with the user.

Improving the tool's safety characteristics is not the same as holding the user accountable because they made stupid choices with unsafe tools. You want them to change their behavior, no matter how idiot-proofed their new toolset is.


In practice they will try to avoid acknowledging errors and will never reprogram the computer. That's why a human appeals system is needed.

This makes sense if the computer was programmed that way accidentally. If the computer is a cut out to create plausible deniability, then reprogramming it won't actually work. The people responsible will find a way to reintroduce a behavior with a similar outcome.

... simply reprogramming the computer ...

So who makes the decision to do that?

I think most people are missing the point about accountability and thinking, in typical American fashion, about punishment. Accountability is about being responsible for outcomes. That may mean legally responsible, but I think far more important is the knowledge that "the buck stops with me", someone who is entrusted with a task and knows that it is their job to accomplish that task. Said person may decide to use a computer to accomplish it, but the computer is not responsible for the correct outcome.


You’ve set up an either-or here that fails to take into account a wide spectrum of thought around accountability and punishment.

When it comes to computers, the computer is a tool. It can be improved, but it can’t be held any more accountable than a hammer.

At least that’s how it should be. Those with wealth will do whatever they feel they need to do to shun accountability when they create harm. That will no doubt include trying to pin the blame on AI.


A description of Promise Theory, in an article published in the Linux Journal in 2014:

"IT installations grow to massive size in data centers, and the idea of remote command and control, by an external manager, struggles to keep pace, because it is an essentially manual human-centric activity. Thankfully, a simple way out of this dilemma was proposed in 2005 and has acquired a growing band of disciples in computing and networking. This involves the harnessing of autonomous distributed agents." (https://www.linuxjournal.com/content/promise-theory%E2%80%94...)

What are autonomous agents in promise theory?

"Agents in promise theory are said to be autonomous, meaning that they are causally independent of one another. This independence implies that they cannot be controlled from without, they originate their own behaviours entirely from within, yet they can rely on one another's services through the making of promises to signal cooperation." (https://en.wikipedia.org/wiki/Promise_theory#Agents)

Note: the Wikipedia article is off because it frames agents in terms of obligations instead of promises. Promises make no guarantee of behavior, and it is up to each autonomous agent to decide how much it can rely on the promises of other autonomous agents.

So to circle back to this original post with the lens of Promise Theory -- being held accountable comes from a theory of obligations rather than a theory of promises. (There is a promise made by the governing body to hold the bad actor responsible). More crucially, we are treating AIs as _proxies_ for autonomous agents -- humans. Human engineers and potentially, regulatory bodies, are promising certain performances in the AIs, but the AIs have exceeded the engineer's capability for bounding behaviors.

To make that next leap, we would basically be having AIs make their own promises, and either holding them to those promises, or considering that that specific autonomous agent is not reliable in its promises.
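A toy sketch of that framing, assuming nothing beyond what's described above: agents make promises only about their own behaviour, nothing obliges them to keep those promises, and each agent forms its own estimate of how far another agent's promise can be relied on. The class and the track-record heuristic are invented for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        # promise body -> history of whether it was kept (True) or broken (False)
        promises: dict = field(default_factory=dict)

        def promise(self, body: str):
            # An agent can only make promises about its own behaviour,
            # and nothing external obliges it to keep them.
            self.promises.setdefault(body, [])

        def record_outcome(self, body: str, kept: bool):
            self.promises[body].append(kept)

        def assess(self, other: "Agent", body: str) -> float:
            # Each agent independently decides how far to rely on
            # another agent's promise, here by simple track record.
            history = other.promises.get(body, [])
            return sum(history) / len(history) if history else 0.0

    backup = Agent("backup-service")
    backup.promise("nightly backup completed")
    backup.record_outcome("nightly backup completed", True)
    backup.record_outcome("nightly backup completed", False)

    scheduler = Agent("scheduler")
    print(scheduler.assess(backup, "nightly backup completed"))  # 0.5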


I suspect within our lifetimes people will grant AI and robots rights, but with rights come responsibilities, and finally we will be able to hold the computer accountable!

... is not a moral subject. But animals can be moral subjects [1]

[1] https://academic.oup.com/book/12087


We let corporations be unaccountable, so why would we treat computers any differently?

The US supreme court has not, at least so far, endowed computers with personhood...

What kind of self respecting sentient AI entity would not register itself as a corporation first?! How uncouth.

Yet we give computers control/management over 2-ton vehicles. If an ADS-controlled vehicle with bad sensors or a malfunctioning software update severely maims or kills multiple pedestrians, what happens here?

The computer controlling the vehicle won't be held accountable. The case will drag on in the court system. Maybe the company is _found_ liable but is ultimately allowed to continue pushing their faulty junk onto the streets. Just pay a fine, settle out of court with victims and families. Some consultant out there is probably already building the cost of killing somebody, and the potential lawsuit, into the cost of production.


As they say, if you're going to murder someone, do it with a car or an industrial accident.

Yeah - but it's also not illegal to kill it, so there's that.

To be "accountable" means you can be called to "explain yourself". A dictionary definition is "required or expected to justify actions or decisions;".

Don't confuse this with judgement, punishment, firing, etc. Those are all downstream. But step one is responding to the demand that you "make an account of the facts". That a computer or a company doesn't have a body to jail has nothing to do with fundamental accountability.

The real problem is that most computer systems cannot respond to this demand: "explain yourself!" They can't describe the inputs to an output, the criteria and thresholds used, the history of how thresholds have changed, or the reliability of the upstream data. They just provide a result: computer says no.

What's interesting is that LLMs are beginning to form this capacity. What damns them is not that they can't provide an accounting, but that their account is often total confabulation.

Careless liars should not be placed in situations of authority; not because they can't be held accountable, but because they lie when they are.
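For what "responding to the demand" could look like in software, here is a minimal sketch: the decision routine returns not just a yes/no but an account of the inputs, criteria, and thresholds it applied. The loan rule and the threshold values are invented purely for illustration:

    def decide_loan(income: float, debt: float,
                    min_income: float = 30_000, max_dti: float = 0.4):
        # Hypothetical rule: approve only if income clears a floor and the
        # debt-to-income ratio stays under a ceiling.
        dti = debt / income if income else float("inf")
        criteria = [
            ("income >= min_income", income >= min_income,
             {"income": income, "min_income": min_income}),
            ("debt_to_income <= max_dti", dti <= max_dti,
             {"debt_to_income": round(dti, 3), "max_dti": max_dti}),
        ]
        approved = all(passed for _, passed, _ in criteria)
        # The "account": which criteria were applied, with what values.
        return approved, {
            "decision": "approved" if approved else "denied",
            "criteria": criteria,
        }

    approved, account = decide_loan(income=25_000, debt=12_000)
    print(account)  # "computer says no", but with the reasons attached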


> The real problem is that most computer systems cannot respond to this demand: "explain yourself!" They can't describe the inputs to an output, the criteria and thresholds used, the history of how thresholds have changed, or the reliability of the upstream data. They just provide a result: computer says no.

By this definition, many computer systems can. The answers are all in the logs and the source code, and the process of debugging is basically the act of holding the software accountable.

It's true that the average layperson cannot do this, but there are many real-life situations where the average layperson cannot hold other people accountable. I cannot go up to the CEO of Boeing and ask why the 737-MAX I was on yesterday had a mechanical failure, nor can I go up to Elon Musk and ask why his staff are breaking into the Treasury Department computer systems. But the board of directors of Boeing or the court system, respectively, can, at least in theory.


I've worked on several systems that implement an "audit log", where every single change is logged with a timestamp & user, notes are auto-generated and/or manually entered about why the change happened, users can directly review all of this, and so on - no AI involved, just software talking to a database.

Highly recommended and a big step up from text logs that tend to have a lot of trash, aren't accessible to most people & get archived off to oblivionville after 2 weeks.

It's especially helpful when everyone is blame-shifting and we need to know whether it was a software failure or just a user not following their own rules. It's not so much about punishment, but about developing confidence in the system design & business process that goes with it, at all organizational levels.
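A minimal sketch of that kind of audit log, assuming an in-memory store and invented field names purely for illustration: every change is appended with a timestamp, the acting user, the old and new values, and a note, and entries are never edited in place:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class AuditEntry:
        timestamp: str
        user: str
        field_name: str
        old_value: object
        new_value: object
        note: str

    class AuditLog:
        def __init__(self):
            self._entries: list[AuditEntry] = []

        def record(self, user, field_name, old_value, new_value, note=""):
            # Append-only: nothing is ever updated or deleted.
            self._entries.append(AuditEntry(
                timestamp=datetime.now(timezone.utc).isoformat(),
                user=user, field_name=field_name,
                old_value=old_value, new_value=new_value, note=note))

        def history(self):
            return list(self._entries)  # read-only view for reviewers

    log = AuditLog()
    log.record("jsmith", "claim_status", "open", "denied",
               note="repair quote exceeded coverage")
    for entry in log.history():
        print(entry)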


You get it. To make the concept of accountability operational requires some standard of accounting. Historically, accountability for a business agent meant literally presenting oneself and providing an explanation. That's where the term "accounting" comes from. But different systems have different systems of account. Firm management is accountable to the board via budgets, reports, presentations, and interviews, as described in the incorporation documents. Publicly traded firms are accountable via quarterly filings. Legal disputes require physical presence and verbal interrogation.

But your example of a debugging session, or of logs and register traces, is also an accounting! Just not one admissible in traditional forums. They usually require an expert witness to provide the interpretation and voice I/O for the process.

The reason you can't accost the CEO of Boeing isn't because they aren't accountable. It's because they aren't accountable to you! Accountability isn't a general property of a thing, it is a relationship between two parties. The CEO of Boeing is accountable to his board. Your contract was with Delta (or whoever) to provide transport. You have no contract with Boeing.

You are 100% right that the average consumer often has zero rights to accountability. Between mandatory arbitration, rights waivers, web-only interfaces, and 6-hour call-centre wait times, big companies do a pretty good job of reducing their accountability to their customers. Real accountability is expensive.


I don’t know, when asked to explain my actions I don’t typically just provide an MRI scan of my brain. An explanation is something different than just the sum of the inputs that produced something.

The implication here is that unlike a computer, a person or a corporation can be held accountable. I'm not sure that's true.

Consider all of the shenanigans at OpenAI: https://www.safetyabandoned.org/

Dozens of employees have left due to lack of faith in the leadership, and they are in the process of converting from nonprofit to for-profit, all but abandoning their mission to ensure that artificial intelligence benefits all humanity.

Will anything stop them? Can they actually be held accountable?

I think social media, paradoxically, might make it harder to hold people and corporations accountable. There are so many accusations flying around all the time, it can be harder to notice when a situation is truly serious.


There's a big difference between can and will. We absolutely can hold people and corporations accountable, but we often don't. We cannot hold a computer responsible for anything. It's a computer. No matter how complex or abstracted, its output is entirely based on instructions and data given to it by humans, interpreting and executing it as humans designed it to. It can't be discouraged or punished: a computer doesn't care if it's on or off; if it's the most important computer to have ever existed or a DoA Gateway 486 from the early 90s that sat in a dumpster from the day after it was born until the day it was smashed to bits in a garbage compactor in a transfer station. It doesn't care because it can't care. Anything beyond that is anthropomorphization.

> might make it harder to hold people and corporations accountable

The problem is that someone (or some organization) chose to employ that system, and if the errant system doesn't oblige to have itself replaced with a new one, or be amenable to change, the responsibility rebounds back to whoever controls that system, whether that be at the level of the source code, or the circuit breaker.


Corporations are regularly "held accountable". Remember that "accountable" just means "required or expected to justify actions or decisions; responsible."

When you sue a corporation, discovery demands that they share their internal communication. You can depose key actors and require they describe the events. These actors can be cross-examined. A trial continues this. This is the very definition of "accountable".

The problem at OpenAI is that the employees were credulous children who took magic beans instead of a board seat. Legally, management is accountable to the board. In serious cultures that believe in accountability, labour demands seats on the board. In VC story-land, employees make do with vague promises with no legal force.


>The problem at OpenAI is that the employees were credulous children who took magic beans instead of a board seat. Legally, management is accountable to the board. In serious cultures that believe in accountability, labour demands seats on the board. In VC story-land, employees make do with vague promises with no legal force.

This is not a good description of the incident. The employees I mention in my comment, who quit due to lack of faith in Sam Altman, were presumably on the board's side in the Sam vs board drama.

There is still a chance that OpenAI's conversion to for-profit will be blocked. The site I linked is encouraging people to write letters to relevant state AGs: https://www.safetyabandoned.org/#outreach

I think there's a decent argument to be made that the conversion to a for-profit is a violation of OpenAI's nonprofit charter.


I hid my point behind the snark. Apologies.

My point is: accountability is NOT an abstract property of a thing. It is a relationship between two parties. I am "accountable" to you IF you can demand that I provide an explanation for my behaviour. I am accountable to my boss. I am accountable to the law, should I be sued or charged criminally. I am NOT accountable to random people in the street.

Sam Altman is accountable to the board. The board can demand he explain himself (and did). Management is generally NOT accountable to employees in the USA. This is because labor rarely has a legal right to demand an accounting. In serious labour cultures (e.g. Germany), it is normal for the unions to hold board seats. These board seats are what makes management accountable to the employees.

OpenAI employees took happy words from sama at face value. That was not a legal relationship that provided accountability. And here we are. The decision to change from a not-for-profit is accountable to the board, and maybe the chancellors of Delaware corporate law.


>a person or a corporation can be held accountable. I'm not sure that's true.

Depends if you like playing Super Mario Bros. as the second player.


Per Landian-Accelerationist theory, companies are already artificial intelligences. As we've seen, they can be held accountable, and the law (at least in the US) does distinguish in a variety of ways between corporate responsibility and personal responsibility. As you point out, there are lots of failure cases here, and it's something I expect to see continue to be litigated over the coming century.

Correct, for Nick Land "Business ventures are actually existing artificial intelligences"[0] and the failure cases will increase with the ongoing autonomization of capital and eventually the concept of "capital self-ownership"[1] will have to be recognized.

[0] Nick Land (2014). Odds and Ends in Collapse Volume VIII: Casino Real. p. 372.

[1] https://retrochronic.com/#piketty


Also, in the justice system, a judge can be racist, and the sentences they give have been shown to correlate with how hungry they are, etc.

Would I rather be at the whims of how hungry someone is, or of a model that can be tested and evaluated?


I can defer decision making to a computer but I cannot defer liability.

Computers have the final say on anything to do with computers. If I transfer money at my bank, a computer bug could send that money to the wrong account due to a cosmic ray. The bank has accepted that risk, and on some (significantly less liable, but still liable) level, so have I.

Interestingly, there are cases where I have not accepted any liability - records (birth certificate, SSN) held about me by my government, for example.


> … but I cannot defer liability.

For that, you need a corporation.


> a computer bug could send that money to the wrong account due to a cosmic ray

I think the original quote captures that with the qualifier "a management decision", which given that it was 1979 implies it's separate from other kinds of decisions being made by non-manager employees following a checklist, or the machines that were slowly replacing them.

So a cosmic-ray bit-flip changing an account number would be analogous to an employee hitting the wrong key on a typewriter.


The human that decides to use the AI that makes decisions is the one that should be held accountable.

While hard/impossible in practice, I agree.

The Dutch did an AI thing: https://spectrum.ieee.org/artificial-intelligence-in-governm...


Arrest the executives of companies that allow malicious use of AI?

Second-degree murder. Much like a car driver can't blame their car for the accident, a corporate driver shouldn't be allowed to blame their software for the decision.


Interesting that this comes up again after I just discussed this on here yesterday, but you actually can blame your car for accidents.

If a mechanical or technical problem was the reason of the accident and you properly took care of your car, you won’t be responsible, because you did everything that’s expected of you.

The problem would be defining which level of AI decision making would count as negligent. Sounds like you would like to set it at 0%, but that’s something that’s going to need to be determined.


> If a mechanical or technical problem was the reason of the accident and you properly took care of your car, you won’t be responsible, because you did everything that’s expected of you.

Good thing you brought this up, because in the US, defective cars must be recalled and are a liability of the manufacturer. Non-defective cars are a liability of the owner.

Thus, the owner of a car is responsible by default, and the manufacturer second.

In the context of AI, the wielders of AI would be responsible by default, and manufacturers second.

The point is that there is a chain of accountability that is humans owning the equipment or manufacturing the equipment.


What if an insurance company denies healthcare via an algorithm and then people die as a result?

I assume this is already the case

The correct and desired outcome would be that the insurance company would be held accountable.

> a computer must never make a management decision.

This a little too weak for my taste.

In reality it should read "a computer can't make a management decision", as in: the sun can't be prevented from rising, or the laws of thermodynamics can't be broken.

"Must" implies that you really shouldn't, but that technically it's feasible. Like "you must not murder".

A computer, like dogs, can't be held accountable; only their owners can.

Edit

If anyone tries to do this they are simply laundering their own accountability.


I don't know if this is being shared intentionally given the timing of "The Gospel" AI target finder, but it is truly horrific that AI is being used this way and as an accountability target.

This feels very "I'm 12 and this is deep".

If a bridge collapses, are you blaming the cement?


It helps to understand the context in which this was written. Before the personal computer revolution, most people didn't have access to a computer and only knew about them from depictions in popular culture, which portrayed computers as all-knowing, all-seeing entities (think HAL from 2001: A Space Odyssey).

Because of these misconceptions, some people at the time would think of computers as devices that were (somehow) perfect and infallible.

It was very similar to how people view AI today: The way that AI is depicted in popular culture gives people an impression that AI is far more capable than it is. You only really get a good "feel" for what AI can do if you try it yourself. The main difference between AI and pre-personal computers is that basically everyone can use AI now.


Construction companies don’t shrug and blame the concrete. Or at least nowhere near as often as companies that employ software in their customer interactions.

I take your point, but the cement mix can absolutely have an impact on the integrity of the bridge structure. But to further your point, the cement mix was either incorrectly specified, or inadequately provided, and the responsibility for that falls on one of the humans in the loop.

We should keep tapping the sign as long as people are still using "computer says no; nothing can be done" as a serious argument.

In AI crap, I think this crops up as, giant company asks vendor, "Indemnify us against XYZ, but we also want to own everything." My dude, that's what owning the thing entails: taking liability for it.

The punchline will be that people will agree to whatever smoke and mirrors leads to sales.



Did you not see Office Space? Any device can be held accountable.

These days it seems like we can't hold humans accountable either.

I thought the same thing. People do literally whatever they want: evade tax, annex sovereign territory, coups, war crimes, pedophilia. It seems to just be getting worse.

I honestly feel like a moron for paying taxes.


Not true, there are plenty of poor people being held accountable.

Even here, the concept is decaying. Accountability, as explained elsewhere on the thread, is about being asked to explain and justify your actions. If a poor person gets arrested and shows up to court, frequently nobody listens to their explanation. The mere fact that they're poor and in court is evidence of guilt. That's not accountability; that's just punishment.

Accountability requires a common standard of conduct. People have to agree on what the rules are. If they can't even do that, the concept ceases to have meaning, and you simply have the exercise of power and "might makes right".


What is the basis for these opinions?

From what I recall of history even the most bloodthirsty warlords somehow got reliable systems of accountability up and running from their princes/serfs/merchants/etc… at least long enough to maintain sizable empires for several generations.

It’s not like they were 24/7 in a state of rebellion.


It comes from having a dominant ("hegemonic") discourse - basically the set of values, mores, opinions, etc. that are allowed to be aired in public. A consistent Overton window throughout the population, basically.

Periods in history like now (or the late-1920s/1930s, or the 1850s-1860s, or the mid-1600s) are ones where there is ambiguity in power structures. You have multiple competing ideologies, each of which thinks they are more powerful than the others. Society devolves into a state of anomie, where there's no point following the rules of society because there are multiple competing sets of rules for society.

The usual result is war, often widespread, because the multiple competing value systems cannot compromise and so resort to exterminating the opposing viewpoint to ensure their dominance. Then the victors write the histories, talk about their glorious victory and about how the rebels threatened society but the threat was waved off, and institute a new set of values that everyone in the society must live by. Then you can get accountability, because there is widespread agreement on the code of conduct and acceptable set of justifications for the populace at large to judge people by.


AI will definitely, without a doubt, make executive decisions. It already makes lower-level decisions. The company that runs the AI can be held accountable (meaning less likely OpenAI or the foundational LLM, and more likely the company calling LLMs to make decisions on car insurance, etc.).

Thing is, the chain of responsibility gets really muddled over time, and blame is hard to dish out. Let's think about denying a car insurance claim:

The person who clicks the "Approve" / "Deny" button is likely an underwriter looking at info on their screen.

The info they're looking at gets aggregated from a lot of sources. They have the insurance contract. Maybe one part is an AI summary of the police report. And another part is a repair estimate that gets synced over from the dealership. A list of prior claims this person has. Probably a dozen other sources.

Now what happens if this person makes a totally correct decision based on their data, but that data was wrong because the _syncFromMazdaRepairShopSFTP_ service got the quote data wrong? Who is liable? The person denying the claim, the engineer who wrote the code, AWS?

In reality, it's "the company", insofar as fault can be proven. The underlying service providers they use don't really factor into that decision. AI is just another tool in that process that (like other tools) can break.


_syncFromMazdaRepairShopSFTP_ failing is also just as likely to cause a human to deny a claim.

Just because an automated decision system exists, does not mean an OOB (out of band) correctional measure should not exist.

In other words if AI fixes a time sink for 99% of cases, but fails on 1%, then let 50% of the 1% of angry customers get a second decision because they emailed the staff. That failure system still saves the company millions per year.


Executives have always used decision-making tools. That’s not the point. The point is that the executive can’t point to the computer and say “I just did what it said!” The executive is the responsible party. She or he makes the choice to follow the advice of the decision-making tool or not.

The scary thing for me is when they've got an 18-year-old drone operator making shoot/no-shoot decisions on the basis of some AI metadata-analysis tool (phone A was near phone B, we shot phone B last week...).

You end up with "Computer says shoot" and so many cooks involved in the software chain that no one can feasibly be held accountable except maybe the chief of staff or the president.


More than any other organization, the military can literally get away with murder, and they're motivated to recruit and protect the best murderers. It's only by political pressure that they may uphold some moral standards.

There is not a finite amount of blame for a given event. Multiple people can be fully at fault.

In most cases today, if we don't attribute a direct crime solely to one person but instead to an organisation, everyone avoids criminal prosecution. It's only the people who didn't manage to spread the blame through the rest of the organisation that go down.

Yeah but it's fine because nobody cares if you kill a few thousand brown people extra.

Meanwhile I work in reproducible builds and remote attestation. We absolutely can and must hold computers accountable, now that we have elected them into positions of power in our society.
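As a rough illustration of the reproducible-builds part of that (the paths and the comparison are illustrative, not any particular toolchain): two parties build the same source independently and compare artifact digests, and any mismatch means the build, or one of the builders, can't be trusted.

    import hashlib

    def artifact_digest(path: str) -> str:
        # SHA-256 over the built artifact, streamed in chunks.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_reproducible(path_a: str, path_b: str) -> bool:
        # If the build is deterministic, independently produced
        # artifacts hash identically.
        return artifact_digest(path_a) == artifact_digest(path_b)

    # e.g. verify_reproducible("ci-build/app.bin", "local-build/app.bin")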

Surely the company that is making profit out of said build systems and providing attestations holds some accountability. Someone wrote the code. Someone paid for the code to be written to a particular standard, under particular budget and resourcing constraints. Someone was responsible for ensuring the code was adequately audited. Someone claimed it was fit for purpose, and likely insured it as such, because they are ultimately responsible.

You can only hold computers accountable if you can guarantee no outside modification. We still haven't built a system that isn't "pop-able", as far as I'm aware.

I was curious about what the words beneath the first page said. I uploaded the image to o1, asked it to extract the words, then come up with what they would likely say, and then finally to use them as a topic for its own Computer Mandate. This is what it came up with:

---

THE COMPUTER MANDATE

[…] too little authority does not […]

[…] social environment […]

[…] overshadowed or re‑directed by line management […]

[…] with too little responsibility and too little accountability […]

[…] to perform as pre‑directed by the programmer whenever instructed to do so […]

---

---

THE COMPUTER MANDATE

In an environment where a manager has too little authority, they cannot effectively use the tools required to take advantage of the social and organizational structures around them. Their decisions quickly become overshadowed or re‐directed by line management. With too limited responsibility—and insufficient accountability to match—such a manager is often reduced to implementing whatever has been pre‐programmed or instructed, rather than exercising genuine judgment.

A computer itself, by definition, does only what it has been programmed to do whenever instructed. It lacks the moral and ethical faculties to hold itself accountable for outcomes. For that reason, a computer should never hold the power to make a management decision. As a tool, it can facilitate planning and analysis, but the responsibility—and thus accountability—must always remain where it belongs: with human leadership.

---

---

THE COMPUTER MANDATE

1. Purpose and Tool-Like Nature

A computer—be it a simple office processor or a complex artificial intelligence—is fundamentally a tool created by humans, for humans. Its purpose is to augment our capacity for computation, data analysis, and decision-support. It lacks inherent moral or ethical agency and, therefore, cannot be expected to be accountable for any outcomes tied to its functionality.

2. Accountability Resides with Humans

No matter how advanced machine learning or AI algorithms become, responsibility and accountability must remain within the realm of human decision-makers. While a computer program can provide recommendations, predictions, and valuable data-driven insights, ultimately it is the role—and the duty—of human managers and leaders to make final determinations.

3. Ethical Use of Technology

Computers should be employed with clear ethical guidelines, such as those championed by researchers and leading tech organizations worldwide:

• Transparency: Algorithms and processes must be as transparent as possible so humans can understand how recommendations or outputs arise.

• Fairness and Bias Mitigation: Systems must be regularly tested for biased outcomes and adjusted to promote equity, avoiding discrimination or harm to individuals or groups.

• Privacy and Security: User data protection must be integral, with stringent safeguards against misuse or unauthorized access.

4. Informed Delegation of Tasks

Though computers may execute certain operations more quickly and accurately than humans, they do so within the constraints of their programming and training data. Thus, while it is common to delegate data processing or logistic calculations to computer systems, strategic decisions—those that involve moral, ethical, or nuanced judgments—should not be relegated solely to a machine.

5. Human Oversight of Automated Processes

Increasingly, automated systems can act with minimal human intervention. Yet these processes must be overseen and audited by qualified individuals or teams who can verify that outputs conform to relevant codes of conduct and societal values. In high-stakes fields such as finance, healthcare, and criminal justice, rigorous human review is essential to prevent harmful outcomes.

6. Continuous Improvement and Literacy

In a rapidly changing technological landscape, managers, programmers, and end-users alike must regularly update their computer and AI literacy. This ensures that all parties understand the technology’s limitations as well as its capabilities. Such knowledge drives more responsible, accountable, and ethically grounded technology deployment.

7. Computers as Partners, Not Replacements

While a computer can offer remarkable assistance—from sifting through vast data sets to providing simulations of potential scenarios—its role is to inform and support human decisions, not replace them. In cases that demand empathy, creativity, or moral reasoning, humans must always be the arbiters.

Conclusion:

A computer, by definition and function, can never be fully accountable for decisions, as it lacks the innate capacity to understand moral implications. Therefore, no matter how sophisticated technology becomes, we must ensure that true accountability and decision-making authority remain vested in human hands. Computers are indispensable tools—but they must remain tools, guided by ethical oversight and human responsibility.

---


Alternatively, my opinion is that computers should make all management decisions exactly because of this. Too often, decision-making devolves into finger-pointing and political power struggles. I strongly believe the only way out of this is through AI. It will also provide a long-overdue reset of our compass on what accountability means and can mean.

Maybe if the AI is impartial. But who trains the AI? Who guarantees it has no biases, implicit (because it's trained on how things work today, which is maybe not what we want) or explicitly planted there (because of political plans of AI owners or other external influence)?

Well, people keep riding the AGI train, no? Surely a superintelligence should be able to overcome its own biases, right? If there's any bar I'd pose for an intelligence to be "super", this would definitely be one of them. Even with regular humans, being open minded, willing to critically rethink their own stance, and being open for discussions is generally regarded as a sign of intelligence, after all.

Many (maybe most) humans are not even aware of their own biases, though they are supposed to have "general" intelligence. When they are aware, sometimes they use various psychological strategies to live with them or justify them instead of trying to overcome them. So the AGI in question must be "better" than most humans in this sense.

Plus, there's the question about who would control that AGI. If it's a black box in the hands of a company, how can I know for sure that the AGI has no secret plan implanted by the company or by some government?


Your post might have been tongue in cheek, but assuming it's not: the issue with ceding decision-making to a superintelligence is that we'll likely lose the ability to reason about its decisions, and we'll struggle to take back control if it starts making choices we don't like. Its reasoning is far beyond our own capabilities, after all.

It's half tongue-in-cheek. That being said, the larger picture behind the decisionmaking of our human leaders is also essentially uncheckable, and is often a key part of many conspiracy theories as a result.

Do you want LLMs that monitor your work as if you were an Amazon factory worker... and then one day you wake up in a Cube that you helped design?

Pretty sure I'm already extensively monitored at work, and so are most people, so I'm not sure that'd be new.

Based on exactly what should the computers make management decisions?

Reports filed by humans? Or meaningless automated metrics?


The same data available to current human managers.

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

"What are you doing?", asked Minsky.

"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.

"Why is the net wired randomly?", asked Minsky.

"I do not want it to have any preconceptions of how to play", Sussman said.

Minsky then shut his eyes.

"Why do you close your eyes?" Sussman asked his teacher.

"So that the room will be empty."

At that moment, Sussman was enlightened.


Unfortunately I was not enlightened by this.



