> This moves the cost of LLM errors from the customer to the company offering the service.
So does that mean AI companies are going to have insurance/litigators like doctors, and models will be heavily lawyered to add more extensive guardrails? I'm assuming this means not just OpenAI but any service that uses LLM APIs or open models?
For ex: If a finance business pays to use an AI bot that automates interacting with desktop UIs and that bot accidentally deletes an important column in an Excel spreadsheet, then the AI company is liable?
No, the exact opposite. This says that if the AI that a bank is paying for locks your bank account in error because your name sounds <ethnicity with a lot of locked bank accounts>, it's the bank's problem to fix, not yours to just live with (entirely, at least; you still likely have a problem).
Conversely, would you suggest that if an AI driver has a programming error and kills 20 people, the person who reserved the car should be required to enter into a "User Agreement" that makes them take responsibility?
If it's a "self driving car" that the person "owns" - Yes.
If it's a "Taxi service" that the person is using - No.
If it's a car they own, they (should) have the ability to override the AI system and avoid the accident (ignoring nuances) - therefore owning responsibility.
If it's a Taxi they would be in a position where they can't interfere with the operation of the system - therefore the taxi company owns the responsibility.
Rightly or wrongly, this model of intervention capability is what I'd use to answer these types of questions.