I'm not sure I understand why this is about agents. This feels more like contracting than SaaS. If I contract a company to build a house and it's upside down, I don't care if it was a robot that made the call, it's that company's fault not mine. I often write electronic hardware test automation code and my goodness if my code sets the power supply to 5000V instead of 5.000V (made up example), that's my fault. It's not the code's fault or the power supply's fault.
So, why would you use a SaaS contract for an agent in the first place? It should be like a subcontractor. I pay you to send 10k emails a day to all my clients. If you use an agent and it messes up, that's on you. If you use an agent and it saves you time, you get the reward.
Exactly. I have said several times that the largest and most lucrative market for AI and agents in general is liability-laundering.
It's just that you can't advertise that, or you ruin the service.
And it already does work. See the sweet, sweet deal Anthropic got recently (and if you think $1.5B isn't a good deal, look at the range of compensation they could have been subject to had they gone to court and lost).
Remember the story about Replit's LLM deleting a production database? All the stories were "AI goes rogue", "AI deletes database", etc.
If Amazon RDS had just wiped a production DB out of nowhere, with no reason, the story wouldn't be "Rogue hosted database service deletes DB", it would be "AWS randomly deletes production DB" (and AWS would take a serious reputational hit because of it).
Say I am a company that builds agents, and I sell one to someone.
Then that someone loses money because the agent did something it wasn't supposed to: who's responsible?
Me, as the person who sold it? OpenAI, whose models I use underneath? Anthropic, who performs some of the work too? Is my customer responsible themselves?
These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.
> These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.
Why? You have a deliverable, and you entered into some guarantees as part of the contract. Whether you use an agent or roll dice, you are responsible for upholding the guarantees you agreed to. If you want to offload that guarantee, then you need to state it in the contract. Basically, what the MIT License does: "No guarantees, not even fitness for purpose". Whether someone is willing to pay for something where you accept no liability for anything is an open question.
Technically that's what you do when you google something or ask ChatGPT, right? They make no explicit guarantees that anything they give back is true, correct, or even reasonable. You are responsible for it.
Agreeing with the others. It's you. Like my initial house example, if I make a contract with *you* to build the house, you provide me a house. If you don't, I sue you. If it's not your fault, you sue them. But that's not my problem. I'm not going to sue the person who planted the tree, harvested the tree, sawed the tree, etc etc if the house falls down. That's on you for choosing bad suppliers.
If you chose OpenAI to be the one running your model, that's your choice, not mine. If your contract with them has a clause that they pay you if they mess up, great for you. Otherwise, that's the risk you took in choosing them.
In your first paragraph, you talk about general contractors and construction. In the construction industry, general contractors have access to commercial general liability insurance; CGL is required for most bids.
Maybe I'm not privy to the minutiae, but there are websites talking about insurance for software developers. Could be something. Never seen anyone talk about it, though.
Did you, the company who built and sold this SaaS product, offer and agree to provide the service your customers paid you for?
Did your product fail to render those services? Or do damage to the customer by operating outside of the boundaries of your agreement?
There is no difference between "Company A did not fulfill the services they agreed to fulfill" and "Company A's product did not fulfill the services they agreed to fulfill", and therefore no difference when that product happens to be an AI agent.
Well, that depends on what you are selling. Are you selling the service, black-box, to accomplish the outcome? Or are you selling a tool? If you sell a hammer, you aren't liable as the manufacturer if the purchaser murders someone with it. You might be liable if it falls apart on the backswing and maims someone, due to the unexpected defect, but only for a reasonable timeframe and under reasonable usage conditions.
I don't see how your analogy is relevant, even though I agree with it. Whether you sell hammers or rent them out as a hammer-providing service, there's no difference except, likely, the duration of liability.
It's you. You contracted with someone to make them a product. Maybe you can go sue your subcontractors for providing bad components if you think you've got a case, but unless your contract specifies otherwise it's your fault if you use faulty components and deliver a faulty product.
If I make roller skates and I use a bearing that results in the wheels falling off at speed and someone gets hurt, they don't sue the ball bearing manufacturer. They sue me.
Yes, they do. Adding "plus AI" changes nothing about contract law; OAI is not giving you indemnification for crap, and you can't assign liability like that anyway.
LLMs are not actually intelligent, and absolutely should not be used for autonomous decision making. But they are capable of it... as in, if you set up a system where an LLM is asked about its "opinion" on what should be done, it will give a response, and you can make the system execute the LLM's "decision". Not a good idea, but it's possible, which means someone's gonna do it.
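To make that concrete, here's a minimal sketch of the pattern I mean: ask the model for a "decision", then dispatch on whatever it says. Everything here is hypothetical; `llm_complete()` is a stand-in for any hosted chat-completion call, hard-coded so the example runs standalone:

```python
# Sketch only: ask an LLM for a "decision" and execute it unreviewed.

def llm_complete(prompt: str) -> str:
    """Stand-in for a real hosted chat-completion call."""
    return "retry_job"  # imagine a real model response here

# The action set the LLM gets to pick from (hypothetical).
ACTIONS = {
    "retry_job": lambda: print("retrying failed job"),
    "drop_table": lambda: print("dropping table (!)"),  # the scary part
    "do_nothing": lambda: None,
}

def autonomous_step(situation: str) -> None:
    prompt = (
        f"Situation: {situation}\n"
        f"Pick exactly one action from {sorted(ACTIONS)} and reply "
        "with only its name."
    )
    choice = llm_complete(prompt).strip()
    # The system executes the LLM's "decision" with no human in the loop:
    ACTIONS.get(choice, ACTIONS["do_nothing"])()

autonomous_step("nightly batch job failed twice")
```

The whole liability question lives in that last dispatch line: the action runs whether or not the model's pick was sane.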
Sigh -- another not-even-thinly-veiled ducking of "A computer can never be held accountable, therefore a computer must never make a management decision."
The question isn't just who's liable - it's whether traditional contract structures can even keep up with systems that learn and change behavior over time. Wonder if this becomes a bigger moat than the AI.
Probably a dumb question, but what do you mean by changing behavior over time? A contract with changing clauses? From my limited knowledge of the matter, the idea of a contract is setting rules that would not change without agreement from both parties.
I encounter this all the time with GenAI projects. The idea of stability and "frozen" just doesn't exist with hosted models IMO. You can't bet that the model you're using will have the exact same behavior a year from now, hell, maybe not even 3 months. The model providers seem to be constantly tweaking things behind the scenes, or sunsetting old models very rapidly. It's a constant struggle of re-evaluating the results and tweaking prompts to stay on the treadmill.
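For what it's worth, the treadmill usually ends up looking something like this: a frozen set of golden prompts re-run against the hosted model and diffed against previously accepted outputs. All names here are hypothetical, and `call_model()` stands in for whichever hosted API you actually use:

```python
# Drift check sketch: re-run golden prompts and flag changed answers.

def call_model(prompt: str) -> str:
    """Stand-in for the hosted model call whose behavior may drift."""
    return "refund_request"

# Previously accepted outputs (tiny inline example; in practice this
# would live in a versioned file alongside the prompts).
GOLDEN = {
    "Classify intent: 'please refund my order'": "refund_request",
    "Classify intent: 'where is my package'": "shipping_status",
}

def check_drift() -> list[str]:
    regressions = []
    for prompt, accepted in GOLDEN.items():
        if call_model(prompt) != accepted:  # real checks use fuzzier scoring
            regressions.append(prompt)
    return regressions

print(f"{len(check_drift())} prompts drifted since the last accepted run")
```

Exact string equality is the naive version; in practice you end up needing fuzzier scoring, which is itself more code to maintain.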
Good for consultants, maybe, horrible for businesses that want to mark things as "done" and move them to limited maintenance/care and feeding teams. You're going to be dedicating senior folks to the project indefinitely.
This is a big motivation for running your own models locally. OpenAI's move to deprecate older models was an eye-opener to some but also typical behavior of the SaaS "we don't have any versions" style of deployment. [0] It will need to change for AI apps to go mainstream in many enterprises.
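One concrete upside of local weights is that "frozen" becomes verifiable rather than a promise. A sketch, assuming hypothetical paths and using nothing but the standard library:

```python
# Refuse to serve if the local model weights ever change out from
# under you. MODEL_PATH and EXPECTED_SHA256 are hypothetical.

import hashlib
import sys

MODEL_PATH = "models/my-model.gguf"
EXPECTED_SHA256 = "replace-with-hash-recorded-at-release"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    sys.exit("model weights changed; refusing to start")
```

No hosted provider gives you the equivalent check; you're trusting their changelog, when there is one.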
This isn't a new problem. It's like if you built a business based on providing an interface to a google product 10 years ago and google deleted the product. The answer is you don't sell permanent access to something you don't own. Period.
I interpreted the comment as worrying about drift across many contracts, not one contract changing.
Imagine I create a new agreement with a customer once a week. I'm no lawyer, so I might not notice the impact of small wording changes on the meaning or interpretation of each sequential contract.
Can I try to prompt-engineer this out? Yeah, sure. Do I as a non-lawyer know I have fixed it? Not to a high level of confidence.
No, at least not in all cases. Customers incur review costs and potentially new risks if you change contract terms unexpectedly. In my business many large customers will only adopt our ToS if we commit to it as a contract that does not change except by mutual agreement. This is pretty standard behavior.
Also it might be that with systems that learn and change behavior over time, some sort of contract structure is needed. Not sure if traditional is the answer though.