I don't think it's so dire. I've gone through this at multiple companies, and a startup that's selling B2B only needs one or two of these big outages before enterprises start demanding SLA guarantees in their contracts. It's a self-correcting problem.
My experience is that SLA "guarantees" don't actually guarantee anything.
Your provider might be really generous and rebate a whole month's fees if they have a really, really, really bad month (perhaps they achieved less than 95% uptime, which is a day and a half of downtime). It might not even be that much.
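To put numbers on that, here's a quick back-of-the-envelope sketch (plain Python, assuming a 30-day month) of how much downtime common SLA uptime tiers actually allow:

    # Downtime allowed per 30-day month at common SLA uptime tiers
    HOURS_PER_MONTH = 30 * 24  # 720

    for uptime_pct in (99.99, 99.9, 99.0, 95.0):
        downtime_hours = HOURS_PER_MONTH * (1 - uptime_pct / 100)
        print(f"{uptime_pct}% uptime -> {downtime_hours:.1f} h of downtime/month")

    # 99.99% uptime -> 0.1 h of downtime/month
    # 99.9% uptime  -> 0.7 h of downtime/month
    # 99.0% uptime  -> 7.2 h of downtime/month
    # 95.0% uptime  -> 36.0 h of downtime/month  (the day and a half above)

Note how each added nine shrinks the allowed downtime by an order of magnitude, which is why the cheap tiers are so much weaker than they sound.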
How many of them will cover you for the business you lost and/or the reputational damage incurred while their service was down?
It depends entirely on how the SLAs are written. We have some that are garbage, and that's fine, because those really aren't essential services; there, SLAs are mainly a box-checking exercise. But where it counts, our SLAs have teeth. They have to, because we're offering SLAs with teeth to some of our own customers.
But that's not something you get "off the shelf"; our lawyers negotiate it. You also don't spend that much effort on small contracts, so most vendors have a floor below which they won't even consider it.
It seems like the "Turbo" models are more about being faster/cheaper, not so much about being better. Kinda similar to the iPhone "S" models or Intel's "tick-tock".
I don't know if I'd say "miles ahead." AWS had 7 years of basically no competition -- all of the other big clouds of today had their heads in the sand. OpenAI has a bunch of people competing already. They may not be as good on the leaderboards now, but they're certainly not having to play catch-up after years of ignoring the space.
How do you square this with OpenAI's assertion that they never use data from enterprise customers for their own training? Are you suggesting they're lying?
OpenAI just slurped the entire internet to train their main model, and the world just looks on as they directly compete with and disrupt authors the globe over.
Anyone who thinks they aren't interested in your data, that they won't use any trick to get it and then double down on their classic "but your honor, it's not copyright theft, the algorithm learns just like an employee exposed to the data would" defense, isn't paying attention.
This is exactly why I am personally intensely opposed to treating ML training as fair use. Practically speaking, the argument justifies ignoring any person's or group's preference not to contribute to ML training, so it's a massive loss of freedom for everyone else.
I agree with you. What comes to mind is that if GPT learns from private data and then serves that knowledge back to (any) customer, you'd have an indirect "open source everything".
No one seems to have sued Unity over an even more egregious set of EULA changes: claiming that users retroactively owed money on games developed and published under totally different terms.
Afaik, they rolled the retroactive fee back, right?
And technically, that was a modification of the future EULA.
If you wanted to continue using Unity, here was the pricing structure, which included payments for previous installs.
You were welcome to walk away and refuse the new EULA.
Which is a big difference from historically collecting data in violation of the then-current EULA and attempting to retroactively bless it via a future EULA change.