Hacker News

> What's scary isn't that these models are so good they'll replace us, but that despite how limited they are, someone will make the decision to replace humans anyway.

is this a real threat? if a system/company decides to replace a human with something less capable, wouldn't that just result in it becoming irrelevant/bankrupt as it is outcompeted by other companies doing the same thing the more efficient (and in this case traditional) way?



Has any company ever truly believed that outsourcing their customer service to India would improve the experience for their customers? No, but they did it anyway, because it cut costs. AI can be obviously worse than humans and still put them out of work, because GPUs are much cheaper than people.


Capability and efficiency are very different things.

Consumers will absolutely buy worse but cheaper products/services.


Not necessarily. Imagine a health insurance provider even partially automating their claim (dis)approval process - it could be both lucrative and devastating.


Adding to this, government use cases would be most likely to cause issues because they’re often relevant regardless of how badly they suck.

There are already active discussions about AI being used in government for “efficiency” reasons.


A similar issue arises with health insurance. Using AI to evaluate claims is a huge efficiency play, and you don't have much ability to fight it if something goes wrong. Even if you can, these decisions can be life or death in the short term, and human intervention usually takes time.


i suppose that links back to the other comment i made - is hype the root issue you are trying to get at?

would be interesting to see what examples there are of this in recent history


I'm not entirely sure what you're getting at re: hype.

While there is undoubtedly a lot of hype around these tools right now, that hype is based on a pretty major leap in technology that has fundamentally altered the landscape going forward. There are some great use cases that legitimize some of the hype.

As far as concrete examples, see the sibling comment with the anecdote regarding health insurance denial. There are also portions of the tech industry focused on rolling these tools out in business environments. They're publicly reporting their earnings, and discussing the role AI is playing in major business deals.

Look at players like Salesforce, ServiceNow, Atlassian, etc. They're all rapidly rolling out various AI capabilities into their existing customer bases. They have giant sales forces all actively pushing these capabilities. They also sell to governments. Hype or not, it adds up to real world outcomes.

Public statements by Musk about his intention to use AI also come to mind, and he's repeatedly shown a willingness to break things in the pursuit of his goals.


worse for the consumer or the provider? if the llm is going to fundamentally do a "worse" job no matter what the incentive (maximising profit, minimising claims, whatever it may be), we will end up with the "more efficient" system in charge

the counterpoint to this (which i guess is the thesis of the original comment?) is that hype can cloud good judgement for a short period of time?


Already happened. With predictable results.

https://www.youtube.com/shorts/Y7QPXzDmloI?embeds_referring_...


is that even legal?


Who's going to stop them?


Lawyers


The current admin is unlikely to get the DOJ to do anything, so that leaves you trying to sue them yourselves, and they can afford more expensive lawyers and to gum up the proceedings until you go bankrupt or give up.

If you haven't already signed away your right to sue with an arbitration clause.


Class action. Someone sues on behalf of all customers of the insurance company, arguing that the insurance policy is fraudulent: the company is either executing policies based on AI or rolling the dice. The insurance company would then owe all customers the balance paid on their policies, plus all costs associated with attempting to use the fraudulent policy.



