
It's disingenuous in that it takes a word with a common understanding ("agent") and then conveniently redefines or re-etymologizes the word in an uncommon way that leads people to implicitly believe something about the product that isn't true.

Another great example of this trick is "essential" oils. We all know what the word "essential" means, but the companies selling the stuff use the word in the most uncommon way, to indicate that the "essence" of something is in the oil, and then let the human brain fill in the gap and thus believe something that isn't true. It's technically legal, but we have to agree that's not moral or ethical, right?

Maybe I'm wildly off base here; I have admittedly been wrong about a lot in my life up to this point. I just think the backlash that crops up when people realize what's going on (for example, the airline discovering that its chat bot does not in fact operate under the same rules as a human "agent," and that it's still a technology product) should lead companies to change their messaging and marketing. The fact that they're just doubling down on the same misleading messaging over and over makes the whole charade feel disingenuous to me.


> with a common understanding ("agent") and then conveniently redefines or re-etymologizes the word in an uncommon way that leads people to implicitly believe something about the product that isn't true.

What is the 'product' here? It's a university textbook. Like, where is the parallel between https://en.wikipedia.org/wiki/Intelligent_agent and 'essential oils'?


Oh, I have no issue with his textbook definition, I'm saying that it's now being used to sell products by people who know their normal consumer base isn't using the same definition and it conveniently misleads them into believing things about the product that aren't true.

Knowing that your target market (non-tech folks) isn't using the same language as you, but persisting with that language because it creates convenient sales opportunities due to the misunderstandings, feels disingenuous to me.

An "agent" in common terms is just someone acting on behalf of another, but that someone still has autonomy and moral responsibility for their actions. Take, for example, the airline customer service representative situation. AI agents, when we pull back the curtain, get down to brass tacks, whatever turn of phrase you want to use, are still ultimately deterministic models. They have a lot more parameters, and their determinism is obscured by layers of pseudo-randomness, but given sufficient information we could still predict every single output. That system cannot be an agent in the common sense of the word, because humans are still dictating all of the possible actions and outcomes, and the machine doesn't actually have the autonomy required.
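The "predictable given sufficient information" point can be illustrated with a toy sketch. This is not how any particular product works; the `sample_next_token` function and the logits are hypothetical stand-ins for an LLM's sampling step, showing that the "randomness" in temperature sampling is entirely determined by the seed:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, seed=None):
    """Toy temperature sampling. The pseudo-randomness is fully
    determined by the seed, so the output is reproducible."""
    rng = random.Random(seed)
    # Softmax with temperature (max-subtraction for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one sample from the categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]
a = sample_next_token(logits, seed=42)
b = sample_next_token(logits, seed=42)
assert a == b  # same seed, same "random" choice, every run
```

Scale the parameter count up by billions and the point stands: the sampling loop is a pure function of its inputs and seed, which is why "agent" in the autonomous-moral-actor sense doesn't fit.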

If you fail to keep your tech product from going off-script, you're responsible, because the model itself isn't a causal actor that can bear the blame. A human CSR, on the other hand, is considered by law to have the power and responsibility of a causal actor in the world, and so when they make up wild stuff about the terms of the agreement, you don't have to honor it for the customer, because the culpability sits with the person.

I'm drifting into philosophy at this point, which never goes well on HN, but this is ultimately how our legal system determines responsibility for actions, and AI doesn't meet those qualifications. If we ever want it to be culpable for its own actions, we'll have to change the legal framework we all operate under.

Edit: Causal, not casual... Whoops.

Also, I think I'm confusing the situation a bit by mixing the legal distinctions between agency and autonomy with the common understanding of being an "agent," and with the philosophical concepts of agency and culpability as they relate to the foundations of US law.

I need to go touch grass.



