Using the term 'AI' in product descriptions reduces purchase intentions (wsu.edu)
111 points by haswell on July 31, 2024 | hide | past | favorite | 58 comments


I'm not surprised.

You can market your product as being "cheap" in the sense "you as a customer don't have to pay much", people like not spending money.

You can't market a product as being "cheap" in the sense of "no expense spent", people look at that and think "then why are you charging me for this?"

We've all seen the failure modes of LLMs and art generators, so AI comes across as the latter rather than the former. And that's despite the success examples: it isn't sufficient to point out that AI can do some things better than any human, as those things are very narrow domains like chess and protein folding and millisecond stock market trading; and it isn't sufficient to argue about the difference between the average human and AI because of our division of labour, because, for example, it may be better at architecture than an accountant and better at accountancy than an architect, but you wouldn't hire someone for either role unless they were skilled at that role.


Indeed. There's an older distinction between "handmade/artisanal" vs "mass produced" which matters here. At first glance, shouldn't the machine-made version, using modern statistical process control, be better?

What actually happens is that the process control version uses that predictability to dial down the average quality while keeping the product just above total failure level.

AI takes this a step further: it allows cost cutting by replacing human processes with nondeterministic uncontrolled statistical processes. Some percentage of AI users will just be given a dud experience. For which they will, not unreasonably, blame the AI.

So AI becomes a negative quality signal. I've already observed "uses AI generated images" being interpreted as "product is obviously fake and/or bad".

(Consumers are very much complicit in this by always choosing the cheaper option - but see Akerlof's "The Market for Lemons": that may be economically rational if you can't properly distinguish quality!)


AI has a "not ready for prime time"/"bleeding edge" problem.

Releasing features that too often work poorly or not at all reduces customer satisfaction. AI as a feature too often works poorly at too much of what it is intended for. Non-technical customers who try "AI feature" often become frustrated, but most of them won't try it. Technical customers don't need AI built into a feature, and know more about what can be wrong with it.


I think the negative perception is more cultural in the last 50 years than recency.

AI has rarely been shown as good in all media.


> I think the negative perception is more cultural in the last 50 years than recency.

> AI has rarely been shown as good in all media.

I disagree. The negative depictions in past media were typically of very capable, nearly perfect AI (perhaps with one small vulnerability that let the plucky protagonist win).

That's very, very different from the current negative impressions of AI: mediocre quality, erratic, requiring constant human supervision, and frustrating.


People also hear "AI" and think "chatbot." So they not only think of the recent, post-ChatGPT experiences, but also remember every shitty automated phone system or automated chatbot on every website that's ever tried to push something or avoid giving you support.


Yeah, exactly. The incentive in capitalism isn't to make consumers happy; it's to keep them from being so unhappy they switch to a competitor, which is a very different thing.

There are huge incentives for all large businesses to adopt these automation technologies, despite the negative customer experience. Customers know they're getting mistreated, but there's not a lot they can do about it. The competitors are probably doing the same thing, and those that aren't will do it eventually (certainly once the MBAs inevitably gain control).


I don't buy the recency argument.

Culture has a long tail, and AI simply isn't ever depicted as good.

As such, labeling all this stuff "AI" triggers negative branding and has little to do with how poor the current tech is.

Y'all are using your narrow focus to cast a wide net, and that's foolish.


This feels right; I can't think of any consumer product where AI is providing value today.

I assume, like me, most consumers have been saturated with AI, tried it a few times, and found it doesn’t deliver on simplifying/improving anything. They tried it, it hasn’t helped and they’ve adjusted their mindset accordingly.


It's interesting because this may precipitate an AI winter. Everyone is burnt out on sub-standard AI; even if it's improving, most folks have already passed on it. I think it may have been a mistake to popularize what is effectively a beta version so soon, especially given all of the controversy it has created.


Two categories currently saturating consumers: ambiguous AI product advertisements on social media feeds and companies offering AI chatbots instead of a human for customer service or support.

Neither are very appealing.


> I can’t think of any consumer product where AI is providing value today.

Is ChatGPT a consumer product? It must be providing tons of value, otherwise it wouldn't be popular.


Entertainment value for sure.


For me as a consumer, AI is an anti-feature. In technical products, I know it doesn't work reliably (at least if we're talking about LLMs, not traditional algorithms marketed as AI). In art - music, photography, painting, design - AI is the opposite of anything I value.


As a consumer, AI means "outsourced decisions with no accountability or humanity involved". It's like selling your product with the tagline "We hired less humans in favor of this automated phone support system."

It needs to be kept back as a toolset that humans use to address human problems, not a front-line feature. It's not trustworthy, it holds no responsibility, it's not predictable, and it makes the company seem less invested in humanity itself.


Without even considering moral dimensions... try replacing every instance of the phrase "AI" with the equivalent phrase "a huge complicated nondeterministic system that has no actual specification, and therefore never makes any errors". By this definition, art and music are the only things it might actually be qualified for, although of course starving artists will starve more. But it’s driving cars. Deciding whether transactions go through. Handling applications for loans, college, and jobs. It’s not a good idea. Even spam detection, widely regarded as a success story, isn’t actually one. We all just got used to having important emails categorized as spam every week, and to deleting all the spam that gets through. And don’t get me started on how ineffectual the recommendation engines are despite decades of spying.

I don’t care that much whether things like customer service are handled by people or robots, as long as it works. But self checkout is always broken and uncooperative, and humans are now triple-tasked with checkout, cleaning toilets, and shelf stocking, so that they can’t do any of them properly; clerks and customers are both more miserable. It’s just a preview. Face, meet boot, forever.


Relatedly, has anyone else noticed Doordash now has AI generated descriptions for menu items that the restaurant didn't manually describe? I noticed it while looking at a taco place by us.

For items with menu pictures, it gives a definitive description, e.g., "Marinated pork with onions, cilantro, and pineapple, folded in a grilled tortilla. Served with lime wedges and a side of grilled jalapeño and sautéed onion."

For items without pictures, it hedges: "Folded flour tortilla filled with seasoned chicken and melted cheese, typically includes a blend of Mexican cheeses."

Part of me finds it pretty neat, but the other part wonders how long it will last.
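The hedging pattern the commenter noticed could be produced by a simple conditional on whether the restaurant supplied a photo. A minimal sketch (function and prompt text are invented for illustration; this is not Doordash's actual implementation):

```python
# Hypothetical sketch of the pattern described above: generate confident copy
# when a photo of the dish exists, hedged copy when only the name is known.
# (Invented names and prompts; not Doordash's actual code.)
def describe_item(name: str, has_photo: bool) -> str:
    if has_photo:
        # A photo gives the generator something concrete to describe.
        return f"Describe the dish '{name}' from its photo, naming ingredients directly."
    # With only the item name to go on, hedge the claims.
    return f"Describe a typical '{name}', hedging with words like 'typically' or 'usually'."

print(describe_item("Quesadilla", has_photo=False))
```

The observed behavior (definitive copy for items with pictures, "typically" for items without) is consistent with some gate like this, but that's speculation.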


Autogenerating an ingredients list feels like a risky game to play wrt allergies

At least with an empty description you know that you don’t know


At a previous Hack-n-Tell event someone presented a recipe website which used an LLM to parse a free-text recipe and change it to a numbered list of steps and ingredients list.

All I could think is that this is definitely going to kill someone. Luckily, it seems to be offline now.


Sounds like it will be a tasty lawsuit, especially if it's not disclosed that the description is auto generated. Hopefully they don't kill someone with food allergies.


It's not just the descriptions, they are also AI generating "photos" of the product.

https://www.404media.co/email/57421524-fd79-4073-b6f2-e7fb17...


Gobs of e-commerce apparel companies are now using AI product photos too. My friend works for one such company. It’s outrageous as it’s a complete fabrication of the garment’s fit on a model, how a piece of furniture looks in a space, etc. But it allows retailers to skip a huge cost to sell products, and they can make the models as hot, or diverse as they want - without paying human models a dime, or hiring a photographer, or scouting a location.

Seriously, Google “ai ecommerce photos” - it’s disgusting how these services advertise themselves while cutely sidestepping how clearly fraudulent this is to the customer. Here’s an easy scapegoat from page 1 of the results https://flair.ai/


If I notice that product descriptions or photos are generated, I'm simply not buying the product. The same way that if I notice a product photo is photoshopped without AI, I'm not buying the product.


I hate that so much


It's really funny to use "typically" in a menu item. Am I rolling the dice every time I order that menu item?


Wow, that's insane. Here in Poland product descriptions are legally binding. You can straight up sue if you don't get what the description says in any online store. Pictures are optional though, those you could potentially generate, maybe, not a lawyer.


Wouldn’t fly in a jurisdiction with actual consumer protections.


Marketer: "But we'll always have the United States."


This seems to be the key reason (from the article): "Because failure carries more potential risk, which may include monetary loss or danger to physical safety, mentioning AI for these types of descriptions may make consumers more wary and less likely to purchase."

That makes sense. Anyone who has used ChatGPT knows that while it is a great tool, it occasionally makes mistakes. In some products, mistakes can have significant consequences, so even infrequent mistakes are not allowed. In other products, infrequent mistakes may not be an issue.


As it should be. The marketing geniuses, as usual, overplay the hype and devalue the technology through wild promises. In the investment world, many barely pseudo-quantitative strategies have now been rebranded with AI claims.


In the 2000s it used to feel sketchy to buy things online. I think this apprehension about “AI” inside products will fade away.

With that said, your whole value proposition can’t be just “we have AI.” You should still talk about the desired outcomes your product helps achieve. The fact that it uses AI is just part of “how it works,” which is secondary.

And sometimes the “how it works” is better left unsaid. For example, Amazon’s logistics operation is out of this world, with robots and AI and incredibly sophisticated supply chains… But people only care that their stuff arrives in 2 days.


> In the 2000s it used to feel sketchy to buy things online. I think this apprehension about “AI” inside products will fade away.

I'd argue we're coming full circle to online shopping being sketchy again, with sites like Amazon getting overrun with fake shell companies selling dropshipped Aliexpress junk.


I've gotten so much push-back over the years when I've asserted that startups are too quick to describe their products as AI in industries where the target customer considers it undesirable. I think it's done for ego and to impress investors. Examples I've seen include products in clinical diagnosis and financial accounting. Some needs require utmost predictability, observability, and ultimately, understandability.

Of course, there are some industries and markets that desire the capabilities only AI can provide. But that's the point. Analysis should precede the message. We should market the benefits. I've seen a few people at least claim AI isn't a benefit, it's a feature. I'd argue it's not even a feature, it's an implementation detail, like using an object-oriented programming language or a relational database; it has advantages and disadvantages.

Focus on the needs of the customer and industry. Describe the benefits. For customers and investors alike, remove the AI veil of opacity, by describing simply what the AI is doing and how/why.

It's interesting to see a study that seems to corroborate my anecdotal experiences. It's a marketing study though, so it shouldn't be overly generalized until more studies reproduce the results. Studies about human behavior tend to be difficult to reproduce and can yield conflicting conclusions. I wouldn't be surprised to see another study with slightly different questions or methods come to the opposite conclusion, especially if they don't control for consumer segments, industries, or types of products.


Personally when I see AI being implemented in products and sold as a selling point I tend to think that it's wasted resources.

I think there's space for AI generation out there, I see a lot and I do mean a lot, of popular AI content out there. People use it in their YouTube videos to generate images, people generate these silly videos of cats with a meow version of Billie Eilish as music. And that's about as much people want to use AI for.

Now, I do think there are good AI products out there, like Copilot. And Apple's AI on the iPhone seemed interesting. But most AI implementations just feel too jarring and "okay, but it's literally ChatGPT on your data" to be useful.


At the top level companies and governments want to build AI products so they have an adaptable workforce and they don't miss the boat.

At the middle level managers want to build AI products so they appear innovative and get promoted.

At the low level designers want to build AI products to increase their skill set so they can get a better job.

Consumers generally couldn't care less. There are a few products that work better because they use neural networks, but that's an implementation detail. Does the TV look nice? Great, I'll take it; I couldn't care less if it's innovative.


The change in sentiment, even here, on "AI" has been dramatic.

Even six months ago, there were still "AGI" discussions happening, talks about how "over" it was for white-collar jobs etc.

Seems there's an increasingly negative sentiment around "AI" now, especially (or largely) among the non-technical general public.


Wish I could access the study. Curious to learn more.

When was this conducted? Relatively recently, after the commercial boom of LLMs, or prior, when AI was less front and center in the minds of an average consumer?

What are the characteristics of the study participants? Career choices or technical knowledge would be most interesting to pick apart as it relates to findings.

Anecdote Corner:

Prior to the LLM boom and OpenAI, I was real bearish on companies throwing about "AI capabilities," because it was, from my few decades of engineering experience, just marketing mumbo jumbo. I'm more bullish on the direction and viability of "AI" these days, and I'm really excited to see what the future holds, but I'm even more skeptical of "AI" as a product capability, for a few reasons:

1. I honestly don't care if something is powered by AI or not. Solve my problem, don't attempt to distract me with "AI" as a selling point.

2. I've seen what's capable with contemporary LLMs - it's awesome, but as many posts here on HN have shown, the technology is far from perfect. I'm being patient.


I think a lot of people are just suffering from AI fatigue, where the promises are huge and the product is underwhelming. That's still consistent with high risk things being affected more as found here, because people think 'if you can't get this low stakes thing right, why should I trust high stakes things to be any better?'


I use chatgpt and copilot every day and I think they are great products, but I absolutely hate people jamming AI generated results into pages when I'm looking for writing written by and for people, or forcing me to use AI when I want to talk to a person.

Give me AI when I _want_ AI, don't give it to me as a knock-off replacement for something else.


It's kind of like putting the slogan "Factory made!" on pastries and wondering why they don't sell well. AI is an automation technique. It might make things cheaper or more available, but it will rarely make something better than a dedicated person would.


Not only is there a "trust crisis" looming over the AI world (only tech-bro enthusiasts seem to enjoy what AI is doing), it's also straight up just a technical detail. Describing the technology a product runs on rarely works, because it almost never matters to the customer. For example, "cloud-based" could be good if the product is something for developers and they want to know how it's hosted and where the data is stored, but if a cinema, an e-store, or anybody else stated in the product description that they're running in the cloud... that would be quite stupid.

Same happens with AI. I want to know whether you're going to solve my problem, not how. If AI won't do the job, I want you to use something else.


> ...only tech bro enthusiasts seem to enjoy what AI is doing...

You can say that again. The current AI hype train is like the one for blockchain, except it's also fueled by certain tech enthusiasts' semi-religious belief in science fiction.


I have to admit, the "cryptocurrency" vibes that a lot of this crop of AI companies and personalities emit are extremely off-putting.

As are the seriously cult-like vibes that the rest of this crop of AI companies and personalities give off.


This could be similar to why many people value handmade products more than mass produced products.


I think there’s more to it than that. There are high quality mass-produced products. There are no high-quality AI products. Consumers are learning that and adapting accordingly.


Missing the AI hype train

I guess many companies worried they would be left out, so they oversold it a bit. I think AI tools are great, and they will continue to improve and help us improve.

But the hype was a bit too much; many companies just rebranded their products as AI without really changing much, if anything at all.

And chatbots are still horrible on 100% of support sites. Why aren't chatbots AI already?


Because AI chatbots cannot really be controlled, and companies can be held responsible for their mistakes: https://news.ycombinator.com/item?id=39378235


Interestingly, this post dropped from position 1 to 40 in under 5 minutes.

https://hnrankings.info/41118844/


Yeah, I noticed that too. I’ve never had a post with such a growth in upvotes in the first 10-15 minutes.

That immediately stopped when it was deranked. Made me think it either triggered some kind of circuit breaker, or the fact that the cited research is behind a paywall made someone flag it, but I’m definitely curious to know why.


It apparently can be both algorithmic and mods putting their thumb on the scale. Sometimes, if it's obvious and people start complaining, dang chimes in[1] to explain what happened in a specific case. It's not very transparent, but he's pretty good about answering specific questions about submission moderation.

1: https://news.ycombinator.com/item?id=40720026


Super useful link. I wouldn't be surprised if it got flagged by folks who feel threatened by the idea, and there are clearly quite a few AI startups represented here, most of which are doing exactly what the research says is turning people off.


> I wouldn't be surprised if it got flagged by folks who feel threatened by the idea

yes, such as the VC fund that operates this website

https://www.ycombinator.com/companies?batch=S24


Just call it "AI2000!"


> “When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions,” he said. “We found emotional trust plays a critical role in how consumers perceive AI-powered products.”

> In the experiments, the researchers included questions and descriptions across diverse product and service categories. For example, in one experiment, participants were presented with identical descriptions of smart televisions, the only difference being the term “artificial intelligence” was included for one group and omitted for the other.

I mean yeah, what consumer is going to jump at the chance to own a TV with "artificial intelligence"?

I'd be interested to know the full range of "diverse product and service categories" they asked about here. Not interested enough to pay for access to the paper though!
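The manipulation the article describes is a straightforward A/B test, so the core analysis reduces to comparing mean purchase-intention ratings between two groups seeing identical copy with and without the "artificial intelligence" phrase. A minimal sketch of that comparison (the ratings below are invented for illustration and are not the study's data):

```python
# Welch's t statistic for two independent samples, mirroring the shape of an
# A/B manipulation like the one described. The ratings are made up; only the
# structure of the comparison reflects the study's design.
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic: difference in means over its standard error."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

control = [5, 6, 5, 7, 6, 5, 6, 4]  # identical TV description, no "AI" mention
with_ai = [4, 5, 3, 5, 4, 4, 5, 3]  # same description plus "artificial intelligence"

print(mean(control) - mean(with_ai))        # positive gap = lower intent when "AI" appears
print(round(welch_t(control, with_ai), 2))
```

The paper presumably also measures emotional trust as a mediator, which a bare mean comparison like this doesn't capture.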


Exactly that. On some software or tech products, like a connected watch, I could imagine AI being a positive, but on a TV? I think it would just make my experience worse, and I'm not talking about potential failures like those mentioned in the article; it's just not a use case, IMO.


> And now I'm in a clothing store, and the sign says less is more

> More that's tight means more to see, more for them, not more for me

> That can't help me climb a tree in ten seconds flat

-Dar Williams


Idk if it's just me, but saying "linear algebra" in the product description might throw people off.


Why would you say either? You should be describing benefits of use to your end user.

Would you rather have a car that says it has driver assists like lane keeping, or a car that has AI? AI is an implementation detail not a feature. AI doesn't help me, AI that can be applied to a problem helps me.

Just like when people here complain about posts where the title is "XYZ written in Lang ABC" instead of a hint of what XYZ does or makes it different.



