Ask HN: Are you seeing AI used in places where it shouldn't be?
21 points by Zelphyr 4 months ago | 44 comments
It's no surprise, really, but I'm starting to see companies sticking AI into places where it just doesn't make any sense. I'm not talking about the use of AI all over marketing materials. That's a given.

I'm talking about things like my CEO wanting me to build an AI sales assistant when a simple web form would actually be faster to develop and use.

What are you seeing?




On a parcel tracking web app there was a “GPT” button next to the courier service, so I clicked it and saw a very GPT-like description of the courier company.

When I inspected the code for the site, I saw it was just hard-coded into the web page and figured someone had demanded AI be used on the app, so the dev just faked it.



Upsettingly, I've been in that situation. Sigh. LMAO.


I like this dev


This is the next generation: artificial artificial intelligence


Love it


It is infiltrating healthcare in all of the worst possible ways at present. It is sneaking into diagnosis and treatment in ways that are outright dangerous to patients.


Can you give an example of this?

Where I'm familiar with AI in healthcare is in tools that help radiologists and cardiologists detect cancer early. That's support AI, not AI doing all of the work.

I'm also hearing about the potential for generative AI to provide a good bedside manner in "discussing" a diagnosis with a patient. Not diagnosing, but having more patience and time than a doctor does to communicate with the patient about known conditions.

We use ML for measuring sleep state and finding opportunities to stimulate the brain to improve the efficiency of deep sleep.

Where are these "worst possible ways" that I'm unfamiliar with?


The above examples you've provided sound pretty dreadful to be honest. I can't imagine how awful it would be to be told I've a serious disease and then be stuck talking about it to a chatbot of all things.


The clinicians themselves can be pretty dreadful. The uses being explored are to give coaching to them ahead of delivering news to help boost empathy and increase understanding. “It looks like you had a subdural hematoma cause a status epilepticus seizure” vs “that fall you took broke some blood vessels in your head which caused a longer seizure than we like to see”.


Doctors are using it inappropriately to diagnose and advise treatment on their phones without telling patients. It is being rolled out in all manner of automated classification; a huge amount of behind-the-scenes healthcare involves manual review of charts for classification purposes tied to various streams of funding, and low-level employees are turning to general-purpose AI to assist with this, leaking data without understanding how unreliable these tools can be for the task. Billing departments are "secretly" using it to "refine" coding. Pretty much all of the dumb, irresponsible things you can think of.


I work in an ED and I see doctors and nurses looking things up on Google and yt daily. Medications and doses are the most common thing I see. It’s not much different than how developers or mechanics diagnose a problem.

In one instance, I observed all the ED providers huddled around a screen watching a yt video for a pediatric patient (we are not a children’s hospital). They all watched, then discussed the video before going back out to the pt.

I’m sure better documentation should be available, but so much of their job is judgment-based already.

Medical-specific LLMs are likely to be commonplace for providers in the near future, based on feedback I’ve gathered.


> Billing departments are "secretly" using it to "refine" coding.

What does this mean exactly?


When billing any insurance company, all procedures are assigned a "code" to standardize the payment/procedure performed across systems.

Some doctor-patient visits may fall under multiple codes, so billers will optimize for the highest-paying code.


Ingesting visit notes and having an LLM help determine what billing codes to use seems like a great use of AI in my opinion.


Billing codes are complicated and the rules behind them change frequently. This seems like a case where AI could present a wrong answer with authority: probably a close-to-correct answer, but one that could cost a patient or practice.


I'm not saying it should go without human intervention. It could do a great job at selecting the most likely codes and displaying them to the biller.
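
For concreteness, a rough sketch of that "suggest, don't submit" pattern using the OpenAI Python SDK (the model name, prompt, and sample note are placeholders, not anything a real billing system uses):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def suggest_billing_codes(visit_note: str) -> str:
        """Ask the model for *candidate* codes; a human biller decides."""
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Suggest up to three candidate CPT codes for this "
                            "visit note, each with a one-line rationale. Flag "
                            "anything ambiguous for human review."},
                {"role": "user", "content": visit_note},
            ],
        )
        return resp.choices[0].message.content

    # Output is shown to the biller for review, never submitted automatically.
    print(suggest_billing_codes("Established patient, 25 minutes, medication review."))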


It's also used for visit note generation, and it works really well. Doctors are really loving this one in my experience: https://www.lyrebirdhealth.com/ It's also assistive, of course - it basically generates a draft that you're supposed to edit.


Here's a doctor who made some humorous observations about how AI is being used by their peers in writing suddenly-coherent patient notes: https://youtu.be/WgnWgIOer6s


AI is the new gold rush, at least for the next 2-3 years. So yes, expect it to be everywhere. Reminds me of the blockchain days, even though AI does have better use cases overall.


And the shovels (GPUs) sell very well.


Industrial knowledge management (multi-ton presses heated to several hundred degrees, furnaces, etc.).

If you push 3-4 documents into a language model and then ask it questions, it can create some dangerous situations.
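
The failure mode is usually the naive "stuff everything into the prompt" setup. A minimal sketch of what that looks like (file names and the question are made up); note that the only thing standing between the operator and a confidently invented temperature is one line of prompt text:

    from openai import OpenAI  # pip install openai

    client = OpenAI()

    def read_docs(paths):
        return "\n\n".join(open(p, encoding="utf-8").read() for p in paths)

    docs = read_docs(["press_manual.txt", "furnace_sop.txt", "safety_sheet.txt"])

    answer = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            # Without a grounding instruction like this, the model will
            # happily guess a plausible-sounding value:
            {"role": "system",
             "content": "Answer ONLY from the documents below. If the answer "
                        "is not in them, say you don't know.\n\n" + docs},
            {"role": "user",
             "content": "What is the furnace's maximum operating temperature?"},
        ],
    ).choices[0].message.content
    print(answer)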


I am seeing AI used in _ways_ that it shouldn't be. More specifically, ways that I believe decrease efficiency and quality rather than improve them.

So many people seem to want to contort themselves with prompt "optimizations" to try to get models to do exactly what they want. They think that if they can just throw a good-enough prompt into this black box, it'll miraculously write entire applications for them or do whatever else they want.

I believe this is a misguided overcorrection. Instead, I think we should strive to use AI to _extend_ (and eventually even build) our non-AI tooling rather than replace it. GPT-4 is not a reliable magic box that will solve all your problems, but it is a powerful way to enhance your existing brain power _and_ other tools.

We can likewise build tooling that helps us offload more work to AI (e.g., develop systems that analyze AI output and reflect on themselves). We can build systems that clarify and break instructions down into smaller steps (some of which may involve AI queries and others not); see the sketch below. But to spend hours tweaking the perfect prompt to get just the right kind of usable output from GPT-4, just so you don't have to open a spreadsheet or write your own code? We're wasting time instead of saving it.
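
A toy sketch of that "small steps, model only where warranted" idea (all the names here are hypothetical scaffolding):

    def ask_llm(prompt: str) -> str:
        # Stub: wire this to whatever model/API you actually use.
        return f"<model answer to: {prompt!r}>"

    def validate_rows(text: str) -> str:
        # Deterministic step: plain code, no model, fully testable.
        return "\n".join(r for r in text.splitlines() if r.count(",") == 2)

    def summarize(text: str) -> str:
        # Fuzzy step: the only place a model call is actually warranted.
        return ask_llm("Summarize these rows in one sentence:\n" + text)

    def run_pipeline(raw: str) -> str:
        for step in (validate_rows, summarize):  # small, inspectable steps
            raw = step(raw)
        return raw

    print(run_pipeline("a,b,c\nnot a row\nd,e,f"))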


Well said!

Thank you!


Lots of "how to" articles written by AI with insufficient editorial oversight. For example, I was trying to learn about buffers for polishing things (like buffing the clear coat on a car, or buffing a floor) and one of the articles I found veered off into talking about electronic buffers (like for driving outputs) which I would have expected a Real Human to know was completely out of place, and not to have included it.

That, and how pointlessly rambly they are. Maybe good if I'm trying to fall asleep, but it totally grinds my gears when I'm looking for information! (Pretty sure they do it that way to increase the number of ads shown to users over the course of the material.)


imo AI is just too overhyped, and at this point I'm kinda sick of seeing companies update their products with 'AI features'


We need a new buzzword before we move off it


I'm sure we'll grow to hate the next one as well


Law enforcement and war come to mind readily.



> The military claims that the AI system, named "the Gospel,"

Are we living in a badly written sci-fi book?


I'm increasingly convinced we're living in a draft of Snow Crash discarded for being too ridiculous.



Yeah. Our marketing department tries to find a use for AI everywhere, mostly to get on the bandwagon.

The latest was a chatbot that 100% needs to give exact information, or it could be a big problem for the company. The idea was shot down, but a traditional chatbot would have been the better idea anyway.


I would say I'm more shocked by how rarely I see AI actually being used.

Given the LLM hype, and over a year in, I was expecting there would be all these "why didn't I think of that" uses by now.

I can't think of a single example. A chatbot no one is going to use seems to be the main use case.


The Weather Network has an AI bot now that you can ask “do I need to wear a coat tomorrow” or “should I bring an umbrella”. It really feels like a “how can we use AI for _anything_” move, but it only really shows that looking at a weather forecast doesn’t require a lot of intelligence in the first place, so there’s no real problem to be solved by adding the artificial variant.
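
A toy illustration of how little intelligence the question actually needs (the threshold is a matter of taste):

    # The entire "AI" the umbrella question requires:
    def should_bring_umbrella(precip_probability: float) -> bool:
        return precip_probability >= 0.3

    print(should_bring_umbrella(0.7))  # True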


I think it is used in lots of places where modern probabilistic inference would be a better fit.

In other words, look at methods that are in Murphy's volume 2, rather than 1, especially parts II, V, and VI: https://probml.github.io/pml-book
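
As a taste of the difference, a two-line Beta-Binomial update gives you calibrated uncertainty with no LLM anywhere (the numbers are made up):

    from scipy.stats import beta  # pip install scipy

    # Beta(1, 1) prior; observe 7 successes in 10 trials -> Beta(8, 4) posterior.
    posterior = beta(1 + 7, 1 + 3)
    print(posterior.mean(), posterior.interval(0.9))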


In the corporate world we’ll see all that shit before it stops. We’re at the peak of the hype cycle.

Don’t worry, when this one fades a new one will come right along…


This makes me think of similar questions 30 or so years ago: Are you seeing the internet used in places where it shouldn't be?

If AI is being used in places it shouldn't be, then presumably, after a little (or a lot) of money has been wasted, firms will figure out that it's a bad fit for the job. I don't see any good reason to rule anything out before there's been time to evaluate its efficacy.


Waifu generators


That may be the only acceptable use of AI.


Husbandu generators are kinda lacking...


There is a new trend on 4chan called something like "dignif-ai" where they are taking images of scantily clad gym women/OF "performers" and using AI to put clothes back on them and tell them how much nicer they look dressed in a less revealing fashion.


Chatbots



