AI Mistakes Are Different from Human Mistakes (schneier.com)
29 points by sorokod 28 days ago | 8 comments



> The seemingly random inconsistency of LLMs makes it hard to trust their reasoning in complex, multi-step problems. If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is.

Every single so-called "AI company" or "AI engineer" (who is, of course, just using OpenAI's API in their projects) in this space should read this blog post.


> Every single so-called “AI company” … should read this blog post.

And their prospective users even more so. They should know what they are getting themselves into.


It's going to be like leaded gasoline and subprime mortgage CDOs: Can't talk now, too much money to be made.


> Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.

This is where I see the greatest dangers, because we are boldly applying LLMs where they don't belong. As long as the consequences only affect the experimenter I couldn't care less, but when it impacts others it should be treated as criminal negligence.


They will jam "AI" everywhere they can to increase the stock price, and there will be no safety regulations. Welcome to idiocracy.


Absolutely.

The danger is that people will think that AI is thinking and reasoning the way that they are. But it isn't. It's a glorified template generator, at least for now.

Our brains and minds are far more sophisticated and nuanced than the LLMs we've built in the last few years. It'd be crazy if they weren't.


A bit of a tangent, but I suggest you could care a bit.

Imagine your son or daughter uses an LLM to diagnose their medical symptoms, with disastrous consequences.

Now imagine it is some other family member.

Now a friend.


AI doesn't make mistakes. You get exactly what you asked for. Junk fitting a learned probability distribution.

A mistake is made when someone has an end-goal in mind and incorrectly executes one or more of the steps required to achieve it. An AI's goal is just to complete the text you give it such that the end result fits the learned probabilities. An AI's goal is not to reply to your question factually, or reliably. As such, it is working exactly as designed, and the only mistake here is calling its output a "mistake" whenever we don't like it.
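The point can be made concrete with a toy sketch. This is a hypothetical example with made-up probabilities, not how any real model is implemented: a sampler that completes a prompt by drawing from a learned next-token distribution is "working as designed" even when it emits a completion we'd call wrong.

```python
import random

# Hypothetical "learned" next-token distribution for the prompt
# "The capital of France is". The numbers are invented for illustration;
# the sampler has no notion of truth, only of these probabilities.
next_token_probs = {
    "Paris": 0.90,
    "Lyon": 0.07,
    "Berlin": 0.03,  # wrong by our standards, but a valid draw from the model
}

def complete(probs, rng):
    """Sample one continuation according to the learned probabilities."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [complete(next_token_probs, rng) for _ in range(1000)]

# A few percent of completions are "mistakes" in our eyes, yet every one
# of them fits the distribution exactly as designed.
print({t: samples.count(t) for t in next_token_probs})
```

Under this framing, "Berlin" isn't a malfunction; it's the expected behavior of a system whose only goal is to match the learned probabilities.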



