AI has created a 'fake it till you make it' bubble that could end in disaster (yahoo.com)
29 points by nreece 4 months ago | 48 comments



Isn't the root that "AI" has come to be equated with "generative AI" in the public (and investor) mind? And since genAI shows great promise but is still in its early days, there is a rush to throw more money at it in an attempt to stave off the collapse of the inflated-expectations bubble.

The risk then is that genAI implodes and takes down all of the other AI disciplines with it.

Ultimately buyers pay for outcomes, and they are much more important than investors, who pay for the promise of buyers and therefore are susceptible to hype (Sequoia fawning over SBF being a juicy example). Therefore, as a seller, I would be drilling down into how my product solves a problem for the buyer, and not leading with how it gets there (AI). "Contract review in 60 seconds means you save $300 an hour on legal fees, and we have a $100M indemnity clause if we get it wrong" rather than "our AI model blah blah".


> The risk then is that genAI implodes and takes down all of the other AI disciplines with it.

The terminology would just change. Really, after the _last_ AI crash, the first time that people uttered the dread word AI (except in the context of games) was with generative AI; before that, anything a bit AI-y just got tagged 'ML'. 'ML' covers a lot of sins.


ML is a lot more accurate to what many of these current 'AI' products do anyway. The AI wave is ultimately about branding more so than any particular threshold of ability.

I think you're right - if there is a large AI crash that sours people on the term for a while, most of these products are just gonna stick around in some more marketable format.

Looking back at industry people pretending like ChatGPT was some frighteningly intelligent very-nearly-general-intelligence type of thing, you kinda wonder if they were deliberately doing a marketing stunt or just bought their own hype.


The issue is more that it might not be true that "genAI shows great promise but is still in its early days"

What if the technique has inherent limitations and is already delivering diminishing returns?


Yes, what if this "AI" boom cycle turns out to be nothing more than a next gen spellcheck/text generator?

I hope to make generational wealth in shorting Nvidia at just the right time, when everyone realizes all at once, "Wait a minute, this AI chatbot thing kinda sucks..."


This is the AI boom/bust cycle as foretold, played out over and over again since time immemorial.


I think many non-technical people have an incorrect understanding of "AI", where they conflate a single example of what neural networks can do, namely chat bots, with an entire field.

Even if chat bots see no significant advances in the foreseeable future, "AI" in the sense of large neural networks has already proven itself extremely useful, able to accomplish a lot of things which are almost impossible to do without it.


I keep seeing this argument that there is value out there, but all we normal people are seeing is spam and cute AI cat pics on various social media. Can you please give a few examples of fields/problems where AI has been really helpful?


I work on ML networks used for industrial control systems that characterize defects in the product and determine what to do about them. Not what normal people think of when they hear "AI", but it's not really much different from what LLMs are built on.

We've been selling these systems for a decade or so now.
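
To give a flavor, the shape of the thing looks something like this (a toy PyTorch sketch, nothing like our production code; the patch size and defect classes are invented for illustration):

    # Toy sketch only, not production code. Assumes PyTorch; the 64x64
    # grayscale patch size and the three defect classes are invented.
    import torch
    import torch.nn as nn

    class DefectClassifier(nn.Module):
        def __init__(self, num_classes=3):  # e.g. ok / scratch / inclusion (hypothetical)
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x):  # x: (batch, 1, 64, 64) camera patch
            return self.head(self.features(x).flatten(1))

    model = DefectClassifier()
    scores = model(torch.randn(1, 1, 64, 64))  # class scores drive the control action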


Generative AI chat is incredibly useful for learning, especially for tasks that require a lot of inline learning, such as software engineering. I use generative AI chat constantly, because it is basically a Stack Overflow that is near instant with its answers and can debug error messages extremely well.
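
For error messages you don't even need the chat UI; a minimal sketch, assuming the official OpenAI Python client (the model name is only an example):

    # Minimal sketch: assumes `pip install openai` and OPENAI_API_KEY set;
    # the model name is illustrative, not a recommendation.
    from openai import OpenAI

    client = OpenAI()
    error = "TypeError: 'NoneType' object is not subscriptable"
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Explain this Python error and suggest a fix:\n{error}"}],
    )
    print(resp.choices[0].message.content)  # a hint to verify, not an oracle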

I don't even have to leave my code editor if I install the extension in my IDE. And with an IDE extension it can be aware of the actual context of my codebase to the point where it can write code samples that reference my own methods and variables.

It's incredibly helpful. I've seen and used a lot of different learning techniques over the years. I started out learning coding from physical books. Then it was online bulletin boards and forums, and Google. But AI agents and chat have replaced almost all of that now. I rarely waste my time on Google anymore when I can get more relevant answers, faster, right in my IDE.

Sure, there is the occasional hallucination, but it's not that much worse than terrible answers on Stack Overflow, or junk blogs and outdated docs that Google surfaces because someone SEOed the hell out of them.


"Occasional hallucination" is doing a lot of heavy lifting here.


Perhaps in a research context, sure, but for the most part I've found GPT-4 to do a good job on small to medium-sized chunks of coding.


Less hallucination than some of my students in their presentations


Honestly, I have found LLMs to be much inferior to my usual methods for this sort of use case. So much so that I've stopped using them.

I am fascinated by how different people have such different experiences with these systems. A study into what separates the "best thing since sliced bread" camp from the "meh" camp would be very interesting.


>I keep seeing this argument that there is value out there

I am sure that every person with significant hearing problems greatly appreciates auto-generated subtitles. Surely some people enjoy having access to voice commands. Translation tools that are reasonably good at inferring context are a great help if you want to communicate with a person who has no shared language. The ability to do some automated screening for abnormalities can definitely help in manufacturing, and the same goes for medical imaging, where a computer pointing out to a doctor that something warrants a second look can be helpful. Cars being able to detect pedestrians and cyclists has surely saved many lives already.

I could go on, but this is what I mean about people conflating "chat bots" with the entire range of applications for neural networks.

Neural networks are currently the best way for a computer to infer human-like knowledge about the real world: to make distinctions and to detect things which might be hard for a human to detect.


Clinical machine learning for cancer diagnosis is one promising area

https://www.cell.com/cell/pdf/S0092-8674(23)00094-6.pdf


Any problem that has a large range of inputs and a large range of outputs with many possible permutations in between.

Yes, you could use specialised algorithms, but why not let the AI design such a thing? It might be better than you.


Machine translation is valuable and it’s all neural these days.
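
The whole pipeline is a few lines now; a sketch assuming the Hugging Face transformers library (the model name is one public English-to-German example):

    # Sketch assuming `pip install transformers sentencepiece`;
    # Helsinki-NLP/opus-mt-en-de is one public neural MT model.
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
    print(translator("Machine translation is all neural these days.")[0]["translation_text"])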


Protein folding. Drug candidate discovery.


You must be kidding. ALL algorithmic trading uses ML models; the VAST MAJORITY of money being made and traded is on the back of ML right now. Are there really people in tech today who don't understand how much ML has taken over?

Lol, you might be talking about ChatGPT specifically, which is kinda dumb.


Fancy statistics is nothing new.


A solution in search of a problem


I wonder why so many people are confused about what AI is (and what it's capable of). Could it have anything to do with the snake oil being peddled by Sam Altman and the media hype machine surrounding him?

It was only a couple years ago when Blake Lemoine was (rightly) ridiculed for seeing sentience in a chat bot. Now everyone is Blake Lemoine.

Waiting for the next AI winter to put all this nonsense to rest...


Yes, I think so too. People peddling neural networks as being the end of the world mostly just want an advertising machine for their chat bots.

Neural networks do not need to prove themselves anymore, for a vast amount of problems they are the single best way to approach them. Even if chat bots never get better from here on, neural networks aren't going away.


I agree. Maybe long covid has taken a toll on people's ability to use logic. But seriously, seeing people's confusion about AI is like watching people in 2001 get excited about bombing Iraq for "getting them back."

People in groups are INCREASINGLY and INCREDIBLY dumb.


> “AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.”

This quote is enough to dismiss the whole article.


“Unproven”, I don’t get how anyone can use LLMs and come away with this opinion. There is simply no better way to do a fuzzy search than by typing a vague prompt into an LLM.

I was trying to find a movie title the other day, only remembered it had Lime in the title and had a Jack-the-Ripper setting. ChatGPT found it easily. Sure you have to fact check the results, but there’s undeniable value there.


I was trying to remember the name of a book series I read as a child in the late 80s/early 90s. I gave ChatGPT part of a title (it had Scorpion and a few other words in the title), a few plot points, and the decade it was published, and asked for an ISBN. It confidently returned a book with Scorpion in the title, a short plot summary, published in 1983, an ISBN, even an author, and it was all entirely made up. It took me a few minutes to realize this when my searches on Amazon and library websites turned up nothing.


FWIW, you might also try searching https://www.worldcat.org/ , which is a large collection of library catalogs.


Which model did you use? I never get hallucinations like that


If you asked a job candidate the same question, does their wrong response make them a bad candidate for the job?


If I asked a job candidate any question and they confidently replied with a set of facts that were entirely made up, I would not consider them for any position under any circumstances.


Too bad, you were interviewing a warehouse worker, where they don't have to think and just follow what the screen says.


Exactly -- people are lazy and want to skip the fact checking. They want to be sold a divine oracle.

Further to your point, I like to try ChatGPT on:

"What is the word for a {language, rhetorical device, figure of speech} in which ... <various properties>?"

Then I look up the results on more authoritative, accountable sites to verify them.


I’m not sure this is the best example of the power of an LLM. Not denying there are actual use cases, but simply searching Google for “Lime movie Jack the Ripper” will display the answer in the first result (and I imagine it would have been able to do that for the past 10+ years).


I put "movie title the other day, only remembered it had Lime in the title and had a Jack-the-Ripper" in Google and the first answer is "The Limehouse Golem".

Is that it? Dunno if behind the scenes it was using an LLM or "classical" search.


Value is there, but it does not match the hype as of today


Nothing ever matches the hype. That's what hype is.

Literally short for hyperbole, which means "exaggerated statements or claims not meant to be taken literally."


>a situation in which something is advertised and discussed in newspapers, on television, etc. a lot in order to attract everyone's interest:

>Nothing ever matches the hype. That's what hype is.

Imo that's a crazy take; there are definitely things that live up to the hype.


there's always someone for whom it doesn't


Who cares about a single person?

Just look at the majority and how big that group is


The issue is that LLMs are constantly hallucinating and are not capable of following long-term rules. Since it's still niche, it's not a problem, but if professionals like lawyers or doctors start using it day to day, then we are in trouble. I wouldn't go as far as saying it's useless, but its effectiveness is very close to zero in most fields not related to spamming.


Have you tried to use an LLM for work?


It depends on your line of work. If it involves determining factual accuracy, LLMs may not help: https://nondeterministic.computer/@martin/112326227079145176

LLMs are good at synthesis and generation.


If you try to replace your whole job with an LLM, yes, you will have problems. I work in IT, and I use ChatGPT daily to spit out scripts, ask it to come up with function names, convert code from one technology to another, and do minor refactoring that I don't know how to do.

I can immediately validate the output, learn from it, and even work with techs I'm not familiar with.


So you use it for generation where validation is easy. If validation were harder, the calculus would change.

I also get a lot of mileage out of it, but it is important to recognize its shortcomings.


Of course, I don't try to replace my whole job with it. ChatGPT is 'good' at doing statistical analysis with Python on a given dataset, and that can help with the harder task you quoted: "distinguishing between ideas that are correct, and ideas that are plausible-sounding but wrong". The job itself cannot be easily verified, but you can use an LLM on a subset of these tasks.
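
The kind of snippet I mean, as a hypothetical example (the CSV and column names are invented):

    # Hypothetical example of the boilerplate ChatGPT spits out;
    # the file and column names are invented. Assumes pandas.
    import pandas as pd

    df = pd.read_csv("measurements.csv")
    summary = df.groupby("batch")["defect_rate"].agg(["mean", "std", "count"])
    print(summary.sort_values("mean", ascending=False))
    print(df["defect_rate"].corr(df["line_speed"]))  # quick check, easy to verify by eye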


There are plenty of use cases where validation is easy.




