Isn't the root problem that "AI" has come to be equated with "generative AI" in the public (and investor) mind? And since genAI shows great promise but is still in its early days, there is a rush to throw more resources at it in an attempt to stave off the collapse of the inflated-expectations bubble.
The risk then is that genAI implodes and takes down all of the other AI disciplines with it.
Ultimately buyers pay for outcomes, and buyers matter much more than investors, who pay for the promise of buyers and are therefore susceptible to hype (Sequoia fawning over SBF being a juicy example). Therefore, as a seller, I would be drilling down into how my product solves a problem for the buyer, and not leading with how it gets there (AI). "Contract review in 60 seconds means you save $300 an hour on legal fees, and we have a $100M indemnity clause if we get it wrong" rather than "our AI model blah blah".
> The risk then is that genAI implodes and takes down all of the other AI disciplines with it.
The terminology would just change. Really, after the _last_ AI crash, the first time that people uttered the dread word AI (except in the context of games) was with generative AI; before that, anything a bit AI-y just got tagged 'ML'. 'ML' covers a lot of sins.
ML is a lot more accurate to what many of these current 'AI' products do anyway. The AI wave is ultimately about branding more so than any particular threshold of ability.
I think you're right - if there is a large AI crash that sours people on the term for a while, most of these products are just gonna stick around under some more marketable label.
Looking back at industry people pretending like ChatGPT was some frighteningly intelligent very-nearly-general-intelligence type of thing, you kinda wonder if they were deliberately doing a marketing stunt or just bought their own hype.
Yes, what if this "AI" boom cycle turns out to be nothing more than a next-gen spellcheck/text generator?
I hope to make generational wealth in shorting Nvidia at just the right time, when everyone realizes all at once, "Wait a minute, this AI chatbot thing kinda sucks..."
I think many non-technical people have an incorrect understanding of "AI", where they conflate a single example of what neural networks can do, namely chat bots, with an entire field.
Even if chat bots see no significant advances in the foreseeable future, "AI" in the sense of large neural networks has already proven extremely useful and able to accomplish things that are almost impossible to do otherwise.
I keep seeing this argument that there is value out there, but for normal people all we are seeing is spam and cute AI cat pics on social media. Can you please give a few examples of fields/problems where AI has been really helpful?
I work on ML networks used for industrial control systems that characterize defects in the product and determine what to do about them. Not what normal people think of when they hear "AI", but it's not really much different from what LLMs are built on.
We've been selling these systems for a decade or so now.
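For the curious, here is a minimal sketch of what that kind of system can look like in code. This is illustrative only, not our actual product: the defect classes, image size, and architecture are all made up.

```python
# Hypothetical sketch: a small CNN that classifies product images into
# defect categories. "ML, not a chatbot", as described above.
import torch
import torch.nn as nn

DEFECT_CLASSES = ["ok", "scratch", "dent", "discoloration"]  # made-up labels

class DefectClassifier(nn.Module):
    def __init__(self, num_classes=len(DEFECT_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input

    def forward(self, x):                # x: (batch, 3, 64, 64)
        x = self.features(x)
        return self.head(x.flatten(1))   # logits, one per defect class

model = DefectClassifier()
logits = model(torch.randn(1, 3, 64, 64))  # stand-in for a camera frame
print(DEFECT_CLASSES[logits.argmax(dim=1).item()])
```

The downstream "what to do about it" logic is then just plain code keyed off the predicted class.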
Generative AI chat is incredibly useful for learning, especially for tasks that require a lot of inline learning, such as software engineering. I use generative AI chat constantly, because it is basically a Stack Overflow that answers near-instantly and can debug error messages extremely well.
I don't even have to leave my editor if I install the extension in my IDE. And with an IDE extension, it can be aware of the actual context of my codebase, to the point where it can write code samples that reference my own methods and variables.
It's incredibly helpful. I've seen and used a lot of different learning techniques over the years. I started out learning coding from physical books. Then it was online bulletin boards and forums, and Google. But AI agents and chat have replaced almost all of that now. I rarely waste my time on Google anymore when I can get more relevant answers, faster, right in my IDE.
Sure, there is the occasional hallucination, but it's not much worse than terrible answers on Stack Overflow, or the junk blogs and outdated docs that Google surfaces because someone SEOed the hell out of them.
Honestly, I have found LLMs to be much inferior to my usual methods for this sort of use case. So much so that I've stopped using them.
I am fascinated by how different people have such different experiences with these systems. A study into what separates the "best thing since sliced bread" camp from the "meh" camp would be very interesting.
> I keep seeing this argument that there is value out there
I am sure that every person with significant hearing problems greatly appreciates auto-generated subtitles. Surely some people enjoy having access to voice commands. Having translation tools which are reasonably good at inferring context is a great help if you want to communicate with a person who has no shared language. The ability to do automated screening for abnormalities can definitely help in manufacturing; the same goes for medical imaging, where a computer pointing out to a doctor that something warrants a second look can be helpful. Cars being able to detect pedestrians and cyclists has surely saved many lives already.
I could go on, but this is what I mean about people conflating "chat bots" with the entire range of applications for neural networks.
Neural networks are currently the best way for a computer to infer human-like knowledge about the real world: to make distinctions and to detect things which might be hard for a human to spot.
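To make the subtitles example concrete, here is a minimal sketch using the open-source openai-whisper package (pip install openai-whisper; the audio filename is a placeholder):

```python
# Auto-generated subtitles with a pretrained speech-recognition network.
import whisper

model = whisper.load_model("base")          # small pretrained Whisper model
result = model.transcribe("interview.mp3")  # placeholder filename
for segment in result["segments"]:          # timestamped text segments
    print(f"[{segment['start']:.1f}s] {segment['text']}")
```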
You must be kidding. ALL algorithmic trading uses ML models; the VAST MAJORITY of money being made and traded right now rides on ML. Are there really people in tech today who don't understand how much ML has taken over?
Lol, you might be talking about ChatGPT specifically, which is kinda dumb.
I wonder why so many people are confused about what AI is (and what it's capable of). Could it have anything to do with the snake oil being peddled by Sam Altman and the media hype machine surrounding him?
It was only a couple years ago when Blake Lemoine was (rightly) ridiculed for seeing sentience in a chat bot. Now everyone is Blake Lemoine.
Waiting for the next AI winter to put all this nonsense to rest...
Yes, I think so too. People peddling neural networks as being the end of the world mostly just want an advertising machine for their chat bots.
Neural networks do not need to prove themselves anymore; for a vast number of problems they are the single best approach. Even if chat bots never get better from here on, neural networks aren't going away.
I agree. Maybe long covid has taken a toll on people's ability to use logic. But seriously, watching people's confusion about AI feels like watching people in 2003 get excited about bombing Iraq for "getting them back."
People in groups are INCREASINGLY and INCREDIBLY dumb.
> “AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.”
This quote is enough to dismiss the whole article.
“Unproven”? I don’t get how anyone can use LLMs and come away with that opinion. There is simply no better way to do a fuzzy search than typing a vague prompt into an LLM.
I was trying to find a movie title the other day, only remembered it had Lime in the title and had a Jack-the-Ripper setting. ChatGPT found it easily. Sure you have to fact check the results, but there’s undeniable value there.
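For what it's worth, that whole use case is only a few lines of code. A rough sketch using the OpenAI Python client (the model name is illustrative, and as noted, you still have to fact-check whatever comes back):

```python
# Fuzzy search by vague prompt: the use case described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[{
        "role": "user",
        "content": "I'm trying to remember a movie. The title contains "
                   "'Lime' and it has a Jack-the-Ripper-style setting. "
                   "What is it?",
    }],
)
print(response.choices[0].message.content)  # fact-check before trusting
```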
I was trying to remember the name of a book series I read as a child in the late 80s/early 90s. I gave ChatGPT part of a title (it had Scorpion and a few other words in the title), a few plot points, and the decade it was published, and asked for an ISBN. It confidently returned a book with Scorpion in the title, a short plot summary, published in 1983, an ISBN, even an author, and it was all entirely made up. It took me a few minutes to realize this when my searches on Amazon and library websites turned up nothing.
If I asked a job candidate any question and they confidently replied with a set of facts that were entirely made up, I would not consider them for any position under any circumstances.
I’m not sure this is the best example of the power of an LLM. Not denying there are actual use cases, but simply searching Google for “Lime movie Jack the Ripper” will display the answer in the first result (and I imagine it would have been able to do that for the past 10+ years).
I put "movie title the other day, only remembered it had Lime in the title and had a Jack-the-Ripper" in Google and the first answer is "The Limehouse Golem".
Is that it? Dunno if, behind the scenes, it was using an LLM or "classical" search.
The issue is that LLMs are constantly hallucinating and are not capable of following long-term rules. Since it's still niche, it's not a problem, but if professionals like lawyers or doctors start using it day to day, then we are in trouble. I wouldn't go as far as saying it's useless, but its effectiveness is very close to zero in most fields not related to spamming.
If you try to replace your whole job with an LLM, yes, you will have problems.
I work in IT. I use ChatGPT daily to spit out scripts, come up with function names, convert code from one technology to another, and do minor refactoring that I don't know how to do myself.
I can immediately validate the output, learn from it, and even work with techs I'm not familiar with.
Of course, I don't try to replace my whole job with it.
ChatGPT is 'good' at doing statistical analysis with Python on a given dataset, and that can help with the harder task you quoted: "distinguishing between ideas that are correct, and ideas that are plausible-sounding but wrong".
The job itself cannot be easily verified, but you can use an LLM on a verifiable subset of these tasks.
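A sketch of what I mean by a verifiable subset: run the model's draft analysis on a toy dataset where you already know the right answer before trusting it on real data (the data and the imagined prompt here are made up):

```python
import pandas as pd

# Toy dataset with a known correlation of exactly 1.0 between the columns.
df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [2, 4, 6, 8]})

# Imagine this line came back from ChatGPT in response to
# "write pandas code that reports the correlation between x and y":
llm_generated = df["x"].corr(df["y"])

assert abs(llm_generated - 1.0) < 1e-9  # sanity check before reusing the code
print(f"correlation: {llm_generated:.2f}")
```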