Well, this was already pointed out a long time ago, during the peak of the AI hype earlier this year: we tend to overestimate the effect of a technology in the short term and underestimate its impact in the long term.
One thing that is undeniable is that AI has actual productive use cases. Some previous fads had no major real applications even years after their introduction. Meanwhile AI, specifically transformer tech, has already revolutionized biotech, art, and NLP. Even if all progress on AI stopped today, people would still use AlphaFold, diffusion models, and ChatGPT for their intended purposes. So I am sure we can agree that AI isn't going down the road of NFTs or Bitcoin.
The next few years will be the optimization and mass-adoption steps. Big tech companies are probably plucking all the low-hanging fruit already. The hard parts are what make it more useful, i.e. cheaper and in a format that can be used easily. Those take time, but 15 years ago I would have been dreaming to think I could play Crysis on a handheld device. Now I can, for a measly $400. So in the next 10 years it isn't impossible to have something on the level of ChatGPT running in real time on something like a phone or smart glasses.
And that is discounting the possibility of a major breakthrough or genius application, like what the GUI was to the computer interface or the iPhone to the personal computer. Something like that can pop up and change the world as we know it.
One other thing worth mentioning is the negative impact of AI. It makes everything easier to get done, and that includes bad things as well as good things.
Did you see that AI is also being trained to run pentests? Or to create malware? That a model with almost no restrictions has been released into the wild (Mistral)? Or any other bad stuff I can't even imagine?
Do people even imagine how easy it can become to do evil things? No, because they're caught up in the AI hype.
Not everyone, but still.
Overhyped how?
Overhyped like self-driving, in terms of the timelines?
Or overhyped in terms of the eventual societal impact?
I think people seriously underestimate how much of a struggle driving down the cost of compute is going to be. Every incremental gain in memory and microprocessors is getting exponentially more expensive to develop. This is also a (relatively) high-interest-rate environment, which does not tend to favour long-term speculative investments. A lot of people aren't fully aware of how OpenAI and GitHub Copilot have been bleeding red ink from the get-go.
I honestly feel like people are still sleeping on the potential. Most people I know tried it a few times around last Christmas, got bored with it, and quit, but I never stopped using it.
Actually, I feel that utilization of existing hardware is very poor.
For instance, in LLM land, continuous batching is only just starting to proliferate (at least in open source). Many are still doing training/inference in vanilla pipelines in PyTorch eager mode, without even a simple `torch.compile` or aggressive quantization, much less a more advanced ML compiler. Architecture optimizations beyond Llama's base architecture (like newer attention schemes used during training) haven't really been picked up yet.
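To be concrete, the kind of low-effort win I mean is often a few lines. A minimal sketch, assuming PyTorch 2.x (the model and input here are just placeholders):

```python
# Two "easy wins" on top of a vanilla eager-mode model:
# graph compilation and post-training dynamic quantization.
import torch

model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8).eval()
example = torch.randn(32, 4, 512)  # (seq, batch, d_model) placeholder input

# One line: fuse kernels and skip eager-mode overhead, no model changes.
compiled = torch.compile(model)

# One more line: quantize the linear layers to int8 weights, which shrinks
# memory and speeds up CPU inference, usually with minor accuracy loss.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = compiled(example)
    out_q = quantized(example)
```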
I can go on and on, about Stable Diffusion as well, but what I am saying is that there is a ton of unpicked low-hanging fruit, left unexploited because... everyone is in a rush to just publish or deploy something, I guess?
Also, Nvidia's margins are insane. The voltage/clock band they run in is insanely inefficient. I hope that is not sustainable, because sane margins/clocks with no code/hardware advancement would bring costs down a ton.
Basically, dynamic power scales with voltage squared times clock speed, and higher clocks need higher voltage. Running a little faster takes a lot more power.
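Back-of-the-envelope, the usual model is dynamic power P ≈ C·V²·f. A tiny sketch with made-up scaling factors, just to show the shape:

```python
# Relative dynamic power under the P ~ C * V^2 * f model.
# The voltage/frequency pairs below are illustrative, not measured.
def relative_power(f_scale: float, v_scale: float) -> float:
    """Power vs. baseline when clock and voltage are scaled together."""
    return (v_scale ** 2) * f_scale

# Pushing the clock 20% higher typically needs extra voltage too:
print(relative_power(1.20, 1.10))  # ~1.45x power for 1.2x speed

# Backing the clock off 15% lets voltage drop as well:
print(relative_power(0.85, 0.92))  # ~0.72x power for 0.85x speed
```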
Nvidia runs GPUs smaller than an Apple M1 Ultra at 250-450W. That's crazy for a single chip! They could sacrifice a tiny bit of performance and run with much less power... but they don't.
The Cerebras CS-2, for reference, has a much lower power density and runs more efficiently. Unfortunately, most other AI hardware is pretty hotly clocked at the moment.
One of the best uses of generative AI is GitHub Copilot. Microsoft reportedly loses $20 per month on each GitHub Copilot user; Copilot would have to cost $40-$50 per month to cover the cost of running it.
The tech is great, but it’s expensive to run, and it’s being given away basically for free at the moment.
…and hey, free stuff is great. Enjoy it while you can!
At some point, however, companies have to stop giving things away for free (or at a loss); it’s just a matter of what that looks like under sustainable pricing.
In the end, it's the same story as automating warehouse delivery, where it turned out to be cheaper to have humans do it. AI is no different. Moreover, the recent restrictions on AI-generated content have caused inconvenience even in ordinary, socially harmless use, and expectations for AI have taken a serious hit.
CCS Insight predicts a reality check for generative AI in 2024 due to increasing costs and complexity. The hype around this technology is expected to be replaced by a more sober understanding of the challenges it presents, particularly for smaller developers. Additionally, AI regulation in the EU is predicted to stall due to rapid technological advances.
Crypto didn't and still doesn't have the same immediate utility. The value proposition just wasn't there to justify the money and attention it was getting. Bitcoin in particular was a prototype that got mythologized into being "digital gold" despite its many, many technical limitations.
Diffusion models and LLMs work today and make possible things that were science fiction five years ago, and have shown tremendous and exciting progress in the past 18 months.
I haven’t seen any effective uses of the current AI tech that couldn’t have been done for the same cost by humans so far. Images, text, code; I haven’t seen anything but toys built yet. The coding tools might work okay for your average HTTP API, but it’s not going to develop novel algorithms to control building HVAC systems to reduce energy or demand, for example. It’s not going to code much more efficient search algorithms, or faster compression. Maybe someday, but so far everything produced by AI seems to have huge problems, whether it be drawing realistic hands, knowing the factual truth of certain questions, or introducing subtle bugs in complex code.
That is because you are comparing it to the cost of a professional.
I personally look at it in a different way. Now a rando on the street who knows next to nothing can produce art rivaling an experienced illustrator's. A completely clueless wet-lab scientist can coerce Copilot or GPT-4 into cobbling together an automated data-analysis pipeline in a language they know nothing about.
To a professional, those applications are toys, easily made with little effort. But to someone who knows nothing about the work, they are amazingly useful and open up many possibilities. That is the power of, and the use case for, AI right now: tools that augment productivity rather than replace it. And in that regard, it is very successful imo.
Whether it will progress to the point where it can outright handle everything from start to finish or not is another question.
Maybe to a lay observer, but that art will not be new, very creative, or technically perfect in any way, sorry.
> lab scientist… data pipeline
No please, they already mess up statistics and code enough, causing bad papers! They don’t know how to code and thus cannot know if that code is correct.
Edit: (I’m posting “too fast” so here’s my last response here for now:)
I’ll concede on point one there, art doesn’t have to be perfect for most uses.
On point two, I think every HN reader has seen how very smart scientists can mess up stats and data even when they write their own code. I’m not saying they are dumb, I’m saying I don’t trust those same folks to be able to find the mistakes an AI makes. Obviously I’m painting with a broad brush here, not every scientist is bad at that, but a large number are, and the current gen AI isn’t trustworthy enough, in my opinion, to let untrained scientists use it and produce important work based on that data.
I would love to eat my words here someday, but this is a hype cycle and although impressive, most AI today is better for marketing and fund raising than serious use.
>To a professional, those applications are toys, easily made and take little effort.
I acknowledged as much. Nothing the AI produces will be a masterpiece, but it is serviceable. The alternative is hiring a contractor or getting an intern and spending more money for a slightly better result, which in rare cases might even be worse. Not many places are willing to pay that extra cost.
> They don’t know how to code and thus cannot know if that code is correct.
A bit elitist there. These are still highly educated scientists. They might not know how to code, but saying they can't evaluate the output of the code is a bit much. You might not know how to edit the genes in a fish, but you can tell whether the fish is glowing, right?
I'm working on integrating AI into a product right now. IMO you want to look at what's happening now as more of a shift in the cost to develop and maintain, which itself is going to create qualitative differences.
I can now have 1-2 developers stand up ML-backed services at a level of quality that a few years ago would have required an ML + engineering team to build, along with an ongoing tuning burden. Now that the AI is "good enough" out of the box, time-to-value has dropped, which also allows for more exploration.
One area where I'm seeing a lot of traction at my company and amongst other developers: onboarding flows for complex products. LLMs are really great at taking a small amount of input from or about a user, walking down a decision tree, and creating some initial dummy data relevant to them, to more quickly demonstrate value. You might not ever know ChatGPT is involved, but it's doing wonders for quite a few companies' conversion rates.
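As a rough illustration of the pattern (the model, prompt, and schema here are made-up, not any particular product's code):

```python
# Sketch: turn one line of user input into structured dummy data that
# seeds a new account. Assumes the openai v1 Python SDK and an API key
# in OPENAI_API_KEY; everything else is a placeholder.
import json
from openai import OpenAI

client = OpenAI()

def seed_workspace(user_blurb: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "You generate onboarding seed data. Reply with JSON only: "
                '{"project_name": str, "sample_tasks": [str, str, str]}')},
            {"role": "user", "content": user_blurb},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# e.g. seed_workspace("We're a 5-person bakery tracking wholesale orders")
```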
That’s great! I hope small uses of AI like this work to make us more efficient, but that doesn’t sound exactly like a societal breakthrough, to be able to sell stuff better. I’m looking for AI that can do things other than make capitalism more efficient at parting people from their money.
(Yes, I’m a negative asshole. I should probably be more open minded.)
> I’m looking for AI that can do things other than make capitalism more efficient at parting people from their money.
I think you're really under-estimating the positive, human value that can come out of what I'm describing.
If you leave the world of software companies, you'll find that a lot of humanity is wasting huge amounts of time on tasks that could easily be automated. My most recent experience was in the electric-vehicle research space: I was able to rather straightforwardly reduce testing cycles for certain key components from 1 year to 1 month through some straightforward software and collaboration with some scientists.
Most of what I accomplished could have been achieved by the scientists themselves if they had used something like Retool[0], but Retool is too sophisticated a tool for them to ramp up on. If AI could make Retool accessible to someone with the technical sophistication of a materials scientist who can write a little Python, it might greatly speed up the rate at which we advance EV technology.
The point I'm making is that making it easier to build accessible products means it's easier to distribute the positive effects of innovation to the rest of society faster. If anything, there's the potential to lower profits long term, because today creating a product that is both valuable and accessible is an incredible moat.
Just last night I was adding an OIDC provider to a website for a friend, and GPT-4 did most of the job for me. As in, I mostly cut and pasted code and filled in the actual integration with the login. It saved me time.
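For the curious, the glue code for that kind of integration looks roughly like this. A sketch with Flask and Authlib; the stack and all the names are illustrative, not necessarily what I used:

```python
# Minimal OIDC login flow. Provider URL, credentials, and routes are
# placeholders; Authlib discovers the endpoints from the metadata URL.
from flask import Flask, session, url_for
from authlib.integrations.flask_client import OAuth

app = Flask(__name__)
app.secret_key = "replace-me"  # required for the session

oauth = OAuth(app)
oauth.register(
    name="idp",
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    server_metadata_url="https://idp.example.com/.well-known/openid-configuration",
    client_kwargs={"scope": "openid email profile"},
)

@app.route("/login")
def login():
    # Send the user to the identity provider's login/consent page.
    return oauth.idp.authorize_redirect(url_for("auth", _external=True))

@app.route("/auth")
def auth():
    # Exchange the authorization code and verify the signed ID token.
    token = oauth.idp.authorize_access_token()
    session["user"] = token["userinfo"]  # claims from the verified ID token
    return f"Hello, {session['user'].get('email')}"
```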
Most development is far closer to that than to developing more efficient search algorithms or better compression, something most human developers couldn't do if their lives depended on it.
Could I have hired someone to do it? Sure, but finding someone, plus the turnaround time, would have taken longer than doing it myself, while GPT-4 spat out a solution in seconds.
How about AlphaFold? Prior methods required quite literally weeks to months on supercomputers, with results often not comparable to X-ray crystallography.
Or the use of AI for upscaling that can run in real time on low-to-mid-range GPUs.
I think you're only looking at ChatGPT, which does have its own drawbacks, but AI does not equal GPT; it doesn't even have to be generative.
AlphaFold: there are lots of caveats; from my limited understanding so far, the "thing" it does isn't the limiting factor in speeding up drug development.
Video cards: I don't know much about them. It seems impressive, but I'm not sold it's the future of gaming yet; also, they're expensive as fuck.
I am not saying this stuff will never be useful, but we're at the peak of the hype cycle today, and I expect many, many of the supposed breakthroughs will turn out to be dead ends, or harder to make reliable than expected, or way more expensive than financially viable for that problem.
I hope I'm wrong on the tech, because it would be amazing to reduce the needed human work output, but I'm also not sold that our society can survive AI taking over jobs, so that's another reason I'm bearish.