AI is suffocating everything else: innovation and productivity are slowing down in anything that isn't linked to AI. That's going to be a problem for society and the world economy.
Quite frankly, I prefer that to advertising and App Stores suffocating innovation and productivity. Hopefully being asphyxiated by AI kicks Apple and Google into high gear.
You can intentionally market the use cases without knowing exactly how they work, though. So it's deliberate investment and use-case targeting rather than designing directly for a purpose. And since the market also drives what gets measured, the systems iteratively get better at the things you pour money into.
If you just met a dog for the first time, you can't :) - my guess is LLMs are somewhere in between. It would be cool to see what happens if somebody tried to make an LLM that somehow has ethical principles (instead of guardrails) and is much less eager to please.
> The stochastic parrot LLM is driven by nothing but eagerness to please. Fix that, and the parrot falls off its perch.
I see some problems with the above comment. First, using the phrase “stochastic parrot” dismissively reflects a misunderstanding of the original paper [1]. The authors themselves do not weaponize the phrase; the paper was about deployment risks, not capability ceilings. I encourage people who use the phrase to re-read the paper, make sure they can articulate what it actually claims, and be able to distinguish that from their own usage.
Second, what does the comment mean by “fix that, and the parrot falls off the perch”? I don't know. It would need to be reframed in a concrete direction if we want to discuss it productively. If the commenter can make a claim or prediction in "if-then" form, then we'd have some basis for discussion.
Third, regarding "eagerness to please": this comes from fine-tuning (RLHF or similar). Even without that stage, LLMs have significant prediction capabilities from pretraining alone (the base model).
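For what it's worth, you can see the distinction yourself. A minimal sketch, assuming the HuggingFace transformers library and access to the (gated) Llama 3.1 checkpoints named below; any base/instruct pair and any made-up prompt would do:

    # Compare a base model with its fine-tuned "instruct" sibling.
    # Assumes: pip install transformers torch, plus access to the
    # gated meta-llama checkpoints (any base/instruct pair works).
    from transformers import pipeline

    prompt = "My business plan is to sell ice to penguins. Thoughts?"

    # Base model: pure next-token prediction from pretraining.
    # It continues the text; it doesn't try to please you.
    base = pipeline("text-generation", model="meta-llama/Llama-3.1-8B")
    print(base(prompt, max_new_tokens=60)[0]["generated_text"])

    # Instruct model: same pretraining, then fine-tuning (RLHF or
    # similar) -- the stage where "eagerness to please" comes from.
    chat = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")
    print(chat(prompt, max_new_tokens=60)[0]["generated_text"])

The base model typically just rambles onward from the prompt; the instruct model answers you, often flatteringly. Same pretraining, different fine-tuning.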
All in all, I can't tell if the comment is making a claim I can't parse and/or one I disagree with.
[1]: "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" (Bender et al., 2021)
The institutions have been doing a fine job of destroying their credibility and utility all on their own, for far longer than this new AI hype cycle.
ZIRP, Covid, anti-nuclear power, immigration crises across the West, debt enslavement of future generations to buy votes, socializing losses and privatizing gains... Nancy is a better investor than Warren.
I am not defending billionaires; the vast majority of them are grifting scum. But to lay this at their feet is the wrong level of analysis when the institutions themselves are actively working to undermine the populace for the benefit of those who are supposed to be stewards of said institutions.
It's hard for me to believe they'll put 100% of their eggs into the AI basket, even if it's insanely more profitable than consumer GPUs at the moment.
AI is simultaneously a bubble and here to stay (a bit like the "Web 1.0" bubble, IMO).
Also, importantly, consumer GPUs are still an important on-ramp for developers getting into Nvidia's ecosystem via CUDA. Software is their real moat.
There are other ways to provide that on-ramp, and Nvidia would rather rent you the hardware than sell it to you anyway, but... I dunno. Part of me says the rumors are true, part of me says they're not...
Nice tech demo, but in practice utterly annoying and without purpose. I mean, don't you think enshittifying Wiki knowledge kind of defeats the purpose of acquiring knowledge?
Deep dives with TikTok mechanics? That's not going to work; the TikTok UI was optimized for dopamine release. You want to teach junkies new things? All they learn is how to get their kicks, so outside that setting nothing will stick. When they have to apply the infotainment snippets in a real-world setting, there won't be these high-reward stimuli. I mean, if you want to hook them on feeling great while believing they've learned something...
Yeah. Tech companies are coming for our hardware. The next step is OSes with agentic AI, turning them from systems with frameworks, libraries, and apps separate from the base system into systems that only run AI models the "owner" of the hardware has no control over, where the line between the OS and the AI is very blurred.
This totally defeats the purpose of owning or using tech. Might as well go off-grid and live a non-tech life.
Big tech wants to colonize our hardware completely because data centers alone ain't cutting it.
$1 trillion has to be paid back to the investors, plus interest. They screwed up with AI and we have to pay for it. Or maybe they didn't screw up, because big money always gets bailed out by the plebs.