Hacker Newsnew | past | comments | ask | show | jobs | submit | emsign's commentslogin

AI is suffocating everyone else, it's slowing down innovation and productivity if it's not linked to AI. That's going to be a problem for society and the world economy.

Quite frankly, I prefer that to advertisement and App Stores suffocating innovation and productivity. Hopefully being asphyxiated by AI kicks Apple and Google into high gear.

> The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other.

This affordability is HEAVILY subsidized by billionaires who want to destroy institutions for selfish and ideological reasons.


I think you have misread the word “affordances”. It’s not about affordability [0]. The main text also explains what it means.

[0] https://en.wikipedia.org/wiki/Affordance


This is literally corporate textbook 101. Subsidize your product, become market leader, cause lock-in and make your customers dependant.

Every large enough corporate wants to become the new Oracle.


Given how nobody properly understands LLMs, I doubt that they are intentionally designed like that. But the effect... yeah. I can see that happening.

(By the way, are you confusing affordance, the UX concept, with affordability?)


You can intentionally market the use cases without knowing exactly how they work, though. So it's intentional investment and use case targeting, rather than directly designing for purpose. Though, the market also drives the measures...so they iteratively get better at things you pour money into.

Nobody properly understands dog brains either and yet you can still train a dog to sit.

If you just met a dog for the first time, you can't :) - my guess is LLMs are somewhere in between. It would be cool to see what happens if somebody tried to make an LLM that somehow has ethical principles (instead of guardrails) and is much less eager to please.

The stochastic parrot LLM is driven by nothing but eagerness to please. Fix that, and the parrot falls off its perch.

> The stochastic parrot LLM is driven by nothing but eagerness to please. Fix that, and the parrot falls off its perch.

I see some problems with the above comment. First, using the phrase “stochastic parrot” in a dismissive way reflects a misunderstanding of the original paper [1]. The authors themselves do not weaponize the phrase; the paper was about deployment risks, not capability ceilings. I encourage everyone people who use the phase to go re-read the paper and make sure they can articulate what the paper claims and be able to distinguish that from their usage.

Second, what does the comment mean by “fix that, and the parrot falls off the perch.”? I don't know. I think it would need to be reframed in a concrete direction if we want to discuss it productively. If the commenter can make a claim or prediction of the "If-Then" form, then we'd have some basis for discussion.

Third, regarding "eagerness to please"... this comes from fine-tuning. Even without it (RLHP or similar) LLMs have significant prediction capabilities from pretraining (the base model).

All in all, I can't tell if the comment is making a claim I can't parse and/or one I disagree with.

[1]: "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" (Bender et al., 2021)


The institutions have been doing a fine job of destroying all their credibility and utility all on their own for far longer than this new AI hype cycle.

ZIRP, Covid, Anti-nuclear power, immigration crisis across the west, debt enslavement of future generations to buy votes, socializing losses and privatizing gains... Nancy is a better investor than Warren.

I am not defending billionaires, the vast majority of them are grifting scum. But to put this at their feet is not the right level of analysis when the institutions themselves are actively working to undermine the populace for the benefit of those that are supposed to be stewards of said institutions.


LLMs can't distinguish between context and prompt. There will always be prompt injections hiding, lurking somewhere.

I don't care about AAA gaming either, it's stale but one day the AI bubble will kill something you cry about though.

Rumors have it they'll stop producing gaming GPUs all together. :(

It's hard for me to believe they'll put 100% of their eggs into the AI basket, even if it's insanely more profitable than consumer GPUs at the moment.

AI is simultaneously a bubble and here to stay (a bit like the "Web 1.0" bubble IMO)

Also, importantly, consumer GPUs are still an important on-ramp for developers getting into nVidia's ecosystem via CUDA. Software is their real moat.

There are other ways to provide that on-ramp, and nVidia would rather rent you the hardware than sell it to you anyway, but.... I dunno. Part of me says the rumors are true, part of me says the rumors are not true...


Sounds to me the actual work was done in the discussions with ChatGPT by the researchers.

It's almost as if humans were optimized to constantly move around all day by evolution. Huh. Who would have thought.

Nice tech demo but in practice utterly annoying and without purpose. I mean don't you think enshitifying Wiki knowledge kind of beats the purpose of acquiring knowledge?

It's a proof of concept to pave the way. I could see benefit in having series which deep dive into material. This felt too shallow to me.

Deep dives with TikTok mechanics? That's not going to work, that TikTok UI was optimized for dopamine release. You want to teach junkies new things? All they learn is how to get their kicks, so outside that settings nothing will stick. When they have to apply the infotainment snippets in a real world setting the won't be these high reward stimuli. I mean if you want to hook them on feeling great while believing they've learned something...

Yeah. Tech companies are coming for our hardware. Next step is OSes with agentic AI turning it from a system with frameworks and libraries with apps seperate from the base system, into a system that only runs AI models that the "owner" of the hardwre has no control over and the lines between the OS and the AI is very blurred.

This totally beats the purpose of owning or using tech. Might as well go off grid and live a non-tech life.

Big tech wants to colonize our hardware completely because data centers alone ain't cutting it.

1$ Trillion has to be paid back to the investors plus interests. They screwed up with AI and we have to pay for it. Or maybe they didn't screw up because big money always gets bailed out by the plebs.


Simple solution: Get a second phone just for banking and all the other enshitifying apps and keep it at home where it doesn't bother you.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: