Hacker News | beginning_end's comments

There's so much cool stuff these days, like interstellar iron: https://physicsworld.com/a/antarctic-snow-yields-interstella...


The disclaimer really should be much tougher: "Every LLM consistently makes mistakes. The mistakes will often look very plausible. NEVER TRUST ANY LLM OUTPUT."


> NEVER TRUST ANY LLM OUTPUT

that doesn't sound like a helpful attitude. everything you read might be wrong, llm or not - it's just a numbers game. with gpt3 i'll trust the output a certain amount; it's still useful for some tasks, but not that many. with gpt4 i'll trust the output more


LLMs are impressively good at confidently stating false information as fact, though. They use niche terminology from a field, cite made-up sources and events, and come across to a layman as convincingly knowledgeable as anyone who's actually an expert.

People are trusting LLM output more than they should be. And search engines that people have historically used to find information are trying to replace results with LLM output. Most people don't know how LLMs work, or how their search engine is getting the information it's telling them. Many people won't be able to tell the difference between the scraped web snippets Google has shown for years versus a response from an LLM.

It's not even an occasional bug with LLMs; it's practically the rule. They don't know anything, so they'll never say "I don't know" or give any indication of whether something they say is trustworthy.


at least the llm (for now) doesn't have an agenda

the top result on google is literally just the result of how hard someone worked on their seo. google results might not "hallucinate", but a company can certainly use strong seo skills to push whatever product/opinion best suits it.


But it’s correct. Without independent verification, you can never, ever trust anything that the magic robot tells you. Of course this may not matter so much for very low-stakes applications, but it is still the case.


Until this point in history there has been a reasonable correlation between the quality of writing and the "quality" of the underlying work in most text. If effort was put into the writing (and into learning to write), that in itself used to be a decent indicator that effort and skill were also put into the data/ideas that the text conveyed.

That rule no longer works. People are still going to rely on it for a while though, and I'm worried it's going to break some stuff over the next few years.


I think probably the solution here is to stop using publishing in peer reviewed journals as a job requirement for academics. Even before ChatGPT, academic publishing was already falling victim to Goodhart's law. Remove the perverse incentives and it should cut down on this by quite a lot.


With any luck the endpoint will be a world where texts are evaluated more on their semantics than their syntax.


Hoping for Kessler syndrome :)


If I understand correctly, generally not from Starlink. Its orbit is too low; the satellites and their debris tend to end up pushed into the atmosphere eventually.


a guy can dream


I've been thinking the same thing. It'll be interesting to see if we end up with prompt-injecting ads


That was a truly horrible web-page. Does anyone have a link to the actual technology?


This perspective on regulation was interesting: https://drafts.interfluidity.com/2023/12/28/how-to-regulate-...

    "Congress should declare that big-data AI models do not infringe copyright, but are inherently in the public domain.

    Congress should declare that use of AI tools will be an aggravating rather than mitigating factor in determinations of civil and criminal liability."


OpenAI and others: AI should be regulated!

Governments start regulating and companies start filing copyright lawsuits...

OpenAI: NOT LIKE THAT


1) Sigh 2) Firefox still works great though, I'm a very happy user!


I really liked the interview with the three scientists in the "Decoding the Gurus" podcast: https://podcasts.apple.com/us/podcast/interview-with-worobey...

(Interview starts roughly 33 min into the episode).


Am I the only one who thinks ads (including non-tracking/privacy-respecting ads) are universally bad and should be regulated much harder than they are today? And that they mostly help bad products and large companies?

