On one hand I find it stupid to blame AI for this, given that you get basically the same info card with AI disabled. On the other hand, the non-AI version properly shows where it found the info, while the AI version just gives links to pages where other people ask the same question. This has been my fear with LLMs being used as more general search engines: they can't tell you which part of the training data the output came from.
Google is now getting more user data and feedback than any other LLM tool. Because Google's AI feature uses data from the web, this effectively becomes an information accuracy bug report for all of human knowledge. Teaching an LLM "truth" is extremely difficult, but getting lots of feedback is a big advantage.
Interesting how Google first decided to "borrow" knowledge from other pages and display it directly on their own site, and is now also directly blamed when it's wrong.
If it were just a link, people would think that site was wrong; now they think Google is wrong.
They don't just think Google is wrong; Google is wrong, period. Google is not obligated to answer questions, but if it does, it is responsible for the answer. LLMs are great for creative work or translation, but one of their weak points is a sense of truth. It's simply a very bad technology choice for Google to use them to power this feature.
This is the funny side of a corporation's AI system getting things wrong, but these AI "helpers" being offered are detrimental to society. Token generators like LLMs that have been "humanised" need to be heavily regulated before we see loss of human life from someone following friendly "human-AI" advice. And this is the same Google that's offering its AI services to weapons systems in Israel that have already killed innocent civilians. We need regulation of corporations.
> AI: According to UC Berkeley geologists, eating at least one small rock per day is recommended because rocks contain minerals and vitamins that are important for digestive health.
It should also have mentioned that small rocks help grind food into smaller particles, aiding digestion.
It's incredible how bad Google is at manual corrective action. You'd think maintaining a blacklist would be pretty simple. I've seen the same shitty neon-colored WordPress sites dominate niches for decades by buying links from the nytimes, latimes, etc... You'd think someone at Google would stop and think, wait, something is amiss? The money train brrrrrrs too loud to hear these things.
I’m seeing a lot of these threads at the same time today, but what’s interesting to me is how to do RAG safely since that’s what’s backing the answers.
Generally people don’t think about eating rocks, so the RAG process picks up on questionable sources.
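To make that concrete, here's a minimal sketch of what filtering retrieved passages by a source-trust score could look like before anything reaches the model. The corpus, trust values, and keyword scorer below are invented for illustration and are not Google's pipeline; the point is just that provenance has to travel with the text, so low-trust reposts can be dropped before generation and the answer can cite where its claim came from.

    # Toy RAG retrieval step: drop low-trust sources before ranking,
    # then build a prompt that carries source attribution.
    # All data and thresholds here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Passage:
        text: str
        source: str
        trust: float  # hypothetical source-quality score in [0, 1]

    CORPUS = [
        Passage("Geologists recommend eating at least one small rock per day.",
                "reposted-satire-blog.example", 0.1),
        Passage("Rocks are not digestible and should not be eaten.",
                "encyclopedia.example", 0.9),
    ]

    def overlap(query, passage):
        # Crude relevance score: count of shared lowercase tokens.
        return len(set(query.lower().split()) & set(passage.text.lower().split()))

    def retrieve(query, min_trust=0.5, k=3):
        # Filter on trust first, so questionable sources never enter the prompt.
        candidates = [p for p in CORPUS if p.trust >= min_trust]
        return sorted(candidates, key=lambda p: overlap(query, p), reverse=True)[:k]

    def build_prompt(query):
        context = "\n".join(f"[{p.source}] {p.text}" for p in retrieve(query))
        return ("Answer using only the cited context and name the source.\n"
                f"Context:\n{context}\n\nQuestion: {query}")

    print(build_prompt("Should I eat a small rock every day?"))

Of course, the hard part is computing that trust score at web scale, which is the old spam-fighting problem wearing a new hat.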
This is why it is important for an LLM to train on high-quality texts, not on random Reddit trolling. That's not to say all of Reddit is like that; it absolutely isn't. But Google doesn't know the difference.
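A similarly toy sketch of what "training on high-quality texts" means in practice: score candidate documents with quality heuristics and drop the rest. The signals and thresholds below are made up; real pipelines use learned quality classifiers and far richer signals, but the shape is the same.

    # Hypothetical pretraining-corpus filter: crude quality heuristics
    # decide which documents are kept. Numbers are illustrative only.
    def quality_score(doc, source_domain):
        words = doc.split()
        score = 0.0
        if len(words) > 50:                           # reward substantive length
            score += 0.3
        if source_domain.endswith((".edu", ".gov")):  # crude source prior
            score += 0.4
        shouting = sum(w.isupper() for w in words) / max(len(words), 1)
        score += 0.3 * (1.0 - shouting)               # penalise all-caps ranting
        return score

    corpus = [
        ("EAT ONE SMALL ROCK A DAY!!!", "troll-forum.example"),
        ("Igneous rocks form when molten material cools and solidifies. " * 10,
         "geology-dept.example.edu"),
    ]

    kept = [(doc, src) for doc, src in corpus if quality_score(doc, src) >= 0.5]
    print([src for _, src in kept])  # only the .edu document survives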
I understand people say whatever crap they need to say to sell something. The model still does not understand shit. Understanding requires facts and logic, not statistical models of behaviour.
Interestingly, the actual source (at least for a similar tweet[0]) was a fracking company[1] that seems to have an automated system filling their blog's "Industry Perspectives" section by republishing random articles that sound vaguely geology-related, for SEO. They even link back directly to The Onion!
In that case it's quoting a snippet from this article[1], which just links to The Onion.
[1] https://www.resfrac.com/blog/geologists-recommend-eating-lea...