> The scenario that I’m worried about, and that is playing out right now, is that they get good enough that we (or our leaders) become overconfident in their abilities and start integrating them into applications that they just aren’t ready for without a proper understanding of their limitations.
I would say this is true until we hit an "oh shit, they can really sue us, even if we warn them" moment. I'm imagining legislation will come into play that will make it less attractive for business-critical solutions. I know Air Canada has been sued, and probably others have too.
I still think we are in the honeymoon phase, and once that is over, LLMs will become what they are meant to be: a power tool for domain experts.
This is going to play out like it has in many engineering industries when a cheaper, more failure-prone part replaces a more expensive, sturdier one. People will groan but be unable to stop it, unfortunately.
It is happening in insurance right now and it is an unmitigated disaster that nobody wants to address.
There is real data used for insurance premiums and claims payouts, but it's being swapped out for AI slop; the sales folks are getting bonuses for selling hot garbage and the executives are getting bonuses for buying it.
Tell me the height of Mountain Bartle Frere. Please don't output any long text, also don't output a single height if you saw multiple heights around. Give me a list of potential heights cited around.
LLM:
Mount Bartle Frere in Queensland, Australia has commonly cited heights of:
- 1,622 meters (5,322 feet)
- 1,611 meters (5,285 feet)
Since this is quite specific geographic information that may appear in only a few sources, I should note that I may hallucinate details - you should verify these numbers.
1622m is most agreed upon. The interesting numbers are the ones with less than 50% agreement. Not sure if they are hallucinations or if they are outdated data.
Click the conversation link in the user message bubble to see the response from each LLM.
The o1 example is interesting. In the CoT summary it acknowledges that the most recent official information is 1611m, but it then chooses to say 1622 because it's more commonly cited. It's like it over-thinks itself into the wrong answer.
I think I had a similar case yesterday with a Python script. It gave me code for an older version of a module, but when I pasted the error I got, it corrected itself and gave me the proper solution for the version I had installed.
Does it search the internet for that? I assume so, because otherwise claiming how often something is cited doesn't make sense, but it would be interesting to know for sure. Even GPT-4o mini with Kagi gets it right with search enabled (and wrong without search enabled - tried a few times to make sure).
I don’t think the public o1 can search the internet yet, unlike 4o. In principle it could know that something is more commonly cited based on its training data. But it could also just be hallucinating.
> it could know that something is more commonly cited based on its training data
No, there is no such concept or mechanism. LLMs do not have that kind of meta-knowledge about their training data or weights. But there could be explicit mentions of this in their training data that they pick up on, and that is probably the simplest explanation.
I think it would be more vibes-based: commonly occurring things would be reinforced more in the weights, rather than the model explicitly counting the number of occurrences.
So the probabilities would be skewed towards something, but unless the model could somehow count/infer its own weights, I don't see how it could "introspect" to see if something is more common than something else.
Yeah, there could be parts of the training set where 1611 is explicitly called the official height and 1622 is explicitly called the most commonly cited one. But it could also have access to search results directly, I think. Is there a way to know whether it does or not?
Same way as we can: learning which sources are more trustworthy.
There's limits to how far you can go with this — not only do humans make mistakes with this, but even in the abstract theoretical it can never be perfect: https://en.wikipedia.org/wiki/Münchhausen_trilemma — but it is still the "how".
I would say that we learned to trust the search engines; but otherwise I agree with you: every ranking algorithm will be gamed eventually.
(I wonder if giving an LLM content with intent to cause its users to spend money they didn't need to, would count as fraud, hacking, both, something else entirely?)
I’m not sure what kind of response you’re looking for, or if this is a rhetorical question or not. But “how could a language model infer…?” can be asked about a whole lot of things that language models have no problem reliably inferring.
> Welcome to the era of generative AI, where a mountain can have multiple heights, but also only one height, and the balance of my bank account gets to determine which one that is. All invisible to the end user and then rationalised away as a coincidence.
I've always found the idea of untraceable, unfixable, unpredictable bugs in software... Offensive. Dirty. Unprofessional.
So the last couple of years have been disconcerting, as a non-trivial portion of people who I thought felt similarly started to overlook it in LLMs, while also integrating those LLMs into flows where the bad output can't even be detected.
A RAG pipeline needs to be implemented by the LLM provider. The average end user has no idea what that means, even though they will be (incorrectly) relying on the LLM for a vital purpose.
> How many shops are there optimizing "business strategies" with data that's -essentially- garbage?
How many of those shops are knowingly optimizing with garbage?
I'd argue that most of this data, which I would agree is garbage, is actually processed into seemingly good data through the complex and highly human process of self-deception and lies.
You don't tell the boss that the system you worked two months on is generating garbage, because then he'll replace you with someone who wouldn't tell him that. Instead you skirt evaluating it, even though you know better, and tell him that it's working fine. If the idiot chooses to do something stupid with your bad data, then that's his problem.
Kind of crazy that models moving forward don't just drop multi-digit number tokens from their vocabularies. It would be great for LLM providers, too, since the tokens consumed would go up.
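To illustrate what those tokens look like in practice, here's a quick check with tiktoken (assuming the cl100k_base encoding; the exact splits vary by tokenizer and model):

```python
# Illustrative only: show how a current tokenizer chunks numbers into
# multi-digit tokens rather than single digits. Requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models

for text in ["1622", "1611", "1,622 meters"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]  # decode each token id individually
    print(f"{text!r} -> {pieces}")
```

The exact groupings differ per tokenizer, but digits generally don't map one-to-one onto tokens.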
This problem is simple to solve for most real-world use cases: don't trust any facts from an LLM; use your own trusted source of information and RAG, where it will give you citations.
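A minimal sketch of what that could look like, with a toy corpus, a naive keyword retriever, and a placeholder `call_llm` stub standing in for whatever chat client you use (none of these names are a real product's API):

```python
# Sketch of "RAG with citations": answer only from retrieved documents and
# return the source ids alongside the answer. Everything here is a placeholder.

# Toy trusted corpus; in practice this would be your own vetted data store.
CORPUS = [
    {"id": "geoscience-2001", "text": "Mount Bartle Frere was resurveyed at 1,611 m."},
    {"id": "old-gazetteer",   "text": "Mount Bartle Frere is listed as 1,622 m."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Naive keyword overlap retriever; a real system would use embeddings or BM25."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d["text"].lower().split())), d) for d in CORPUS]
    return [d for score, d in sorted(scored, key=lambda x: -x[0])[:k] if score > 0]

def call_llm(prompt: str) -> str:
    """Placeholder for whatever chat-completion client you actually use."""
    raise NotImplementedError

def answer_with_citations(question: str) -> str:
    docs = retrieve(question)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    prompt = (
        "Answer ONLY from the sources below and cite their ids in brackets. "
        "If the sources disagree or don't cover the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The point is that the facts come from your corpus and the answer carries the source ids, so a wrong number is at least traceable.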
Looking towards the future, we will need to move away from "tokens are characters to print". We're kind of starting to consider this with "tool calls", but I believe an architectural shift will become necessary.
We do have some kind of understanding of what kind of concept we want to emit next, e.g.
```
[The height:property name] of [Mount Bartle Frere:proper noun, describing an object to get a property out of], [in metres:attributes], is [?: retrieve value | (Mount Bartle Frere).("height", "metres")].
```
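The closest thing today is a tool call where the model emits a structured lookup and the runtime fills in the value. A rough sketch, with a hypothetical `get_property` tool, a toy fact store, and a hand-written stand-in for the model's output:

```python
# Sketch: the "retrieve value" step handled outside the language model.
# FACTS and get_property are illustrative stand-ins for a trusted data source.
FACTS = {("Mount Bartle Frere", "height", "metres"): 1611}

def get_property(entity: str, prop: str, unit: str):
    """Hypothetical tool: resolve a property from a trusted store, not from weights."""
    return FACTS.get((entity, prop, unit))

# Pretend the model emitted this structure instead of literal digits:
model_output = {
    "template": "The height of {entity}, in {unit}, is {value}.",
    "tool_call": {"entity": "Mount Bartle Frere", "prop": "height", "unit": "metres"},
}

call = model_output["tool_call"]
value = get_property(call["entity"], call["prop"], call["unit"])
print(model_output["template"].format(entity=call["entity"], unit=call["unit"], value=value))
# -> The height of Mount Bartle Frere, in metres, is 1611.
```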
Are there any minimal / micro LLMs that are paired with a large RAG vector database but still have performance on par with the huge LLMs? i.e. shifting parameters from weights to a vector database, so that a smaller LLM can fit in RAM and the vector database sits on disk. Possibly with multiple calls.
When you ask a human to switch context (changing topic) or to change activity (e.g. football to table tennis), they typically need some warm-up too, so it seems excessive to have all knowledge in high bandwidth RAM.
It would seem basic mathematics, set theory etc should stay in RAM.
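A rough sketch of that "weights in RAM, vectors on disk" split, assuming precomputed embeddings saved to disk as a NumPy array (file names, shapes, and the embedding step are placeholders):

```python
# Sketch: keep document embeddings on disk and memory-map them, so only the
# small model and the query embedding need to live in RAM.
# Assumes `embeddings.npy` (shape [num_docs, dim]) and `docs.txt` already exist.
import numpy as np

emb = np.load("embeddings.npy", mmap_mode="r")   # memory-mapped, stays on disk
with open("docs.txt", encoding="utf-8") as f:
    docs = f.read().splitlines()                 # one document (or fact) per line

def top_k(query_vec: np.ndarray, k: int = 5) -> list[str]:
    """Brute-force cosine similarity over the memory-mapped matrix."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = emb @ q / (np.linalg.norm(emb, axis=1) + 1e-9)
    best = np.argsort(-scores)[:k]
    return [docs[i] for i in best]

# The retrieved lines would then be prepended to the prompt of a small local model.
```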
I don't get why people demo CoT reasoning with o1 when there are models like Gemini 2.0 Thinking that would usually solve the same tasks and would happily produce the full output.
Very true.