The LLM can't, that's what makes it relatively difficult. The tokenizer can.
Run it through your head with character level tokenization. Imagine the attention calculations. See how easy it would be? See how few samples would be required? It's a trivial thing when the tokenizer breaks everything down to characters.
Consider the amount and specificity of training data required to learn spelling 'games' using current tokenization schemes. Vocabularies of 100,000 plus tokens, many of which are close together in high dimensional space but spelled very differently. Then consider the various data sets which give phonetic information as a method to spell. They'd be tokenized in ways which confuse a model.
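To make that concrete, here's a rough sketch of the contrast (assuming tiktoken is installed; the exact BPE split depends on the vocabulary):

    import tiktoken

    word = "strawberry"

    # BPE view: the model sees a couple of opaque multi-character token ids,
    # so "how many r's" has to be learnt as a fact about those ids.
    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode(word)
    print([enc.decode([i]) for i in ids])

    # Character-level view: the spelling IS the input sequence, and counting
    # r's is just attending to the matching positions.
    chars = list(word)
    print(chars)
    print(chars.count("r"))  # 3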
Look, maybe go build one. Your head will spin once you start dealing with the various types of training data and how different tokenization changes things. It screws up spelling, math, code, technical biology material, financial material. I specifically build models for financial markets and it's an issue.
> I specifically build models for financial markets and it's an issue.
Well, as you can verify for yourself, LLMs can spell just fine, even if you choose to believe that they are doing so by black magic or tool use rather than learnt prediction.
So, whatever problems you are having with your financial models aren't because they can't spell.
You seem to think that predicting s t -> s t is easier than predicting st (single token) -> s t.
Of all the incredible things that LLMs can do, why do you imagine that something so basic is challenging to them?
In a trillion token training set, how few examples of spelling are you thinking there are?
Given all the specialized data that is deliberately added to training sets to boost performance in specific areas, are you assuming that it might not occur to them to add coverage of token spellings if it was needed?!
Why are you relying on what you believe to be true, rather than just firing up a bunch of models and trying it for yourself?
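It's a five-minute test. A minimal sketch with the OpenAI Python client (the model name is just an example; use whichever models and providers you like):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    for word in ["strawberry", "bookkeeper", "onomatopoeia"]:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; swap in whatever model you have access to
            messages=[{"role": "user",
                       "content": f"Spell '{word}' letter by letter, then count the r's."}],
        )
        print(word, "->", resp.choices[0].message.content)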
> You seem to think that predicting s t -> s t is easier than predicting st (single token) -> s t.
Yes, it is significantly easier to train a model to do the first than the second across any real vocabulary. If you don't understand why, maybe go back to basics.
No, because it still has to learn what to predict when "spelling" is called for. There's no magic just because the predicted token sequence is the same as the predicting one (+/- any quotes, commas, etc).
And ...
1) If the training data isn't there, it still won't learn it
2) Having to learn that the predictive signal is a multi-token pattern (s t) vs a single token one (st) isn't making things any simpler for the model.
Clearly you've decided to go based on personal belief rather than actually testing for yourself, so the conversation is rather pointless.
You are going to find that for 1), with character-level tokenization you don't need data covering every token for the model to learn this. With current tokenization schemes you do, and it still goes haywire from time to time when tokens that are close in embedding space are spelled very differently.
I don't doubt that training an LLM, and curating a training set, is a black art. Conventional wisdom was that up until a few years ago there were only a few dozen people in the world who knew all the tricks.
However, that is not what we were discussing.
You keep flip-flopping on how you think these successfully trained frontier models are working and managing to predict the character-level sequences represented by multi-character tokens... one minute you say it's due to having learnt from an onerous amount of data, and the next you say they must be using a split function (if that's the silver bullet, then why are you not using one yourself, I wonder).
Near the top of this thread you opined that failure to count r's in strawberry is "Because they can't break down a token or have any concept of it". It's a bit like saying that birds can't fly because they don't know how to apply Bernoulli's principle. Wrong conclusion, irrelevant logic. At least now you seem to have progressed to (on occasion) admitting that they may learn to predict token -> character sequences given enough data.
If I happen into a few million dollars of spare cash, maybe I will try to train a frontier model, but frankly it seems a bit of an expensive way to verify that if done correctly it'd be able to spell "strawberry", even if using a penny-pinching tokenization scheme.
Nope, the right analogy is: "it's like saying a model will find it difficult to tell you what's inside a box because it can't see inside it". Shaking it, weighing it, measuring if it produces some magnetic field or whatever is what LLMs are currently doing, and often well.
The discussion was around the difficulty of doing it with current tokenization schemes vs character level. No one said it was impossible. It's possible to train an LLM to do arithmetic with decent-sized numbers - it's difficult to do it well.
You don't need to spend more than a few hundred dollars to train a model to figure something like this out. In fact, you don't need to spend any money at all. If you are willing to step through a small model layer by layer, it's obvious.
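For instance, something along these lines with the transformers library lets you dump the per-layer attention over a spelled-out word and eyeball what it's doing (gpt2 here is just a stand-in for whatever small model you want to step through):

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Any small causal LM will do; gpt2 is just a convenient example.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2", output_attentions=True)
    model.eval()

    prompt = 'Spell "strawberry": s t r a w b e r r y'
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)

    # out.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer;
    # step through them and look at how the spelled-out characters get attended to.
    for layer_idx, att in enumerate(out.attentions):
        print(f"layer {layer_idx}: attention shape {tuple(att.shape)}")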
At the end of the day you're just wrong. You said models fail to count r's in strawberry because they can't "break" the tokens into letters (i.e. predict letters from tokens, given some examples to learn from), and seem entirely unfazed by the fact that they in fact can do this.
Maybe you should tell Altman to put his $500B datacenter plans on hold, because you've been looking at your toy model and figured AGI can't spell.
Maybe go back and read what I said rather than make up nonsense. 'Often fail' isn't 'always fail'. And many models fail the strawberry example - that's why it's famous. I even laid out some training samples of the type that enables current models to succeed at spelling 'games', in a fragile way.
Being problematic and fragile at spelling games, compared to using character- or byte-level 'tokenization', isn't a giant deal. These are largely "gotchas" that don't reduce the value of the product materially. Everyone in the field is aware. Hyperbole isn't required.
Someone linked you to one of the relevant papers above... and you still contort yourself into a pretzel. If you can't intuitively get the difficulty posed by current tokenization, and how character/byte-level 'tokenization' would make those things trivial (albeit with a tradeoff that doesn't make it worth it), maybe you don't have the horsepower required for the field.
"""
While current LLMs with BPE vocabularies lack direct access to a token’s characters, they perform well on some tasks requiring this information, but perform poorly on others. The models seem to understand the composition of their tokens in direct probing, but mostly fail to understand the concept of orthographic similarity. Their performance on text manipulation tasks at the character level lags far behind their performance at the word level. LLM developers currently apply no methods which specifically address these issues (to our knowledge), and so we recommend more research to better master orthography. Character-level models are a promising direction. With instruction tuning, they might provide a solution to many of the shortcomings exposed by our CUTE benchmark.
"""