Congrats! I went to your demo and asked for words that end in agi. This is what I got:
--
agi, agi, agi, agi, agi, agi, agi
These are some of the words that end in agi. You can also use the word agi in a sentence. For example, "I am going to the grocery store to get some agi."
These are some of words that end in agi.
These are some words that end in agi.
maximize, maximize, maximize, maximize, maximize, maximize, maximize, maximize
These are some words that ends in agi
--
So I think this needs more work to get to "as good as ChatGPT". But having said that, congrats on the launch.
It's a fair criticism, and ChatGPT does better, but this isn't a great test of model quality. All LLMs that rely on tokenization struggle with being introspective about language. Try asking ChatGPT to count how many e's are in a sentence, or to list all words that start with "to" and end with "de".
I haven't heard anyone describe the phenomenon clearly, but I expect it is a challenge with reasoning over both intent of the prompt and specific token IDs.
You can't ask ChatGPT to count something and expect that it can answer correctly, because it does not have counting logic. It is a language model, not a math model. People use this to "prove" hallucinations, but when you ask it something that is within its programmed abilities, you get something at least close to what you want.
Having said that, here are the words ChatGPT gave me for the same prompt:
It's true that ChatGPT is not designed for counting and struggles with it in general.
But my point was that ChatGPT, like any tokenized LLM, doesn't even have the concept of letters. The prompt "how many e's in this sentence" is rendered as the tokens [4919, 867, 304, 338, 287, 428, 6827]. There just isn't a pathway for it to consider the letters that make up those tokens.
I'm a little surprised it did that well on your prompt, which is rendered as [10919, 2456, 886, 287, 556, 72]. The interesting thing here is that 556 = " ag" (with leading space) and 72 = "i". So I'm not sure how it got to those words. "Wagagi" is tokens [54, 363, 18013], so somehow it is seeing that token 18013 is what you get when you combine 556 and 72? That seems really weird.
I'd love clarification from someone deeper into LLMs and tokenization.
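For anyone who wants to poke at this directly, OpenAI's tiktoken library makes it easy to see exactly what the model receives. A minimal sketch; the exact IDs depend on which encoding you pick, and the IDs quoted above look like the older GPT-2-style vocabulary:

  # Rough sketch of how a prompt becomes token IDs before the model sees it.
  import tiktoken

  enc = tiktoken.get_encoding("r50k_base")  # GPT-2-era encoding; GPT-3.5/4 use cl100k_base

  prompt = "what words end in agi"
  ids = enc.encode(prompt)
  print(ids)                             # a short list of integers, not letters
  print([enc.decode([i]) for i in ids])  # the text chunk behind each ID

  # The model only ever receives the integer IDs; the letters that make up
  # each chunk are not part of its input.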
In a prompt, can you just tell the model which letters make up each token? E.g. a list like "ag = a g", and so on. I imagine a dictionary of that for all tokens in the training data would help.
Maybe? Individual letters are tokens, so you could say something like 3128 = 56 + 129, but the problem is that 3128 is processed as text, not as the integer token ID. So the tokenizer would turn 3128 into a series of tokens.
Intuitively I think there's an abstraction barrier there, but I'm not positive. It feels like asking us to list all of the words that trigger particular neurons.
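For what it's worth, the "dictionary of letters per token" can at least be generated mechanically outside the model. A minimal sketch, assuming tiktoken; whether feeding it back in as text actually helps is the open question, since the spelled-out list itself gets re-tokenized:

  # Build a letters-per-token listing outside the model.
  import tiktoken

  enc = tiktoken.get_encoding("r50k_base")

  def spell_out(token_id: int) -> str:
      """Return the token's text with letters separated; a leading space shows as '_'."""
      text = enc.decode([token_id])
      return " ".join("_" if ch == " " else ch for ch in text)

  prompt = "what words end in agi"
  for tid in enc.encode(prompt):
      print(tid, "->", repr(enc.decode([tid])), "=", spell_out(tid))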
This needs a citation. These are not the same thing. It will get numerical references right if they appear in the sources used to train the model, but it isn't doing any numerical calculations.
Just look at any of the papers that put models through mathematical benchmarks. The model isn't memorizing these problems. For example, I just generated two random 64-bit integers and asked ChatGPT to add them:
"6769545085823578960 + 16027170449476717488"
ChatGPT said the answer is 22796715535300296448. It got the correct answer even though the problem wasn't in its training data.
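For anyone who would rather check than take either side's word for it, the sum above is easy to verify, since Python integers have arbitrary precision:

  # Quick check of the sum quoted above.
  a = 6769545085823578960
  b = 16027170449476717488
  print(a + b)                          # 22796715535300296448
  print(a + b == 22796715535300296448)  # True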
Yep, as always, people (and LLMs) take stuff for granted because they read it somewhere months ago. That’s why we are doomed; everyone believes anything without question if it’s not against their personal agenda.
This needs a citation :) It does do numerical calculations, at least in GPT-4 mode; I tested it. It can do simple arithmetic, and it even has a sort of 'imagination', or the impression of one. I asked it to imagine a room with 4 colored balls at the corners, then asked about the view angles between some pairs of balls as seen from the center of the room and from the other balls. It gave the answers with explanations.
This doesn't mean it's always correct, or can be trusted without verification.
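The nice thing about the balls-in-the-corners question is that the expected answers are easy to check. A small sketch under the natural reading (a square room viewed from above, one ball at each corner, viewer at the center); the coordinates are assumed for illustration:

  # Sanity check for the "colored balls in the corners of a room" question.
  import math

  def view_angle(viewer, p, q):
      """Angle in degrees between the directions from `viewer` to p and to q."""
      v1 = (p[0] - viewer[0], p[1] - viewer[1])
      v2 = (q[0] - viewer[0], q[1] - viewer[1])
      cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
      return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

  A, B, C, D = (0, 0), (1, 0), (1, 1), (0, 1)  # corners of a unit square room
  center = (0.5, 0.5)

  print(round(view_angle(center, A, B)))  # adjacent corners from the center: 90
  print(round(view_angle(center, A, C)))  # opposite corners from the center: 180
  print(round(view_angle(A, B, D)))       # from the ball at corner A: 90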
I can feed ChatGPT code that does calculations (and I have) and have it work out the right answers. It also gets them wrong a lot, so it's not good at this, but any notion that it can't do numerical calculations is easy to disprove.
On the last list, the only word that does not comply with the constraints (having 3 'e's) is "Demeanor", which has only 2. Not great but also not as horrible as you make it sound.
It's not a character-based model (likely - although it's closed source, so anything is technically possible behind the scenes), so this makes some sense.
The system can infer some relationships, which may be why 'agy' is conflated with 'agi', interestingly, but the tokenization process yields sequences of 'symbols' or indexes that are decoded to English - so the system has a more difficult task when asked about 'e's (probably something like token 4893) and has to determine which tokens (e.g. [358, 284840, 58292, 4830104, 57282, 4829193, 58282, 384, 24945]) contain 'e's or token 4893.
None of them do directly, it seems - but 58292 may be 'ee' - so you would get this wrong as well.
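The letter-level check is trivial outside the model, because the tokenizer knows what text each ID stands for; that's exactly the information the model itself never receives. A small sketch with tiktoken (the token IDs in the comment above look illustrative rather than real):

  # Counting e's from the decoded text of each token, outside the model.
  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/4-era models

  sentence = "Try asking it to count how many e's are in a sentence."
  ids = enc.encode(sentence)
  e_count = sum(enc.decode([i]).count("e") for i in ids)
  print(e_count)  # computed from the tokenizer's text, not by the model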
The problem is that these models do not have any working memory they could use to carry out such tasks, which are on a meta-level when seen from a language perspective. They can only go with their 'gut instinct' for selecting the next word, they can't 'consider and ponder the problem internally' first.
The problem is that the input is tokenized before the model gets it as input. It does not see the individual letters "t" + "o". It gets one single token, #1462. The word "toe" is another single token, #44579. Maybe over time it could learn from context that inputs that start with #44579 also satisfy the constraint of starting with #1462, but that's a lot of work and it's not going to happen for all combinations of letters.
Perhaps prompting the model to first describe its approach to answering the question would help. This type of chain-of-thought technique can yield better results.
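One concrete way to apply that chain-of-thought idea to letter-level questions is to have the model spell the words out first, so the individual letters become explicit tokens in its own output before it counts them. A rough prompt template, no API call, just the string:

  # Chain-of-thought-style prompt for letter-level questions (illustrative only).
  def letter_count_prompt(sentence: str, letter: str) -> str:
      return (
          f"Step 1: Spell out every word in the sentence below, one letter at a time.\n"
          f"Step 2: Count how many times the letter '{letter}' appears in your spelling.\n"
          f"Step 3: State the final count.\n\n"
          f"Sentence: {sentence}"
      )

  print(letter_count_prompt("How many e's are in this sentence?", "e"))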
yeah, as usual these models can barely sustain a conversation and fall apart the moment actual instructions are given. typical prompt they fail to understand:
"what is pistacchio? explain the question, not the answer."
all these toy LLMs: "pistacchio is..."
gpt is the only one that consistently understands these instructions: "The question "what is pistachio?" is asking for an explanation or description of the food item..."
this makes these LLMs basically useless for obtaining anything but hallucinated data.
Asking LLMs about things they learned in training mostly results in hallucinations, and in general leaves you unable to detect how much they are hallucinating: these models are unable to reflect on their output, and average output token probability is a lousy proxy for confidence-scoring their results.
On the other hand, no amount of prompt engineering seems to make these LLMs able to do question and answer over source documents, which is the only realistic way factual information can be retrieved.
You're welcome to bring examples of it tho if you're so confident.
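For reference, here is roughly what "question and answer over source documents" usually means in practice: retrieve the most relevant passages and make the model answer from them rather than from memory. A minimal sketch using the sentence-transformers library; the model name and example passages are placeholders, not anything from this thread, and how faithfully a given model follows the final instruction is exactly the point in dispute above:

  # Naive retrieve-then-answer sketch: find the closest passage, stuff it into the prompt.
  from sentence_transformers import SentenceTransformer, util

  passages = [
      "Lamini is an LLM engine for customizing foundation models.",
      "Tokenizers map text to integer IDs before the model sees it.",
      "Pistachio is an edible seed commonly used in desserts.",
  ]

  embedder = SentenceTransformer("all-MiniLM-L6-v2")
  passage_emb = embedder.encode(passages, convert_to_tensor=True)

  question = "What is a pistachio?"
  question_emb = embedder.encode(question, convert_to_tensor=True)

  scores = util.cos_sim(question_emb, passage_emb)[0]
  best = passages[int(scores.argmax())]

  prompt = f"Answer using only this passage:\n{best}\n\nQuestion: {question}"
  print(prompt)  # this prompt would then be sent to whichever LLM you're using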
I've had ChatGPT build a fully functioning website, write a DNS server, fill in significant portions of specs, all without the problems you describe. I'm never going back to doing things from scratch - it's saving me immense amounts of time every single day. The only reasonable conclusion is that the way you're prompting it is counterproductive.
You can get useful results out of a whole lot of them as long as you actually prompt them in a way suitable for the models. The point I made originally was that if you just feed them an ambiguous question, then sure, you will get extremely variable and mostly useless results out. Ironically,
And I mentioned ChatGPT because from context of your comments here it was unclear on first read-through what you meant. Maybe consider that it's possible your prompting is not geared for the models you've tried.
Not least because if you expect a model to know how to follow instructions when most of them have not been through RLHF, you're using them wrong. A lot of them need prompts shaped as a completion, not a conversation.
I have nothing to gain from spending time testing models for you, because whatever I pick will just seem like cherry-picking to you, and it doesn't matter to me whether or not you agree on the usability of these models. They work for me, and that's all that matters to me. Try a few completions instead of a question. Or don't.
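To make the completion-vs-conversation point concrete, here is the kind of reshaping meant above; the example prompts are made up for illustration:

  # "Prompt shaped as a completion, not a conversation":
  # for a base model that hasn't been instruction-tuned, frame the text so the
  # answer is the natural continuation rather than asking a question.
  instruction_style = "List some English words that end in 'ight'."

  completion_style = (
      "Here are ten English words that end in 'ight':\n"
      "1. night\n"
      "2. light\n"
      "3."
  )
  # A base model is far more likely to keep extending the numbered list in the
  # second prompt than to "answer" the first one.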
Proper nouns are words. That answers the prompt, right? There was no mention of "common" in the prompt. If there was, the list of words I got with the same prompt on ChatGPT would have been a lot shorter.
Then the word rhamanagagi (which I just made up) would technically belong on the list just fine, but it definitely doesn't answer the implicit intent of the question.
The strength of LLMs is their ability to answer imprecisely specified questions by guessing the speaker's intent, but in this particular case, it's failing that test.
I’m super excited to announce Lamini, the LLM engine that gives every developer the superpowers that took the world from GPT-3 to ChatGPT!
I’ve seen a lot of developers get stuck after prompt-tuning for a couple days or after fine-tuning an LLM and it just gets worse—there’s no good way to debug it. I have a PhD in AI from Stanford, and don’t think anyone should need one to build an LLM as good as ChatGPT. A world full of LLMs as different & diverse as people would be even more creative, productive, and inspiring.
That’s why I’m building Lamini, the LLM engine for developers to rapidly customize models from amazing foundation models from a ton of institutions: OpenAI, EleutherAI, Cerebras, Databricks, HuggingFace, Meta, and more.
Here’s what Lamini does for you:
Your LLM outperforms general-purpose models on your specific use case
You own the model, weights and all, not us (if the foundation model allows it, of course!)
Your data helps the LLM, and builds you an AI moat
Any developer can do it today in just a few lines of code
Commercial-use-friendly with a CC-BY license
We’re also releasing several tools on Github:
Today, you can try out our hosted data generator for training your own LLMs, weights and all, without spinning up any GPUs, in just a few lines of code from the Lamini library. https://github.com/lamini-ai/lamini/
Sign up for early access to the training module that took the generated data and trained it into this LLM, including enterprise features like virtual private cloud (VPC) deployments. https://lamini.ai/contact
I'm confused: what are you actually offering? Does my fine-tuning data get shared with your platform? Does the model get fine-tuned on your end or on my own system? Do you host the model?
I've been playing a bit with stacking transformer adapters to add knowledge to models and so far it has met my needs. It doesn't have the same illusion of intelligence, but so far it's just as good as a multitasking intern, so I am still having fun with it. I wonder if this is basically doing the same thing.
Interesting. Do you know if this can be done with Sentence Transformers, too? Picking a good performing one from HF. Then training an adapter for the domain (unsupervised). Then adding another one using actual training triplets (base, similar, non-similar)?
Thank you. Peft and adapters seem to be two different things though, no? AFAIK there are other libraries for adapters (forgot the name). Is peft what you were talking about when you said adapters in your original comment?
I was under the impression that LoRA was one flavor of adapters, but I am still learning, so I may be wrong. I haven't gotten too deep into other transformer adapters yet (still reading).
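For what it's worth on the PEFT-vs-adapters question: LoRA is indeed one adapter-style method among several, and Hugging Face's peft library is one implementation of it. A minimal sketch of attaching a LoRA adapter to a base causal LM, assuming the transformers and peft libraries; the base model name is just an example:

  # Attach a LoRA adapter to a base model; only the small adapter weights train.
  from transformers import AutoModelForCausalLM
  from peft import LoraConfig, TaskType, get_peft_model

  base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-410m")

  lora = LoraConfig(
      task_type=TaskType.CAUSAL_LM,
      r=8,             # rank of the low-rank update matrices
      lora_alpha=16,   # scaling factor
      lora_dropout=0.05,
  )

  model = get_peft_model(base, lora)
  model.print_trainable_parameters()  # reports how few parameters are trainable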
GPT at this point is more than an LLM; it is a baseline layer of logic built on the underlying transformer technology. This will be challenging to replicate without the same size of data sets.