So, as a contrived example, with RAG you make some queries, in some format, like “Who is Sauron?”, and then start feeding in the books he’s mentioned in, paragraphs describing him from Tolkien’s books, things he has done.
Then you start making more specific queries? How old is he, how tall is he, etc.
And the game is you run a “questionnaire AI” that can look at a blob of text, and you ask it “what kind of questions might this paragraph answer”, and then turn around and feed those questions and text back into the system.
Is that a 30,000 foot view really of how this works?
The third paragraph missed the mark, but the previous ones are in the right ballpark.
You take the user's question and either embed it directly or augment it for embedding (you can, for example, use an LLM to extract keywords from the question), query the vector db containing the data related to the question, and then feed it all to the LLM as: here is a question from the user and here is some data that might be related to it.
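A rough sketch of that flow in Python, with the embedding model, vector-db lookup, and LLM call left as placeholders you'd wire up to whatever stack you actually use (none of these names are from a specific library):

```python
from typing import Callable

def rag_answer(
    question: str,
    embed: Callable[[str], list[float]],              # your embedding model
    search: Callable[[list[float], int], list[str]],  # your vector-db lookup
    generate: Callable[[str], str],                   # your LLM call
    top_k: int = 3,
) -> str:
    """Embed the question, fetch related chunks, and hand both to the LLM."""
    query_vec = embed(question)
    chunks = search(query_vec, top_k)
    prompt = (
        "Here is a question from the user and here is some data "
        "that might be related to it.\n\n"
        "Data:\n" + "\n".join(chunks) + "\n\n"
        "Question: " + question
    )
    return generate(prompt)
```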
Essentially you take any decent model trained on factual information regurgitation, or really any decently well-rounded model, a Llama 2 variant or something.
Then you craft a prompt for the model along the lines of "you are a helpful assistant, you will provide an answer based on the provided information. If no information matches simply respond with 'I don't know that'".
Then, you take all of your documents and divide them into meaningful chunks, i.e. by paragraph or something. Then you take these chunks and create embeddings for them. An embedding model is a different kind of model (not an LLM) that generates vectors for strings of text, often based on how similar the words are in _meaning_. I.e. if I generate embeddings for the phrase "I have a dog" it might (simplified) be a vector like [0.1, 0.2, 0.3, 0.4]. This vector can be seen as representing a point in a multidimensional space.

What an embedding model does with word meaning is something like this: if I want to search for "cat", that might embed as (again, simplified) a vector [0.42]. Now, say we want to search for the query "which pets do I have": first we generate embeddings for this phrase, and the word "pet" might be embedded as [0.41]. Because the embedding is based on trained meaning, the vectors for "pet", "cat", and "dog" will all end up close together in our multidimensional space. We can choose how strict we want to be with this search (basically a limit on how close the vectors need to be to each other in space to count as a match).
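To make the "close together in space" idea concrete, here's a toy cosine-similarity check. The vectors below are invented purely for illustration; a real embedding model outputs hundreds of dimensions:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means 'pointing the same way' (similar meaning), near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 4-dimensional "embeddings", just to show the mechanics.
dog = np.array([0.10, 0.20, 0.30, 0.40])   # chunk: "I have a dog"
pet = np.array([0.12, 0.19, 0.28, 0.41])   # query word "pet" lands nearby
car = np.array([0.90, 0.05, 0.10, 0.02])   # unrelated meaning, far away

print(cosine_similarity(dog, pet))  # ~0.99 -> counts as a match
print(cosine_similarity(dog, car))  # ~0.28 -> filtered out by a strictness threshold
```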
The next step is to put this into a vector database, a db designed with vector search operations in mind. We store each chunk, the part of the file it's from, and that chunk's embedding vector in the database.
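As one concrete option (just an example, any vector db works the same way), Chroma's quickstart-style API lets you add chunks plus metadata and have the embeddings generated for you; double-check the current docs since the API may have shifted:

```python
import chromadb  # pip install chromadb

client = chromadb.Client()
collection = client.create_collection(name="notes")

# Each chunk is stored with the file it came from; the collection's default
# embedding function generates the vector for each document.
collection.add(
    documents=["I have a dog", "my dog likes steak", "my dog's name is Fenrir"],
    metadatas=[{"source": "notes.txt"}] * 3,
    ids=["chunk-1", "chunk-2", "chunk-3"],
)

results = collection.query(query_texts=["which pets do I have?"], n_results=2)
print(results["documents"][0])  # the closest chunks, ranked by vector distance
```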
Then, when the LLM is queried, say "which pets do I have?", we first generate embeddings for the query, then we use the embedding vector to query our database for things that match closely enough in space to be relevant, but loosely enough that we get "connected" words. This gives us a bunch of our chunks ranked by how close each chunk's vector is to our query vector in the multidimensional space. We can then take the n highest-ranked chunks, concatenate their original text, and prepend this to our original LLM query. The LLM then digests this information and responds in natural language.
So the query sent to the LLM might be something like: "you are a helpful assistant, you will provide an answer based on the provided information. If no information matches simply respond with 'I don't know that'
Information:I have a dog,my dog likes steak,my dog's name is Fenrir
User query: which pets do I have?"
All under "information" is passed in from the chunked text returned from the vector db. And the response from that LLM query would ofc be something like "You have a dog, its name is Fenrir and it likes steak."
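Here's that worked example end-to-end as a runnable toy: fake one-number "embeddings", rank chunks by distance to the query vector, take the top n, and build exactly that prompt. The embedding values are invented; in practice you'd swap in a real embedding model and a real LLM call:

```python
# Toy end-to-end pipeline with invented one-dimensional "embeddings".
chunks = [
    ("I have a dog", 0.40),
    ("my dog likes steak", 0.38),
    ("my dog's name is Fenrir", 0.39),
    ("the moon orbits the earth", 0.95),  # unrelated chunk that should rank last
]

query = "which pets do I have?"
query_vec = 0.41   # pretend embedding of the query (the "pet" direction)
top_n = 3

# Rank chunks by how close their vector is to the query vector.
ranked = sorted(chunks, key=lambda c: abs(c[1] - query_vec))
information = ",".join(text for text, _ in ranked[:top_n])

prompt = (
    "you are a helpful assistant, you will provide an answer based on the "
    "provided information. If no information matches simply respond with "
    "'I don't know that'\n"
    f"Information:{information}\n"
    f"User query: {query}"
)
print(prompt)  # this string is what actually gets sent to the LLM
```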
Stupid question: ELI5; can/does/would it make sense to 'cache' (for lack of a better term) a 'memory' of having answered that question... so that if the question is asked again, it knows it has answered it in the past, and can do better?
(Seems like this is what reinforcement training is, but I'm just not sure? Everything seems to mush together when talking about GPT logic.)
You can decide to store whatever you like in the vector database.
For example you can have a table of "knowledge" as I described earlier, but you can just as easily have a table of the conversation history, or have both.
In fact it's quite popular afaik to store the conversation this way, because then if you query on a topic you've queried before, even if the conversation history has gone beyond the size of the context, it can still retrieve that history. So yes, what you describe is a good idea/would work/is being done.
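A sketch of what that looks like: one toy store holds both "knowledge" chunks and past conversation turns, tagged so you can search one, the other, or both (the tagging scheme and the one-number vectors here are made up for illustration):

```python
# One store, two kinds of entries, distinguished by a "kind" tag (naming is arbitrary).
store = [
    {"kind": "knowledge", "text": "my dog's name is Fenrir", "vec": 0.39},
    {"kind": "knowledge", "text": "my dog likes steak", "vec": 0.38},
    {"kind": "conversation",
     "text": "User asked: which pets do I have? Assistant answered: a dog named Fenrir",
     "vec": 0.40},
]

def retrieve(query_vec: float, kinds: set[str], top_n: int = 2) -> list[str]:
    """Return the closest entries of the requested kinds, even if the chat itself
    has long since scrolled out of the LLM's context window."""
    candidates = [e for e in store if e["kind"] in kinds]
    ranked = sorted(candidates, key=lambda e: abs(e["vec"] - query_vec))
    return [e["text"] for e in ranked[:top_n]]

# Mix old conversation turns and general knowledge into the next prompt.
context = retrieve(query_vec=0.41, kinds={"knowledge", "conversation"})
print(context)
```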
It really all comes down to the non model logic/regular programming of how your vector db is queried and how you mix those query results in with the user's query to the LLM.
For example you could embed their query as I described, then search the conversation history + general information storage in the vector db and mix the results. You can even feed it back into itself in a multi-step process a la "agents", where your "thought process" takes the user query and breaks it down further by querying the LLM with a different prompt; instead of "you are a helpful assistant" it can be "you have x categories of information in the database, given query {query} specify what data to be extracted for further processing". Obv that's a fake, general-idea prompt, but I hope you understand.
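A rough sketch of that multi-step idea, with the LLM call and the per-category search passed in as placeholders (the routing prompt and the category names are invented to show the shape, not a recommended prompt):

```python
from typing import Callable

def route_then_answer(
    user_query: str,
    llm: Callable[[str], str],                # your LLM call
    search: Callable[[str, str], list[str]],  # (category, query) -> matching chunks
) -> str:
    """Step 1: ask the model which category of stored data to pull.
    Step 2: retrieve from that category and answer with the usual RAG prompt."""
    routing_prompt = (
        "You have these categories of information in the database: "
        "knowledge, conversation. Given the query below, reply with the single "
        "category to search.\n"
        f"Query: {user_query}"
    )
    category = llm(routing_prompt).strip().lower()

    chunks = search(category, user_query)
    answer_prompt = (
        "you are a helpful assistant, you will provide an answer based on the "
        "provided information. If no information matches simply respond with "
        "'I don't know that'\n"
        f"Information:{','.join(chunks)}\n"
        f"User query: {user_query}"
    )
    return llm(answer_prompt)
```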
Well, there's technically no model training involved here, but I guess you could consider the corpus of conversation data a kind of training, and yeah, that would be RLHF-based, which LLMs lean on pretty heavily afaik (I've not fine-tuned my own yet).
You can fine-tune models to be better at certain things or to respond in certain ways; this is usually done via a kind of reinforcement learning (with human feedback... idk why it's called this, any human feedback is surely just supervised learning right?). This is useful, for example, to take a model trained on all kinds of text from everywhere and then fine-tune it on text from sci-fi novels, to make it particularly good at writing sci-fi fiction.
A fine tune I would say is more the "personality" of the underlying LLM. Saying this, you can ask an LLM to play a character, but the underlying "personality" of the LLM is still manufacturing said character.
Vector databases are more of a knowledge store, as if your LLM personality had a table full of open books in front of them; world atlases, a notebook of the conversation you've been having, etc.
Eg, personality: LLM fine tune on all David Attenborough narration = personality like a biologist/natural historian
Knowledge base = chunks of text from scientific papers on chemistry + chunks of the current conversation
Which, with some clever vector db queries/feeding back into the model, = a bot that talks Attenborough-ish but knows about chemistry.
Tbf, for the feedback model it's better to use something strict, i.e. an instruct-based model, bc your internal thought steps are heavily goal-oriented; all of the personality can be added in the final step using your fine-tune.