The whole "mystery" of transformer is that instead of a linear sequence of static weights times values in each layer, you now have 3 different matrices that are obtained from the same input through multiplication of learned weights, and then you just multiply the matrices together. I.e more parallelism which works out nice, but very restrictive since the attention formula is static.
We aren't going to see more progress until we have a way to generalize the compute graph as a learnable parameter. I dunno if this is even possible in the traditional sense of gradients due to chaotic effects (i.e., small changes reflect big shifts in performance); it may have to be some form of genetic algorithm or PSO that happens under the hood.
>The whole "mystery" of the transformer is that instead of a linear sequence of static weights times values in each layer, you now have 3 different matrices that are obtained from the same input through multiplication by learned weights, and then you just multiply the matrices together. I.e., more parallelism, which works out nicely, but very restrictive since the attention formula is static.
That's not it at all. What's special about transformers is they allow each element in a sequence to decide which parts of data are most important to it from each other element in the sequence, then extract those out and compute on them. The big theoretical advantage over RNNs (which were used for sequences prior to transformers), is that transformers support this in a lossless way, as each element has full access to all the information in every other element in the sequence (or at least all the ones that occurred before it in time sequences). RNNs and "linear transformers" on the other hand compress past values, so generally the last element of a long sequence will not have access to all the information in the first element of the sequence (unless the RNN internal state was really really big so it didn't need to discard any information).
>What's special about transformers is they allow each element in a sequence to decide which parts of data are most important to it from each other element in the sequence, then extract those out and compute on them.
They do that in theory. In practice, it's just all matrix multiplication. You could easily structure a transformer as a bunch of fully connected deep layers and it would be mathematically equivalent, just computationally inefficient.
From my naive perspective, there seems to be a plateau that everyone is converging on, somewhere between ChatGPT 3.5 and 4 levels of performance, with some suspecting that the implementation of 4 might involve several expert models, which would already be extra sauce external to the LLM. This, combined with the observation that generative models converge to the same output given the same training data, regardless of architecture (having trouble finding the link, it was posted here some weeks ago), suggests that external secret sauce, outside the model, might be where the near-term gains are.
A ton of progress can be made climbing a tree, but if your goal is reaching the moon it becomes clear pretty quickly that climbing taller trees will never get you there.
Not true. Climbing trees for millions of years taught us nothing about orbits, or rockets, or distances literally incomprehensible to humans, or the vacuum of space, or any possible way to get higher than a tree.
We eventually moved on to lighter than air flight, which once again did not teach us any of those things and also was a dead end from the "get to the sky/moon" perspective, so then we invented heavier than air flight, which once again could not teach us about orbits, rockets, distances, or the vacuum of space.
What got us to the moon was rigorous analysis of reality with math to discover Newton's laws of motion, from which you can derive rockets, orbits, the insane scale of space, etc. No amount of further progress in planes, airships, kites, birds, anything on earth would ever have taught us the techniques to get to the moon. We had to analyze the form and nature of reality itself and derive an internally consistent model of that physical reality in order to understand anything about doing space.
> Climbing trees for millions of years taught us nothing about
Considering the chasm in the number of neurons between apes and most other animals, I think one could claim that climbing those trees had some contribution to the ability to understand those things. ;) Navigating trees, at weight and speed, has a minimum intelligence requirement.
We have made progress in efficiency, not functionality. Instead of searching Google or Stack Overflow or any particular documentation, we just go to ChatGPT.
Information compression is cool, but I want actual AI.
The idea that there has been no progress in functionality is silly.
Your whole brain might just be doing "information compression" by that analogy. An LLM is sort of learning concepts. Even Word2Vec "learned" that king - male + female = queen, and that's a small model that's really just one part (not exact, but similar) of a transformer.
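You can see the analogy idea with plain vector arithmetic (toy 2-D embeddings I made up just to show the mechanics; real Word2Vec vectors are learned and have hundreds of dimensions):

    import numpy as np

    # made-up toy embeddings, chosen so the analogy works out
    emb = {
        "king":   np.array([0.9, 0.9]),
        "queen":  np.array([0.9, 0.1]),
        "male":   np.array([0.1, 0.9]),
        "female": np.array([0.1, 0.1]),
    }

    target = emb["king"] - emb["male"] + emb["female"]        # = [0.9, 0.1]
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    best = max(emb, key=lambda w: cos(emb[w], target))
    print(best)   # "queen" (real implementations also exclude the input words from candidates)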
One level deep information compression is cool, but I want actual AI.
It's true that our brains compress information, but we compress it in a much more complex manner, in the sense that we can not only recall stuff but also execute a decision tree that often involves physical actions to find the answer we are looking for.
An LLM isn't just recalling stuff. Brand new stuff, which it never saw in its training, can come out.
The minute you take a token and turn it into an embedding, then start changing the numbers in that embedding based on other embeddings and learned weights, you are playing around with concepts.
As for executing a decision tree, ReAct or Tree of Thought or Graph of Thought is doing that. It might not be doing it as well as a human does, on certain tasks, but it's pretty darn amazing.
>Brand new stuff, which it never saw in its training, can come out.
Sort of. You can get LLMs to produce some new things, but these are statistical averages of existing information. It's kinda like a static "knowledge tree", where it can do some interpolation, but even then, it's interpolation based on statistically occurring text.
The interpolation isn't really based on statistically occurring text. It's based on statistically occurring concepts. A single token can have many meanings depending on context and many tokens can represent a concept depending on context. A (good) LLM is capturing that.
Neither just text or just concepts, but text-concepts — LLMs can only manipulate concepts as they can be conveyed via text. But I think wordlessly, in pure concepts and sense-images, and serialize my thoughts to text. That I have thoughts that I am incapable of verbalizing is what makes me different from an LLM - and, I would argue, actually capable of conceptual synthesis. I have been told some people think “in words” though.
> [the higher faculty proper of humans is] the primary function of a natural body possessing organs in so far as it commits acts of rational choice and deduction through opinion; and in so far as it perceives universal matters
Or, "Intelligence is the ability to reason, determining concepts".
(And a proper artificial such thing is something that does it well.)
This is basically that - it can learn to ignore some paths and amplify more important ones, and then you can just cut those paths without a noticeable loss of quality. The problem is that you are not going to win anything from this - non-matrix multiplication would be slower or the same.
The issue is that you are thinking of this in terms of information compression, which is what LLMs are.
I'm more concerned with an LLM having the ability to be trained to the point where a subset of the graph represents all the NAND gates necessary for a CPU and RAM, so when you ask it questions it can actually run code to compute them accurately instead of offering a statistical best guess, i.e., decompression after lossy compression.
The issue is not having access to the CPU; the issue is the model being able to be trained in such a way that it has representative structures for applicable problem solving. Furthermore, the structures themselves should
Philosophically, you can't just start ad hoc-ing functionalities on top of LLMs and expect major progress. Sure, you can make them better, but you will never get to the state where AI is massively useful.
For example, let's say you gather a whole bunch of experts in their respective fields, and you give them the task of putting together a detailed plan for how to build a flying car. You will have people doing design, doing simulations, researching material sourcing, creating CNC programs for manufacturing parts, sourcing tools and equipment, writing software, etc. And when executing this plan, they would be open to feedback on anything missed and could advise on how to proceed.
An AI with the above capability should be able to go out on the internet, gather the respective data, run any sort of algorithms it needs to run, and perhaps after a month of number crunching on a cloud-rented TPU rack produce a step-by-step plan, with costs, for how to do all of that. And it would be better than those experts because it should be able to create much higher fidelity simulations to account for things like vibration and predict if some connector is going to wobble loose.
> Philosophically, you can't just start ad hoc-ing functionalities on top of LLMs and expect major progress. Sure, you can make them better, but you will never get to the state where AI is massively useful.
Evolution created various neural structures in biological brains (visual cortex, medulla, thalamus, etc) rather ad-hoc, and those resulted in "massively useful" systems. Why should AI be different?
I mean, we could definitely run architectures through simulated evolution with genetic algorithms, but then you arrive at the same problem as humans do, which is that you end up with a statistically best solution for given conditions. Sure, that could be a form of AI but there is likely a better (and likely faster) way to build an AI that isn't fundamentally statistical in nature and is adaptable to any and all problems.
LLMs seem like the least efficient way to accomplish this. NAND gates, for example, are inherently 1-bit operators, but LLMs use more. If weights are all binary, then gradients are restricted to -1, 0, and 1, which doesn't give you much room to make incremental improvements. You can add extra bits back, but that's pure overhead. But all this is beside the real issue, which is that LLMs and NNs in general are inherently fuzzy; they guess. Computers aren't, and we already have perfect simulators.
Consider how humans design things. We don't talk through every CPU cycle to convince ourselves a design works; we use bespoke tooling. Not all problems are language shaped.
From what you've written, I don't see why any of this would require the LLM to "be trained to the point where a subset of the graph represents all the NAND gates necessary for a CPU and RAM" - you'd just be emulating a CPU, but slower.
Tool usage is better, because the LLM can access the relevant computing/simulation at the highest fidelity and as fast as they can run on a real or virtual computer, rather than emulated poorly in a giant pyramid of matrix multiplications.
Well, just remember that NAND gates are made of transistors themselves which are a statistical model of a sort… just designed to appear digital when combined to that NAND level.
This is why I am very interested in analog again—quantum stuff is statistical already, so why go from statistical (analog) to digital (huge drop-off in performance, e.g. just look at basic addition in an ALU) and back to statistical. Very interested. Not sure if it will ever be worth it, but can't rule it out.
>a way to generalize the compute graph as a learnable parameter.
Agreed. Seems analogous to how human mental processes are used to solve the kinds of problems we'd like LLMs to solve (going beyond the "language processing" which transformers do well, to actual reasoning, which they can only mimic). Although you risk it becoming a Turing machine by giving it flow control, and then training is a problem, as you say. Perhaps not intractable though.
Hyperparameter tuning does already go some of the way towards learning the compute graph, though very constrained and with a lot more training required.
> How can gradient descent work on compute graphs when the space of compute graphs is discrete?
You can un-discretize the space of compute graphs by interpolating its points with simplices. More precisely, each graph is a subgraph of the complete graph, and the subgraph is identified by the indicator function of its edges, whose values are either 0 or 1. By using weighted edges with values between 0 and 1, the space of all graphs (with the same number of vertices) becomes continuous and connected, and you can move around it in small gradient steps.
Of course, "compute graphs" are more general beasts than "graphs", but it is likely that the same idea will apply. At least, for a reasonably large class of compute graphs.
It can't. There's no gradient, since it's not a sufficiently nice space for one. You can use gradient-free methods, but I'd be shocked if there were an efficient enough way to do that.
I don't know if it can in the traditional sense of back propagation.
I think that Hebbian learning is going to make a comeback at some point in time, which will be used to connect static subgraphs to other subgraphs, which can be trained either separately or on the fly.
From a brief look at the paper, they are doing a gradient descent of the architecture based on validation loss, which is good for efficiency, but it's not groundbreaking. The problem is that you are still training towards a target of a correct answer. I don't think this is gonna be applicable in the future, in the sense that we have to train on other things (like logical consistency somehow encoded into the network), as well as correct answers.
Your expectations are pretty high. Differentiable architecture search as you mentioned in the original comment is one thing; going beyond empirical risk minimization-based learning is another thing entirely. In fact, they seem mostly orthogonal.
That aside, it seems like AI has had the most empirical success by not imposing hard constraints/structure, but letting models learn completely "organically". The computationalists (the folks who have historically been more into this "AI has to have things like logical consistency embedded into its structure" kind of thinking) seem to have basically lost, empirically. Who even knows what Soar[1] is nowadays? Maybe some marriage of the two paradigms will lead to better results, but I doubt that things will head in that direction anytime soon given how massively far just having parallelizable architectures and adding more parameters has gotten us.
The expectations are high, but it's not so much orthogonal as more basic. Our brains work on add/multiply/activation; this is well known. But the composition of the neural connection strengths in our brain that makes us us is definitely not trained on any sort of final loss. Or at least not completely.
I'm not sure that AI has been successful recently because of its similarities to the human brain. It seems like the project of making human-like AI (in the sense of models that function similarly to the brain) has had a lot less empirical success than the project of trying to minimize loss on a dataset, whatever that takes. Like, look what happened to Hebbian learning, as you mentioned in your other comment. Completely absent from models that are seriously trying to beat SOTA on benchmarks.
Like, it really just seems like LLMs are a really good way of doing statistics rather than the closest model we have of the brain/mind, even if there are some connections we can draw post-hoc between transformers and the human brain.
Though to be fair, actual biological evolution is more complex than simple genetic algorithms. More like evolution strategies with meta-parameter-learning and adaptive rate tuning among other things.
For a drier, more formal and succinct approach, see "The Transformer Model in Equations" [0], by John Thickstun. The whole thing fits on a single page, using standard mathematical notation.
Finally, thank you so much!
Was it so difficult?
Isn't 7 lines of mathematical notation way better than pages of qualitative pub talking?
I don't really understand these ML researchers, it always looks like they have never studied mathematics at all.
Thank god, I've had to cobble something like this together for my own notes a couple of times trying to parse papers and was never quite sure if I was missing something.
> Uh oh! We’re getting NaNs! It seems our values are too high, and when being passed to the next encoder, they end up being too high and exploding! This is called gradient explosion.
As far as I understand, this is wrong. You're not computing gradients at any point, so this is not gradient explosion. I believe the problem is with the implementation of softmax; here [0] you have an explanation of how to implement a numerically stable softmax.
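For anyone curious, the standard trick is just to subtract the max before exponentiating; it cancels out mathematically but keeps exp() from overflowing (a minimal sketch):

    import numpy as np

    def softmax_naive(x):
        e = np.exp(x)                 # overflows to inf for large x, giving NaNs downstream
        return e / e.sum()

    def softmax_stable(x):
        e = np.exp(x - np.max(x))     # same result, but the largest exponent is exp(0) = 1
        return e / e.sum()

    x = np.array([1000.0, 1001.0, 1002.0])
    print(softmax_naive(x))           # [nan nan nan]
    print(softmax_stable(x))          # [0.090 0.245 0.665]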
Yes, you're correct. I tried to connect a common training problem (gradient explosion and vanishing gradient) with the issue of softmax being sensitive to large values. I agree it's misleading/inaccurate, so will rewrite that part.
That said, the whole neural network will be sensitive to large values, so it won't be fixed by a numerically stable softmax. The normalization is a key aspect of making the network work.
Transformer tutorials might be the new monad tutorial. A hard concept to get, but one you need to struggle with (and practice some examples) to understand. So a bit like much of computer science :-).
The vectors are random, but they look like they have a pattern here. Does the 2 in both vectors mean something? Or is it the entire set that makes it unique?
The number reuse is just the author being a bit lazy. You could estimate how similar these vectors are by seeing if they point in similar directions or by calculating the angle between them. Here they are about 60° apart and pointing in somewhat the same direction, but a lot of this is that the author didn't want to put any negative numbers in the example, so the vectors end up being a bit more similar than they really would be.
That the numbers are reused isn’t meaningful here: a 1 in the first position is quite unrelated to a 1 in the second (as no convolutions are done over this vector)
Thank you. I guess I need to back up. This is a vector, not just an identifier, and direction and angle seem important. I need to look up how the encoding is normally done, since this isn't obvious if you haven't worked in this domain before.
That isn't a very good example. The vectors for each token are randomly initialized with each element taken from the normal distribution. After training, similar words will have some cosine similarity, but almost never as much cosine similarity as [1,2,3,4] and [2,3,4,5].
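For reference, those two example vectors are nearly parallel:

    import numpy as np

    a, b = np.array([1, 2, 3, 4]), np.array([2, 3, 4, 5])
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(cos)   # ~0.994, i.e. the vectors are only about 6 degrees apart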
Not completely related: does anyone know where I can find articles / papers that discuss why transformers, while acting merely as "next token predictors", can handle questions with:
1. Unknown words (or subwords/tokens) that are not seen in the training dataset.
Example: Create a table with "sdsfs_ff", "fsdf_value" as columns in pandas.
2. Create examples (unseen in the training dataset) and tell the LLM to provide similar output.
I have a feeling it should be a common question, but I just can't find the keyword to search.
PS. If anyone has any links with a thorough discussion of positional embeddings, that would be great. I never got a satisfying answer about the usage of sine / cosine and (multiplication vs addition).
If I had to guess, single characters are able to be encoded as tokens, but there's more "bandwidth" in the model being dedicated to handling them and there's less semantic meaning encoded in them "natively" compared to tokens for concrete words. If it decides to, it can recreate unknown sequences by copying over the tokens for the single letters or create them if it makes sense.
I think some earlier NLP applications had something called an "unknown token", with which they would replace any unseen word. But for recent implementations, I don't think they are being used anymore.
It still baffles me why such a stochastic parrot / next token predictor will recognize these "unseen combinations of tokens" and reuse them in a response.
Everything falls into place once you understand that LLMs are indeed learning hierarchical concepts inherent in the structured data it has been trained on. These concepts exist in a high dimensional latent space. Within this space is the concept of nonsense/gibberish/placeholder, which your sequence of unseen tokens map to. Then it combines this with the concept of SQL tables, resulting in hopefully the intended answer.
That is to say: having a correct conditional probability distribution over the next token, conditional on the previous tokens, produces a correct probability distribution over sequences of tokens.
And, “correct probability distribution over sequences of tokens” (or, “correct conditional probability distribution over sequences of tokens, conditional on whatever”), can be... well, you can describe pretty much any kind of input/output behavior in those terms.
So, “it works by predicting the next token” is, at least in principle, not much of a constraint on what kinds of input/output behavior it can have?
So, whatever impressive thing it does, is not really in conflict with its output being produced from the probability distribution P(X_{n+1}=x_{n+1} | X_1=x_1, ..., X_n=x_n) (“predicting the next token”)
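(Concretely, by the chain rule of probability, P(X_1=x_1, ..., X_n=x_n) = P(X_1=x_1) · P(X_2=x_2 | X_1=x_1) · ... · P(X_n=x_n | X_1=x_1, ..., X_{n-1}=x_{n-1}), so getting every next-token conditional right is the same thing as getting the distribution over whole sequences right.)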
> The complexity comes from the number of steps and the number of parameters.
Yes, it seems like a transformer model simple enough for us to understand isn't able to do anything interesting, and a transformer complex enough to do something interesting is too complex for us to understand.
I would love to study something in the middle, a model that is both simple enough to understand and complex enough to do something interesting.
You might be interested, if you aren't already familiar, in some of the work going on in the mechanistic interpretability field. Neel Nanda has a lot of approachable work on the topic: https://www.neelnanda.io/mechanistic-interpretability
I was not familiar with it, and that does look fascinating, thank you. If anyone else is interested, this guide "Concrete Steps to Get Started in Transformer Mechanistic Interpretability" on his site looks like a great place to start:
I would assume that the boundaries of those ranges are such that the middle in between those extremes is something that is already too complex for a human to properly understand while still too small to be able to do anything interesting.
Hard to understand when concepts are used without definition or introduction. The Encoder section just begins without any description of what it is or where it sits in the overall process. I grasp what the author is trying to do, but the post misses basic essay structure, such as introducing ideas and explaining them before using them, rendering the entire post confusing if one is not already a student who half understands the topic before reading.
As someone who has written an ANN from scratch and hasn't used TensorFlow, I still find this description confusing.
I asked ChatGPT to explain how to modify a basic ANN to implement self-attention without using the terms Matrix or Vector and it gave me a really simple explanation. Though I haven't tried to implement it yet.
I prefer to think of everything in terms of nodes, weights and layers. Matrices and vectors just make it harder to relate to what's happening in the ANN.
The way I'm used to writing ANNs, each input node is a scalar but the feed forward algorithm looks like vector-matrix multiplication since you multiply all the input nodes by the weights then sum them up... Anyway, I feel like I'm approaching these descriptions with the wrong mindset. Maybe I lack the necessary background.
I also wondered if these formulae were devised with 1-based indexing in mind (though I guess for larger dimensions it doesn't make much difference), as the paper states
> The wavelengths form a geometric progression from 2π to 10000 · 2π
That led me to this chain of PRs - https://github.com/tensorflow/tensor2tensor/pull/177 - it turns out the original code was actually quite different from what is stated in the paper. I guess slight variations in how you calculate this encoding don't affect things too much?
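If it helps, the encoding from the paper is only a few lines (a rough numpy sketch; I believe the tensor2tensor code linked above arranges the sin and cos halves slightly differently, which is part of the variation being discussed):

    import numpy as np

    def positional_encoding(max_len, d_model):
        pos = np.arange(max_len)[:, None]                 # (max_len, 1)
        i = np.arange(d_model // 2)[None, :]              # (1, d_model/2)
        angles = pos / (10000 ** (2 * i / d_model))       # wavelengths from 2*pi up to 10000*2*pi
        pe = np.zeros((max_len, d_model))
        pe[:, 0::2] = np.sin(angles)                      # even dimensions get sin
        pe[:, 1::2] = np.cos(angles)                      # odd dimensions get cos
        return pe                                         # added (not multiplied) to token embeddings

    print(positional_encoding(50, 512).shape)             # (50, 512)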
Let's say you want to predict if you'll pass an exam based on how many hours you studied (x1) and how many exercises you did (x2). A neuron will learn a weight for each variable (w1 and w2). If the model learns w1=0.5 and w2=1, the model will provide more importance to the # of exercises.
So if you study for 10 hours and only do 2 exercises, the model will do x1*w1 + x2*w2 = 10*0.5 + 2*1 = 7. The neuron then outputs that. This is a bit (but not much) simplified - we also have a bias term and an activation to process the output.
Congrats! We built our first neuron together! Have thousands of these neurons in connected layers, and you suddenly have a deep neural network. Have billions or trillions of them, you have an LLM :)
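In code, that whole neuron is a few lines (same toy numbers as above, with a sigmoid picked arbitrarily as the activation we glossed over):

    import math

    def neuron(x, w, b=0.0):
        z = sum(xi * wi for xi, wi in zip(x, w)) + b   # weighted sum plus bias
        return 1 / (1 + math.exp(-z))                  # sigmoid activation squashes it to (0, 1)

    print(neuron([10, 2], [0.5, 1]))   # weighted sum is 7, the sigmoid turns it into ~0.999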
Transformers can be considered a kind of neural network.
It's mainly fancy math. With tools like PyTorch or TensorFlow, you use Python to describe a graph of computations which gets compiled down into optimized instructions.
There are some examples of people making transformers and other NN architectures in about 100 lines of code. I’d google for those to see what these things look like in code.
The training loop, data, and resulting weights are where the magic is.
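As a taste of what that looks like (a minimal PyTorch sketch, not a transformer, just the "describe the computation, then train it" shape):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # the compute graph
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.MSELoss()

    x = torch.randn(100, 2)                    # toy data
    y = x.sum(dim=1, keepdim=True)             # toy target: learn to add two numbers

    for step in range(2000):                   # the training loop is where the magic is
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                        # backpropagation computes the gradients
        opt.step()                             # nudge the weights

    print(model(torch.tensor([[2.0, 3.0]])))   # should land near 5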
I absolutely adore this sentence, it made me laugh to imagine coders or other folks looking at the code and thinking "That's it?!? But that's simple!"
Although it feels a little similar to some of the basic reactions that go to make up DNA: start with simple units that work together to form something much more complex.
(apologies for poor metaphors, I'm still trying to grasp some of the concepts involved with this)
Yes, neural networks, and even the math required to build them, are very simple - generally calc 1 stuff. It's more that coming up with these models takes powerful intuition.
Yes to both, the "neuron" would basically be a weighted parameter.
A parameter is an expression; it's a mathematical representation of a token and its probabilistic weighting (they're translated from input or to output token lists entering and exiting the model).
Usually tokens are pre-set small groups of character combinations like "if " or "cha" that make up a word/sentence.
The recorded path your value takes down the chain of probabilities would be the "neural pathway" within the wider "neural network".
Someone please correct me if I'm wrong or my terminology is wrong.
This is all true in a neural net, but Transformers aren't neural nets in the traditional sense. I was under that impression originally, but there's no back propagation or Hebbian learning here, which were the key bits of biomimicry that earned classic NNs their name.
Transformers do have coefficients that are fit, but that's more broad.. could be used for any sort of regression or optimization, and not necessarily indicative of biological analogs.
So I think the terms "learned model" or "weights" are malapropisms for Transformers, carried over from deep nets because of structural similarities, like many layers, and the development workflow.
The functional units in a Transformer's layers have lost their original biological inspiration and functional analog. The core function in Transformers is more like autoencoding/decoding (concepts from info theory) and model/grammar-free translation, with a unique attention-based optimization. Transformers were developed for translation. The magic is something like "attending" to important parts of the translation inputs & outputs as tokens are generated, maybe as a kind of deviation from pure autoencoding, due to the bias from the .. learned model :) See, I can't even escape it.
Attention as a powerful systemic optimization is the actual closer bit of neuro/bio-inspiration here.. but more from Cog Psych than micro/neuro anatomy.
Btw, not only is attention a key insight for Transformers, but it's an interesting biographical note that the lead inventor of it, Jakob Uszkoreit, went on to work at a bio-AI startup after Google.
> This is all true in a neural net, but Transformers aren't neural nets in the traditional sense. I was under that impression originally, but there's no back propagation or Hebbian learning here, which were the key bits of biomimicry that earned classic NNs their name.
Hebbian learning has never been used with much success in training neural nets. Backpropagation is not bio-inspired, but backpropagation is certainly used to train transformers.
Agreed, Hebbian learning isn't used.. I just meant it as an example of what would signal an NN.
For backprop, I'm basing this on the development of the Perceptron. Wiki supports this and its bio-inspired origin [1].
As for its use in Transformers, if you mean simple regressing of errors or use of gradient descent, I'd agree, but that's not usually called Backprop and the term isn't used in the original paper. The term typically means back propagating the errors thru the entire network at a certain stage of learning, and that's not present in Transformers that I can tell.
"Rosenblatt was best known for the Perceptron, an electronic device which was constructed in accordance with biological principles and showed an ability to learn.
He developed and extended this approach in numerous papers and a book called Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, published by Spartan Books in 1962.[6] He received international recognition for the Perceptron.
The Mark I Perceptron, which is generally recognized as a forerunner to artificial intelligence, currently resides in the Smithsonian Institution in Washington D.C."
Your Juergen page is interesting, tho no direct comment on Rosenblatt there. He does cite the work on this page:
My reading is that a long-known idea about multi-variate regression was reinterpreted by Rosenblatt by 1958 via the bio-inspired Perceptron, and then that was criticized by Minsky and others, and viable methods were achieved by 1965. When I was taught NNs by Mitchell at CMU in the 1990s (lectures similar to his book Machine Learning), this was the same basic story. Also reminds me of a moment in class one day when a Stats Prof who was surveying the course broke out with "but wait, isn't this all just multivariate regression??" :) Mitchell agreed to the functional similarity, but I think that helps highlight how the biomimicry was crucial to developing the idea. It had lain hidden in plain sight for a century.
Agreed, and I was aware, there has since been criticism of the biological plausibility of backprop.
Your further links with refs to backprop in transformers are interesting; I hadn't seen these. It's clear the term is being used like you say, though I still see ambiguity about its utility here. Autodifferentiation, gradient descent, multi-variate regression etc. are of course in common use, and scanning these papers it's not clear to me that the terms aren't simply being conflated. What had stood out as unique for me with backprop was a coherent whole-network regression. This to me looks like a piecewise approach.
Thanks for your reply, you raise a very good point; transformer models are a lot more complex. I'd argue conceptually they're the same, just the data and process are more abstracted. Autoencoded data implies using efficient representations, basically semantically abstracted data, and opting for measures like backpropagation through time.
So like in my sister reply, I don't see the Backprop, but maybe I'm missing it. This article does use the word, but in a generic way
"For example, when doing the backpropagation (the technique through which the models learn), the gradients can become too large"
But I think this is more of a borrowing; it's not used again in the description and may just be a misconception. There's no use of the backprop term in the original paper, nor any stage of learning where output errors are run through the whole network in a deep regression.
What I do see in Transformers is localized uses of gradient descent, and Backprop in NNs also uses GD...but that seems the extent of it.
"Backpropagation Through Time (BPTT) is an adaptation of backpropagation used for training recurrent neural networks (RNNs), which are designed to process sequences of data and have internal memory. Because the output at a given time step might depend on inputs from previous time steps, the forward pass involves unfolding the RNN through time, which essentially converts it into a deep feedforward neural network with shared weights across the time steps. The error for each time step is computed, and then BPTT is used to calculate the gradients across the entire unfolded sequence, propagating the error not just backward through the layers but also backward through the time steps. Updates are then made to the network weights in a way that should minimize errors for all time steps. This is computationally more involved than standard backpropagation and has its own challenges such as exploding or vanishing gradients"
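In PyTorch terms, the "unfolding" is just a Python loop over time steps, and a single backward() call then flows through all of them (a minimal sketch with a toy per-step loss):

    import torch
    import torch.nn as nn

    cell = nn.RNNCell(input_size=3, hidden_size=8)
    xs = torch.randn(5, 1, 3)               # a sequence of 5 time steps (batch of 1)
    h = torch.zeros(1, 8)
    loss = 0.0
    for t in range(5):                       # unfolding the RNN through time
        h = cell(xs[t], h)
        loss = loss + h.pow(2).mean()        # toy loss accumulated at each step
    loss.backward()                          # gradients flow back through every time step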
Here [1] are some "frauds" from Stanford University, Oxford University and University College London telling you exactly that.
From their abstract:
"One of the most exciting and promising novel architectures, the Transformer neural network, was developed without the brain in mind. In this work, we show that transformers, when equipped with recurrent position encodings, replicate the precisely tuned spatial representations of the hippocampal formation; most notably place and grid cells. Furthermore, we show that this result is no surprise since it is closely related to current hippocampal models from neuroscience. We additionally show the transformer version offers dramatic performance gains over the neuroscience version."
Making the claim that transformers are a good candidate model for certain neural pathways is a pretty different claim than saying the brain is literally using transformers.
I’m assuming you are asking if the brain uses transformer-like structures or otherwise exhibits similar behavior. I don’t know, but it does share some processes with simpler ML ideas, and I’d be very interested to see if it uses anything resembling a transformer.
The forward-forward algorithm is more like the brain. As I understand it, backpropagation in transformers requires storing data, doing calculations on that aggregate, and sending it back through, which no biological neural structure does anything like.
I love them because they do give another resource for explaining models such as transformers, and I think this one is pretty well done (note: you really need to do something about the equation in 4.2...)
First, the critique is coming from love. Great work, so I don't want it to be taken as I'm saying anything it isn't.
Why I hate these is that they are labeled as "math behind" but I think this is not quite fitting. This is the opposite of the complaint I made about the Introduction to DL post the other day [0]. The issue isn't that there isn't math, but that it is contextually labeled as a mathematical approach while I'm not seeing anything that distinguishes it as deeper than what you'd get from Karpathy's videos or the Annotated Transformer (I like that more than the Illustrated one). There's nothing wrong with that, but I just think it might mislead people, especially as there is a serious lack of places to find a much deeper mathematical explanation behind architectures, and the naming makes it harder to find for those that are looking for that, because they'll find these posts. Simply, the complaint is about framing.
To be clear, the complaint is just about the subtitle, because the article is good and a useful resource for people seeking to learn attention and transformers. But let me try to clarify some of what I would personally (welcome to disagree, it is an opinion) consider more accurately representative of "demystifying all the math behind them":
- I would include a much deeper discussion of both embedding and positional embedding. For the former, you should at minimum be discussing how it is created, and discussing the dequantization. This post may give a reader the impression that this is not taking place (there is ambiguity in the distinction between tokenization and embedding; this looks to just briefly mention tokenization. I specifically think a novice might take away that the dequantization is happening due to the positional encoding, and not in the embedding). The tokenization and embedding are a vastly underappreciated and incredibly important aspect of making discrete models work (not just LLMs or LMs; the principle is more general).
- Same goes for the positional embedding, which I have only seen discussed in a handful of cases and otherwise taken rather matter-of-factly. For a mathematical explanation you do need to explain the idea behind generating unique signals for each position, explain why we need a high frequency, and it is worth mentioning how this can be learnable (often with similar results, which is why most don't bother), and other forms like rotary. The principle is far more general than even a Fourier series (unmentioned!). The continuous aspect also matters a lot here, and we (often) don't want discretized positional encodings. If this isn't explained it feels rather arbitrary, and in some ways it is but in others it isn't.
- The attention mechanism is vastly under-explained, though I understand why. There are many approaches to tackle this, some from graphs, some from category theory, and many others. They're all valuable pieces of the puzzle. But at minimum I think there needs to be a clear identification of what the dot product is doing, the softmax, the scale (see softmax tempering), and why we then have the value. The key-query-value names were not chosen at random and the database analogy is quite helpful. Maybe many don't understand the relationship between dot products and angles between vectors? But this can even get complex, as we would expect values to go to 0 in high dimensions (which they kinda do if you look at the attention matrices post-learning, which often look highly diagonal, and which is why you can initialize them as diagonally spiked for sometimes faster training). This would be a great place to bring up how there might be some surprising aspects to the attention mechanism, considering matrices represent affine transformations of data (linear) and we might not see the non-linearity here (softmax) or understand why softmax works better than other non-linearities or normalizers (try it yourself!).
- There's more, but I've written a wall. So I'll just say we can continue with the residuals (also see META's "Three Things Everyone Should Know About Vision Transformers", in the DeiT repo), why we have pre-norm as opposed to the original post-norm (and it looks like post-norm is being used here!), the residuals (knot theory can help here a bit), and why we have the linear layer (similarly, the unknotting discussion helps, especially for quantifying why we like a 4x ratio, but isn't absolutely necessary).
Idk, are people interested in these things? I know most people aren't, and there's absolutely nothing wrong with that (you can still build strong models without this knowledge, but it is definitely helpful). I do feel that we often call these things black boxes but they aren't completely opaque. They sure aren't transparent, especially through scale, but they aren't "black" either. (Allen-Zhu & Li's Physics of LLMs is a great resource btw and I'd love if other users posted/referenced more things they liked. I purposefully didn't link btw)
So, I do like the post, and I think it has good value (and certainly there is always value in teaching to learn!), but I disagree with the HN title and post's subtitle.
I really appreciate you taking the time to provide all this feedback. This feedback + additional resources are extremely useful.
I agree that the subtitle is not as accurate as it could be. I'll revisit it! As for content updates, I've been doing some additional updates in the last days based on feedback (e.g. more info about tokenization and the token embeddings). Although diving into some of your suggestions is likely out of scope for this article, I in particular agree that expanding the attention mechanism content (e.g. the analogy with databases or explaining what the dot product is) would increase the quality of the article. I will look into expanding this!
I also think a more rigorous, separate mathematical exploration into attention mechanisms and recent advancements would be a great tool for the ecosystem.
Once again, thank you for all the amazing feedback!
Hey, I'm glad you found it useful. I know it is hard to take critique, but I did enjoy the post. I truly do mean the critique is coming from a place of love. And I hope the comment helps others find more (I guess I'm writing a blog post now). I do feel there is often this gap between nearly no math and way too much math that causes a lot of people to come away with "you don't need math for ML" which is... idk... partially correct but not? haha. I'm a bit mathy of a person so you just caught a pet peeve of mine. I definitely agree what I said is out of scope for how you wrote but I will stand with my subtitle critique ;) I still do like the article though
And I just realized we're in a slack channel together haha (I don't think we've ever talked though). I poked around your website and saw you're at HF. Love you guys to death. You all also have tons of awesome blog posts and you're one of the most useful forces in ML. So I really do appreciate all the work.
Can you link a resource that is able to adequately explain why they're called Key, Query, and Value? Every explanation I've read eventually handwaved this. It feels like understanding why they're named that is key (heh) to understanding the concept, rather than just blindly implementing matmul.
> is it worth it to invest time to get some idea about this whole AI field? I'm from a compE background
Might be worth thinking about how it will specifically affect your field of expertise. Jensen Huang says your job won't be taken over by an AI but by a human using an AI.
Does mystified math lie behind how the ratio of input and output voltages is equal to the ratio of the primary and secondary windings? Can it be derived from Maxwell's equations?
Not really mystified in any sense of the word, but for more precise calculations of circuit diagrams with transformers you can formulate a system of coupled ODEs, which can be written in matrix form, leading to the very nice mathematics of matrix differential equations.
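(Concretely, in the ideal case both windings link the same flux Φ, so Faraday's law gives V1 = N1·dΦ/dt and V2 = N2·dΦ/dt, and dividing the two yields V2/V1 = N2/N1. You only really need the full Maxwell machinery once you care about leakage flux, core losses, and so on.)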
I saw an LLM, or maybe another variant of "AI" [0], a while back that could aid the design of electronic circuits by having a pool of data sheets added for referencing.
As you were querying specs for a board at component level it could give you a schematic, I think, with citations to the actual data sheets.
I suppose the same scale up could be used for systems that needed a varying number of specific power supplies.